
SSH issue after running devsec.hardening.ssh_hardening role #854

Open
jobetinfosec opened this issue Mar 5, 2025 · 8 comments
Comments

@jobetinfosec

I ran this role against a freshly installed Ubuntu 24.04 server, and at the end, the following error showed up:

fatal: [domain.tld]: FAILED! => {"changed": false, "msg": "Unable to start service ssh: Job for ssh.service failed because the control process exited with error code.\nSee \"systemctl status ssh.service\" and \"journalctl -xeu ssh.service\" for details.\n"}

Via a dashboard console, I managed to log in as the root user and check the logs:

fatal: chroot ("/run/sshd"): No such file or directory [preauth]

How may I fix this?
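[Editorial note: a hedged sketch of the usual recovery for this error, not taken from the thread. sshd rejects all connections at preauth when its privilege-separation directory is missing, so recreating it as root (e.g. via the console) typically restores logins:]

```shell
# /run is a tmpfs, so /run/sshd vanishes on reboot unless the packaging
# (or systemd-tmpfiles) recreates it. Recreate it by hand as root:
mkdir -p /run/sshd
chmod 0755 /run/sshd
# then restart the daemon: systemctl restart ssh
```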

@schurzi
Contributor

schurzi commented Mar 6, 2025

Hey @jobetinfosec, we would appreciate if you use the provided template for reporting Issues.

Which version of our collection are you using? This is a bug that was fixed in 10.0.0 (more specifically #784), so it should not happen anymore.
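[Editorial note: a quick way to check the installed collection version, using a standard ansible-galaxy command rather than anything specific to this thread:]

```shell
# Show the installed devsec.hardening collection and its version
ansible-galaxy collection list devsec.hardening
```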

@jobetinfosec
Author

Hi @schurzi
I'm using devsec.hardening ver. 10.3.0

@schurzi
Contributor

schurzi commented Mar 6, 2025

Interesting. What does the task "Ensure privilege separation directory exists" report in your Ansible output?

@jobetinfosec
Author

TASK [devsec.hardening.ssh_hardening : Ensure privilege separation directory exists]
ok: [test]

@jobetinfosec
Author

I think I found the culprit...
When I ran the playbook the first time, I had only run an apt update command, and the SSH error came up.
Now I also ran apt upgrade, and there are no more SSH errors... for God's sake...
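[Editorial note, for context: this matches general apt behavior. apt update only refreshes the package indexes; apt upgrade is what actually installs the newer packages, including any fixed openssh-server:]

```shell
apt-get update        # refresh package indexes only; installs nothing
apt-get upgrade -y    # install the newer package versions, e.g. openssh-server
```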

@schurzi
Contributor

schurzi commented Mar 8, 2025

I am glad you solved the issue for your case. I consider failures that lead to an inaccessible server very serious, so I'd like to understand how you arrived at this problem. I tried several ways to replicate this issue with my test servers. I could not reproduce this problem. Can you describe a bit more clearly how I can trigger this problem?

@jobetinfosec
Author

Hi @schurzi
First of all, I ran the devsec scripts against an Ubuntu server running the 24.04 release.
The first time, I ran the scripts without updating anything on the target server, and I got a missing auditd package warning.
The second time, after running apt update, the fatal: chroot ("/run/sshd"): No such file or directory [preauth] error showed up.
The third time, after running apt update and apt upgrade, the scripts ran successfully.

@jobetinfosec
Author

Hi @schurzi

However, testing again on another server, this time using an Ansible playbook, a further issue came up...

Mar 06 16:05:46 test systemd[1]: ssh.service: Found left-over process 853 (sshd) in control group while starting unit.>
Mar 06 16:05:46 test systemd[1]: ssh.service: This usually indicates unclean termination of a previous run, or service>
Mar 06 16:05:46 test sshd[15968]: error: Bind to port 22 on 0.0.0.0 failed: Address already in use.
Mar 06 16:05:46 test sshd[15968]: fatal: Cannot bind any address.
Mar 06 16:05:46 test systemd[1]: ssh.service: Main process exited, code=exited, status=255/EXCEPTION
Subject: Unit process exited
Defined-By: systemd
Support: http://www.ubuntu.com/support

An ExecStart= process belonging to unit ssh.service has exited.

The process' exit code is 'exited' and its exit status is 255.
Mar 06 16:05:46 test systemd[1]: ssh.service: Failed with result 'exit-code'.
Subject: Unit failed
Defined-By: systemd
Support: http://www.ubuntu.com/support

The unit ssh.service has entered the 'failed' state with result 'exit-code'.
Mar 06 16:05:46 test systemd[1]: ssh.service: Unit process 853 (sshd) remains running after unit stopped.
Mar 06 16:05:46 test systemd[1]: Failed to start ssh.service - OpenBSD Secure Shell server.
Subject: A start job for unit ssh.service has failed
Defined-By: systemd
Support: http://www.ubuntu.com/support

A start job for unit ssh.service has finished with a failure.

The job identifier is 2221 and the job result is failed.
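[Editorial note: a hedged sketch of how this "Address already in use" state is usually cleared, assuming root access; these are standard iproute2/systemd commands, not taken from the thread. The journal shows a left-over sshd (PID 853) still bound to port 22, so the new instance cannot start:]

```shell
ss -tlnp 'sport = :22'         # show which PID still holds port 22
systemctl kill ssh.service     # SIGTERM every process left in the unit's cgroup
systemctl restart ssh.service  # then confirm with: systemctl status ssh
```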

The Ansible playbook I used simply updates and upgrades system packages, adds 3 sudo users, and installs a few basic packages:

- certbot
- composer
- curl
- git
- htop
- net-tools
- python3-pip
- screen
- supervisor
- tree
- unzip
- vim
- whois
- zip

Any idea?
