So what happens if sshd (IIRC, in typical configurations it depends on the network to start) fails to start at boot? You can't even log in at the failsafe console. What does this actually buy us over sudo or su? Sure, you avoid a setuid binary, but instead you are now running a network service (even if it is only connected to a socket) with root privileges.
Linux consoles (the ttys that appear on the local display or a remote-access KVM, or the ttyS* devices that appear over serial ports and IPMI SoL) do not use sudo or su. Those consoles use a program like `getty`, or a display manager for graphical logins; all of those are non-setuid programs that are started as root.
Your system should have a root password set, for logins via console.
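A quick way to see this on a typical systemd box (just a sketch; exact paths and unit names vary by distro):

```
# The getty on tty1 is an ordinary root process, not a setuid binary.
ps -o user,pid,comm -C agetty          # typically shows: root ... agetty
ls -l /sbin/agetty /bin/login          # regular -rwxr-xr-x, no setuid bit
ls -l /usr/bin/sudo                    # compare: -rwsr-xr-x, setuid root

# Serial and IPMI SoL consoles are just more instances of the same template,
# and the login prompt they present is where that root password gets used.
systemctl status serial-getty@ttyS0.service
```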
For my part, I use setuid/sudo for auditing. At this point, I don't really run multi-user/multi-service boxes. Almost everything I have that's multi-tenant is k8s, and you can just use the kubectl endpoint instead of ssh. But if you're allowed to log in, you're allowed to setuid to root. So for a k8s box, that means the platform infra team, and access to the services on top goes through the k8s permissions provider.
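Roughly what that looks like in practice (namespace and deployment names here are made up): the tenant never gets a shell on the node, only whatever the permissions provider granted them through the API server.

```
kubectl auth can-i create pods/exec -n team-a      # did the permissions provider grant exec?
kubectl -n team-a exec -it deploy/web -- /bin/sh   # lands inside the container, not on the host
kubectl -n team-a logs deploy/web --since=1h       # logs without any box login at all
```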
For the platform infra teams, if you just need something like metrics and logs, that's already off box. If you need to trigger some job or workflow, you can use the pipeline.
But when someone does log in and do root stuff, I want to have an audit log.
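That trail mostly comes for free with sudo: every invocation is logged with the calling user, TTY, and command (illustrative output below; on auditd hosts the same events show up as USER_CMD records).

```
journalctl _COMM=sudo --since today
# (illustrative entry)
# Oct 12 10:03:11 host sudo[4242]: alice : TTY=pts/0 ; PWD=/home/alice ;
#     USER=root ; COMMAND=/usr/bin/systemctl restart nginx

ausearch -m USER_CMD -ts today      # equivalent view from auditd, if enabled
```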
I actually can't think of a single box I own where someone with a login doesn't also have root for everything.
Obviously, I understand services using setuid, but in that case you generally have systemd calling setuid to drop privileges rather than the other way around.
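Something like the following illustrates that direction (a made-up unit, not from any particular package): systemd starts the process as root, then drops to the configured user before the payload ever runs.

```
cat <<'EOF' | sudo tee /etc/systemd/system/myapp.service
[Unit]
Description=Example of systemd dropping privileges for a service

[Service]
ExecStart=/usr/local/bin/myapp
# systemd (running as root) setgid/setuid()s to this user before exec'ing the payload
User=myapp
# and the payload can never regain privileges through setuid binaries
NoNewPrivileges=yes
EOF
```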
If you have access to the bootloader, you can still set `systemd.unit=emergency.target`, or `init=/bin/bash`, or `rd.break=pre-pivot`, or boot into a live-CD environment. All of the normal emergency options still work.
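For reference, the usual ways in (assuming an unlocked GRUB and a guessable root device; adjust both to your setup):

```
# From the GRUB menu: press 'e' on the boot entry, append one of these to the
# line starting with "linux", then boot with Ctrl-x:
#   systemd.unit=emergency.target   # minimal root shell, almost nothing mounted
#   init=/bin/bash                  # skip the init system entirely
#   rd.break=pre-pivot              # dracut: drop to a shell before switching root
#
# Or, from a live-CD, mount the installed system and chroot into it:
mount /dev/sda2 /mnt                            # example root device; adjust
for d in dev proc sys; do mount --bind /$d /mnt/$d; done
chroot /mnt /bin/bash
```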
For less fatal emergencies, I don't see anything that would tie this instance of sshd to the network.
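If the concern is network exposure itself, one way to keep such an sshd entirely off the network is to socket-activate it on an AF_UNIX socket (a sketch assuming a systemd host; the unit names and socket path are made up, and the client goes through socat because the OpenSSH client can't dial a UNIX socket directly):

```
cat <<'EOF' | sudo tee /etc/systemd/system/sshd-local.socket
[Socket]
ListenStream=/run/sshd-local.sock
Accept=yes

[Install]
WantedBy=sockets.target
EOF

cat <<'EOF' | sudo tee /etc/systemd/system/sshd-local@.service
[Service]
# inetd-style: the accepted connection becomes stdin/stdout of sshd -i
ExecStart=/usr/sbin/sshd -i
StandardInput=socket
EOF

sudo systemctl enable --now sshd-local.socket

# Client side: tunnel SSH over the local socket instead of TCP.
ssh -o ProxyCommand='socat STDIO UNIX-CONNECT:/run/sshd-local.sock' root@localhost
```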