Defaulting to degraded mounts is a bad idea. Mounting a btrfs device array degraded is extremely risky, and the device not booting means you'll actually notice and take action.
md devices do "degraded" by default and it seems fine. Indeed, I believe this is the default behaviour of every other multi-device system, though of course I can't verify that claim. I dislike all features that by default prevent my system from booting up.
The annoying part of this is that if you do reboot the system, it will never come back up and respond to a ping, meaning you need to visit the host yourself. In practice it might even have other drives you could configure remotely to replace the broken device. I use md's hot spare support routinely: an extra drive is kept available, should any of the drives in the independent RAIDs fail.
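Roughly what I mean (device and array names here are just placeholders for your own):

    # create the mirror with a hot spare from the start
    mdadm --create /dev/md0 --level=1 --raid-devices=2 --spare-devices=1 \
          /dev/sda1 /dev/sdb1 /dev/sdc1

    # or add a spare to an existing array later; a device beyond the
    # active count is treated as a hot spare
    mdadm /dev/md0 --add /dev/sdd1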
Granted, md also has decent monitoring options with mdadm or just cat /proc/mdstat.
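Something like this, off the top of my head (array name and config path vary by distro):

    # quick manual checks
    cat /proc/mdstat
    mdadm --detail /dev/md0

    # run the monitor as a daemon; with a MAILADDR line in mdadm.conf
    # (/etc/mdadm/mdadm.conf or /etc/mdadm.conf, depending on distro)
    # it mails you when a device fails or drops out
    mdadm --monitor --scan --daemonise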
As someone else pointed out: how is that different from losing a disk while running? Do you want the file system to stop working or become read-only if one disk is lost while running too? I think the behaviour should be the same on boot as while running.
The usual reason I've seen RAID 1 used for the OS drive is -so- it still boots if it loses one.
Not doing so is especially upsetting when you only discover you forgot to flip the setting after a drive fails and the machine in question is several hours' drive away (standalone remote servers like that tend not to have console access).
I think 'refusing to boot' is probably the right default for a workstation, but on the whole I'd prefer that to be a default set by the workstation distro installer rather than by the filesystem.
That sounds like the right default then. If you're doing a home install, you get that extra little bit of protection. If you're doing a professional remote server deployment, you should be a responsible adult who understands the choices - and run with scrubbing, SMART and monitoring for failures.
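For example, something along these lines from a cron job or systemd timer (mount point and device are placeholders):

    # periodic btrfs scrub to catch silent corruption
    btrfs scrub start /mnt/data
    btrfs scrub status /mnt/data

    # SMART health check and a periodic long self-test via smartmontools
    smartctl -H /dev/sda
    smartctl -t long /dev/sda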
"Will my RAID configuration designed so my system can still boot even if it loses a drive not actually let it still boot if it loses a drive?" is not a question that I think is fair to expect sysadmins to realise they need to ask.
Complete Principle of Least Surprise violation, given that every other RAID1 setup I'm aware of will still boot fine.
Also, said monitoring should then notify you of an unexpected reboot and/or a disk that has dropped out, which you can then remediate in a planned fashion.
If this was a new concept then defaulting all the safety knobs to max would seem pretty reasonable to me, but it's an established concept with established uses and expectations - a server distro's installer should not be defaulting to 'cause an unnecessary outage and require unplanned physical maintenance to remediate it.'
> and the device not booting means you'll actually notice and take action.
How often do people reboot their systems? IMO, if it's running (without going into "read-only filesystem" mode) with X disks, it should boot with the same X disks. Otherwise it might run fine for a long time and then arbitrarily fail to come back after a power failure, when it would otherwise have kept running fine - an unnecessary trap.
This option is only needed if you can mount the filesystem, but only degraded. If you're in that situation, you can remount easily. If not, that's not the solution you need.
Add "degraded" to default mount options. Solved.