That's the behaviour of quorum systems with majority voting. It guarantees no inconsistent writes in the event of a network partition, where each half of the replicas is working fine and can talk to the nodes on its own side, but gets no response from the other half.
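A minimal sketch of the arithmetic (the cluster sizes are just illustrative): with a majority quorum of n/2 + 1, a 3/3 split of six replicas leaves neither side able to commit, which is exactly the "consistent but unavailable" behaviour described above.

```go
package main

import "fmt"

// majority returns the minimum number of acks needed for a write
// to commit in a simple majority-quorum system of n replicas.
func majority(n int) int {
	return n/2 + 1
}

func main() {
	for _, n := range []int{3, 5, 6} {
		q := majority(n)
		fmt.Printf("n=%d: need %d acks; a partition of %d nodes cannot commit\n",
			n, q, n-q)
	}
	// With n=6 split 3/3, neither half reaches the quorum of 4,
	// so neither half can accept writes during the partition.
}
```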
But if you can reliably confirm that all but one node has "failed", for a suitably robust definition of failed, that's a different scenario. It means that even though you can't communicate with a failed node in the normal way, you can get confirmation that the node cannot send normal responses to any other nodes or clients, and that something (either controlling the node from outside, or software on the node itself) guarantees to keep preventing those responses until the node goes through a recovery and reintegration process.
Some ways this is done are remote-controlled power, remote-controlled reboot, or reconfiguring the network switches to cut the node off, all to ensure it can't come back and carry on responding as if nothing had happened but a temporary delay. There's some subtlety to doing this robustly: consider a response packet that got onto the network before the cut-off event but is delayed a long time inside the network by a queue or fault.
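One common way to cover that delayed-packet case is an epoch (or generation) number: after the cut-off is confirmed, the survivors bump the epoch and reject anything stamped with an older one. The sketch below is illustrative only; the types and method names are made up, not any particular system's API.

```go
package main

import "fmt"

// Message is any request or response exchanged between replicas,
// stamped with the epoch it was sent under.
type Message struct {
	Epoch   uint64
	Payload string
}

// Replica tracks the epoch it currently considers valid.
type Replica struct {
	currentEpoch uint64
}

// Fence is called only after the out-of-band mechanism (power-off,
// switch reconfiguration, ...) has confirmed the node can send no
// new traffic. Bumping the epoch invalidates anything still in flight.
func (r *Replica) Fence() {
	r.currentEpoch++
}

// Accept drops anything stamped with a pre-fencing epoch.
func (r *Replica) Accept(m Message) bool {
	return m.Epoch >= r.currentEpoch
}

func main() {
	r := &Replica{currentEpoch: 7}
	stale := Message{Epoch: 7, Payload: "response sent before the cut-off"}
	r.Fence()
	fmt.Println("stale message accepted?", r.Accept(stale)) // false
	fresh := Message{Epoch: r.currentEpoch, Payload: "post-recovery message"}
	fmt.Println("fresh message accepted?", r.Accept(fresh)) // true
}
```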
After reliable "failure" confirmation, you can shrink the quorum size dynamically in response, even down to a single node, and then resume forward progress.
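A sketch of what that shrinking looks like, again with made-up names and under the assumption that ConfirmFailed is only ever called after fencing has succeeded: removing a fenced node from the voting membership recomputes the majority over the smaller set, so the survivors keep committing, eventually with a single voter.

```go
package main

import "fmt"

// Cluster tracks which members have been confirmed fenced.
type Cluster struct {
	members map[string]bool // true = confirmed fenced
}

// voters counts members still allowed to vote.
func (c *Cluster) voters() int {
	n := 0
	for _, fenced := range c.members {
		if !fenced {
			n++
		}
	}
	return n
}

// quorum is the majority of the current voting membership.
func (c *Cluster) quorum() int { return c.voters()/2 + 1 }

// ConfirmFailed removes a node from the voting set; it must only be
// called after the fencing step has been reliably confirmed.
func (c *Cluster) ConfirmFailed(node string) {
	c.members[node] = true
}

func main() {
	c := &Cluster{members: map[string]bool{"a": false, "b": false, "c": false}}
	fmt.Println("voters:", c.voters(), "quorum:", c.quorum()) // 3, 2
	c.ConfirmFailed("b")
	c.ConfirmFailed("c")
	fmt.Println("voters:", c.voters(), "quorum:", c.quorum()) // 1, 1
}
```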