It's notable that your second link has a screenshot of 24(!) NVMe SSDs totalling 9 terabytes, but the aggregate performance is 2.4M IOPS and 9.3 GB/s for reads. In other words, just ~100K IOPS and ~400 MB/s per individual SSD, which is very low these days.
For comparison, a single 1 TB consumer SSD can deliver comparable numbers (lower IOPS but higher throughput).
If I plugged 24 consumer SSDs into a box, I would expect over 30M IOPS and near the memory bus limit for throughput (>50 GB/s).
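For concreteness, the back-of-the-envelope math (a quick Python sketch; the consumer-drive figures are illustrative assumptions, not measurements):

    drives = 24
    agg_iops = 2_400_000   # aggregate read IOPS from the screenshot
    agg_gb_s = 9.3         # aggregate read throughput, GB/s

    # Per-device share of the aggregate.
    print(f"{agg_iops / drives:,.0f} IOPS per SSD")        # 100,000
    print(f"{agg_gb_s * 1000 / drives:.0f} MB/s per SSD")  # ~388

    # Naive linear scaling of an assumed high-end consumer SSD
    # (~1.3M random-read IOPS, ~7 GB/s sequential -- illustrative numbers).
    print(f"{drives * 1_300_000 / 1e6:.1f}M IOPS")  # ~31M
    print(f"{drives * 7.0:.0f} GB/s")  # 168 -- in practice memory/PCIe limits
                                       # bite first, hence the >50 GB/s estimate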
There's a quadrant of the market that is poorly served by the Cloud model of elastic compute: local SSDs whose contents persist across shutdown and restart.
Elastic compute means you want to be able to treat compute hardware as fungible. Persistent local storage makes that a lot harder because the Cloud provider wants to hand out that compute to someone else after shutdown, so the local storage needs to be wiped.
So you either get ephemeral local SSDs (and have to handle rebuild on restart yourself) or network-attached SSDs with much higher reliability and persistence, but a fraction of the performance.
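A minimal sketch of the "rebuild on restart yourself" pattern (Python; the mount point and bucket name are hypothetical placeholders):

    import os
    import subprocess

    LOCAL_SSD = "/mnt/disks/local-ssd"             # hypothetical mount point
    MARKER = os.path.join(LOCAL_SSD, ".populated")

    def ensure_local_data():
        # If the marker file survived, so did the SSD contents.
        if os.path.exists(MARKER):
            return
        # Otherwise the instance came back with a blank device:
        # restore from durable network storage (bucket name is made up).
        subprocess.run(
            ["gsutil", "-m", "rsync", "-r",
             "gs://my-durable-bucket/dataset", LOCAL_SSD],
            check=True,
        )
        open(MARKER, "w").close()

    if __name__ == "__main__":
        ensure_local_data()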
Active instances can be migrated, of course, with sufficient cleverness in the I/O stack.
- VMs with SSDs can (in general -- there are exceptions for things like GPUs and exceptionally large instances) live migrate with contents preserved.
- GCE supports a timeboxed "restart in place" feature where the VM stays in limbo ("REPAIRING") for some amount of time, waiting for the host to return to service: https://cloud.google.com/compute/docs/instances/host-mainten.... This mostly applies to transient failures: power loss beyond battery/generator sustaining thresholds, software crashes, etc.
- There is a related feature, also controlled by the `--discard-local-ssd=` flag, which allows preservation of local SSD data on a customer-initiated VM stop (see the sketch after this list).
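For example, a customer-initiated stop that preserves local SSD data might look like this (a sketch invoking the gcloud CLI from Python; instance and zone names are placeholders):

    import subprocess

    # Stop the VM but keep its local SSD contents, using the
    # `--discard-local-ssd=` flag mentioned above.
    subprocess.run(
        ["gcloud", "compute", "instances", "stop", "my-instance",
         "--zone=us-central1-a",
         "--discard-local-ssd=false"],
        check=True,
    )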
I should've aimed for more clarity in my original comment -- the first link is to locally attached storage. The second is network-attached storage (what the GP was likely referring to, but not what is described in the article).
There are also multiple Persistent Disk (https://cloud.google.com/persistent-disk) offerings that are backed by SSDs over the network.
Persistent Disk is not backed by single devices (even for a single NVMe attachment), but by multiple redundant copies spread across power and network failure domains. Those volumes will survive the failure of the VM to which they are attached, as well as the failure of any individual volume or host.
(I'm an engineer on GCE. I work directly on the physical hardware that backs our virtualization platform.)