This would depend on how worn they are. Here's an article describing a test a YouTuber did[1] that I watched some time ago. The worn drives did not fare that well, while the fresh ones did OK. Those were TLC drives, though; for QLC I expect the results would be much worse overall.
I remember that post. Typical Tom's quality (or lack thereof).
The only insight you can glean from that is that bad flash is bad, and worn bad flash is even worse, and even that's a stretch given the lack of sample size or a control group.
The reality is that it's non-trivial to determine data retention/resilience in a powered-off state, at least as it pertains to arriving at a useful and reasonably accurate generalization of "X characteristics/features result in poor data retention/endurance when powered off in Y types of devices," and being able to provide the receipts to back that up. There are far more variables than most people realize going on under the hood with flash, and with how different controllers and drives are architected (hardware) and programmed (firmware). Thermal management is a huge factor that is often overlooked or misunderstood, and it has a substantial impact on flash endurance (and performance). I could go into more specifics if you're interested (storage at scale/speed is my bread and butter), but this post is long enough.
All that said, the general mantra remains true: more bits per cell generally means the data in each cell is more fragile/sensitive, though that's usually discussed in the context of write-cycle endurance.
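To make that intuition a bit more concrete, here's a rough back-of-envelope sketch in Python. The 6 V window is an illustrative placeholder, not a number from any real datasheet: with a fixed threshold-voltage window per cell, each extra bit doubles the number of charge levels, so the spacing between adjacent levels shrinks fast, and the same amount of charge leakage while powered off eats a much larger fraction of the margin.

```python
# Illustrative only: back-of-envelope margin math for SLC/MLC/TLC/QLC.
# The 6 V window is a made-up placeholder, not a spec for any real NAND part,
# and real drives complicate this with guard bands, ECC, and read-retry.

TOTAL_WINDOW_V = 6.0  # assumed usable threshold-voltage window per cell

for name, bits in [("SLC", 1), ("MLC", 2), ("TLC", 3), ("QLC", 4)]:
    states = 2 ** bits                      # distinct charge levels per cell
    margin = TOTAL_WINDOW_V / (states - 1)  # spacing between adjacent levels
    print(f"{name}: {states:2d} levels, ~{margin:.2f} V between adjacent levels")
```

Under those assumed numbers, QLC ends up with roughly a fifteenth of the level spacing SLC has, which is the intuition behind the mantra; real parts partially compensate with stronger ECC, read-retry, and calibration.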
This is the first time I've heard such negativity about Tom's Hardware, but the only time I actually looked at one of their tests in detail was their series testing burn-in on consumer OLED TVs and displays. The other reviews I glanced at in that context looked pretty solid at a casual glance.
Can you elaborate on the reason for your critique, considering they're pretty much just testing from the perspective of the consumer? I thought their explicit goal was not to provide highly technical analysis for niche preferences, but instead to look at it for the John Doe who's thinking about buying X and what it would mean for his use cases. From my mental model of that perspective, their reporting was pretty spot on and not shoddy, but I'm not an expert on the topic.
As someone who has read Tom's since it was run by Thomas, I find the quality of the articles a lot lower than it was almost 30 years ago. I don't remember when I stopped checking it daily, but I guess it was over 15 years ago.
Maybe the quality looks good to you because you don't know what it was like 25 years ago to compare against. It may be a problem of the wrong baseline.
The article I linked to is basically just a retelling of a video by some YouTuber. I decided to link to it because I prefer linking to text sources rather than videos.
The video isn't perfect, but I thought it had some interesting data points regardless.
> they're pretty much just testing from the perspective of the consumer
Yes, that's their schtick: do just enough so that the average non-tech-literate reader doesn't know any better. And if you're just a casual consumer/reader, it's fine. Not great, not even necessarily accurate, but most of their readership don't know enough to know any better (and that's on purpose). I don't believe they're intentionally misleading people. Rather, simply put, it's evident the number of fucks they give about accuracy, veracity, depth, and journalism in general is decidedly lower than their competition's.
If you're trying to gain actual technical insight with any real depth or merit, Tom's is absolutely not the place to go. Compare it with ServeTheHome (servers, networking, storage, and other homelab and enterprise stuff), GN (gaming focused), or RTings.com (displays and peripherals), to name a few, and you'll see the night-and-day difference made by people who know what they're talking about, strive to be accurate, and frame things in the right context.
Again, it depends on what the user is looking for, but Tom's is catering to the casual crowd, aka people who don't know any better and aren't gonna look any deeper. Which is totally fine, but it's absolutely not a source for nuance, insight, depth, rigor, or anything like that.
The article in question[0] is actually a great example of this. They found a YouTube video of someone buying white-label drives, with no control group to compare against, nor any further analysis to confirm that the 4 drives in question actually all had the same design, firmware, controller, and/or NAND flash underneath (absolutely not a given with bargain-bin white-label flash, which these were, and it can make a big difference). I'm not trying to hate on the YouTuber; there's nothing wrong with their content. My issue is with how Tom's presents it as an investigation into unpowered SSD endurance while, in the same article, admitting: "We also want to say this is a very small test sample, highlighted out of our interest in the topic rather than for its hard empirical data." This is also why I say I don't believe they're trying to be disingenuous; hell, I give them credit for admitting that.

But it is not a quality or reliable source that informs us of anything at all about the nature of flash at large, or even about the specific flash in question, because we don't know what the specific flash in question is. Again, just because drives are the same brand, model, and capacity does not mean they're all the same, even for name-brand devices. Crucial's MX500 SSDs, for example, have been around for nearly a decade now, and the drives you buy today are VERY MUCH different from the ones you could buy at the same capacities in 2017.
Don't even get me started on their comments/forums.
They primarily focus on storage (SSDs and HDDs) but also evaluate storage controllers, storage-focused servers/NAS/DAS/SAN, and other storage-adjacent stuff. For an example of the various factors that differentiate different kinds of SSDs, I'd recommend their article reviewing Micron's 7500 line of SSDs[0]. It's from 2023 but still relevant, and you don't have to read the whole thing. Heck, just scroll through the graphs and it's easy to see this shit is far from simple, even when you account for using the same storage controllers, systems, testing methodologies, and whatnot.
If you want to know about the NAND (or NOR) flash itself, and what the differences/use cases are at a very technical level, there's material like Micron's "NAND Flash 101: NAND vs. NOR Comparison"[1].
If that's too heavy for you (it is a whitepaper/technical paper, after all) and you want a lighter read on some of the major differences between enterprise flash and consumer flash, SuperSSD has a good article on that[2], as well as many other excellent articles.
Wanna see some cool use cases for SSDs that aren't so much about the low-level technicals of the storage device itself, but rather about how they can be assembled into arrays and networked storage fabrics in new and interesting ways? ServeTheHome has some interesting articles, such as "ZFS without a Server Using the NVIDIA BlueField-2 DPU"[3].
Apologies for responding 2 days late. I would be happy to answer any specific questions, or recommend other resources to learn more.
Personally, my biggest gripe is that I've not really seen anyone do a proper analysis of the thermal dynamics of storage devices and the impact they have (especially on lifespans). We know this absolutely has an effect just from deploying SSDs at scale and seeing in practice how otherwise identical drives within the same arrays and the same systems have differing lifespans, with the number one differentiating factor being peak temperatures and temperature deltas (a high delta T can be just as bad as or worse than a simply high temperature, although that comes with a big "it depends"). I haven't seen a proper testing methodology really take a crack at it, because that's a time-consuming, expensive, and very difficult task, far harder to control for relevant variables than with GPUs, IMO, due in part to the many different kinds of SSDs: different NAND flash chips, different heatsinks/form factors, wide variety in where they're located within systems, etc.

Take note that many SSDs, save for those explicitly built for "extreme/rugged environments," have thermal limits that are much lower than those of other components in a typical server. Often the operating-range spec is something like -10C to 50C for SSDs (give or take 10C on either end depending on the exact device), whereas GPUs and CPUs can operate at over 80C, which, while not a preferred temperature, isn't out of spec, especially under load. Then consider that the physical packaging of SSDs, as well as where they are located in a system, often means they don't get adequate cooling. M.2 form-factor SSDs are especially prone to issues in this regard, even in many enterprise servers, both because of where they sit relative to airflow and because of nearby hot components (they often have some NIC/GPU/DPU/FPGA sitting right above them, or a nearby onboard chip(set) dumping heat into the board, which raises the thermal floor/ambient temps). There's a reason the new EDSFF form factor has so many different specs to account for larger heatsinks and cooling on SSDs. [4][5][6]
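For what it's worth, even without a rigorous methodology you can at least start collecting the raw data yourself. Here's a minimal sketch, assuming smartmontools 7.0+ is installed and the drive reports a temperature field in smartctl's JSON output; the device path and JSON layout vary by vendor, interface (SATA vs. NVMe), and smartctl version, so treat the names below as assumptions:

```python
#!/usr/bin/env python3
"""Rough sketch: periodically poll SSD temperature via smartctl and track
the peak temperature and the largest swing (delta T) seen so far."""

import json
import subprocess
import time

DEVICE = "/dev/nvme0"   # hypothetical device path; adjust for your system
INTERVAL_S = 60         # sampling interval in seconds


def read_temp_c(device: str) -> int | None:
    # `smartctl -A -j` emits SMART/health attributes as JSON (smartctl 7.0+).
    out = subprocess.run(["smartctl", "-A", "-j", device],
                         capture_output=True, text=True).stdout
    data = json.loads(out)
    # Many drives expose a top-level "temperature" object in the JSON output,
    # but this is not guaranteed across all vendors/firmware.
    return data.get("temperature", {}).get("current")


min_t, max_t = None, None
while True:
    t = read_temp_c(DEVICE)
    if t is not None:
        min_t = t if min_t is None else min(min_t, t)
        max_t = t if max_t is None else max(max_t, t)
        print(f"now={t}C peak={max_t}C deltaT={max_t - min_t}C")
    time.sleep(INTERVAL_S)
```

Correlating that kind of log with failure data across a fleet is the hard part; the point is just that peak temperature and delta T are cheap to capture per drive.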
I've barely even touched on things like networked arrays, the explosion in various accelerators and controllers for storage, NVMe-oF/RoCE/storage fabrics, storage-class memory, HA storage, transition flash, DRAM and controllers within SSDs, wear-leveling and error correction, PLP, ONFI, SLC/MLC/TLC/QLC, and the really fun stuff like PCIe root topologies, NVMe zoned namespaces, computational storage, CXL, GPUDirect Storage/BaM, cache coherency, etc.
[1]: https://www.tomshardware.com/pc-components/storage/unpowered...