Motiv http://www.mymotiv.com/ | iOS, firmware | San Francisco, CA, USA | Full-time | On-site
Motiv is a stealth-mode hardware startup pioneering the next generation of wearable technology. Join a team that's inspired not just by the work we do, but by the people we do it with. We're looking for passionate people who build amazing products and experiences to join our team.
While the conclusion is true and has been for some time, the supporting data is insufficient.
For example, the author doesn't even touch on key reliability parameters such as data retention, which is primarily impacted by cycling.
(By data retention, I mean the ability of a drive to retain written data over a period of time, such as 6 months, after it was written. This is usually accelerated in testing by a bake cycle.)
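To make the bake-cycle idea concrete, here's a back-of-envelope Arrhenius acceleration sketch in Python. Every number here (the activation energy, both temperatures) is an assumption chosen for illustration, not a value from any particular drive's qualification spec:

```python
import math

# Arrhenius acceleration-factor sketch for a retention bake.
# All values below are assumed for illustration.
Ea = 1.1                # activation energy in eV (assumed)
k = 8.617e-5            # Boltzmann constant, eV/K
t_use = 40 + 273.15     # assumed field temperature, in kelvin
t_bake = 85 + 273.15    # assumed bake-oven temperature, in kelvin

# Each hour at the bake temperature "ages" the cells as much as
# `af` hours at the field temperature.
af = math.exp((Ea / k) * (1 / t_use - 1 / t_bake))

# Months of field retention represented by one day in the oven.
months_equiv_per_bake_day = af / 30.44
```

Under these assumed values, one day of bake stands in for several months of field retention, which is why a multi-month retention claim can be checked in days rather than by waiting out the calendar.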
That said, it's a fun read and I'm glad there's more exposure on this topic.
While FAEs (field application engineers) are helpful and readily available, they are constrained by their lack of product knowledge and the unidirectional flow of reference code and designs.
On the design side, I hope the energy and continuous improvement of the open source software community will spill over to hardware. I think that would greatly improve the accessibility of the hardware world and facilitate new product development and innovation.
Proprietary hardware reference designs and tools (whether for schematic capture, logic synthesis, etc.), while tried and tested, often lack the polish and open interfaces, and hence the extensibility, that good open source systems have.
Don't get me wrong, I'm certainly glad we have Verilator and gEDA.
On the manufacturing front, I feel your pain (mostly on behalf of my colleagues).
Let's be cautiously optimistic. Every few years a new memory tech comes along with tons of articles about how it's going to displace the current one in only 2 years. (RRAM has been one of "the new ones" for years.)
I'm not saying it won't happen, but on the journey from idea to millions of units, a test chip is just the beginning, and in the meantime, NAND is moving.
Look at how slowly NAND has replaced rotating magnetic storage: it's been around since the 1980s, and it took decades of iteration to reach a point where it's compelling for non-niche use cases.
The whole premise of taking NAND from 25nm to 19nm (for instance) is to fit more floating gates in the same area. You can take that as a smaller die, or as more bits on a slightly larger die than the previous generation.
Die size is indeed a major factor in cost. For a given technology (litho node + process, i.e. the number and type of steps), the cost to process a wafer is fairly constant regardless of die size.
If you shrink the die, you fit more dies on a wafer. Additionally, yield goes up (assuming an independent manufacturing defect density), and especially for large dies, tessellation losses around the wafer edge have a major impact.
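Both effects are easy to see with back-of-envelope numbers. The sketch below uses a common gross-die-per-wafer approximation and a simple Poisson yield model; the wafer size, die areas, and defect density are all assumed for illustration:

```python
import math

def gross_die(wafer_diameter_mm, die_area_mm2):
    # Common approximation: usable wafer area divided by die area,
    # minus a correction term for tessellation loss at the edge.
    d, s = wafer_diameter_mm, die_area_mm2
    return int(math.pi * (d / 2) ** 2 / s - math.pi * d / math.sqrt(2 * s))

def poisson_yield(die_area_mm2, defects_per_cm2):
    # Fraction of dies with zero defects, assuming independent defects.
    return math.exp(-defects_per_cm2 * die_area_mm2 / 100.0)

wafer_mm = 300.0     # assumed wafer diameter
d0 = 0.2             # assumed defect density, defects/cm^2

# A 100 mm^2 die versus a shrink to 50 mm^2 on the same wafer:
big_gross = gross_die(wafer_mm, 100.0)
big_good = big_gross * poisson_yield(100.0, d0)

small_gross = gross_die(wafer_mm, 50.0)
small_good = small_gross * poisson_yield(50.0, d0)
```

With these assumed numbers, halving the die area more than doubles the good dies per wafer: you get more gross dies, proportionally less edge loss, and a higher per-die yield, all at roughly the same wafer-processing cost.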
Indirectly, even testing is related to die size: there is a limit to tester parallelism, and more gates mean more time and more combinatorial patterns to test (e.g., for stuck-at testing).
There are of course non-linear costs in packaging and package-level testing and elsewhere.
Seems like he's asking for modularization so we can make special-case adjustments to libraries without completely monkey patching or rewriting them. Seems like a reasonable ask of a mature library.
On the streaming versus resident working set argument... most of what most programmers deal with doesn't have to scale to huge streaming datasets, so it doesn't get the attention.
Indirection in a log-structured form is the best way to increase write IOPS and optimize write amplification. More sophisticated SSDs actually have multiple log heads, for data with different life-cycle properties.
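The indirection idea fits in a few lines. This is a deliberately toy sketch (a hypothetical `TinyFTL`, not any real drive's firmware): overwrites append to the log head, and a mapping table redirects each logical address to its latest physical copy, leaving stale copies behind for garbage collection.

```python
# Toy log-structured flash translation layer (illustrative only).
class TinyFTL:
    def __init__(self):
        self.log = []   # physical log: list of (lba, data) records
        self.map = {}   # logical block address -> index into the log

    def write(self, lba, data):
        # Always append at the log head; any old copy becomes garbage.
        self.map[lba] = len(self.log)
        self.log.append((lba, data))

    def read(self, lba):
        # The map provides the indirection back to the latest copy.
        return self.log[self.map[lba]][1]

ftl = TinyFTL()
ftl.write(7, "a")
ftl.write(7, "b")            # overwrite appends; no erase-in-place
assert ftl.read(7) == "b"
assert len(ftl.log) == 2     # the stale copy awaits garbage collection
```

A real drive adds erase-block granularity, garbage collection, and (as noted above) multiple log heads so short-lived and long-lived data don't get mixed in the same blocks.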
You get a write amplification of 1 until the drive is filled for the first time. After that, it's a function of:
1) how full the drive is (from the drive's point of view—this is why TRIM was invented)
2) the over provisioning factor
3) usage patterns, such as how much static data there is
4) how good the SSD's algorithms are
5) other (should be) minor factors, such as wear leveling
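Factor (2) lends itself to a back-of-envelope model. Assume (simplistically) that garbage collection reclaims blocks whose pages are still a fraction `u` valid: freeing `1 - u` pages costs `u` relocation writes, so write amplification is roughly `1 / (1 - u)`, and in the worst case (uniform random writes, a full drive) `u` approaches the ratio of logical to physical capacity. The over-provisioning values below are assumed, illustrative figures:

```python
# Simplified worst-case write-amplification model (assumptions above).
def worst_case_wa(overprovision):
    u = 1.0 / (1.0 + overprovision)  # logical / physical capacity
    return 1.0 / (1.0 - u)           # relocations per page freed

wa_consumer = worst_case_wa(0.07)    # assumed ~7% OP, consumer-class
wa_enterprise = worst_case_wa(0.28)  # assumed ~28% OP, enterprise-class
```

Even this crude model shows why enterprise drives carry so much spare area: a few times more over-provisioning cuts worst-case write amplification by several fold, before the other factors in the list (static data, smarter algorithms) improve things further.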
The feeling you get from a good SSD is largely due to reduced IO latency (both read and write).
The ability to absorb random writes (to a degree) is huge too. It's why the 2008-vintage Intel X25-M caught the industry by surprise. Other SSDs at the time had extremely poor random write latencies.
Bandwidth is secondary, unless you are talking about an enterprise/database deployment (where queue depth is deep and you want great bandwidth and latency).
Pedants will note that bandwidth and latency are related (see Little's Law).
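For the pedants, the relationship is one line of arithmetic. Little's Law says the mean number of in-flight requests `L` equals arrival rate `λ` times mean latency `W`; rearranged for storage, throughput is queue depth divided by latency. The depth and latency below are assumed example values:

```python
# Little's Law: L = lambda * W, so lambda = L / W.
queue_depth = 32      # assumed outstanding IOs
latency_s = 100e-6    # assumed 100 microsecond mean completion latency

iops = queue_depth / latency_s
# -> 320,000 IOPS at this assumed depth and latency
```

So at a fixed latency, bandwidth only climbs with deeper queues, which is why the deep-queue-depth enterprise case above is where bandwidth and latency both matter at once.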
Build the future of wearables at Motiv! We're a small, collaborative, venture-backed engineering team, shipping our 1.0 product in summer 2017.
Lead iOS Engineer - https://mymotiv.com/careers/lead-ios-engineer
Lead Android Engineer - https://mymotiv.com/careers/lead-android-engineer
Firmware Engineer - https://mymotiv.com/careers/firmware-engineer-algorithm-inte...
Full Stack Engineer - https://mymotiv.com/careers/full-stack-web-engineer
Or email myfuture-engineering [at] mymotiv.com