
I have a weird wish that someone would test all these SD cards for their latency while crossing an allocation unit boundary in SPI mode. That is quite different from average latency.

One work project of mine involved a bunch of sensors, an 8-bit microcontroller without much memory, and an SD card. It turns out that cost, brand name, and speed class have nothing to do with AU-boundary latency. Trying to write 255 bytes every 2 to 10 milliseconds at a few MHz on a chip with 2.5 KB of SRAM, the limiting factor is the SD controller needing up to a hundred milliseconds every 10 seconds or so (depending on AU size), during which incoming real-time data cannot be buffered anywhere: not in the SD card, the microcontroller, or the sensor FIFO. There's simply too much data waiting to store!

-----

A few notes for anyone who cares:

Single-level cell (aka industrial) SD cards didn't cross AU boundaries any faster.

One SD card, out of the 10 or 12 I tried, would not pull down its MISO line unless an extra clock pulse was manually bit-banged, which is a really stupid behavior to debug after already having a few working cards. Tracking down this bug took a lot of garbled data, scope measurements, and re-readings of the SD specification to figure out whether my code was wrong or the card manufacturer was non-compliant.

The specification allows a maximum write latency of something like 200 ms, and that's the sort of detail you miss until you've spent the time and money designing and ordering a PCB, only to realize you should have just added a 64 Mbit flash IC and dealt with the inconvenience of UART data transfer.



I use a 1TB SanDisk (Raspberry 2) and a 4GB Panasonic SLC (Raspberry 4) in my distributed database cluster: http://github.com/tinspin/rupy

In my experience, industrial versions of mainstream cards are less reliable!

I agree: you have to measure latency and longevity on these edge-case (maximum longevity or maximum capacity) SD cards, since bandwidth is limited by SPI anyway (the Raspberry 4 is not really that much faster than the 2 here).

USB3 is not stable enough for 99.9999% uptime.

If you use the cards above, mostly write once to the 1TB, and replicate all that data in realtime onto 3 cards in each of 2 geographical locations (6 copies minimum in total), you have redundancy even if one node fails at the local or global level, and you should be fine.

After having an 8GB Odroid MMC fail on me twice in 5 years, I invested in Intel Atom servers with X25-E drives (65nm SLC, 100,000 writes per cell) as a backup plan in case the SD card solution fails or has problems.

Power goes from 2W to 25W (50W with 10x SATA SSDs) with that change, so you cannot afford more than 1-2 Atoms per location; otherwise the lead-acid battery backup becomes too expensive and too large.

Remember: if flash memory goes unpowered for a long time, the data gets corrupted or lost. You need your disks to be powerable from solar/wind; in practice this means a Raspberry 2 when the mains go down.

As electricity prices rise, supply quality will degrade. Power outages will increase exponentially until the power grid fails for lack of replacement parts and transport.

Peak energy is not a speculation, it's the only sure thing.




