
>10. Even AES256 has a block size of 128 bits, which means you can start expecting collisions after 2^64 encryptions. This sounds like a lot, and the exact details of how this can cause a mode of operation to break down depend on the particular mode, but that's not an unachievable amount of data; the internet does that much every few months (assuming 1 block per encryption)...

Correct me if I am wrong, but doesn't the fact that the internet isn't encrypting everything with the same key and nonce mean that you can't just add up all the data transferred for this? Or does this mean that in the future we might be encrypting individual sessions/files of around 256 EB (1 EB = 10^18 bytes) under a single key?
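For what it's worth, here's the back-of-the-envelope arithmetic in Python, assuming a 128-bit block and one block per encryption (nothing more than multiplication, just to pin down the order of magnitude):

    # How much data is 2^64 AES blocks under a single key?
    BLOCK_BYTES = 16                     # AES block size: 128 bits
    blocks = 2 ** 64                     # birthday bound for a 128-bit block
    total_bytes = blocks * BLOCK_BYTES   # 2^68 bytes
    print(total_bytes / 2 ** 60)         # 256.0 EiB (~295 EB, i.e. ~3 x 10^20 bytes)

So that's roughly how much you'd have to push through one key before collisions become expected.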



Of course you can't, it's just a visual aid. The point is to say that if a cipher _completely_ breaks down at (some imaginable amount of data), then it's probably not behaving itself too well at (some much more reasonable, but still large, amount of data). AES-CTR already starts to get questionable in some respects at 2^40 encryptions with the same key (the nonce isn't what matters here; the counter block changes with every block anyway), which is only about 16 TiB at one 16-byte block per encryption. Sure, that's a lot for the average Joe, but one could easily imagine someone wanting to encrypt that much data with a single key; just go check out /r/datahoarder if you don't believe me.
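Rough numbers as a sketch (assuming one 16-byte block per encryption; the "advantage" figure is the generic n^2 / 2^129 PRP/PRF switching-lemma bound, not a concrete attack):

    # Data volume and rough distinguishing bound for n blocks under one AES-CTR key.
    def ctr_numbers(n_blocks: int) -> None:
        block_bytes = 16                                  # 128-bit AES block
        data_tib = n_blocks * block_bytes / 2 ** 40
        advantage = n_blocks * (n_blocks - 1) / 2 ** 129  # switching-lemma bound
        print(f"data = {data_tib:,.0f} TiB, advantage <= {advantage:.1e}")

    ctr_numbers(2 ** 40)  # data = 16 TiB,          advantage <= 1.8e-15 (2^-49)
    ctr_numbers(2 ** 64)  # data = 268,435,456 TiB, advantage <= 5.0e-01

The bound grows with the square of the data, which is why "a few orders of magnitude away" is closer than it sounds.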

And sure, good key rotation fixes that, but that's another foot-gun, and how should the average end user know if their application is using proper key rotation or not?
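To be clear, the rotation policy itself isn't the hard part; a minimal, purely illustrative sketch of a byte-budgeted rotation (the budget and names are made up):

    import os

    REKEY_AFTER_BYTES = 2 ** 40          # example budget: rotate after ~1 TiB per key

    class RotatingKey:
        """Hand out a fresh 256-bit key before any key exceeds its byte budget."""

        def __init__(self) -> None:
            self._new_key()

        def _new_key(self) -> None:
            self.key = os.urandom(32)    # fresh random 256-bit key
            self.bytes_used = 0

        def key_for(self, plaintext: bytes) -> bytes:
            # Rotate if this message would push the current key past the budget.
            if self.bytes_used + len(plaintext) > REKEY_AFTER_BYTES:
                self._new_key()
            self.bytes_used += len(plaintext)
            return self.key

The foot-gun is everything around it: the new keys still have to be derived, distributed, and tracked per ciphertext, and none of that is visible to the end user.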


It's giving an idea of the order of magnitude, kind of like comparing the output of an industry to the output of a country. It's also an argument that we are within a few orders of magnitude of it mattering today, so tomorrow we could be in trouble.



