
You don't need it, but if you don't have enough entropy for key generation, something is wrong.


If that's snark, you've missed my point. If urandom on your system is insecure, all sorts of other software on that system is also insecure.


Yes, so the question is what the preferred failure behaviour is when there isn't even enough entropy in the system to select a single key that is both unpredictable and unique, even given an unbounded time window. That should mean the system effectively has no source of entropy.

I think blocking and not returning until you have entropy is a reasonable failure behaviour for gpg in the key generation process.

It'd be nice to report something, and maybe hint to the user that it is waiting for just a modicum of entropy to show up, but at least it isn't presenting a key that is entirely predictable (and worse still, the SAME as in any other instance of that VM image!!!) to anyone with access to the original VM.

The bug the article is referring to is that a lot of security software blocks reading from /dev/random when in fact /dev/urandom will provide a securely unpredictable sequence of data with no statistical likelihood of another system producing the same sequence. It's particularly bad where timeliness is an important part of the protocol (which is largely a given for anything around, say, a TCP connection). That's a silly design flaw.
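To make the distinction concrete, here is a Linux-only sketch using Python's os.getrandom(), which draws from the same kernel CSPRNG that backs /dev/urandom. The GRND_RANDOM flag requests legacy /dev/random semantics (which could stall indefinitely on pre-5.6 kernels), and GRND_NONBLOCK turns that stall into an immediate error instead:

```python
import os

# os.getrandom() reads the kernel CSPRNG behind /dev/urandom: once the
# pool has been seeded at boot, this never blocks.
key = os.getrandom(32)

# GRND_RANDOM asks for the legacy blocking-pool behaviour of
# /dev/random; GRND_NONBLOCK converts a would-block stall into an
# immediate BlockingIOError rather than hanging the caller.
try:
    legacy = os.getrandom(32, os.GRND_RANDOM | os.GRND_NONBLOCK)
except BlockingIOError:
    legacy = None  # the blocking pool had no entropy credit to spend
```

Since Linux 5.6 the two pools behave the same after initial seeding, so on a modern kernel the second call rarely fails either.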


I think I see what you are saying - that gpg blocking, and failing to create a key on a VM, is actually desired behavior, and that the only real problem is that gpg doesn't time out more quickly and say something like, "System not providing sufficient entropy for key generation".

But, if that's the case, then the entire thesis behind, "Use /dev/urandom" is incorrect. We can't rely on /dev/urandom, because it might not generate sufficiently random data. /dev/random may block, but at least it won't provide insecure sequences of data.

This is kind of annoying, because I was hoping that just using /dev/urandom was sufficient, but apparently there are times when /dev/random is the correct choice, right?
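The one situation the parent describes (a freshly launched VM with a pool that has never been seeded) can at least be observed directly on Linux; a rough sketch, noting that the semantics of this counter changed in kernel 5.6 and recent kernels pin it at a fixed value once the CRNG is ready:

```python
# Linux-specific: the kernel exposes its entropy estimate in bits.
# A freshly cloned VM that has gathered no input events yet may report
# a very low value here; a long-running machine will not.
with open("/proc/sys/kernel/random/entropy_avail") as f:
    entropy_bits = int(f.read())
print(entropy_bits)
```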


/dev/urandom will generate secure random data. That's what it does.

That was the point of the blog post: if you are using /dev/random as input into a cryptographic protocol that will stretch that randomness into many more bytes, WTF do you think /dev/urandom is already doing?
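That "stretching" is just what any CSPRNG does: deterministically expand a small seed into an arbitrarily long stream. A toy counter-mode sketch for illustration only (the kernel uses a vetted construction such as ChaCha20, not this):

```python
import hashlib

def stretch(seed: bytes, nbytes: int) -> bytes:
    """Toy counter-mode generator: expand a small seed into a long
    stream by hashing seed || counter. Illustration only, not for
    production use."""
    out = b""
    counter = 0
    while len(out) < nbytes:
        out += hashlib.sha256(seed + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:nbytes]

stream = stretch(b"\x00" * 32, 1024)  # 32 bytes of seed -> 1 KiB of output
```

If the seed is unpredictable, so is the stream; if the seed is predictable (the cloned-VM case), no amount of stretching fixes it. That is exactly the trade /dev/urandom makes.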

What /dev/urandom might fail to do (and this primarily applies to your specific case of a VM just as it is first launching and setting things up) is generate unpredictable data; worse still, it might generate duplicate data, which for certain use cases would be just awful.

I would agree that you got the gist right though: /dev/urandom is usually the right choice, but when it is not, /dev/random is indeed needed. Most people misunderstand the dichotomy as "random" versus "guaranteed random", which leads to very, very unfortunate outcomes. Other people misunderstand what they are doing cryptographically and somehow think that a cryptographic algorithm using a small key to encrypt vast amounts of data need have no concerns about insecure, non-random cyclic behaviour, yet oddly cast a jaundiced eye on /dev/urandom. It basically amounts to "I think you can securely generate a random data stream from a small source of entropy... as long as that belief isn't embodied in something called urandom".

Again, if you don't know the right choice, you should pass the baton to someone who does, because even if you make the right choice, odds favour you doing something terrible that compromises security.


I think the key point you made was this: "What /dev/urandom might fail to do...is generate unpredictable data."

That was not something that I was aware of, thanks.


Argh. No.




