I didn't understand what you meant by your commentary.
The excerpt above shows them assuming upfront that they got the idea from BSD. Since a system call is not an application that can simply be ported, they had to recode it.
If the point was the name change, I'd guess that's only to keep consistency within the system, and far from a problem.
They took the sane design from BSD ("here's a single system call that gives you crypto-safe entropy") and effectively made a family of system calls that replicate all the pointless drama of /dev/random and urandom. The NIH criticism is valid.
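To make the contrast concrete, here is a minimal sketch (Python, using the real `os.urandom`/`os.getrandom` interfaces; `os.getrandom` is Linux-only) of the single-call BSD model versus the flag family that `getrandom()` reintroduces:

```python
import os

# BSD getentropy(): one call, one behavior -- always crypto-safe bytes
# once the pool is seeded. os.urandom() is the closest portable stand-in.
iv = os.urandom(16)

# Linux getrandom() brings back the /dev/random vs. /dev/urandom split
# through flags (Linux-only; Python 3.6+):
if hasattr(os, "getrandom"):
    key = os.getrandom(32)                    # urandom-like default
    nb = os.getrandom(32, os.GRND_NONBLOCK)   # fail rather than block
    # os.GRND_RANDOM additionally selects the /dev/random pool,
    # which may block indefinitely on older kernels.
```

Nothing here is *wrong*, but the caller is again forced to pick a blocking policy that `getentropy()` deliberately hides.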
It seems pointless indeed. On the other hand, I'm almost glad the overengineering only went that far. Actually I'm expecting much worse after reading Theodore's ideas on the IETF list... we'll see what kind of library interface they come up with, if they make something besides getentropy().
But if we get all applications to use the same library, we can
abstract away not only differences in operating system but also
security policies vis-a-vis DRBG/NDRBG blocking/nonblocking. So what
*I* would prefer is a library interface where the application declares
what it wants the random numbers for:
* Monte Carlo simulations
* Padding
* IV
* Session key
* long-term key
etc. The library can then decide whether or not the overall system
policy should force applications to block when generating long-term
keys, a la gpg, or whether it should get non-blocking entropy
directly from the kernel, or whether using a userspace DRBG which is
initialized from kernel-supplied entropy is the right answer.
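The proposal above can be sketched roughly as follows. This is a hypothetical interface, not anything Theodore specified: the `Purpose` names, the `get_random` function, and the policy table are all illustrative assumptions, and only the long-term-key case pays a blocking cost in this sketch.

```python
import os
from enum import Enum

class Purpose(Enum):
    """What the application wants the random numbers for."""
    MONTE_CARLO = "monte-carlo"
    PADDING = "padding"
    IV = "iv"
    SESSION_KEY = "session-key"
    LONG_TERM_KEY = "long-term-key"

# Hypothetical system policy: only long-term keys block, a la gpg;
# everything else gets non-blocking kernel entropy.
BLOCKING_PURPOSES = {Purpose.LONG_TERM_KEY}

def get_random(purpose: Purpose, nbytes: int) -> bytes:
    """Return nbytes of randomness under the policy for this purpose."""
    if purpose in BLOCKING_PURPOSES and hasattr(os, "getrandom"):
        # Potentially blocking pool (no-op distinction on kernels >= 5.6).
        return os.getrandom(nbytes, os.GRND_RANDOM)
    # Non-blocking, urandom-style entropy; a userspace DRBG seeded from
    # the kernel would slot in here instead under a different policy.
    return os.urandom(nbytes)
```

A third policy branch (the userspace DRBG) is elided; the point is only that the purpose/policy split lives in the library, not in every application.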
What is a case where a random IV needs a different entropy source than a session key?
And if we're assuming the entropy pool is being compromised (full state read by the attacker) from time to time, isn't it foolish to be generating keys on such a machine at all? Why would the new state not be compromised in the same way the previous state was? I understand the system design may want to provide a robust RNG, but going further than that seems slightly pointless.