Philip Potter

Why does GPG need so much entropy?

Posted on 14 March 2013

Today I have a case of conflicting advice from different tools.

I’m trying to generate GPG keys on virtual machines. The usual way of generating cryptographically secure keys is to use a good entropy source, such as /dev/random. However, this source gets most of its entropy from physical devices attached to the machine, such as keyboards, mice, and disks, none of which are available to a VM. Therefore, when generating GPG keys, it’s quite common to find yourself starved of entropy, with a message like:

We need to generate a lot of random bytes. It is a good idea to perform
some other action (type on the keyboard, move the mouse, utilize the
disks) during the prime generation; this gives the random number
generator a better chance to gain enough entropy.

Not enough random bytes available.  Please do some other work to give
the OS a chance to collect more entropy! (Need 280 more bytes)
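On Linux you can watch this starvation happen directly: the kernel exposes its current entropy estimate under /proc. A minimal sketch (the parsing helper is just for illustration):

```python
def parse_entropy_estimate(text):
    # entropy_avail contains a single integer: the kernel's current
    # estimate, in bits, of the entropy held in its pool
    return int(text.strip())

def entropy_available(path="/proc/sys/kernel/random/entropy_avail"):
    # Linux-specific path; on an entropy-starved VM this hovers near zero
    with open(path) as f:
        return parse_entropy_estimate(f.read())
```

Watching this value while GPG runs shows the pool draining about as fast as the kernel can refill it.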

However, things got more curious when I looked at the manpage for /dev/random (which you can find with man 4 random):

While some safety margin above that minimum is reasonable, as a guard against flaws in the CPRNG algorithm, no cryptographic primitive available today can hope to promise more than 256 bits of security, so if any program reads more than 256 bits (32 bytes) from the kernel random pool per invocation, or per reasonable reseed interval (not less than one minute), that should be taken as a sign that its cryptography is not skilfully implemented.

I did a bit of reading into the meaning behind this. The argument goes something like this:

  • there is no cryptographic primitive available today which can guarantee more than 256 bits of protection
  • the attack we’re trying to protect against by using true entropy rather than a pseudorandom number generator (PRNG) is that somebody guesses the seed value
  • if guessing the seed value is at least as hard as just brute-forcing the key, then your PRNG is not detracting from the security of the system.

Breaking a GPG key requires less than 256 bits' worth of effort: I don't remember the details, but a 2048-bit GPG key requires something in the region of 100-200 bits' worth of effort to crack, since the attacker is performing a prime factorization rather than an exhaustive search.
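For a rough sense of where those numbers come from: the standard estimate for factoring an n-bit RSA modulus is the cost of the general number field sieve (GNFS). A back-of-the-envelope sketch, with the constants hand-waved and the o(1) term dropped:

```python
import math

def gnfs_work_bits(modulus_bits):
    # Heuristic GNFS cost: exp((64/9)^(1/3) * (ln n)^(1/3) * (ln ln n)^(2/3)),
    # expressed in bits of work. Ballpark only; real-world estimates also
    # account for the dropped o(1) term and memory costs.
    ln_n = modulus_bits * math.log(2)
    work = ((64 / 9) ** (1 / 3)) * (ln_n ** (1 / 3)) * (math.log(ln_n) ** (2 / 3))
    return work / math.log(2)  # log2 of exp(work)
```

This comes out at roughly 117 bits for a 2048-bit modulus, which is in the same ballpark as NIST's rating of 2048-bit RSA at about 112 bits of security, comfortably below 256.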

So, why is GPG greedily asking /dev/random for 280 bytes of entropy, when all it conceivably needs is 32? I’m not sure, and I’d be delighted to learn, but it seems that /dev/random and GPG fundamentally disagree on what the contract is between them. What this means for me as a user, however, is that GPG is massively gorging itself on entropy from my entropy-starved VM, which means it takes forever and a day to generate GPG keys on a VM.

Interestingly, OS X implements its /dev/random device differently; it uses Kelsey, Schneier and Ferguson's Yarrow algorithm, which operates on a similar basis to the argument above: once you have gathered a certain minimal amount of true entropy, you can use it as the seed to a PRNG and feed cryptographic key generators with no loss of security in the system. That means that once it has gathered its initial 256 bits (or whatever) of entropy, OS X's /dev/random will continue generating random bits effectively forever, making it a much better source of randomness on a VM.
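The seed-once-then-stretch idea is easy to sketch. This is not Yarrow itself, just a toy SHA-256-in-counter-mode generator showing how 32 bytes of true entropy can be stretched indefinitely (illustration only, not a substitute for a vetted DRBG):

```python
import hashlib
import os

def toy_stream(seed, nbytes):
    # Expand one 32-byte true-random seed into nbytes of output by
    # hashing seed || counter. Recovering the stream means guessing the
    # 256-bit seed, which is already harder than breaking the key.
    out = bytearray()
    counter = 0
    while len(out) < nbytes:
        out += hashlib.sha256(seed + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:nbytes])

# seed once from the OS, then generate as much as you like
seed = os.urandom(32)
key_material = toy_stream(seed, 280)
```

The point is that the expensive true-entropy gathering happens exactly once, at seeding time; everything after that is cheap computation.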

PS: Instead of brute-forcing the seed, there is a potential alternative attack against the PRNG, which is that someone finds a way to predict the PRNG output with much less computational effort than brute-force guessing the seed. But this is much the same kind of attack as “someone finds a problem with AES” or “someone finds a problem with GPG” — ie we presume our cryptographic primitives are good because no known attack against them has been discovered, not because we are able to prove that no attack is possible. Using true entropy instead of a PRNG guards against attacks against your PRNG, but you still need to worry about attacks against your crypto algorithm if you’re being that paranoid. IOW, I don’t think GPG’s strategy here seems to be the right tradeoff.