First of all, let me state that I don't really believe that attacking the internal state of the kernel PRNG is practical at all. It is possible, in theory. Using /dev/urandom is pretty safe for many real-world uses. We're discussing theory here.
Henrique de Moraes Holschuh wrote:
On Fri, 04 Mar 2005, Marco Colombo wrote:
>> You do want to use /dev/random for your session keys. _They_ are likely going to attack your session keys, not your master key. The whole point is to guess as many bits as possible of the kernel PRNG state.
> Which, in a Cyrus server, won't be helpful since there is a lot of disk activity and the PRNG state changes quite often, provided you don't keep it drained (and thus without activity because nobody can talk to the server!).
Using random instead of urandom goes a long way toward keeping the pool drained (the kernel usually protects a small amount of entropy and won't let urandom drain it completely).
Well, the problem is simple. Let's say your server generated 100 session keys in the last minute. If the system collected at least 100 × key-size random bits, you could have used /dev/random. If not, and you used /dev/urandom, the attacker has fewer bits to guess. Let E be this number. Assuming he knows the pool state _now_ (that is, he collects enough keys and is able to break SHA1 and reverse the PRNG algorithm to obtain its state), then in order to get the pool state _one minute_ ago he has to guess only E bits, which is easier than guessing all 100 keys. Every _single_ bit you got out of /dev/urandom for your session keys when /dev/random would have blocked _halves_ the attacker's effort.
/dev/random blocks when the kernel thinks the PRNG state could, in pure theory, already have been guessed.
> The hard facts are that most kernel entropy accounting is not anywhere close to exact enough, so we cannot rely on it in anything close to a border condition (such as the entropy pool being empty, which is the only time urandom differs much from random). You *are* already depending on the security of the PRNG for small amounts of time (when the entropy in the pool is "close" to zero).
Well, the accounting need not be accurate at all. _Underestimating_ the randomness in the pool is perfectly safe, and that's what the kernel does. You can add 1024 bits of "truly" random data to the pool, and account for only 128 entropy bits. So even if your "truly" random bits were not so random, you're OK, as long as _at least_ 128 of them were "truly" random.
Since there's no way to make sure bits are "truly" random at all (all tests may fail, all devices may be somehow biased, and so on) you can only reach some level of confidence. Extracting N bits from a "good" source, injecting them into the pool, and accounting only M entropy bits, where M<<N, is the only thing you can do. The smaller M, the better. It's a tradeoff: your confidence vs. the cost of those N bits.
The kernel is quite conservative in estimating randomness; that's why the pool empties so easily, especially on headless servers.
And no, you're not depending on the security of the PRNG+hash when you're using /dev/random, assuming they're not buggy.
> Given that fact, IMHO you are much better off using urandom anyway, because the system will still work close enough to an empty entropy pool to regen that pool a bit.
I'm not following you. Unless you're implying that the only source of randomness is the result of the very activity that _needs_ the random bits. Such as a dedicated headless cyrus server with no entropy source but disk activity. If all new connections block on /dev/random, the disk activity will be lower.
Well, first, I think disk activity is much bigger for already connected clients. I know some clients out there make a new connection to the server every time they check for new mail (once a minute?) but those are broken clients. I agree that with those, blocking connections means blocking activity. But most clients I know open _one_ connection to the server and keep it open. Sane clients generate most of their activity well after the connection has been established (and the session key exchanged). Blocking connections does not block their activity at all.
Second, giving up security only to squeeze some more entropy bits from your disks isn't a winning strategy, IMHO.
>> anyway - but you have to trust it). Once the attacker knows the output of your /dev/urandom, he knows all the session keys your host will generate.
> If, and only if, you have no new entropy going in. If you have a few bits of entropy going in, they have a reduced search space (which IS already a very bad thing)... unless the kernel does a catastrophic reseed of the PRNG. Which on newer Linux kernels for example, it does as often as it possibly can.
I know that every single bit of entropy going in changes the state, but the attacker needs only to try the possible values. The reduced search space _is_ the problem.
Whatever the kernel does, it doesn't do it "randomly enough", being only a program, and thus a finite state machine. The attacker can predict _any_ action, unless it's based on something he is not able to foresee. That is, entropy bits. In order to truly randomly reseed the PRNG, you need N truly random bits, where N is the size of the seed.
>> It happened in the past: some software used to generate 128-bit keys was buggy and the actual number of possible combinations was much much smaller (say 2^20). But the same PRNG and the same hash function are used by /dev/random as well, so it isn't much safer in this respect.
> It won't be any safer, you mean.
It's _likely_ safer. It depends on the kind of bug. If the bug lies in the routine that computes the next state, /dev/random may be safe, since it changes the state by external means. If the bug is in output generation, both /dev/random and /dev/urandom are affected.
> Well, I am not afraid of the SHA search
> space, in Linux, BSDs and Solaris, that have somewhat good PRNGs. The state
> of the entropy pool changes unpredictably quite often enough.
What do you mean by a "good" PRNG? Not invertible? That's not what you'd expect from a PRNG (I expect it to be _easily_ invertible, though I don't think I can prove it mathematically). That's why a crypto-strong hash function must be used (its job is to be hard to invert).
No, the entropy pool does _not_ change often enough. That is the base assumption of this whole discussion. If it changed often enough, you could simply use /dev/random.
>> If your kernel runs out of random bits, you should find a reliable source of randomness and feed it into the kernel.
> I do. In fact, I have a 25kbit/s entropy feed into the kernel pool in such a way that I force a catastrophic reseed at least every second, and I am also working with the merkensleep.com guys to get rng-tools to a point where you can easily send entropy over the network from something like a VIA Nehemiah step 8 (a couple Mbit/s of entropy, sweet...) to a cluster.
>> So the attacker would have to try 2^4096 possible combinations, not 2^8192. Usually the kernel pool is large enough (and it can be enlarged at run time in modern Linux kernels).
> At least so far. Someone was talking about removing "that useless feature", so keep an eye on it to make sure they don't.
Well, something must be done about the "feature". If all you need is a PRNG whose seed is randomly changed now and then (a la /dev/urandom, when the pool is almost empty), you'd better implement your own in user space, and _always_ use that.
Let me explain better. Let's say your cyrus server needs 100kbit of random bits per second. The kernel gets 25kbit/s (which _you_ should underestimate as well, btw, adding less to the counter). So, for every 100000 bits of key space, there are only 25000 random bits the attacker needs to guess. That is, if you're using 128-bit session keys, you could just as well use 32-bit session keys.
So, first, what's the point in making your application believe it's generating 128-bit keys, when the search space is, per key, only 32 bits? Secondly, assuming you're fine with that, the question is: why don't you _always_ drain only 25kbit/s, if that is enough for you?
Why don't you use your private PRNG, extract 100kbit/s from it, and get
only 25kbit/s from the kernel, _always_? By using /dev/urandom, you're
asking for more than, by definition, you need.
Also, you know you get 25kbit/s only because you set a feed up. From the application standpoint, the API goes like this:
    fd = open("/dev/urandom", O_RDONLY);
    c = read(fd, buf, 128);   /* [a] */
    ...                       /* time passes, some other application drains the pool */
    c = read(fd, buf, 128);   /* [b] */
As far as the application is concerned, the following can happen:
    read [a] gets 1024 random bits, of which 1024 are truly random
    read [b] gets 1024 random bits, of which none is truly random
Now, if your application is fine with read [b], then those 1024 truly random bits you got in [a] are wasted. You wasted a resource which is precious to other applications.
So yes, I am one who thinks /dev/urandom is (almost) a useless feature. Run a user space daemon instead, with which applications can register to ask for the quality they need, and no more.
.TM.
--
Marco Colombo
Technical Manager
ESI s.r.l.
[EMAIL PROTECTED]
---
Cyrus Home Page: http://asg.web.cmu.edu/cyrus
Cyrus Wiki/FAQ: http://cyruswiki.andrew.cmu.edu
List Archives/Info: http://asg.web.cmu.edu/cyrus/mailing-list.html