Pavel Machek <pa...@ucw.cz> wrote:

> To play devil's advocate, does RNG subsystem need to evolve? Its task
> is to get random numbers. Does it fail at the task?
>
> Problem is, random subsystem is hard to verify, and big rewrite is
> likely to cause security problems...
Parts of the problem, though, are dead easy in many of today's environments. Many CPUs, e.g. Intel's, have an instruction that gives random numbers. Some systems have another hardware RNG. Some can add one using a USB device or Denker's Turbid (https://www.av8n.com/turbid/). Many Linux instances run on VMs, so they have an emulated HWRNG using the host's /dev/random.

None of those is necessarily 100% trustworthy, though the published analyses for Turbid & for (one version of) the Intel device seem adequate to me. However, if you use any of them to scribble over the entire 4k-bit input pool and/or a 512-bit Salsa context during initialisation, then it seems almost certain you'll get enough entropy to block attacks. They are all dirt cheap, so doing that, and using them again later for incremental squirts of randomness, looks reasonable. (The first sketch below shows the seeding step.)

In many cases you could go further. Consider a system with an Intel CPU and another HWRNG, perhaps a VM. Get 128 bits from each source & combine them using the 128-bit finite field multiplication from GCM authentication. Still cheap, & it cannot be worse than the better of the two sources. If both sources are anywhere near reasonable, this should produce 128 bits of very high grade random material, cheaply. (The second sketch below shows the combiner.)

I am not suggesting any of these should be used for output, but using them for initialisation whenever possible looks obvious to me.
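First sketch: what "scribble over the whole pool" could look like, in C. This is illustrative only, not the kernel's actual code: the pool layout and function names are mine, it assumes an x86-64 CPU with RDRAND, and it XORs rather than overwrites so a broken HWRNG can't reduce whatever entropy is already there. Compile with -mrdrnd on gcc or clang.

    /* Sketch only: seed a 4096-bit input pool from RDRAND.
     * Names and layout are hypothetical, not the kernel's.
     * Requires x86-64 with RDRAND; compile with -mrdrnd. */
    #include <stdint.h>
    #include <stddef.h>
    #include <immintrin.h>

    #define POOL_WORDS (4096 / 64)      /* 4k-bit pool as 64-bit words */

    static uint64_t pool[POOL_WORDS];

    /* Returns 0 on success, -1 if RDRAND keeps reporting failure. */
    static int seed_pool_rdrand(void)
    {
        for (size_t i = 0; i < POOL_WORDS; i++) {
            unsigned long long r;
            int tries = 10;              /* Intel's suggested retry bound */

            while (!_rdrand64_step(&r))  /* returns 0 when no data ready */
                if (--tries == 0)
                    return -1;

            pool[i] ^= r;                /* XOR in: can't make it worse */
        }
        return 0;
    }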
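Second sketch: the two-source combiner. The multiplication is the GF(2^128) operation GCM uses for authentication (GHASH, per NIST SP 800-38D): bit-reflected representation, reduction polynomial x^128 + x^7 + x^2 + x + 1. The bit-by-bit loop is slow but shows the idea; the two source-reading helpers are hypothetical stand-ins for "get 128 bits from each source".

    #include <stdint.h>
    #include <string.h>

    /* Multiply two 128-bit blocks in GF(2^128), GCM/GHASH convention:
     * bit 0 is the MSB of byte 0; reduction uses R = 0xE1 || 0^120. */
    static void gf128_mul(const uint8_t X[16], const uint8_t Y[16],
                          uint8_t out[16])
    {
        uint8_t Z[16] = { 0 };
        uint8_t V[16];
        int i, j;

        memcpy(V, Y, 16);
        for (i = 0; i < 128; i++) {
            if (X[i / 8] & (0x80 >> (i % 8)))   /* bit i of X set? */
                for (j = 0; j < 16; j++)
                    Z[j] ^= V[j];               /* Z ^= V */

            /* V = V * x: shift right one bit, reduce if a bit fell off */
            int lsb = V[15] & 1;
            for (j = 15; j > 0; j--)
                V[j] = (uint8_t)((V[j] >> 1) | (V[j - 1] << 7));
            V[0] >>= 1;
            if (lsb)
                V[0] ^= 0xE1;
        }
        memcpy(out, Z, 16);
    }

    /* Hypothetical readers, one per source; not real APIs. */
    extern void get_rdrand_block(uint8_t out[16]);
    extern void get_virtio_rng_block(uint8_t out[16]);

    void combine_two_sources(uint8_t mixed[16])
    {
        uint8_t a[16], b[16];

        get_rdrand_block(a);       /* 128 bits from the CPU instruction */
        get_virtio_rng_block(b);   /* 128 bits from the other HWRNG     */
        gf128_mul(a, b, mixed);
    }

One wrinkle: an all-zero block from either source forces a zero product, so a real combiner would have to exclude or remap zero. With that caveat, multiplying by a fixed non-zero block is a bijection on the field, which is exactly why the product can't be worse than the better of the two sources.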