On 6/13/07, Michael Casadevall <[EMAIL PROTECTED]> wrote:

I'm looking at Linux's and some of the BSDs' implementations of
randomness using kernel data. Linux gathers data from various sources
and runs it through a few complex math functions to make it even more
random. Since I'm no mathematician, I'll reuse the equations from
Linux rather than write my own, and pull random bits from the same
sources Linux does. I'll probably have a working prototype of the
kernel code in a few days; then it would just be necessary to stick a
translator on top of that (with the option to get entropy from sources
other than, or in addition to, the kernel), and we'll have a proper
/dev/random.



Hmm, the code used by Linux and the BSDs provides an entire PRNG
solution. One possible approach for the GNU/Hurd project would be to
let the kernel deal only with the entropy gathering itself, leaving
all of the difficult math to a translator. One of the source code
examples on the wiki does exactly this: the kernel stores the entropy
in a pool, and a translator sitting on top consumes the entropy
gathered there.
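That split can be sketched in a few lines. This is only a toy model (the class
and function names are mine, and SHA-256 stands in for whatever mixing math the
real translator would use); it just illustrates the division of labor where the
kernel side only collects raw bytes and the translator side does the whitening:

```python
import hashlib
import os


class EntropyPool:
    """Toy model of the kernel side: it only gathers raw bytes."""

    def __init__(self):
        self._pool = bytearray()

    def add(self, sample: bytes):
        # The real kernel would feed in interrupt timings, disk seeks, etc.
        self._pool.extend(sample)

    def read_raw(self, n: int) -> bytes:
        # Hand raw, unwhitened entropy to whoever sits on top.
        out = bytes(self._pool[:n])
        del self._pool[:n]
        return out


def translator_read(pool: EntropyPool, nbytes: int) -> bytes:
    """Toy model of the translator side: all the 'difficult math'
    (here just SHA-256 whitening) lives outside the kernel."""
    out = b""
    while len(out) < nbytes:
        # Fall back to os.urandom so the demo keeps working on an empty pool.
        raw = pool.read_raw(32) or os.urandom(32)
        out += hashlib.sha256(raw).digest()
    return out[:nbytes]


pool = EntropyPool()
pool.add(b"interrupt-timing-sample-1")
pool.add(b"disk-seek-timing-sample-2")
print(translator_read(pool, 16).hex())
```

The point of the design is that the kernel never has to be trusted with (or
patched for) the extraction math; swapping in a different whitening scheme only
means replacing the translator.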


Anyone want to comment on what we should use to judge the quality of
the entropy, and what is going to be needed to get these patches to
migrate upstream into both Hurd and Mach?

Michael


I don't know how to test the quality of the raw entropy itself, but
there are a lot of free tools out there for statistical analysis of
PRNGs, such as DieHarder [1].
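For a crude first check before reaching for a full suite, something like the
classic monobit frequency test (as used in FIPS 140-2 / NIST SP 800-22) is easy
to run by hand. This is just an illustrative sketch, not a substitute for
DieHarder:

```python
import math
import os


def monobit_pvalue(data: bytes) -> float:
    """Monobit frequency test: compare the count of one-bits to n/2.
    A uniform source yields p-values spread over (0, 1); a p-value
    below ~0.01 flags a biased stream."""
    n = len(data) * 8
    ones = sum(bin(b).count("1") for b in data)
    s = abs(2 * ones - n) / math.sqrt(n)
    return math.erfc(s / math.sqrt(2))


print(monobit_pvalue(os.urandom(1 << 16)))  # a good source: p-value in (0, 1)
print(monobit_pvalue(b"\x00" * (1 << 16)))  # all zeros: p-value near 0, fails
```

DieHarder itself can also, if I remember its options correctly, consume a raw
byte stream on stdin, so the kernel's output could be piped straight into it.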

Hope this was of some help :)

[1] http://www.phy.duke.edu/~rgb/General/rand_rate.php
_______________________________________________
Bug-hurd mailing list
Bug-hurd@gnu.org
http://lists.gnu.org/mailman/listinfo/bug-hurd
