If you type:

    dd if=/dev/urandom of=/dev/null count=100000
    100000+0 records in
    100000+0 records out
    51200000 bytes (51 MB) copied, 14.0076 seconds, 3.7 MB/s

whereas:

    dd if=/dev/zero of=/dev/null count=100000
    100000+0 records in
    100000+0 records out
    51200000 bytes (51 MB) copied, 0.0932449 seconds, 549 MB/s
Is there a faster source of random or pseudo-random numbers? At roughly
4 MB/s it will take a very long time to blank a 60 GB drive, let alone a
500 GB one.
Maybe we could use 10 (or 100) files on a RAM disk, each 1 MB in size and
filled from /dev/urandom, then use /dev/random and some algorithm to decide
which portion of which file to write next, stringing together chunks of
the pseudo-random files. We could also randomly refill files as we go.
It's not ideal, but it should be good enough, and if we could get it fast
enough that the drive's write speed becomes the bottleneck, that would be
perfect. See the sketch below.
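As a rough, untested sketch of that idea (assuming bash, a tmpfs mounted
at /dev/shm, and /dev/sdX as a placeholder for the target drive; pool
size and refresh interval are arbitrary):

    #!/bin/bash
    # Pre-fill a 10 MB pool on the RAM disk with real urandom data.
    dd if=/dev/urandom of=/dev/shm/pool bs=1M count=10 2>/dev/null
    i=0
    # Keep writing the pool to the target; dd exits nonzero once we
    # run off the end of the device, which ends the loop.
    while dd if=/dev/shm/pool of=/dev/sdX bs=1M seek=$((i * 10)) \
             count=10 2>/dev/null; do
        i=$((i + 1))
        # Every 100 passes, refresh one random 1 MB slice of the pool
        # so the output isn't a pure repetition of the same 10 MB.
        if [ $((i % 100)) -eq 0 ]; then
            dd if=/dev/urandom of=/dev/shm/pool bs=1M count=1 \
               seek=$((RANDOM % 10)) conv=notrunc 2>/dev/null
        fi
    done

The output still repeats, so it's pseudo-random at best, but each pass
costs only a memory-speed read plus the disk write, so the disk should
be the bottleneck.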
What other options are there?
Thanks