On Thu, May 31, 2007 at 11:02:06PM -0400, WHIRLYCOTT wrote:
> Etch: (output of hdparm)
>
> /dev/sda:
>  Timing cached reads:   1588 MB in  2.00 seconds = 794.41 MB/sec
>  Timing buffered disk reads:  176 MB in  3.03 seconds =  58.15 MB/sec
>
> CentOS: (output of hdparm)
>
> /dev/sda:
>  Timing cached reads:   4480 MB in  2.00 seconds = 2242.86 MB/sec
>  Timing buffered disk reads:  176 MB in  3.03 seconds =  58.16 MB/sec
The buffered disk reads are the same since that's the raw transfer rate from the drive itself. The problem is the cached reads, which measure how quickly data comes back from your OS's cache.

What does vmstat show as cache size under both OSs? Does one OS have more processes running than the other? What happens if you boot each with init=/bin/sh, mount whatever filesystem holds hdparm read-only, and run the timing tests that way, with only those few processes running? (See the sketch below.)

Finally, benchmarks are all well and good, but how does this impact you directly? What do you do that relies on fast cached disk IO and leaves you waiting for it?

Doug.
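Just to make that concrete, the single-user check could look roughly like the following. This is only a sketch: /dev/sda and hdparm living on the root filesystem are assumptions here, so adjust for your own layout.

  # at the boot loader, append to the kernel line:
  #   init=/bin/sh
  # then, from the resulting shell:
  mount -o remount,ro /          # root is usually already ro with init=/bin/sh
  free -m                        # how much RAM is sitting in buffers/cache
  vmstat 1 5                     # watch the "cache" column over 5 samples
  hdparm -tT /dev/sda            # repeat a few times and compare the averages

Running it this way on both installs means you're comparing the two kernels and their caches with almost nothing else competing for memory or CPU.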