Well, I tried a few different combinations of drivers before (I've even compiled all drivers in ;} ), but it didn't help either. There are too many possibilities to guess the right one; moreover, I think the problem might be somewhere else?

CONFIG_BLK_DEV_PIIX should be your driver. Please also attach the appropriate parts from dmesg.

Did you try removing CONFIG_IDE_GENERIC? If the ATA drives won't get recognised without it, the PIIX driver doesn't know your chipset.
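If the drives are meant to be picked up by libata rather than the old IDE layer, the relevant options in a 2.6.12-era kernel would look roughly like this (a sketch only; these option names changed in later kernel series, so verify them in your own menuconfig):

```
# .config fragment -- illustrative, for a 2.6.12-era kernel
# CONFIG_IDE_GENERIC is not set
CONFIG_SCSI=y
CONFIG_BLK_DEV_SD=y
CONFIG_SCSI_SATA=y
CONFIG_SCSI_ATA_PIIX=y
```

With this path the disks show up as SCSI devices (/dev/sdX) instead of /dev/hdX.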

I forgot to mention in my last post that I use vanilla-sources-2.11.11 with no module support enabled, plus the grsecurity patch.

This should probably be 2.6.11. There were some major updates regarding Intel chipsets in 2.6.12. I'm not sure if your chipset is also affected.

Of course you're right, I meant 2.6.11.11. But I've decided to use 2.6.12.5 instead now.

Removing CONFIG_IDE_GENERIC helped.
The /dev/hda device disappeared, which confused me at the beginning.
It's visible as /dev/sda now (the old sda is sdb, and sdb is sdc now ;} ), but performance is much better:

# hdparm -tT /dev/sda

/dev/sda:
 Timing cached reads:   3100 MB in  2.00 seconds = 1549.46 MB/sec
HDIO_DRIVE_CMD(null) (wait for flush complete) failed: Inappropriate ioctl for device
 Timing buffered disk reads:   84 MB in  3.03 seconds =  *27.72 MB/sec*
HDIO_DRIVE_CMD(null) (wait for flush complete) failed: Inappropriate ioctl for device

# hdparm -tT /dev/sdb

/dev/sdb:
 Timing cached reads:   2884 MB in  2.00 seconds = 1441.50 MB/sec
HDIO_DRIVE_CMD(null) (wait for flush complete) failed: Inappropriate ioctl for device
 Timing buffered disk reads:  164 MB in  3.03 seconds =  *54.17 MB/sec*
HDIO_DRIVE_CMD(null) (wait for flush complete) failed: Inappropriate ioctl for device

# hdparm -tT /dev/md1

/dev/md1:
 Timing cached reads:   2980 MB in  2.00 seconds = 1488.74 MB/sec
 Timing buffered disk reads:  206 MB in  3.04 seconds =  *67.80 MB/sec*


The previous test (on md0) wasn't actually the crucial one, because md0 is /boot and runs in mirroring (RAID1); md1 is RAID5 ;]
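As a rough sanity check on the figures above (my arithmetic, not part of the original post): RAID5 reads stripe across the member disks, so an md device can outrun a single spindle, and the measured numbers show about a 25% gain over one disk.

```python
# Buffered-read figures from the hdparm runs above (MB/s)
single_disk = 54.17  # /dev/sdb on its own
raid5_md1 = 67.80    # /dev/md1 (RAID5)

# Ratio of RAID5 array throughput to a single member disk
speedup = raid5_md1 / single_disk
print(round(speedup, 2))
```

RAID1 (md0) would not show this effect for a single sequential reader, which is why the earlier md0 test looked unremarkable.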


Thanks for the help.

Jarek
--
[email protected] mailing list
