> I assume so, but just to be clear you witnessed this behavior even with
> the -I (directio) parameter?

Yes.

for i in 1 2 4 8 16; do
  /cm/shared/apps/iozone/current/sbin/iozone -I -l $i -u $i -r 16k -s 10M \
    -F file1..file16
done > output &

>
>>> Can anyone tell me what might be the bottleneck on the single machines?
>>> Why can I not get 180,000 IOPS when running on a single machine?
>
> Can you rerun those tests with 16 and 32 procs?  I've run into some
> pretty wacky relationships between the numbers of cores, procs, and
> disks in the subsystem.  I assume your machine has 8 cores, and I tend
> to find around 2 processes per core to be ideal if the number of disks
> you're trying to run against is greater than the number of cores.  This
> is a really hand-waving rule of thumb, but it's served me alright in the
> past as a first benchmark.

1 proc  Children see throughput for 1 random writers    =   46036.32 KB/sec
2 proc  Children see throughput for 2 random writers    =   82828.13 KB/sec
4 proc  Children see throughput for 4 random writers    =  126709.65 KB/sec
8 proc  Children see throughput for 8 random writers    =  190070.96 KB/sec
16 proc Children see throughput for 16 random writers   =  273970.94 KB/sec

1 proc  Children see throughput for 1 random readers    =  109169.52 KB/sec
2 proc  Children see throughput for 2 random readers    =  202556.82 KB/sec
4 proc  Children see throughput for 4 random readers    =  381504.25 KB/sec
8 proc  Children see throughput for 8 random readers    =  719108.27 KB/sec
16 proc Children see throughput for 16 random readers   = 1152648.13 KB/sec
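
Since iozone was run with a 16 KB record size, the KB/sec figures above
convert directly to IOPS by dividing by 16. A quick sketch of that
arithmetic, using the random-write numbers pasted above:

```shell
# Convert iozone's "Children see throughput" KB/sec to IOPS at 16 KB records:
# IOPS = (KB/sec) / 16
for kbs in 46036.32 190070.96 273970.94; do
  awk -v t="$kbs" 'BEGIN { printf "%.0f KB/sec -> %.0f IOPS\n", t, t/16 }'
done
# e.g. 190071 KB/sec -> 11879 IOPS (8 writers)
```

So even the 16-proc random-write case works out to roughly 17,000 IOPS,
well short of the 180,000 mentioned above.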

Just to be clear, I do not want to test the performance of ZFS, just NFS
at the moment.

Before I got the ZFS box, I was exporting a tmpfs over NFS.

I am quite sure I am not doing any local caching.

Why is each process's I/O limited like that?
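
Back-of-envelope, in case it helps frame the question: if each 16 KB
direct-I/O operation is fully synchronous (one outstanding op per
process, which -I should roughly enforce), the 1-proc random-write rate
implies a per-op service time. This is only a sketch under that
serialization assumption:

```shell
# Implied per-op time from the 1-proc random-write throughput above,
# assuming one synchronous 16 KB op in flight at a time.
awk 'BEGIN {
  kbs  = 46036.32        # 1-proc random-write throughput, KB/sec
  iops = kbs / 16        # 16 KB records
  printf "%.0f IOPS -> %.0f us per op\n", iops, 1e6 / iops
}'
```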

Thanks,

Andrew

>
> Best,
>
> ellis
> _______________________________________________
> Beowulf mailing list, Beowulf@beowulf.org sponsored by Penguin Computing
> To change your subscription (digest mode or unsubscribe) visit
> http://www.beowulf.org/mailman/listinfo/beowulf
>

