Hello,

We purchased a disk-based cache system whose content is NFS-exported into our cluster. We would now like to tweak the cache setup to increase the I/O performance. There are tools like bonnie or iozone which test locally mounted file systems.

First I tried starting iozone on many NFS clients at once, hammering the cache file server. The results I got are meaningless unless they come out the same on all clients.

One scenario is:

Client A reports an I/O rate Ra which is twice the I/O rate of B, i.e. Ra = 2*Rb; accordingly, the test on B took twice as long as on A.


The simplest idea is to add up the individual results to get the overall I/O capability of the cache server. In this case the total I/O rate is Rt = Ra + Rb. This would be the accurate result if the I/O rate on B had been constant during the entire test.

But one can also interpret the result in a different way: while client A was doing its I/O test, client B got no bandwidth at all, and B was only served after A had finished. That halves the average rate measured on B, and the total I/O rate in this case is only Rt = Ra.
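For instance (purely hypothetical numbers): if A writes 1 GB in 100 s (Ra = 10 MB/s) and B writes 1 GB in 200 s (Rb = 5 MB/s), the additive interpretation says the server delivered Rt = 15 MB/s, while the serialized interpretation says it delivered only Rt = Ra = 10 MB/s over the same period. The same measurements allow either conclusion.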


These extreme cases illustrate how difficult it is to interpret the results reliably. It would make more sense to have a tool which starts the test on many clients simultaneously and counts the I/O operations completed within one fixed time interval. The simultaneity is not a problem here - it can be arranged with scripts. Results gathered over the same interval can safely be added together to obtain an overall I/O rate.
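
To illustrate the kind of test I have in mind, here is a rough, untested Python sketch of a per-client timed write test. The mount point /cache, the 60 s window and the 1 MiB block size are just placeholders for our setup; each client writes for the same wall-clock window and reports its byte count, so the per-client rates can be summed.

#!/usr/bin/env python
# timed_write.py -- fixed-interval write test, meant to be started on all
# NFS clients at the same time. Each client writes into the cache mount
# for the same wall-clock window and prints the rate it achieved, so the
# per-client numbers can simply be added up.

import os
import socket
import time

MOUNT   = "/cache"            # assumed NFS mount point of the cache
SECONDS = 60                  # fixed measurement window
BLOCK   = b"x" * (1 << 20)    # 1 MiB per write call

def timed_write(path, seconds):
    """Write BLOCK repeatedly until `seconds` have elapsed; return bytes written."""
    written = 0
    deadline = time.time() + seconds
    f = open(path, "wb")
    while time.time() < deadline:
        f.write(BLOCK)
        written += len(BLOCK)
    f.flush()
    os.fsync(f.fileno())      # make sure the data has really left the client
    f.close()
    return written

if __name__ == "__main__":
    host = socket.gethostname()
    testfile = os.path.join(MOUNT, "iotest.%s" % host)
    nbytes = timed_write(testfile, SECONDS)
    os.unlink(testfile)
    # MB/s seen by this client over the common interval
    print("%s %.1f MB/s" % (host, nbytes / float(SECONDS) / 1e6))

One would start this on all clients at (roughly) the same moment, e.g. over ssh, and add up the reported rates.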

Do you know of such a tool, or are there other ways to get a picture of the I/O capability of the cache file server?

Thank you and cheers,
Henning
_______________________________________________
Beowulf mailing list, Beowulf@beowulf.org sponsored by Penguin Computing
To change your subscription (digest mode or unsubscribe) visit 
http://www.beowulf.org/mailman/listinfo/beowulf