> that's odd, and indicates that the nfs config you tested was hitting
> disk limits. and unfortunately, that makes the comparison even less
> comprehensible. looking at the config again, it appears that the node
> might have just a single disk, which would make the results quite
> expected.
all tests were conducted on the same hardware. a point-to-point (single
server, single client) write over NFS on Gig/E did not reach link speed.
on the same hardware and network, a glusterfs write saturates the link.

> >On IB - nfs works only with IPoIB, whereas glusterfs does SDP (and
> >ib-verbs, from the source repository) and is clearly way faster than NFS.
>
> "clearly"s like that make me nervous. to an IB enthusiast, SDP may be
> more aesthetically pleasing, but why do you think IPoIB should be
> noticeably slower than SDP?

in a general sense, filesystem throughput is related to link latency, since
applications (unless doing AIO) issue the next read/write _after_ the
current one completes. writeback and readahead help to a certain extent,
but, in general, low-latency transports surely help filesystems (a rough
model of this is sketched in the P.S. below).

> lower cpu overhead, probably, but many people have no
> problem running IP at wirespeed on IB/10GE-speed wires...

none of those problems, it's about latency. SDP has a lot less latency
than IPoIB.

avati

--
Shaw's Principle: Build a system that even a fool can use, and only a
fool will want to use it.
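P.S. here is a rough back-of-the-envelope model of the point above about
latency and synchronous I/O. the request size, wire bandwidth and round-trip
latencies in it are illustrative assumptions, not numbers measured in these
tests:

# sketch: why per-op latency caps synchronous (non-AIO) write throughput.
# all figures below are illustrative assumptions.

def sync_throughput(req_bytes, rtt_s, wire_bytes_per_s):
    # one request outstanding at a time: the next write is issued only
    # after the previous one completes, so each op costs rtt + transfer.
    per_op_s = rtt_s + req_bytes / wire_bytes_per_s
    return req_bytes / per_op_s        # achieved bytes/s

REQ  = 64 * 1024          # assumed 64 KiB write size
WIRE = 1.0e9              # assumed ~1 GB/s of usable IB bandwidth

for name, rtt in (("higher-latency transport (IPoIB-like)", 60e-6),
                  ("lower-latency transport (SDP-like)",    20e-6)):
    print("%-40s ~%.0f MB/s" % (name, sync_throughput(REQ, rtt, WIRE) / 1e6))

with one request in flight the round-trip cost is paid on every op, so
shaving latency raises throughput even when the wire is nowhere near
saturated; AIO or deeper writeback would hide it instead.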