Doug brings up some good points. If you want to try Jumbo
Frames to improve MPI performance you might have to
tweak the TCP buffers as well. There are some links around
the web on this. Sometimes it helps performance, sometimes
it doesn't. Your mileage may vary.
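
For reference, the tweaks in question would look roughly like this on a Linux node of that era (the interface name and the buffer values are assumptions, not tuned recommendations; check what your NIC, switch, and kernel actually support):

```shell
# Enable jumbo frames by raising the MTU to 9000
# (eth0 is an assumed interface name; every NIC and switch
#  port on the path must support the larger frame size)
ifconfig eth0 mtu 9000

# Raise the kernel's TCP buffer ceilings so the stack can
# actually use larger windows (values are illustrative)
sysctl -w net.core.rmem_max=1048576
sysctl -w net.core.wmem_max=1048576
sysctl -w net.ipv4.tcp_rmem="4096 87380 1048576"
sysctl -w net.ipv4.tcp_wmem="4096 16384 1048576"
```

Put the sysctl lines in /etc/sysctl.conf to make them persistent; the MTU change typically goes in the distribution's network config.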
Jeff
1) The results you reference are rather old. Does the
hardware in those benchmarks match yours?
2) To support Jumbo Frames you need both NICs and a switch
that support them.
3) It is possible to achieve wire speed with GigE;
however, you need something faster than a 32-bit PCI
connection (e.g. PCI-X or PCIe).
4) While Jumbo Frames can help NFS, the effect on MPI
can vary by application. Have you run any tests to
see exactly what your network performance is?
(e.g. NetPIPE)
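
A basic NetPIPE TCP run between two nodes looks roughly like this (hostnames are assumptions; run it once with the default MTU and once with jumbo frames to see the difference):

```shell
# On the receiver node, start NetPIPE in listen mode:
NPtcp

# On the sender node, point it at the receiver
# (node01 is an assumed hostname):
NPtcp -h node01
```

NetPIPE sweeps message sizes from a few bytes up to megabytes, which is useful here because jumbo frames mainly help large messages while MPI apps often send many small ones.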
You may find these articles helpful:
http://www.clustermonkey.net//content/view/38/34/
http://www.clustermonkey.net//content/view/39/34/
--
Doug
Hi all,
New to this list, so I don't know if this is off-topic.
I'd like to hear about experiences with MPI performance gains from
jumbo frames. I manage a Beowulf cluster (42 Athlon XP nodes, Gentoo
Linux) with gigabit Ethernet, where Fluent, OpenFOAM, and other MPI
apps are run. With NFS I'm fairly sure what kind of gain I would see,
but with MPI apps I'm worried after seeing this page:
http://www.scl.ameslab.gov/Projects/IBMCluster/Benchmarks.html
regards
_______________________________________________
Beowulf mailing list, [email protected]
To change your subscription (digest mode or unsubscribe) visit
http://www.beowulf.org/mailman/listinfo/beowulf