Dear All,

It is a somewhat lame approach; however, I managed to get the following kernel parameters to scale well in terms of both per-node performance and scalability over a high-bandwidth, low-latency network:
net.ipv4.tcp_workaround_signed_windows = 1
net.ipv4.tcp_congestion_control = vegas
net.ipv4.tcp_tso_win_divisor = 8
net.ipv4.tcp_rmem = 4096 87380 174760
net.ipv4.tcp_wmem = 4096 16384 131072
net.ipv4.tcp_mem = 786432 1048576 1572864
net.ipv4.route.max_size = 8388608
net.ipv4.route.gc_thresh = 524288
net.ipv4.icmp_ignore_bogus_error_responses = 0
net.ipv4.icmp_echo_ignore_broadcasts = 0
net.ipv4.tcp_max_orphans = 262144
net.core.netdev_max_backlog = 2000

(A short sketch of how I load and apply these follows below the quoted message.)

regards

Walid

2008/6/13 Walid <[EMAIL PROTECTED]>:
> 2008/6/13 Jason Clinton <[EMAIL PROTECTED]>:
>>
>> We've seen fairly erratic behavior induced by newer drivers for NVidia
>> NForce-based NICs with forcedeth. If that's your source NIC in the above
>> scenario, that could be the source of the issue, as congestion timing has
>> probably changed. Have you tried updating your source NIC driver to
>> whichever is the newest? Nearly all NIC vendors whose chips are
>> incorporated on server motherboards put out updated drivers on their
>> websites.
>>
> Jason,
>
> The NIC is a Broadcom (bnx2 driver). I have tried updating the driver,
> although I had to go back to the RHEL 5.1 kernel, as the driver does not
> compile on newer kernels; it did not change much. I have also played with
> the different congestion control settings, and even though Vegas seems to
> be the most capable in terms of performance, it still did not solve the
> problem. I had one test, which I did not write down, where it did perform
> well; I have saved the sysctl output and will check which parameters made
> the difference.
>
> regards
>
> Walid
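For anyone who wants to try these, here is a minimal sketch of how I load Vegas and apply the settings, assuming root, a stock sysctl(8), and a kernel that ships tcp_vegas as a module; adjust the config file path to your distro's convention:

    # Vegas is normally built as a module; load it and confirm it shows up
    modprobe tcp_vegas
    cat /proc/sys/net/ipv4/tcp_available_congestion_control

    # Try a single setting at runtime first (not persistent across reboots)
    sysctl -w net.ipv4.tcp_congestion_control=vegas

    # To make all of the settings above persistent, append them to
    # /etc/sysctl.conf and reload:
    sysctl -p /etc/sysctl.conf

Testing with sysctl -w first makes it easy to back out one parameter at a time before committing anything to /etc/sysctl.conf.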