Hopefully the PSC folks (or other parties) can convince the OpenSSH
maintainers to allow some options (dynamic buffers, disabled encryption
of the non-handshake payload, etc.) that result in higher-bandwidth
transfers.
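For what it's worth, the PSC HPN patch already exposes the none cipher
through a couple of ssh options. A rough sketch of the invocation, with
the caveat that the option names come from the HPN patch and may differ
between versions:

    # authenticate and key-exchange as usual, then switch the bulk
    # data stream to the "none" cipher (HPN-patched ssh/scp only)
    scp -oNoneEnabled=yes -oNoneSwitch=yes bigfile user@far.example.org:/data/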
the switch-to-none feature is a good one. I'm a bit skeptical about
the buffering changes, since you can accomplish the same thing with
sysctls. (has anyone ever experienced a real case where too-large
sysctl net memory settings caused problems? obviously, attempting
to do long-fat-pipe transfers to a heavily used web server might
be a problem, since the latter wants tight controls on sock mem use.)
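to be concrete, the sysctls I mean are the usual Linux TCP memory
knobs, something like (values illustrative, not a recommendation):

    # per-socket ceilings
    sysctl -w net.core.rmem_max=8388608
    sysctl -w net.core.wmem_max=8388608
    # min / default / max bounds for TCP buffer autotuning
    sysctl -w net.ipv4.tcp_rmem="4096 87380 8388608"
    sysctl -w net.ipv4.tcp_wmem="4096 65536 8388608"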
the really cool thing would be if you could associate a default
setting for socket buffers with a _route_. heck, a route-port combo.
it seems crazy for apps to be messing with these issues.
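(fwiw, Linux's iproute2 can already clamp the TCP window per route,
which is at least a cousin of what I'm asking for:

    # cap the TCP window advertised to this destination; addresses are
    # made up, and this clamps rather than sets a buffer default
    ip route add 192.0.2.0/24 via 10.0.0.1 window 65535

not per-port, and a ceiling rather than a default, but it shows the
kernel could plausibly hang such policy off the routing table.)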
ps - For ssh transfers over long distances (high-bandwidth and high
latency) one can break the data up into pieces and run multiple
ssh transfers in parallel. Transmitting terabyte data sets from
the West Coast to the East Coast over Abilene, I've seen >10X
speedups (that is, >10X the overall throughput) when running
15--20 simultaneous ssh sessions.
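A minimal sketch of the trick, assuming GNU split and made-up host and
path names (the chunks get reassembled with cat on the far end):

    # carve the data set into ~1 GB chunks
    split -b 1024m dataset.tar chunk.
    # push all chunks at once, each over its own ssh connection
    for f in chunk.*; do
        scp "$f" user@east.example.org:/staging/ &
    done
    wait    # block until every background scp finishes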
this implies inappropriate buffer size settings, no? what's the bw*delay
product for that path?
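back-of-the-envelope, assuming a ~70 ms coast-to-coast RTT and a
1 Gb/s bottleneck at the end hosts (both numbers are guesses, not
measurements):

    bw * delay = 1e9 bit/s * 0.07 s = 7e7 bits ~= 8.75 MB

so a stock 64 KB socket buffer caps a single stream at roughly
64 KB / 70 ms ~= 7.5 Mb/s, which is at least consistent with needing
15-20 parallel streams to fill the pipe.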