On Mon, Feb 12, 2007 at 01:30:18PM +0000, Jon Morby wrote:
> On 12 Feb 2007, at 13:18, Stuart Henderson wrote:
> 
> >On 2007/02/12 12:44, Jon Morby wrote:
> >>My problem is that graphs of the 2 cisco ports show traffic is only
> >>going via the 1 port and not being balanced across both ports as I
> >>would have expected.
> >
> >loadbalance hashes the header to determine which link to use; you  
> >might
> >want round-robin instead? check trunk(4) for descriptions.
> 
> 
> I should have said, I have also tried with roundrobin
> 
> and also removing the channel-group from the switch.
> 

the default cisco port/etherchannel link aggregation works similarly
to the loadbalance mode. some switches/ios versions allow the use of
other distribution methods, such as roundrobin.
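for example, on many catalyst-style switches the hash inputs can be
changed globally (a sketch only; the exact options vary by platform
and ios version):

```shell
! global configuration on a catalyst-style switch (options vary by model)
port-channel load-balance src-dst-ip
! verify which method is currently in use
show etherchannel load-balance
```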

> The only real performance increase I've seen is with the channel- 
> group removed in which case we do see some traffic across both ports,  
> but we still only get about 1.4MB/sec and not the 1.8MB/s-2.2MB/s I  
> would have expected to see from scp transfers. (graphs show 8Mbit  
> which matches what I'm seeing from scp)
> 
> With the ports set to GigE we see a major speed increase, so it's not  
> a bottle neck on the sending machines as far as I can ascertain.
> 

again, the roundrobin mode will distribute every single packet across
the ports, so you may get a speed increase even with single
connections.
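on the openbsd side, a roundrobin trunk can be set up roughly like
this (a sketch; em0/em1 and the address are example values):

```shell
# create a trunk(4) interface and add two example member ports
ifconfig trunk0 create
ifconfig trunk0 trunkproto roundrobin trunkport em0 trunkport em1 \
        192.168.1.10 netmask 255.255.255.0
```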

the loadbalance mode will hash the packet headers (src/dst ethernet
address, src/dst ip, vlan) and distribute the connections over the
ports. you may get an overall bandwidth increase with many connections
from different addresses/vlans.
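the idea behind hash-based distribution can be sketched roughly like
this (illustrative python only, not the actual trunk(4) hash):

```python
import hashlib

def select_port(src_mac, dst_mac, src_ip, dst_ip, vlan, nports):
    # hash the flow identifiers; the same flow always maps to the same
    # port, so a single connection can never use more than one link.
    key = f"{src_mac}{dst_mac}{src_ip}{dst_ip}{vlan}".encode()
    digest = hashlib.sha256(key).digest()
    return digest[0] % nports

# the same connection always lands on the same port:
p1 = select_port("aa:bb", "cc:dd", "10.0.0.1", "10.0.0.2", 100, 2)
p2 = select_port("aa:bb", "cc:dd", "10.0.0.1", "10.0.0.2", 100, 2)
assert p1 == p2
```

this is why many different flows are needed before the aggregate
bandwidth approaches the sum of the links.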

by default, all the known vendors do hash-based loadbalancing
(cizzco-eeh etherchannel/FEC, hp trunk, ...). it is a marketing lie
that it will multiply the performance by the number of ports; the
result depends heavily on your individual network traffic and the
number of distinct connections, and a single connection will never
exceed the maximum speed of one link.

as i said, roundrobin mode may increase the speed, but it also
increases the interrupt load and has other costs, and it doesn't work
very well with non-openbsd systems on the other side. i have seen it
work well only once: ~166Mbit/s over a crosslink trunk between 2x2
rl(4) nics.

use trunking/bonding to increase the bandwidth and to add additional
layer 2 redundancy.
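if redundancy rather than raw throughput is the main goal, the
failover mode keeps one port active and switches to the next member
when it goes down (a sketch; interface names are examples):

```shell
# hypothetical member interfaces em0/em1; em0 carries traffic until it fails
ifconfig trunk0 create
ifconfig trunk0 trunkproto failover trunkport em0 trunkport em1 \
        192.168.1.1 netmask 255.255.255.0
```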

reyk
