Another question that bothers me is the bonding code's multicast-related behavior when it does a fail-over.

From what I see in bond_mc_swap(), set_multicast_list() is handled well: dev_mc_delete() is called for the old slave (so if the old slave has a link again in the future, it will have left the multicast groups) and dev_mc_add() is called for the new active slave (so the active slave joins the multicast groups).
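
To check my reading, below is a small user-space model of the hand-off I understand bond_mc_swap() to do. The types and helpers here (slave_dev, mc_add(), mc_del(), mc_swap()) are made up for the example, and promiscuity/allmulti handling and locking are ignored; only the "dev_mc_delete() on the old slave / dev_mc_add() on the new slave" pattern reflects what I see in the driver:

#include <stdio.h>
#include <string.h>

#define MAX_MC   8
#define ADDR_LEN 6

struct slave_dev {
        const char   *name;
        unsigned char mc[MAX_MC][ADDR_LEN];
        int           nr_mc;
};

/* stand-in for dev_mc_add(): the slave starts filtering/joining this group */
static void mc_add(struct slave_dev *s, const unsigned char *addr)
{
        if (s->nr_mc < MAX_MC)
                memcpy(s->mc[s->nr_mc++], addr, ADDR_LEN);
}

/* stand-in for dev_mc_delete(): the slave stops filtering this group */
static void mc_del(struct slave_dev *s, const unsigned char *addr)
{
        for (int i = 0; i < s->nr_mc; i++) {
                if (!memcmp(s->mc[i], addr, ADDR_LEN)) {
                        memmove(s->mc[i], s->mc[i + 1],
                                (size_t)(s->nr_mc - i - 1) * ADDR_LEN);
                        s->nr_mc--;
                        return;
                }
        }
}

/* the fail-over hand-off: walk the bond's mc list, remove every entry
 * from the old active slave and program it into the new active slave */
static void mc_swap(unsigned char bond_mc[][ADDR_LEN], int nr,
                    struct slave_dev *new_active, struct slave_dev *old_active)
{
        for (int i = 0; i < nr; i++) {
                if (old_active)
                        mc_del(old_active, bond_mc[i]);
                if (new_active)
                        mc_add(new_active, bond_mc[i]);
        }
}

int main(void)
{
        unsigned char bond_mc[][ADDR_LEN] = {
                { 0x01, 0x00, 0x5e, 0x00, 0x00, 0x01 },  /* 224.0.0.1 */
                { 0x01, 0x00, 0x5e, 0x01, 0x02, 0x03 },
        };
        struct slave_dev eth0 = { .name = "eth0" }, eth1 = { .name = "eth1" };

        mc_swap(bond_mc, 2, &eth0, NULL);   /* eth0 becomes active */
        mc_swap(bond_mc, 2, &eth1, &eth0);  /* fail-over to eth1 */

        printf("%s: %d mc entries, %s: %d mc entries\n",
               eth0.name, eth0.nr_mc, eth1.name, eth1.nr_mc);
        return 0;
}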

As for sending IGMP, in bond_xmit_activebackup() I see the following comment: "Xmit IGMP frames on all slaves to ensure rapid fail-over for multicast traffic on snooping switches".

As I don't see any buffering of the IGMP packets, I understand there is no replay of them at fail-over time, which means the snooping switch will only learn about the fail-over when the router next sends an IGMP query and this node answers it over the new active slave. Is that indeed what's going on? If I understand correctly, it would take a meaningful amount of time for the fail-over to be externally visible in this respect.
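
As a rough back-of-the-envelope, assuming the IGMPv2 defaults from RFC 2236 and that nothing is replayed at fail-over time, the multicast outage can approach a full query interval:

#include <stdio.h>

int main(void)
{
        const double query_interval_s = 125.0;  /* IGMPv2 default Query Interval */
        const double max_response_s   = 10.0;   /* IGMPv2 default Max Response Time */

        /* worst case: the fail-over happens right after this node answered
         * the previous general query, so the switch only re-learns the group
         * on the new port one full query interval (plus the report delay) later */
        double worst_case_s = query_interval_s + max_response_s;

        printf("worst-case multicast outage after fail-over: ~%.0f seconds\n",
               worst_case_s);
        return 0;
}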

Also, assuming it does exactly what the comment says, another issue I see here is that when more than just the active slave is UP, the code would TX the IGMP packets over more than one slave, and hence multicast packets would also be sent by the switch to ports "connected to" non-active slaves, something which will hurt the system's performance!?
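
To illustrate the concern, here is my reading of that comment as a small user-space model. The structures and helpers below are invented for the illustration and this is not the driver's bond_xmit_activebackup() itself (skb handling, locking, etc. are omitted); only the "IGMP goes out on every slave that is up" policy reflects my reading of the comment:

#include <netinet/in.h>         /* IPPROTO_IGMP, IPPROTO_TCP */
#include <stdbool.h>
#include <stdio.h>

struct model_slave {
        const char *name;
        bool        up;
        bool        active;     /* models bond->curr_active_slave */
};

static void xmit_on(const struct model_slave *s, const char *what)
{
        printf("xmit %s on %s\n", what, s->name);
}

static void activebackup_xmit(const struct model_slave *slaves, int nr,
                              int ip_protocol, const char *what)
{
        if (ip_protocol == IPPROTO_IGMP) {
                /* IGMP: copied out on every slave that is up, so a snooping
                 * switch learns the group on all of those ports */
                for (int i = 0; i < nr; i++)
                        if (slaves[i].up)
                                xmit_on(&slaves[i], what);
                return;
        }

        /* everything else: only the current active slave transmits */
        for (int i = 0; i < nr; i++)
                if (slaves[i].active)
                        xmit_on(&slaves[i], what);
}

int main(void)
{
        const struct model_slave slaves[] = {
                { "eth0", true, true  },        /* active slave */
                { "eth1", true, false },        /* backup slave, link is up */
        };

        activebackup_xmit(slaves, 2, IPPROTO_IGMP, "IGMP report");
        activebackup_xmit(slaves, 2, IPPROTO_TCP,  "TCP segment");
        return 0;
}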

Or.