>>> Jehan-Guillaume de Rorthais <[email protected]> wrote on 15.07.2021 at 10:09 in message <20210715100930.06b45f5b@firost>:
> Hi all,
>
> On Tue, 13 Jul 2021 19:55:30 +0000 (UTC)
> Strahil Nikolov <[email protected]> wrote:
>
>> In some cases the third location has a single IP and it makes sense to use it
>> as QDevice. If it has multiple network connections to that location, use a
>> full-blown node.
>
> By the way, what's the point of multiple rings in corosync when we can set up
> bonding or teaming on the OS layer?

Good question: back in the days of HP-UX and ServiceGuard we had two networks,
each using bonding, to ensure cluster communication. With Linux and Pacemaker
we have the same, BUT corosync (as of SLES15 SP2) seems to use the rings not
for redundancy, but in parallel. That is most noticeable if your rings run at
different network speeds (like 1000 vs. 100, or 10000 vs. 1000): the slower
network slows down ALL cluster communication. (In contrast, HP-UX ServiceGuard
would _switch_ to the secondary network when the primary appeared to have
failed, and back again.) It seems there was a similar idea for Linux, but the
implementation is bad.

> I remember some time ago bonding was recommended over corosync rings, because
> the totem protocol on multiple rings wasn't as flexible as bonding/teaming,
> and multiple rings were only useful to corosync/pacemaker, whereas bonding
> was useful for all other services on the server.
>
> ...But that was before the knet era. Did it change?

Sorry, I don't know knet yet.

Regards,
Ulrich

_______________________________________________
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/
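[Archive note: for readers following this thread, the "use rings in parallel vs. switch on failure" behaviour discussed above is configurable in corosync 3.x with knet, via the totem `link_mode` option. A minimal illustrative `corosync.conf` totem fragment, assuming corosync 3.x; the cluster name and priority values are made up, and corosync.conf(5) is the authoritative reference:]

```
totem {
    version: 2
    cluster_name: example        # made-up name
    # link_mode selects how knet uses multiple links:
    #   passive - only the highest-priority healthy link carries traffic,
    #             with failover on link failure (similar to the ServiceGuard
    #             switch-over behaviour described above)
    #   active  - traffic is sent on all links in parallel
    #   rr      - round-robin across links
    link_mode: passive
    interface {
        linknumber: 0
        knet_link_priority: 1    # preferred link in passive mode
    }
    interface {
        linknumber: 1
        knet_link_priority: 0    # fallback link
    }
}
```

[In passive mode, which is as far as I know the default, a slower secondary ring should no longer slow down all cluster traffic; that effect would apply in active or rr mode.]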
