Hi,

On 3/9/18 5:26 PM, Jan Friesse wrote:
> ...
>
>> TotemConfchgCallback: ringid (1.1436)
>> active processors 3: 1 2 3
>> EXIT
>> Finalize result is 1 (should be 1)
>>
>> Hope I did both tests right, but as it reproduces multiple times
>> with testcpg and with our cpg usage in our filesystem, this seems
>> like a valid test result, not just a single occurrence.
>
> I've tested it too and yes, you are 100% right. The bug is there and it's
> pretty easy to reproduce when the node with the lowest nodeid is paused.
> It's slightly harder when a node with a higher nodeid is paused.
Good, so we're not crazy :)

> Most of the clusters are using power fencing, so they simply never see this
> problem. That may also be the reason why it wasn't reported a long time ago
> (this bug has existed virtually at least since OpenAIS Whitetank). So really
> nice work with finding this bug.

Hmm, but even short pauses (1 to 2 seconds) cause this, and those should not
be long enough for fencing to kick in. We had a theory that environment
changes, e.g. scheduler or I/O subsystem changes in the kernel, let this bug
trigger more often, as we have seen a significant rise in reports in recent
years. (Our user base has grown too, but the increase does not feel like mere
correlation.)

> What I'm not entirely sure about is what may be the best way to solve this
> problem. What I'm sure about is that it's going to be "fun" :(
>
> Let's start with a very high level view of possible solutions:
>
> - "Ignore the problem". CPG behaves more or less correctly. The "current"
>   membership really didn't change, so it doesn't make too much sense to
>   inform about a change. It's possible to use cpg_totem_confchg_fn_t to find
>   out when the ringid changes. I'm adding this solution just for
>   completeness, because I don't prefer it at all.

Same here. I mean, we could work around this, but it does not really feel
right. Our code is designed with the assumption that we get a membership
callback; changing that assumption seems like a bit of a headache, as we
would need to verify that no side effects get introduced by the workaround
and that everything can cope with it. Doable, but also not too much fun :)

> - cpg_confchg_fn_t adds all left and rejoined nodes into the left/join
>   lists.

That would work for us.

> - cpg will send an extra cpg_confchg_fn_t call about left and joined nodes.
>   I would prefer this solution simply because it makes cpg behavior equal in
>   all situations.

So the behaviour you assumed it should have? Getting two callbacks, one
saying that all others left, and then one where all others joined in the new
membership?
This sounds like the best approach to me, as it really tells the CPG
application what happened in the way all other members see it. But I'm not a
corosync guru :)

> Which of the options would you prefer? Same question also for @Ken (-> what
> would you prefer for PCMK) and @Chrissie.

The last approach.

Cheers and many thanks for your help!
Thomas

_______________________________________________
Users mailing list: [email protected]
https://lists.clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org
