On 08/26/2017 09:53 PM, strahil nikolov wrote:
> Hello everyone,
>
> As this is my first post to the mailing list, please excuse me.
>
> Here is the reason I'm writing to you. I have 3 VM machines (kvm/qemu),
> each with a watchdog of type 'i6300esb', running RHEL 7.4, and an iSCSI
> target as shared storage.
> I have created the 3-node cluster, and the poison pill (pcs stonith fence
> node_name) works, but I can't make the sbd daemon self-fence the node
> once the network is cut off (firewall-cmd --panic-on).
>
> The strange thing is that the sbd daemon detects that the storage is
> offline (I've stripped out the clutter):
>
> sbd[pid]: warning: inquisitor_child: Servant <iSCSI Disk> is outdated (age: 4)
> sbd[pid]: warning: inquisitor_child: Majority of devices lost - surviving on pacemaker
> sbd[pid]: <iSCSI Disk>: error: header_get: Unable to read header from device 6
The log says it: "surviving on pacemaker". If you have pacemaker
observation activated, sbd won't self-fence as long as it sees a
(quorate) cluster.

Regards,
Klaus

> The servant keeps getting restarted, but there is no self-fencing. I
> thought the issue was in the watchdog, but immediately after killing the
> sbd main pid, the node gets reset (as expected).
>
> This is the configuration in "/etc/sysconfig/sbd":
>
> SBD_DELAY_START=no
> SBD_DEVICE="/full/path/to/by-id/iscsi"
> SBD_OPTS="-n harhel1"
> SBD_PACEMAKER=yes
> SBD_STARTMODE=always
> SBD_WATCHDOG_DEV=/dev/watchdog
> SBD_WATCHDOG_TIMEOUT=5
>
> I have used the following example for setting up sbd:
> https://access.redhat.com/articles/3099231
>
> Thank you for reading this long e-mail. I would be grateful if someone
> finds my mistake.
>
> Best Regards,
> Strahil Nikolov
>
> _______________________________________________
> Users mailing list: [email protected]
> http://lists.clusterlabs.org/mailman/listinfo/users
>
> Project Home: http://www.clusterlabs.org
> Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
> Bugs: http://bugs.clusterlabs.org
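[Editor's note: for readers hitting the same behavior, the interaction Klaus describes is governed by the SBD_PACEMAKER line in the configuration Strahil posted. A minimal sketch of the relevant /etc/sysconfig/sbd fragment, with comments summarizing the behavior as described in this thread (not an exhaustive reference):]

```
# /etc/sysconfig/sbd (fragment; values from the configuration above)
#
# With SBD_PACEMAKER=yes, sbd also watches the cluster: if the node can
# still see a quorate pacemaker cluster, losing the shared SBD disk(s)
# alone does NOT trigger self-fencing - sbd logs
# "Majority of devices lost - surviving on pacemaker" and keeps running.
SBD_PACEMAKER=yes

# The shared poison-pill device and the hardware watchdog that enforces
# self-fencing if the sbd daemon itself stops petting it (which is why
# killing the sbd main pid resets the node, as observed above).
SBD_DEVICE="/full/path/to/by-id/iscsi"
SBD_WATCHDOG_DEV=/dev/watchdog
SBD_WATCHDOG_TIMEOUT=5
```

So in the reported test, the firewall panic cut off the iSCSI storage, but the node still considered the cluster quorate, and sbd deliberately survived on pacemaker rather than self-fencing.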
