Correct, in addition to the two cluster nodes there is a dedicated third 
physical server acting as the qdevice.

I'm thinking about a two-level fencing topology: 1st level - fence_ipmilan, 
2nd - diskless sbd (hpwdt, /dev/watchdog).

But I can't add sbd as a 2nd-level fencing device:

[root@memverge2 ~]# pcs stonith level add 2 memverge watchdog
Error: Stonith resource(s) 'watchdog' do not exist, use --force to override
Error: Errors have occurred, therefore pcs is unable to continue
[root@memverge2 ~]#
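If I read the sbd(8) man page and the Pacemaker docs correctly, diskless sbd is 
not represented by a stonith resource at all - watchdog self-fencing is switched 
on via the stonith-watchdog-timeout cluster property, and "watchdog" in a 
topology level is a literal keyword, so pcs has nothing to validate against and 
apparently has to be forced. Something like the following (the IPMI resource 
name and the timeout value are just examples, not from my actual config):

```shell
# Diskless sbd has no stonith resource; watchdog self-fencing is enabled
# via a cluster property (timeout value here is only an example):
pcs property set stonith-watchdog-timeout=10s

# Level 1: IPMI power fencing (resource name is hypothetical)
pcs stonith level add 1 memverge fence_memverge_ipmi

# Level 2: the implicit "watchdog" device; pcs cannot validate it,
# hence --force:
pcs stonith level add 2 memverge watchdog --force
```

But I'm not sure whether forcing the level this way is the intended approach.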

So back to the original question - what is the most correct way of implementing 
STONITH/fencing with fence_ipmilan + diskless sbd (hpwdt, /dev/watchdog)?

Anton


-----Original Message-----
From: Andrei Borzenkov <[email protected]> 
Sent: Thursday, February 5, 2026 1:17 PM
To: Cluster Labs - All topics related to open-source clustering welcomed 
<[email protected]>
Cc: Anton Gavriliuk <[email protected]>
Subject: Re: [ClusterLabs] Question about two level STONITH/fencing

On Thu, Feb 5, 2026 at 2:07 PM Klaus Wenninger <[email protected]> wrote:
>
>
>
> On Wed, Feb 4, 2026 at 4:36 PM Anton Gavriliuk via Users 
> <[email protected]> wrote:
>>
>>
>>
>> Hello
>>
>>
>>
>> There is two-node (HPE DL345 Gen12 servers) shared-nothing DRBD-based sync 
>> (Protocol C) replication, distributed active/standby pacemaker storage 
>> metro-cluster. The distributed active/standby pacemaker storage 
>> metro-cluster configured with qdevice, heuristics (parallel fping) and 
>> fencing - fence_ipmilan and diskless sbd (hpwdt, /dev/watchdog). All cluster 
>> resources are configured to always run together on the same node.
>>
>>
>>
>> The two storage cluster nodes and qdevice running on Rocky Linux 10.1
>>
>> Pacemaker version 3.0.1
>>
>> Corosync version 3.1.9
>>
>> DRBD version 9.3.0
>>
>>
>>
>> So, the question is – what is the most correct way of implementing 
>> STONITH/fencing with fence_ipmilan + diskless sbd (hpwdt, /dev/watchdog) ?
>
>
> The correct way of using diskless sbd with a two-node cluster is not 
> to use it ;-)
>
> diskless sbd (watchdog-fencing) requires 'real' quorum and quorum 
> provided by corosync in two-node mode would introduce split-brain 
> which is the reason why sbd recognizes the two-node operation and 
> replaces quorum from corosync by the information that the peer node is 
> currently in the cluster. This is fine for working with poison-pill fencing - 
> a single shared disk then doesn't become a single point of failure as 
> long as the peer is there. But for watchdog-fencing that doesn't help because 
> the peer going away would mean you have to commit suicide.
>
> An alternative with a two-node cluster is to step away from the actual 
> two-node design and go with qdevice for 'real' quorum.

Hmm ... the original description does mention qdevice, although it is not quite 
clear where it is located (is there the third node?)

> You'll need some kind of 3rd node but it doesn't have to be a full cluster 
> node.
>

_______________________________________________
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/