I isolated the log of when everything happens (when I disable the HA
interface); it is attached here.
Gabriele
Sonicle S.r.l. : http://www.sonicle.com
Music: http://www.gabrielebulfon.com
eXoplanets : https://gabrielebulfon.bandcamp.com/album/exoplanets
----------------------------------------------------------------------------------
From: Ulrich Windl <[email protected]>
To: [email protected]
Date: 14 December 2020 11:53:22 CET
Subject: [ClusterLabs] Antw: Re: Antw: [EXT] Recovering from node failure
>>> Gabriele Bulfon <[email protected]> wrote on 14.12.2020 at 11:48 in
message <1065144646.7212.1607942889206@www>:
> Thanks!
>
> I tried the first option, adding pcmk_delay_base to the two stonith
> primitives: the first has 1 second, the second has 5 seconds.
> It didn't work :( they still killed each other :(
> Anything wrong with the way I did it?
Hard to say without seeing the logs...
>
> Here's the config:
>
> node 1: xstha1 \
> attributes standby=off maintenance=off
> node 2: xstha2 \
> attributes standby=off maintenance=off
> primitive xstha1-stonith stonith:external/ipmi \
> params hostname=xstha1 ipaddr=192.168.221.18 userid=ADMIN
> passwd="***" interface=lanplus pcmk_delay_base=1 \
> op monitor interval=25 timeout=25 start-delay=25 \
> meta target-role=Started
> primitive xstha1_san0_IP IPaddr \
> params ip=10.10.10.1 cidr_netmask=255.255.255.0 nic=san0
> primitive xstha2-stonith stonith:external/ipmi \
> params hostname=xstha2 ipaddr=192.168.221.19 userid=ADMIN
> passwd="***" interface=lanplus pcmk_delay_base=5 \
> op monitor interval=25 timeout=25 start-delay=25 \
> meta target-role=Started
> primitive xstha2_san0_IP IPaddr \
> params ip=10.10.10.2 cidr_netmask=255.255.255.0 nic=san0
> primitive zpool_data ZFS \
> params pool=test \
> op start timeout=90 interval=0 \
> op stop timeout=90 interval=0 \
> meta target-role=Started
> location xstha1-stonith-pref xstha1-stonith -inf: xstha1
> location xstha1_san0_IP_pref xstha1_san0_IP 100: xstha1
> location xstha2-stonith-pref xstha2-stonith -inf: xstha2
> location xstha2_san0_IP_pref xstha2_san0_IP 100: xstha2
> order zpool_data_order inf: zpool_data ( xstha1_san0_IP )
> location zpool_data_pref zpool_data 100: xstha1
> colocation zpool_data_with_IPs inf: zpool_data xstha1_san0_IP
> property cib-bootstrap-options: \
> have-watchdog=false \
> dc-version=1.1.15-e174ec8 \
> cluster-infrastructure=corosync \
> stonith-action=poweroff \
> no-quorum-policy=stop
>
>
> ----------------------------------------------------------------------------------
>
> From: Andrei Borzenkov <[email protected]>
> To: [email protected]
> Date: 13 December 2020 7:50:57 CET
> Subject: Re: [ClusterLabs] Antw: [EXT] Recovering from node failure
>
>
> 12.12.2020 20:30, Gabriele Bulfon wrote:
>> Thanks, I will experiment this.
>>
>> Now, I have a last issue about stonith.
>> I tried to reproduce a stonith situation by disabling the network
>> interface used for HA on node 1.
>> Stonith is configured with ipmi poweroff.
>> What happens is that once the interface is down, both nodes try to
>> stonith the other node, causing both to power off...
>
> Yes, this is expected. The options are basically
>
> 1. Have a separate stonith resource for each node and configure static
> (pcmk_delay_base) or random dynamic (pcmk_delay_max) delays to avoid
> both nodes starting stonith at the same time. This does not take
> resources into account.
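As a hedged illustration of the random-delay variant (the static pcmk_delay_base form already appears in the config earlier in this thread), pcmk_delay_max adds a random delay before the device fires; the fragment below reuses this thread's primitive names, and the delay value is purely illustrative:

```
# illustrative crm shell fragment: random delay of up to 10s before fencing,
# so simultaneous fence requests from both nodes rarely fire together
primitive xstha2-stonith stonith:external/ipmi \
        params hostname=xstha2 ipaddr=192.168.221.19 userid=ADMIN \
              passwd="***" interface=lanplus pcmk_delay_max=10 \
        op monitor interval=25 timeout=25
```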
>
> 2. Use fencing topology and create a pseudo-stonith agent that does not
> attempt to do anything but just delays for some time before continuing
> with the actual fencing agent. The delay can be based on anything,
> including the resources running on the node.
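A minimal sketch of the delay logic such a pseudo-agent could use. The helper name is hypothetical; a real agent would discover the resource holder itself (e.g. via crm_resource --locate) and implement the full set of stonith plugin actions:

```shell
#!/bin/sh
# choose_fence_delay: pick the pre-fencing delay (in seconds) for this node.
# The node holding the critical resource gets a short delay, so it wins the
# fencing race; the node without the resource waits longer.
choose_fence_delay() {
    holder="$1"   # node currently running the critical resource
    me="$2"       # local node name (uname -n in a real agent)
    if [ "$holder" = "$me" ]; then
        echo 1    # resource holder: fence the peer almost immediately
    else
        echo 10   # non-holder: give the holder time to win the race
    fi
}
# In the real agent: sleep "$(choose_fence_delay ...)" before exiting 0,
# with the agent placed at the same fencing-topology level as (and ahead
# of) the actual IPMI agent.
```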
>
> 3. If you are using pacemaker 2.0.3+, you could use the new
> priority-fencing-delay feature, which implements resource-based priority
> fencing:
>
> + controller/fencing/scheduler: add new feature 'priority-fencing-delay'
>   Optionally derive the priority of a node from the resource-priorities
>   of the resources it is running.
>   In a fencing race the node with the highest priority has a certain
>   advantage over the others, as fencing requests for that node are
>   executed with an additional delay.
>   Controlled via cluster option priority-fencing-delay (default = 0).
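A hedged sketch of option 3 using this thread's resource names (assumes pacemaker 2.0.3+; the priority meta attribute and the cluster property are the documented names, but the values here are guesses):

```
# give the critical resource a priority, so the node running it derives
# a higher node priority
primitive zpool_data ZFS \
        params pool=test \
        meta priority=100
# fence requests against the higher-priority node are delayed by 15s,
# so it wins a mutual-fencing race
property cib-bootstrap-options: \
        priority-fencing-delay=15s
```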
>
>
> See also https://www.mail-archive.com/[email protected]/msg10328.html
>
>> I would like the node running all the resources (zpool and NFS IP) to be
>> the first to try to stonith the other node.
>> Or is there anything better?
>>
>> Here is the current crm config show:
>>
>
> It is unreadable
>
>> node 1: xstha1 \
>>         attributes standby=off maintenance=off
>> node 2: xstha2 \
>>         attributes standby=off maintenance=off
>> primitive xstha1-stonith stonith:external/ipmi \
>>         params hostname=xstha1 ipaddr=192.168.221.18 userid=ADMIN
>>         passwd="******" interface=lanplus \
>>         op monitor interval=25 timeout=25 start-delay=25 \
>>         meta target-role=Started
>> primitive xstha1_san0_IP IPaddr \
>>         params ip=10.10.10.1 cidr_netmask=255.255.255.0 nic=san0
>> primitive xstha2-stonith stonith:external/ipmi \
>>         params hostname=xstha2 ipaddr=192.168.221.19 userid=ADMIN
>>         passwd="******" interface=lanplus \
>>         op monitor interval=25 timeout=25 start-delay=25 \
>>         meta target-role=Started
>> primitive xstha2_san0_IP IPaddr \
>>         params ip=10.10.10.2 cidr_netmask=255.255.255.0 nic=san0
>> primitive zpool_data ZFS \
>>         params pool=test \
>>         op start timeout=90 interval=0 \
>>         op stop timeout=90 interval=0 \
>>         meta target-role=Started
>> location xstha1-stonith-pref xstha1-stonith -inf: xstha1
>> location xstha1_san0_IP_pref xstha1_san0_IP 100: xstha1
>> location xstha2-stonith-pref xstha2-stonith -inf: xstha2
>> location xstha2_san0_IP_pref xstha2_san0_IP 100: xstha2
>> order zpool_data_order inf: zpool_data ( xstha1_san0_IP )
>> location zpool_data_pref zpool_data 100: xstha1
>> colocation zpool_data_with_IPs inf: zpool_data xstha1_san0_IP
>> property cib-bootstrap-options: \
>>         have-watchdog=false \
>>         dc-version=1.1.15-e174ec8 \
>>         cluster-infrastructure=corosync \
>>         stonith-action=poweroff \
>>         no-quorum-policy=stop
>>
>> Thanks!
>> Gabriele
>>
>>
>> ----------------------------------------------------------------------------------
>>
>> From: Andrei Borzenkov <[email protected]>
>> To: [email protected]
>> Date: 11 December 2020 18:30:29 CET
>> Subject: Re: [ClusterLabs] Antw: [EXT] Recovering from node failure
>>
>>
>> 11.12.2020 18:37, Gabriele Bulfon wrote:
>>> I found I can do this temporarily:
>>>
>>> crm config property cib-bootstrap-options: no-quorum-policy=ignore
>>>
>>
>> All two-node clusters I remember run with this setting forever :)
>>
>>> then once node 2 is up again:
>>>
>>> crm config property cib-bootstrap-options: no-quorum-policy=stop
>>>
>>> so that I make sure nodes will not mount in another strange situation.
>>>
>>> Is there any better way?
>>
>> "better" is subjective, but ...
>>
>> (such as ignore until everything is back to normal, then consider stop
>> again)
>>>
>>
>> That is what stonith does. Because quorum is pretty much useless in a
>> two-node cluster, as I already said, all clusters I have seen used
>> no-quorum-policy=ignore and stonith-enabled=true. It means that when a
>> node boots and the other node is not available, stonith is attempted; if
>> stonith succeeds, pacemaker continues with starting resources; if stonith
>> fails, the node is stuck.
>>
>> _______________________________________________
>> Manage your subscription:
>> https://lists.clusterlabs.org/mailman/listinfo/users
>>
>> ClusterLabs home: https://www.clusterlabs.org/
Dec 14 12:34:39 [651] xstorage1 corosync notice [TOTEM ] A processor failed,
forming new configuration.
Dec 14 12:34:39 [651] xstorage1 corosync notice [TOTEM ] The network interface
is down.
Dec 14 12:34:41 [651] xstorage1 corosync notice [TOTEM ] A new membership
(127.0.0.1:352) was formed. Members left: 2
Dec 14 12:34:41 [651] xstorage1 corosync notice [TOTEM ] Failed to receive the
leave message. failed: 2
Dec 14 12:34:41 [679] attrd: info: pcmk_cpg_membership: Node 2
left group attrd (peer=xstha2, counter=1.0)
Dec 14 12:34:41 [676] cib: info: pcmk_cpg_membership: Node 2
left group cib (peer=xstha2, counter=1.0)
Dec 14 12:34:41 [679] attrd: info: crm_update_peer_proc:
pcmk_cpg_membership: Node xstha2[2] - corosync-cpg is now offline
Dec 14 12:34:41 [676] cib: info: crm_update_peer_proc:
pcmk_cpg_membership: Node xstha2[2] - corosync-cpg is now offline
Dec 14 12:34:41 [679] attrd: notice: crm_update_peer_state_iter: Node
xstha2 state is now lost | nodeid=2 previous=member source=crm_update_peer_proc
Dec 14 12:34:41 [681] crmd: info: pcmk_cpg_membership: Node 2
left group crmd (peer=xstha2, counter=1.0)
Dec 14 12:34:41 [676] cib: notice: crm_update_peer_state_iter: Node
xstha2 state is now lost | nodeid=2 previous=member source=crm_update_peer_proc
Dec 14 12:34:41 [681] crmd: info: crm_update_peer_proc:
pcmk_cpg_membership: Node xstha2[2] - corosync-cpg is now offline
Dec 14 12:34:41 [679] attrd: notice: attrd_peer_remove: Removing all
xstha2 attributes for peer loss
Dec 14 12:34:41 [676] cib: info: crm_reap_dead_member:
Removing node with name xstha2 and id 2 from membership cache
Dec 14 12:34:41 [675] pacemakerd: info: pcmk_cpg_membership: Node 2
left group pacemakerd (peer=xstha2, counter=1.0)
Dec 14 12:34:41 [676] cib: notice: reap_crm_member: Purged 1 peers
with id=2 and/or uname=xstha2 from the membership cache
Dec 14 12:34:41 [677] stonith-ng: info: pcmk_cpg_membership: Node 2
left group stonith-ng (peer=xstha2, counter=1.0)
Dec 14 12:34:41 [675] pacemakerd: info: crm_update_peer_proc:
pcmk_cpg_membership: Node xstha2[2] - corosync-cpg is now offline
Dec 14 12:34:41 [676] cib: info: pcmk_cpg_membership: Node 1
still member of group cib (peer=xstha1, counter=1.0)
Dec 14 12:34:41 [675] pacemakerd: info: pcmk_cpg_membership: Node 1
still member of group pacemakerd (peer=xstha1, counter=1.0)
Dec 14 12:34:41 [651] xstorage1 corosync notice [QUORUM] Members[1]: 1
Dec 14 12:34:41 [679] attrd: info: crm_reap_dead_member:
Removing node with name xstha2 and id 2 from membership cache
Dec 14 12:34:41 [651] xstorage1 corosync notice [MAIN ] Completed service
synchronization, ready to provide service.
Dec 14 12:34:41 [681] crmd: info: peer_update_callback: Client
xstha2/peer now has status [offline] (DC=xstha2, changed=4000000)
Dec 14 12:34:41 [679] attrd: notice: reap_crm_member: Purged 1 peers
with id=2 and/or uname=xstha2 from the membership cache
Dec 14 12:34:41 [681] crmd: notice: peer_update_callback: Our
peer on the DC (xstha2) is dead
Dec 14 12:34:41 [679] attrd: info: pcmk_cpg_membership: Node 1
still member of group attrd (peer=xstha1, counter=1.0)
Dec 14 12:34:41 [675] pacemakerd: info: pcmk_quorum_notification: Quorum
retained | membership=352 members=1
Dec 14 12:34:41 [677] stonith-ng: info: crm_update_peer_proc:
pcmk_cpg_membership: Node xstha2[2] - corosync-cpg is now offline
Dec 14 12:34:41 [675] pacemakerd: notice: crm_update_peer_state_iter: Node
xstha2 state is now lost | nodeid=2 previous=member source=crm_reap_unseen_nodes
Dec 14 12:34:41 [677] stonith-ng: notice: crm_update_peer_state_iter: Node
xstha2 state is now lost | nodeid=2 previous=member source=crm_update_peer_proc
Dec 14 12:34:41 [681] crmd: info: erase_status_tag: Deleting xpath:
//node_state[@uname='xstha2']/transient_attributes
Dec 14 12:34:41 [675] pacemakerd: info: mcp_cpg_deliver: Ignoring
process list sent by peer for local node
Dec 14 12:34:41 [677] stonith-ng: info: crm_reap_dead_member:
Removing node with name xstha2 and id 2 from membership cache
Dec 14 12:34:41 [677] stonith-ng: notice: reap_crm_member: Purged 1 peers
with id=2 and/or uname=xstha2 from the membership cache
Dec 14 12:34:41 [677] stonith-ng: info: pcmk_cpg_membership: Node 1
still member of group stonith-ng (peer=xstha1, counter=1.0)
Dec 14 12:34:41 [681] crmd: info: pcmk_cpg_membership: Node 1
still member of group crmd (peer=xstha1, counter=1.0)
Dec 14 12:34:41 [681] crmd: notice: do_state_transition: State
transition S_NOT_DC -> S_ELECTION | input=I_ELECTION
cause=C_CRMD_STATUS_CALLBACK origin=peer_update_callback
Dec 14 12:34:41 [681] crmd: info: update_dc: Unset DC. Was xstha2
Dec 14 12:34:41 [676] cib: info: cib_process_request:
Forwarding cib_delete operation for section
//node_state[@uname='xstha2']/transient_attributes to all (origin=local/crmd/20)
Dec 14 12:34:41 [681] crmd: info: pcmk_quorum_notification: Quorum
retained | membership=352 members=1
Dec 14 12:34:41 [681] crmd: notice: crm_update_peer_state_iter: Node
xstha2 state is now lost | nodeid=2 previous=member source=crm_reap_unseen_nodes
Dec 14 12:34:41 [681] crmd: info: peer_update_callback: Cluster
node xstha2 is now lost (was member)
Dec 14 12:34:41 [681] crmd: info: election_complete: Election
election-0 complete
Dec 14 12:34:41 [681] crmd: info: election_timeout_popped:
Election failed: Declaring ourselves the winner
Dec 14 12:34:41 [681] crmd: info: do_log: Input I_ELECTION_DC
received in state S_ELECTION from election_timeout_popped
Dec 14 12:34:41 [681] crmd: notice: do_state_transition: State
transition S_ELECTION -> S_INTEGRATION | input=I_ELECTION_DC
cause=C_TIMER_POPPED origin=election_timeout_popped
Dec 14 12:34:41 [681] crmd: info: do_te_control: Registering TE
UUID: fa7da62d-2e8d-c08a-aa5f-b51ae18735fb
Dec 14 12:34:41 [676] cib: info: cib_perform_op: Diff: ---
0.43.25 2
Dec 14 12:34:41 [676] cib: info: cib_perform_op: Diff: +++
0.43.26 e2a929bac3293b669a65cb55363ab565
Dec 14 12:34:41 [676] cib: info: cib_perform_op: --
/cib/status/node_state[@id='2']/transient_attributes[@id='2']
Dec 14 12:34:41 [676] cib: info: cib_perform_op: + /cib:
@num_updates=26
Dec 14 12:34:41 [676] cib: info: cib_process_request:
Completed cib_delete operation for section
//node_state[@uname='xstha2']/transient_attributes: OK (rc=0,
origin=xstha1/crmd/20, version=0.43.26)
Dec 14 12:34:41 [681] crmd: info: set_graph_functions: Setting
custom graph functions
Dec 14 12:34:41 [681] crmd: info: do_dc_takeover: Taking over DC
status for this partition
Dec 14 12:34:41 [676] cib: info: cib_process_readwrite: We are
now in R/W mode
Dec 14 12:34:41 [676] cib: info: cib_process_request:
Completed cib_master operation for section 'all': OK (rc=0,
origin=local/crmd/21, version=0.43.26)
Dec 14 12:34:41 [676] cib: info: cib_process_request:
Forwarding cib_modify operation for section cib to all (origin=local/crmd/22)
Dec 14 12:34:41 [676] cib: info: cib_process_request:
Completed cib_modify operation for section cib: OK (rc=0,
origin=xstha1/crmd/22, version=0.43.26)
Dec 14 12:34:41 [676] cib: info: cib_process_request:
Forwarding cib_modify operation for section crm_config to all
(origin=local/crmd/24)
Dec 14 12:34:41 [676] cib: info: cib_process_request:
Completed cib_modify operation for section crm_config: OK (rc=0,
origin=xstha1/crmd/24, version=0.43.26)
Dec 14 12:34:41 [676] cib: info: cib_process_request:
Forwarding cib_modify operation for section crm_config to all
(origin=local/crmd/26)
Dec 14 12:34:41 [676] cib: info: cib_process_request:
Completed cib_modify operation for section crm_config: OK (rc=0,
origin=xstha1/crmd/26, version=0.43.26)
Dec 14 12:34:41 [676] cib: info: cib_process_request:
Forwarding cib_modify operation for section crm_config to all
(origin=local/crmd/28)
Dec 14 12:34:41 [681] crmd: info: corosync_cluster_name: Cannot
get totem.cluster_name: Doesn't exist (12)
Dec 14 12:34:41 [681] crmd: info: join_make_offer: Not making an
offer to xstha2: not active (lost)
Dec 14 12:34:41 [681] crmd: info: join_make_offer: Making join
offers based on membership 352
Dec 14 12:34:41 [681] crmd: info: join_make_offer: join-1: Sending
offer to xstha1
Dec 14 12:34:41 [681] crmd: info: crm_update_peer_join:
join_make_offer: Node xstha1[1] - join-1 phase 0 -> 1
Dec 14 12:34:41 [681] crmd: info: do_dc_join_offer_all: join-1:
Waiting on 1 outstanding join acks
Dec 14 12:34:41 [681] crmd: warning: do_log: Input I_ELECTION_DC
received in state S_INTEGRATION from do_election_check
Dec 14 12:34:41 [681] crmd: info: crm_update_peer_join:
initialize_join: Node xstha1[1] - join-2 phase 1 -> 0
Dec 14 12:34:41 [681] crmd: info: join_make_offer: Not making an
offer to xstha2: not active (lost)
Dec 14 12:34:41 [681] crmd: info: join_make_offer: join-2: Sending
offer to xstha1
Dec 14 12:34:41 [681] crmd: info: crm_update_peer_join:
join_make_offer: Node xstha1[1] - join-2 phase 0 -> 1
Dec 14 12:34:41 [681] crmd: info: do_dc_join_offer_all: join-2:
Waiting on 1 outstanding join acks
Dec 14 12:34:41 [681] crmd: info: update_dc: Set DC to xstha1
(3.0.10)
Dec 14 12:34:41 [681] crmd: info: crm_update_peer_expected:
update_dc: Node xstha1[1] - expected state is now member (was (null))
Dec 14 12:34:41 [676] cib: info: cib_process_request:
Completed cib_modify operation for section crm_config: OK (rc=0,
origin=xstha1/crmd/28, version=0.43.26)
Dec 14 12:34:41 [681] crmd: warning: throttle_num_cores: Couldn't read
/proc/cpuinfo, assuming a single processor: No such file or directory (2)
Dec 14 12:34:41 [681] crmd: info: parse_notifications: No
optional alerts section in cib
Dec 14 12:34:41 [681] crmd: info: crm_update_peer_join:
do_dc_join_filter_offer: Node xstha1[1] - join-2 phase 1 -> 2
Dec 14 12:34:41 [681] crmd: info: do_state_transition: State
transition S_INTEGRATION -> S_FINALIZE_JOIN | input=I_INTEGRATED
cause=C_FSA_INTERNAL origin=check_join_state
Dec 14 12:34:41 [681] crmd: info: crmd_join_phase_log: join-2:
xstha2=none
Dec 14 12:34:41 [681] crmd: info: crmd_join_phase_log: join-2:
xstha1=integrated
Dec 14 12:34:41 [681] crmd: info: do_dc_join_finalize: join-2:
Syncing our CIB to the rest of the cluster
Dec 14 12:34:41 [681] crmd: info: crm_update_peer_join:
finalize_join_for: Node xstha1[1] - join-2 phase 2 -> 3
Dec 14 12:34:41 [676] cib: info: cib_process_replace: Digest
matched on replace from xstha1: e2a929bac3293b669a65cb55363ab565
Dec 14 12:34:41 [676] cib: info: cib_process_replace:
Replaced 0.43.26 with 0.43.26 from xstha1
Dec 14 12:34:41 [676] cib: info: cib_process_request:
Completed cib_replace operation for section 'all': OK (rc=0,
origin=xstha1/crmd/32, version=0.43.26)
Dec 14 12:34:41 [676] cib: info: cib_process_request:
Forwarding cib_modify operation for section nodes to all (origin=local/crmd/33)
Dec 14 12:34:41 [676] cib: info: cib_process_request:
Completed cib_modify operation for section nodes: OK (rc=0,
origin=xstha1/crmd/33, version=0.43.26)
Dec 14 12:34:41 [676] cib: info: cib_file_backup: Archived
previous version as /sonicle/var/cluster/lib/pacemaker/cib/cib-31.raw
Dec 14 12:34:41 [676] cib: info: cib_file_write_with_digest: Wrote
version 0.43.0 of the CIB to disk (digest: 614d7f9bd4a1e1b3134b91b3b996b053)
Dec 14 12:34:41 [676] cib: info: cib_file_write_with_digest: Reading
cluster configuration file /sonicle/var/cluster/lib/pacemaker/cib/cib.CJaOre
(digest: /sonicle/var/cluster/lib/pacemaker/cib/cib.DJaOre)
Dec 14 12:34:41 [681] crmd: info: action_synced_wait: Managed
ZFS_meta-data_0 process 2199 exited with rc=0
Dec 14 12:34:41 [681] crmd: info: action_synced_wait: Managed
IPaddr_meta-data_0 process 2202 exited with rc=0
Dec 14 12:34:41 [681] crmd: info: crm_update_peer_join:
do_dc_join_ack: Node xstha1[1] - join-2 phase 3 -> 4
Dec 14 12:34:41 [681] crmd: info: do_dc_join_ack: join-2:
Updating node state to member for xstha1
Dec 14 12:34:41 [681] crmd: info: erase_status_tag: Deleting xpath:
//node_state[@uname='xstha1']/lrm
Dec 14 12:34:41 [676] cib: info: cib_process_request:
Forwarding cib_delete operation for section //node_state[@uname='xstha1']/lrm
to all (origin=local/crmd/34)
Dec 14 12:34:41 [676] cib: info: cib_process_request:
Forwarding cib_modify operation for section status to all (origin=local/crmd/35)
Dec 14 12:34:41 [676] cib: info: cib_perform_op: Diff: ---
0.43.26 2
Dec 14 12:34:41 [676] cib: info: cib_perform_op: Diff: +++
0.43.27 (null)
Dec 14 12:34:41 [676] cib: info: cib_perform_op: --
/cib/status/node_state[@id='1']/lrm[@id='1']
Dec 14 12:34:41 [676] cib: info: cib_perform_op: + /cib:
@num_updates=27
Dec 14 12:34:41 [676] cib: info: cib_process_request:
Completed cib_delete operation for section //node_state[@uname='xstha1']/lrm:
OK (rc=0, origin=xstha1/crmd/34, version=0.43.27)
Dec 14 12:34:41 [676] cib: info: cib_perform_op: Diff: ---
0.43.27 2
Dec 14 12:34:41 [676] cib: info: cib_perform_op: Diff: +++
0.43.28 (null)
Dec 14 12:34:41 [676] cib: info: cib_perform_op: + /cib:
@num_updates=28
Dec 14 12:34:41 [676] cib: info: cib_perform_op: +
/cib/status/node_state[@id='1']: @crm-debug-origin=do_lrm_query_internal
Dec 14 12:34:41 [676] cib: info: cib_perform_op: ++
/cib/status/node_state[@id='1']: <lrm id="1"/>
Dec 14 12:34:41 [676] cib: info: cib_perform_op: ++
<lrm_resources>
Dec 14 12:34:41 [676] cib: info: cib_perform_op: ++
<lrm_resource id="xstha2-stonith" type="external/ipmi"
class="stonith">
Dec 14 12:34:41 [676] cib: info: cib_perform_op: ++
<lrm_rsc_op id="xstha2-stonith_last_0"
operation_key="xstha2-stonith_start_0" operation="start"
crm-debug-origin="build_active_RAs" crm_feature_set="3.0.10"
transition-key="17:1:0:cd77f568-3cfc-ee63-da77-90f734d91efd"
transition-magic="0:0;17:1:0:cd77f568-3cfc-ee63-da77-90f734d91efd"
on_node="xstha1" call-id="24" rc-code="0" op-status="0" interval="0"
last-run="1607942466" last-rc-change="1607942466" exec
Dec 14 12:34:41 [676] cib: info: cib_perform_op: ++
<lrm_rsc_op id="xstha2-stonith_monitor_25000"
operation_key="xstha2-stonith_monitor_25000" operation="monitor"
crm-debug-origin="build_active_RAs" crm_feature_set="3.0.10"
transition-key="18:1:0:cd77f568-3cfc-ee63-da77-90f734d91efd"
transition-magic="0:0;18:1:0:cd77f568-3cfc-ee63-da77-90f734d91efd"
on_node="xstha1" call-id="25" rc-code="0" op-status="0" interval="25000"
last-rc-change="1607942493" exec-ti
Dec 14 12:34:41 [676] cib: info: cib_perform_op: ++
</lrm_resource>
Dec 14 12:34:41 [676] cib: info: cib_perform_op: ++
<lrm_resource id="zpool_data" type="ZFS" class="ocf"
provider="heartbeat">
Dec 14 12:34:41 [676] cib: info: cib_perform_op: ++
<lrm_rsc_op id="zpool_data_last_0"
operation_key="zpool_data_start_0" operation="start"
crm-debug-origin="build_active_RAs" crm_feature_set="3.0.10"
transition-key="14:1:0:cd77f568-3cfc-ee63-da77-90f734d91efd"
transition-magic="0:0;14:1:0:cd77f568-3cfc-ee63-da77-90f734d91efd"
on_node="xstha1" call-id="14" rc-code="0" op-status="0" interval="0"
last-run="1607942464" last-rc-change="1607942464" exec-time="1
Dec 14 12:34:41 [676] cib: info: cib_perform_op: ++
</lrm_resource>
Dec 14 12:34:41 [676] cib: info: cib_perform_op: ++
<lrm_resource id="xstha1-stonith" type="external/ipmi"
class="stonith">
Dec 14 12:34:41 [676] cib: info: cib_perform_op: ++
<lrm_rsc_op id="xstha1-stonith_last_0"
operation_key="xstha1-stonith_monitor_0" operation="monitor"
crm-debug-origin="build_active_RAs" crm_feature_set="3.0.10"
transition-key="5:1:7:cd77f568-3cfc-ee63-da77-90f734d91efd"
transition-magic="0:7;5:1:7:cd77f568-3cfc-ee63-da77-90f734d91efd"
on_node="xstha1" call-id="19" rc-code="7" op-status="0" interval="0"
last-run="1607942466" last-rc-change="1607942466" ex
Dec 14 12:34:41 [676] cib: info: cib_perform_op: ++
</lrm_resource>
Dec 14 12:34:41 [676] cib: info: cib_perform_op: ++
<lrm_resource id="xstha1_san0_IP" type="IPaddr"
class="ocf" provider="heartbeat">
Dec 14 12:34:41 [676] cib: info: cib_perform_op: ++
<lrm_rsc_op id="xstha1_san0_IP_last_0"
operation_key="xstha1_san0_IP_start_0" operation="start"
crm-debug-origin="build_active_RAs" crm_feature_set="3.0.10"
transition-key="12:1:0:cd77f568-3cfc-ee63-da77-90f734d91efd"
transition-magic="0:0;12:1:0:cd77f568-3cfc-ee63-da77-90f734d91efd"
on_node="xstha1" call-id="15" rc-code="0" op-status="0" interval="0"
last-run="1607942466" last-rc-change="1607942466" exec
Dec 14 12:34:41 [676] cib: info: cib_perform_op: ++
</lrm_resource>
Dec 14 12:34:41 [676] cib: info: cib_perform_op: ++
<lrm_resource id="xstha2_san0_IP" type="IPaddr"
class="ocf" provider="heartbeat">
Dec 14 12:34:41 [676] cib: info: cib_perform_op: ++
<lrm_rsc_op id="xstha2_san0_IP_last_0"
operation_key="xstha2_san0_IP_monitor_0" operation="monitor"
crm-debug-origin="build_active_RAs" crm_feature_set="3.0.10"
transition-key="3:1:7:cd77f568-3cfc-ee63-da77-90f734d91efd"
transition-magic="0:7;3:1:7:cd77f568-3cfc-ee63-da77-90f734d91efd"
on_node="xstha1" call-id="9" rc-code="7" op-status="0" interval="0"
last-run="1607942464" last-rc-change="1607942464" exe
Dec 14 12:34:41 [676] cib: info: cib_perform_op: ++
</lrm_resource>
Dec 14 12:34:41 [676] cib: info: cib_perform_op: ++
</lrm_resources>
Dec 14 12:34:41 [676] cib: info: cib_perform_op: ++
</lrm>
Dec 14 12:34:41 [676] cib: info: cib_process_request:
Completed cib_modify operation for section status: OK (rc=0,
origin=xstha1/crmd/35, version=0.43.28)
Dec 14 12:34:41 [681] crmd: info: do_state_transition: State
transition S_FINALIZE_JOIN -> S_POLICY_ENGINE | input=I_FINALIZED
cause=C_FSA_INTERNAL origin=check_join_state
Dec 14 12:34:41 [681] crmd: info: abort_transition_graph:
Transition aborted: Peer Cancelled | source=do_te_invoke:161 complete=true
Dec 14 12:34:41 [679] attrd: info: attrd_client_refresh:
Updating all attributes
Dec 14 12:34:41 [679] attrd: info: write_attribute: Sent update 4
with 1 changes for shutdown, id=<n/a>, set=(null)
Dec 14 12:34:41 [679] attrd: info: write_attribute: Sent update 5
with 1 changes for terminate, id=<n/a>, set=(null)
Dec 14 12:34:41 [676] cib: info: cib_process_request:
Forwarding cib_modify operation for section nodes to all (origin=local/crmd/38)
Dec 14 12:34:41 [676] cib: info: cib_process_request:
Forwarding cib_modify operation for section status to all (origin=local/crmd/39)
Dec 14 12:34:41 [676] cib: info: cib_process_request:
Forwarding cib_modify operation for section cib to all (origin=local/crmd/40)
Dec 14 12:34:41 [676] cib: info: cib_process_request:
Completed cib_modify operation for section nodes: OK (rc=0,
origin=xstha1/crmd/38, version=0.43.28)
Dec 14 12:34:41 [676] cib: info: cib_perform_op: Diff: ---
0.43.28 2
Dec 14 12:34:41 [676] cib: info: cib_perform_op: Diff: +++
0.43.29 (null)
Dec 14 12:34:41 [676] cib: info: cib_perform_op: + /cib:
@num_updates=29
Dec 14 12:34:41 [676] cib: info: cib_perform_op: +
/cib/status/node_state[@id='2']: @in_ccm=false, @crmd=offline,
@crm-debug-origin=do_state_transition, @join=down
Dec 14 12:34:41 [676] cib: info: cib_perform_op: +
/cib/status/node_state[@id='1']: @crm-debug-origin=do_state_transition
Dec 14 12:34:41 [676] cib: info: cib_process_request:
Completed cib_modify operation for section status: OK (rc=0,
origin=xstha1/crmd/39, version=0.43.29)
Dec 14 12:34:41 [676] cib: info: cib_perform_op: Diff: ---
0.43.29 2
Dec 14 12:34:41 [676] cib: info: cib_perform_op: Diff: +++
0.43.30 (null)
Dec 14 12:34:41 [676] cib: info: cib_perform_op: + /cib:
@num_updates=30, @dc-uuid=1
Dec 14 12:34:41 [676] cib: info: cib_process_request:
Completed cib_modify operation for section cib: OK (rc=0,
origin=xstha1/crmd/40, version=0.43.30)
Dec 14 12:34:41 [676] cib: info: cib_process_request:
Forwarding cib_modify operation for section status to all (origin=local/attrd/4)
Dec 14 12:34:41 [676] cib: info: cib_process_request:
Forwarding cib_modify operation for section status to all (origin=local/attrd/5)
Dec 14 12:34:41 [676] cib: info: cib_perform_op: Diff: ---
0.43.30 2
Dec 14 12:34:41 [676] cib: info: cib_perform_op: Diff: +++
0.43.31 (null)
Dec 14 12:34:41 [676] cib: info: cib_perform_op: + /cib:
@num_updates=31
Dec 14 12:34:41 [676] cib: info: cib_perform_op: ++
/cib/status/node_state[@id='1']: <transient_attributes id="1"/>
Dec 14 12:34:41 [676] cib: info: cib_perform_op: ++
<instance_attributes id="status-1">
Dec 14 12:34:41 [676] cib: info: cib_perform_op: ++
<nvpair id="status-1-shutdown" name="shutdown"
value="0"/>
Dec 14 12:34:41 [676] cib: info: cib_perform_op: ++
</instance_attributes>
Dec 14 12:34:41 [676] cib: info: cib_perform_op: ++
</transient_attributes>
Dec 14 12:34:41 [676] cib: info: cib_process_request:
Completed cib_modify operation for section status: OK (rc=0,
origin=xstha1/attrd/4, version=0.43.31)
Dec 14 12:34:41 [679] attrd: info: attrd_cib_callback: Update 4 for
shutdown: OK (0)
Dec 14 12:34:41 [679] attrd: info: attrd_cib_callback: Update 4 for
shutdown[xstha1]=0: OK (0)
Dec 14 12:34:41 [676] cib: info: cib_perform_op: Diff: ---
0.43.31 2
Dec 14 12:34:41 [676] cib: info: cib_perform_op: Diff: +++
0.43.32 (null)
Dec 14 12:34:41 [676] cib: info: cib_perform_op: + /cib:
@num_updates=32
Dec 14 12:34:41 [676] cib: info: cib_process_request:
Completed cib_modify operation for section status: OK (rc=0,
origin=xstha1/attrd/5, version=0.43.32)
Dec 14 12:34:41 [679] attrd: info: attrd_cib_callback: Update 5 for
terminate: OK (0)
Dec 14 12:34:41 [679] attrd: info: attrd_cib_callback: Update 5 for
terminate[xstha1]=(null): OK (0)
Dec 14 12:34:41 [681] crmd: info: abort_transition_graph:
Transition aborted by transient_attributes.1 'create': Transient attribute
change | cib=0.43.31 source=abort_unless_down:329
path=/cib/status/node_state[@id='1'] complete=true
Dec 14 12:34:41 [680] pengine: warning: pe_fence_node: Node xstha2
will be fenced because the node is no longer part of the cluster
Dec 14 12:34:41 [680] pengine: warning: determine_online_status: Node
xstha2 is unclean
Dec 14 12:34:41 [680] pengine: info: determine_online_status_fencing:
Node xstha1 is active
Dec 14 12:34:41 [680] pengine: info: determine_online_status: Node
xstha1 is online
Dec 14 12:34:41 [680] pengine: info: native_print: xstha1_san0_IP
(ocf::heartbeat:IPaddr): Started xstha1
Dec 14 12:34:41 [680] pengine: info: native_print: xstha2_san0_IP
(ocf::heartbeat:IPaddr): Started xstha2 (UNCLEAN)
Dec 14 12:34:41 [680] pengine: info: native_print: zpool_data
(ocf::heartbeat:ZFS): Started xstha1
Dec 14 12:34:41 [680] pengine: info: native_print: xstha1-stonith
(stonith:external/ipmi): Started xstha2 (UNCLEAN)
Dec 14 12:34:41 [680] pengine: info: native_print: xstha2-stonith
(stonith:external/ipmi): Started xstha1
Dec 14 12:34:41 [680] pengine: info: native_color: Resource
xstha1-stonith cannot run anywhere
Dec 14 12:34:41 [680] pengine: warning: custom_action: Action
xstha2_san0_IP_stop_0 on xstha2 is unrunnable (offline)
Dec 14 12:34:41 [680] pengine: warning: custom_action: Action
xstha1-stonith_stop_0 on xstha2 is unrunnable (offline)
Dec 14 12:34:41 [680] pengine: warning: custom_action: Action
xstha1-stonith_stop_0 on xstha2 is unrunnable (offline)
Dec 14 12:34:41 [680] pengine: warning: stage6: Scheduling Node xstha2
for STONITH
Dec 14 12:34:41 [680] pengine: info: native_stop_constraints:
xstha2_san0_IP_stop_0 is implicit after xstha2 is fenced
Dec 14 12:34:41 [680] pengine: info: native_stop_constraints:
xstha1-stonith_stop_0 is implicit after xstha2 is fenced
Dec 14 12:34:41 [680] pengine: info: LogActions: Leave xstha1_san0_IP
(Started xstha1)
Dec 14 12:34:41 [680] pengine: notice: LogActions: Move xstha2_san0_IP
(Started xstha2 -> xstha1)
Dec 14 12:34:41 [680] pengine: info: LogActions: Leave zpool_data
(Started xstha1)
Dec 14 12:34:41 [680] pengine: notice: LogActions: Stop xstha1-stonith
(xstha2)
Dec 14 12:34:41 [680] pengine: info: LogActions: Leave xstha2-stonith
(Started xstha1)
Dec 14 12:34:41 [681] crmd: info: handle_response: pe_calc
calculation pe_calc-dc-1607945681-15 is obsolete
Dec 14 12:34:41 [680] pengine: warning: process_pe_message: Calculated
transition 0 (with warnings), saving inputs in
/sonicle/var/cluster/lib/pacemaker/pengine/pe-warn-42.bz2
Dec 14 12:34:41 [680] pengine: warning: pe_fence_node: Node xstha2
will be fenced because the node is no longer part of the cluster
Dec 14 12:34:41 [680] pengine: warning: determine_online_status: Node
xstha2 is unclean
Dec 14 12:34:41 [680] pengine: info: determine_online_status_fencing:
Node xstha1 is active
Dec 14 12:34:41 [680] pengine: info: determine_online_status: Node
xstha1 is online
Dec 14 12:34:41 [680] pengine: info: native_print: xstha1_san0_IP
(ocf::heartbeat:IPaddr): Started xstha1
Dec 14 12:34:41 [680] pengine: info: native_print: xstha2_san0_IP
(ocf::heartbeat:IPaddr): Started xstha2 (UNCLEAN)
Dec 14 12:34:41 [680] pengine: info: native_print: zpool_data
(ocf::heartbeat:ZFS): Started xstha1
Dec 14 12:34:41 [680] pengine: info: native_print: xstha1-stonith
(stonith:external/ipmi): Started xstha2 (UNCLEAN)
Dec 14 12:34:41 [680] pengine: info: native_print: xstha2-stonith
(stonith:external/ipmi): Started xstha1
Dec 14 12:34:41 [680] pengine: info: native_color: Resource
xstha1-stonith cannot run anywhere
Dec 14 12:34:41 [680] pengine: warning: custom_action: Action
xstha2_san0_IP_stop_0 on xstha2 is unrunnable (offline)
Dec 14 12:34:41 [680] pengine: warning: custom_action: Action
xstha1-stonith_stop_0 on xstha2 is unrunnable (offline)
Dec 14 12:34:41 [680] pengine: warning: custom_action: Action
xstha1-stonith_stop_0 on xstha2 is unrunnable (offline)
Dec 14 12:34:41 [680] pengine: warning: stage6: Scheduling Node xstha2
for STONITH
Dec 14 12:34:41 [680] pengine: info: native_stop_constraints:
xstha2_san0_IP_stop_0 is implicit after xstha2 is fenced
Dec 14 12:34:41 [680] pengine: info: native_stop_constraints:
xstha1-stonith_stop_0 is implicit after xstha2 is fenced
Dec 14 12:34:41 [680] pengine: info: LogActions: Leave xstha1_san0_IP
(Started xstha1)
Dec 14 12:34:41 [680] pengine: notice: LogActions: Move xstha2_san0_IP
(Started xstha2 -> xstha1)
Dec 14 12:34:41 [680] pengine: info: LogActions: Leave zpool_data
(Started xstha1)
Dec 14 12:34:41 [680] pengine: notice: LogActions: Stop xstha1-stonith
(xstha2)
Dec 14 12:34:41 [680] pengine: info: LogActions: Leave xstha2-stonith
(Started xstha1)
Dec 14 12:34:41 [681] crmd: info: do_state_transition: State
transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE | input=I_PE_SUCCESS
cause=C_IPC_MESSAGE origin=handle_response
Dec 14 12:34:41 [680] pengine: warning: process_pe_message: Calculated
transition 1 (with warnings), saving inputs in
/sonicle/var/cluster/lib/pacemaker/pengine/pe-warn-43.bz2
Dec 14 12:34:41 [681] crmd: info: do_te_invoke: Processing
graph 1 (ref=pe_calc-dc-1607945681-16) derived from
/sonicle/var/cluster/lib/pacemaker/pengine/pe-warn-43.bz2
Dec 14 12:34:41 [681] crmd: notice: te_fence_node: Requesting
fencing (poweroff) of node xstha2 | action=13 timeout=60000
Dec 14 12:34:41 [677] stonith-ng: notice: handle_request: Client
crmd.681.0e689fbb wants to fence (poweroff) 'xstha2' with device '(any)'
Dec 14 12:34:41 [677] stonith-ng: notice: initiate_remote_stonith_op:
Requesting peer fencing (poweroff) of xstha2 |
id=ec77b99f-e029-656a-806f-d95e341b33db state=0
Dec 14 12:34:42 [677] stonith-ng: info: process_remote_stonith_query:
Query result 1 of 1 from xstha1 for xstha2/poweroff (1 devices)
ec77b99f-e029-656a-806f-d95e341b33db
Dec 14 12:34:42 [677] stonith-ng: info: call_remote_stonith: Total
timeout set to 60 for peer's fencing of xstha2 for
crmd.681|id=ec77b99f-e029-656a-806f-d95e341b33db
Dec 14 12:34:42 [677] stonith-ng: info: call_remote_stonith:
Requesting that 'xstha1' perform op 'xstha2 poweroff' for crmd.681 (72s, 0s)
Dec 14 12:34:43 [677] stonith-ng: info: stonith_fence_get_devices_cb:
Found 1 matching devices for 'xstha2'
Dec 14 12:34:44 [677] stonith-ng: notice: log_operation: Operation
'poweroff' [2235] (call 2 from crmd.681) for host 'xstha2' with device
'xstha2-stonith' returned: 0 (OK)
Dec 14 12:34:44 [677] stonith-ng: notice: remote_op_done: Operation
poweroff of xstha2 by xstha1 for [email protected]: OK
Dec 14 12:34:44 [681] crmd: notice: tengine_stonith_callback: Stonith
operation 2/13:1:0:fa7da62d-2e8d-c08a-aa5f-b51ae18735fb: OK (0)
Dec 14 12:34:44 [681] crmd: info: crm_update_peer_expected:
crmd_peer_down: Node xstha2[2] - expected state is now down (was member)
Dec 14 12:34:44 [681] crmd: info: erase_status_tag: Deleting xpath:
//node_state[@uname='xstha2']/lrm
Dec 14 12:34:44 [681] crmd: info: erase_status_tag: Deleting xpath:
//node_state[@uname='xstha2']/transient_attributes
Dec 14 12:34:44 [676] cib: info: cib_process_request:
Forwarding cib_modify operation for section status to all (origin=local/crmd/43)
Dec 14 12:34:44 [676] cib: info: cib_process_request:
Forwarding cib_delete operation for section //node_state[@uname='xstha2']/lrm
to all (origin=local/crmd/44)
Dec 14 12:34:44 [681] crmd: notice: tengine_stonith_notify: Peer
xstha2 was terminated (poweroff) by xstha1 for xstha1: OK
(ref=ec77b99f-e029-656a-806f-d95e341b33db) by client crmd.681
Dec 14 12:34:44 [676] cib: info: cib_process_request:
Forwarding cib_delete operation for section
//node_state[@uname='xstha2']/transient_attributes to all (origin=local/crmd/45)
Dec 14 12:34:44 [681] crmd: info: erase_status_tag: Deleting xpath:
//node_state[@uname='xstha2']/lrm
Dec 14 12:34:44 [681] crmd: info: erase_status_tag: Deleting xpath:
//node_state[@uname='xstha2']/transient_attributes
Dec 14 12:34:44 [681] crmd: notice: te_rsc_command: Initiating
start operation xstha2_san0_IP_start_0 locally on xstha1 | action 6
Dec 14 12:34:44 [681] crmd: info: do_lrm_rsc_op: Performing
key=6:1:0:fa7da62d-2e8d-c08a-aa5f-b51ae18735fb op=xstha2_san0_IP_start_0
Dec 14 12:34:44 [676] cib: info: cib_perform_op: Diff: ---
0.43.32 2
Dec 14 12:34:44 [676] cib: info: cib_perform_op: Diff: +++
0.43.33 (null)
Dec 14 12:34:44 [676] cib: info: cib_perform_op: + /cib:
@num_updates=33
Dec 14 12:34:44 [676] cib: info: cib_perform_op: +
/cib/status/node_state[@id='2']: @crm-debug-origin=send_stonith_update,
@expected=down
Dec 14 12:34:44 [678] lrmd: info: log_execute: executing -
rsc:xstha2_san0_IP action:start call_id:26
Dec 14 12:34:44 [676] cib: info: cib_process_request:
Completed cib_modify operation for section status: OK (rc=0,
origin=xstha1/crmd/43, version=0.43.33)
Dec 14 12:34:44 [681] crmd: info: cib_fencing_updated: Fencing
update 43 for xstha2: complete
Dec 14 12:34:44 [676] cib: info: cib_perform_op: Diff: ---
0.43.33 2
Dec 14 12:34:44 [676] cib: info: cib_perform_op: Diff: +++
0.43.34 (null)
Dec 14 12:34:44 [676] cib: info: cib_perform_op: --
/cib/status/node_state[@id='2']/lrm[@id='2']
Dec 14 12:34:44 [676] cib: info: cib_perform_op: + /cib:
@num_updates=34
Dec 14 12:34:44 [676] cib: info: cib_process_request:
Completed cib_delete operation for section //node_state[@uname='xstha2']/lrm:
OK (rc=0, origin=xstha1/crmd/44, version=0.43.34)
Dec 14 12:34:44 [681] crmd: warning: match_down_event: No reason to
expect node 2 to be down
Dec 14 12:34:44 [681] crmd: notice: abort_transition_graph:
Transition aborted by deletion of lrm[@id='2']: Resource state removal |
cib=0.43.34 source=abort_unless_down:343
path=/cib/status/node_state[@id='2']/lrm[@id='2'] complete=false
Dec 14 12:34:44 [676] cib: info: cib_process_request:
Completed cib_delete operation for section
//node_state[@uname='xstha2']/transient_attributes: OK (rc=0,
origin=xstha1/crmd/45, version=0.43.34)
Dec 14 12:34:44 [676] cib: info: cib_process_request:
Forwarding cib_modify operation for section status to all (origin=local/crmd/46)
Dec 14 12:34:44 [676] cib: info: cib_process_request:
Forwarding cib_delete operation for section //node_state[@uname='xstha2']/lrm
to all (origin=local/crmd/47)
Dec 14 12:34:44 [676] cib: info: cib_process_request:
Forwarding cib_delete operation for section
//node_state[@uname='xstha2']/transient_attributes to all (origin=local/crmd/48)
Dec 14 12:34:44 [676] cib: info: cib_process_request:
Completed cib_modify operation for section status: OK (rc=0,
origin=xstha1/crmd/46, version=0.43.34)
Dec 14 12:34:44 [681] crmd: info: cib_fencing_updated: Fencing
update 46 for xstha2: complete
Dec 14 12:34:44 [676] cib: info: cib_process_request:
Completed cib_delete operation for section //node_state[@uname='xstha2']/lrm:
OK (rc=0, origin=xstha1/crmd/47, version=0.43.34)
Dec 14 12:34:44 [676] cib: info: cib_process_request:
Completed cib_delete operation for section
//node_state[@uname='xstha2']/transient_attributes: OK (rc=0,
origin=xstha1/crmd/48, version=0.43.34)
IPaddr(xstha2_san0_IP)[2248]: 2020/12/14_12:34:45 INFO: eval ifconfig san0:10
inet 10.10.10.2 && ifconfig san0:10 netmask 255.255.255.0 && ifconfig san0:10 up
Dec 14 12:34:45 [678] lrmd: notice: operation_finished:
xstha2_san0_IP_start_0:2248:stderr [ Converted dotted-quad netmask to CIDR as:
24 ]
Dec 14 12:34:45 [678] lrmd: info: log_finished: finished -
rsc:xstha2_san0_IP action:start call_id:26 pid:2248 exit-code:0 exec-time:461ms
queue-time:0ms
Dec 14 12:34:45 [681] crmd: info: action_synced_wait: Managed
IPaddr_meta-data_0 process 2384 exited with rc=0
Dec 14 12:34:45 [681] crmd: notice: process_lrm_event: Result of start
operation for xstha2_san0_IP on xstha1: 0 (ok) | call=26
key=xstha2_san0_IP_start_0 confirmed=true cib-update=49
Dec 14 12:34:45 [676] cib: info: cib_process_request:
Forwarding cib_modify operation for section status to all (origin=local/crmd/49)
Dec 14 12:34:45 [676] cib: info: cib_perform_op: Diff: ---
0.43.34 2
Dec 14 12:34:45 [676] cib: info: cib_perform_op: Diff: +++
0.43.35 (null)
Dec 14 12:34:45 [676] cib: info: cib_perform_op: + /cib:
@num_updates=35
Dec 14 12:34:45 [676] cib: info: cib_perform_op: +
/cib/status/node_state[@id='1']: @crm-debug-origin=do_update_resource
Dec 14 12:34:45 [676] cib: info: cib_perform_op: +
/cib/status/node_state[@id='1']/lrm[@id='1']/lrm_resources/lrm_resource[@id='xstha2_san0_IP']/lrm_rsc_op[@id='xstha2_san0_IP_last_0']:
@operation_key=xstha2_san0_IP_start_0, @operation=start,
@crm-debug-origin=do_update_resource,
@transition-key=6:1:0:fa7da62d-2e8d-c08a-aa5f-b51ae18735fb,
@transition-magic=0:0;6:1:0:fa7da62d-2e8d-c08a-aa5f-b51ae18735fb, @call-id=26,
@rc-code=0, @last-run=1607945684, @last-rc-change=1607945684, @exec-time=461
Dec 14 12:34:45 [681] crmd: info: match_graph_event: Action
xstha2_san0_IP_start_0 (6) confirmed on xstha1 (rc=0)
Dec 14 12:34:45 [676] cib: info: cib_process_request:
Completed cib_modify operation for section status: OK (rc=0,
origin=xstha1/crmd/49, version=0.43.35)
Dec 14 12:34:45 [681] crmd: notice: run_graph: Transition 1
(Complete=6, Pending=0, Fired=0, Skipped=0, Incomplete=0,
Source=/sonicle/var/cluster/lib/pacemaker/pengine/pe-warn-43.bz2): Complete
Dec 14 12:34:45 [681] crmd: info: do_state_transition: State
transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE | input=I_PE_CALC
cause=C_FSA_INTERNAL origin=notify_crmd
Dec 14 12:34:45 [680] pengine: info: determine_online_status_fencing:
Node xstha1 is active
Dec 14 12:34:45 [680] pengine: info: determine_online_status: Node
xstha1 is online
Dec 14 12:34:45 [680] pengine: info: native_print: xstha1_san0_IP
(ocf::heartbeat:IPaddr): Started xstha1
Dec 14 12:34:45 [680] pengine: info: native_print: xstha2_san0_IP
(ocf::heartbeat:IPaddr): Started xstha1
Dec 14 12:34:45 [680] pengine: info: native_print: zpool_data
(ocf::heartbeat:ZFS): Started xstha1
Dec 14 12:34:45 [680] pengine: info: native_print: xstha1-stonith
(stonith:external/ipmi): Stopped
Dec 14 12:34:45 [680] pengine: info: native_print: xstha2-stonith
(stonith:external/ipmi): Started xstha1
Dec 14 12:34:45 [680] pengine: info: native_color: Resource
xstha1-stonith cannot run anywhere
Dec 14 12:34:45 [680] pengine: info: LogActions: Leave xstha1_san0_IP
(Started xstha1)
Dec 14 12:34:45 [680] pengine: info: LogActions: Leave xstha2_san0_IP
(Started xstha1)
Dec 14 12:34:45 [680] pengine: info: LogActions: Leave zpool_data
(Started xstha1)
Dec 14 12:34:45 [680] pengine: info: LogActions: Leave xstha1-stonith
(Stopped)
Dec 14 12:34:45 [680] pengine: info: LogActions: Leave xstha2-stonith
(Started xstha1)
Dec 14 12:34:45 [680] pengine: notice: process_pe_message: Calculated
transition 2, saving inputs in
/sonicle/var/cluster/lib/pacemaker/pengine/pe-input-125.bz2
Dec 14 12:34:45 [681] crmd: info: do_state_transition: State
transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE | input=I_PE_SUCCESS
cause=C_IPC_MESSAGE origin=handle_response
Dec 14 12:34:45 [681] crmd: info: do_te_invoke: Processing
graph 2 (ref=pe_calc-dc-1607945685-18) derived from
/sonicle/var/cluster/lib/pacemaker/pengine/pe-input-125.bz2
Dec 14 12:34:45 [681] crmd: notice: run_graph: Transition 2
(Complete=0, Pending=0, Fired=0, Skipped=0, Incomplete=0,
Source=/sonicle/var/cluster/lib/pacemaker/pengine/pe-input-125.bz2): Complete
Dec 14 12:34:45 [681] crmd: info: do_log: Input I_TE_SUCCESS
received in state S_TRANSITION_ENGINE from notify_crmd
Dec 14 12:34:45 [681] crmd: notice: do_state_transition: State
transition S_TRANSITION_ENGINE -> S_IDLE | input=I_TE_SUCCESS
cause=C_FSA_INTERNAL origin=notify_crmd
Dec 14 12:34:50 [676] cib: info: cib_process_ping: Reporting our
current digest to xstha1: d3e769f75eaf1fd102b3e5ffd4269975 for 0.43.35 (8518f100)
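
For reference, the scheduler inputs saved above (pe-warn-42.bz2 and pe-warn-43.bz2) can be replayed offline to inspect why Pacemaker decided to fence xstha2. A minimal sketch, assuming crm_simulate is installed on the node and using the non-default state directory shown in the log:

```shell
# Replay the saved policy-engine input: prints the recorded cluster
# state, the allocation scores, and the resulting transition (fence
# xstha2, move xstha2_san0_IP to xstha1) without touching the live
# cluster. Adjust the path if your PE state directory differs.
crm_simulate --simulate --show-scores \
    --xml-file /sonicle/var/cluster/lib/pacemaker/pengine/pe-warn-43.bz2
```

This is read-only, so it is safe to run while debugging the fence race.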
_______________________________________________
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users
ClusterLabs home: https://www.clusterlabs.org/