Here it is, thanks!

Gabriele
 
 
Sonicle S.r.l. : http://www.sonicle.com
Music: http://www.gabrielebulfon.com
eXoplanets : https://gabrielebulfon.bandcamp.com/album/exoplanets
----------------------------------------------------------------------------------

From: Andrei Borzenkov <[email protected]>
To: Cluster Labs - All topics related to open-source clustering welcomed 
<[email protected]> 
Date: 14 December 2020 15:56:32 CET
Subject: Re: [ClusterLabs] Antw: Re: Antw: [EXT] Recoveing from node failure


On Mon, Dec 14, 2020 at 2:40 PM Gabriele Bulfon <[email protected]> wrote:
>
> I isolated the log when everything happens (when I disable the ha interface), 
> attached here.
>

And where are matching logs from the second node?
_______________________________________________
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/


Dec 14 12:35:26 [652] xstorage2 corosync notice  [TOTEM ] A processor failed, 
forming new configuration.
Dec 14 12:35:27 [652] xstorage2 corosync notice  [TOTEM ] A new membership 
(10.100.100.2:352) was formed. Members left: 1
Dec 14 12:35:27 [652] xstorage2 corosync notice  [TOTEM ] Failed to receive the 
leave message. failed: 1
Dec 14 12:35:27 [676]      attrd:     info: pcmk_cpg_membership:        Node 1 
left group attrd (peer=xstha1, counter=2.0)
Dec 14 12:35:27 [678]       crmd:     info: pcmk_cpg_membership:        Node 1 
left group crmd (peer=xstha1, counter=2.0)
Dec 14 12:35:27 [672] pacemakerd:     info: pcmk_cpg_membership:        Node 1 
left group pacemakerd (peer=xstha1, counter=2.0)
Dec 14 12:35:27 [678]       crmd:     info: crm_update_peer_proc:       
pcmk_cpg_membership: Node xstha1[1] - corosync-cpg is now offline
Dec 14 12:35:27 [673]        cib:     info: pcmk_cpg_membership:        Node 1 
left group cib (peer=xstha1, counter=2.0)
Dec 14 12:35:27 [676]      attrd:     info: crm_update_peer_proc:       
pcmk_cpg_membership: Node xstha1[1] - corosync-cpg is now offline
Dec 14 12:35:27 [673]        cib:     info: crm_update_peer_proc:       
pcmk_cpg_membership: Node xstha1[1] - corosync-cpg is now offline
Dec 14 12:35:27 [676]      attrd:   notice: crm_update_peer_state_iter: Node 
xstha1 state is now lost | nodeid=1 previous=member source=crm_update_peer_proc
Dec 14 12:35:27 [674] stonith-ng:     info: pcmk_cpg_membership:        Node 1 
left group stonith-ng (peer=xstha1, counter=2.0)
Dec 14 12:35:27 [672] pacemakerd:     info: crm_update_peer_proc:       
pcmk_cpg_membership: Node xstha1[1] - corosync-cpg is now offline
Dec 14 12:35:27 [678]       crmd:     info: peer_update_callback:       Client 
xstha1/peer now has status [offline] (DC=true, changed=4000000)
Dec 14 12:35:27 [652] xstorage2 corosync notice  [QUORUM] Members[1]: 2
Dec 14 12:35:27 [673]        cib:   notice: crm_update_peer_state_iter: Node 
xstha1 state is now lost | nodeid=1 previous=member source=crm_update_peer_proc
Dec 14 12:35:27 [678]       crmd:     info: peer_update_callback:       Peer 
xstha1 left us
Dec 14 12:35:27 [676]      attrd:   notice: attrd_peer_remove:  Removing all 
xstha1 attributes for peer loss
Dec 14 12:35:27 [673]        cib:     info: crm_reap_dead_member:       
Removing node with name xstha1 and id 1 from membership cache
Dec 14 12:35:27 [678]       crmd:     info: erase_status_tag:   Deleting xpath: 
//node_state[@uname='xstha1']/transient_attributes
Dec 14 12:35:27 [652] xstorage2 corosync notice  [MAIN  ] Completed service 
synchronization, ready to provide service.
Dec 14 12:35:27 [676]      attrd:   notice: attrd_peer_change_cb:       Lost 
attribute writer xstha1
Dec 14 12:35:27 [672] pacemakerd:     info: pcmk_cpg_membership:        Node 2 
still member of group pacemakerd (peer=xstha2, counter=2.0)
Dec 14 12:35:27 [673]        cib:   notice: reap_crm_member:    Purged 1 peers 
with id=1 and/or uname=xstha1 from the membership cache
Dec 14 12:35:27 [674] stonith-ng:     info: crm_update_peer_proc:       
pcmk_cpg_membership: Node xstha1[1] - corosync-cpg is now offline
Dec 14 12:35:27 [673]        cib:     info: pcmk_cpg_membership:        Node 2 
still member of group cib (peer=xstha2, counter=2.0)
Dec 14 12:35:27 [674] stonith-ng:   notice: crm_update_peer_state_iter: Node 
xstha1 state is now lost | nodeid=1 previous=member source=crm_update_peer_proc
Dec 14 12:35:27 [678]       crmd:  warning: match_down_event:   No reason to 
expect node 1 to be down
Dec 14 12:35:27 [672] pacemakerd:     info: pcmk_quorum_notification:   Quorum 
retained | membership=352 members=1
Dec 14 12:35:27 [676]      attrd:     info: crm_reap_dead_member:       
Removing node with name xstha1 and id 1 from membership cache
Dec 14 12:35:27 [672] pacemakerd:   notice: crm_update_peer_state_iter: Node 
xstha1 state is now lost | nodeid=1 previous=member source=crm_reap_unseen_nodes
Dec 14 12:35:27 [676]      attrd:   notice: reap_crm_member:    Purged 1 peers 
with id=1 and/or uname=xstha1 from the membership cache
Dec 14 12:35:27 [678]       crmd:   notice: peer_update_callback:       
Stonith/shutdown of xstha1 not matched
Dec 14 12:35:27 [674] stonith-ng:     info: crm_reap_dead_member:       
Removing node with name xstha1 and id 1 from membership cache
Dec 14 12:35:27 [676]      attrd:     info: pcmk_cpg_membership:        Node 2 
still member of group attrd (peer=xstha2, counter=2.0)
Dec 14 12:35:27 [674] stonith-ng:   notice: reap_crm_member:    Purged 1 peers 
with id=1 and/or uname=xstha1 from the membership cache
Dec 14 12:35:27 [673]        cib:     info: cib_process_request:        
Forwarding cib_delete operation for section 
//node_state[@uname='xstha1']/transient_attributes to all (origin=local/crmd/46)
Dec 14 12:35:27 [674] stonith-ng:     info: pcmk_cpg_membership:        Node 2 
still member of group stonith-ng (peer=xstha2, counter=2.0)
Dec 14 12:35:27 [678]       crmd:     info: crm_update_peer_join:       
peer_update_callback: Node xstha1[1] - join-2 phase 4 -> 0
Dec 14 12:35:27 [672] pacemakerd:     info: mcp_cpg_deliver:    Ignoring 
process list sent by peer for local node
Dec 14 12:35:27 [678]       crmd:     info: abort_transition_graph:     
Transition aborted: Node failure | source=peer_update_callback:249 complete=true
Dec 14 12:35:27 [678]       crmd:     info: pcmk_cpg_membership:        Node 2 
still member of group crmd (peer=xstha2, counter=2.0)
Dec 14 12:35:27 [678]       crmd:   notice: do_state_transition:        State 
transition S_IDLE -> S_POLICY_ENGINE | input=I_PE_CALC cause=C_FSA_INTERNAL 
origin=abort_transition_graph
Dec 14 12:35:27 [678]       crmd:     info: pcmk_quorum_notification:   Quorum 
retained | membership=352 members=1
Dec 14 12:35:27 [678]       crmd:   notice: crm_update_peer_state_iter: Node 
xstha1 state is now lost | nodeid=1 previous=member source=crm_reap_unseen_nodes
Dec 14 12:35:27 [678]       crmd:     info: peer_update_callback:       Cluster 
node xstha1 is now lost (was member)
Dec 14 12:35:27 [678]       crmd:  warning: match_down_event:   No reason to 
expect node 1 to be down
Dec 14 12:35:27 [678]       crmd:   notice: peer_update_callback:       
Stonith/shutdown of xstha1 not matched
Dec 14 12:35:27 [678]       crmd:     info: abort_transition_graph:     
Transition aborted: Node failure | source=peer_update_callback:249 complete=true
Dec 14 12:35:27 [673]        cib:     info: cib_process_request:        
Completed cib_delete operation for section 
//node_state[@uname='xstha1']/transient_attributes: OK (rc=0, 
origin=xstha2/crmd/46, version=0.43.25)
Dec 14 12:35:27 [673]        cib:     info: cib_process_request:        
Forwarding cib_modify operation for section status to all (origin=local/crmd/47)
Dec 14 12:35:27 [673]        cib:     info: cib_process_request:        
Forwarding cib_modify operation for section status to all (origin=local/crmd/49)
Dec 14 12:35:27 [673]        cib:     info: cib_perform_op:     Diff: --- 
0.43.25 2
Dec 14 12:35:27 [673]        cib:     info: cib_perform_op:     Diff: +++ 
0.43.26 (null)
Dec 14 12:35:27 [673]        cib:     info: cib_perform_op:     +  /cib:  
@num_updates=26
Dec 14 12:35:27 [673]        cib:     info: cib_perform_op:     +  
/cib/status/node_state[@id='1']:  @crmd=offline, 
@crm-debug-origin=peer_update_callback
Dec 14 12:35:27 [673]        cib:     info: cib_process_request:        
Completed cib_modify operation for section status: OK (rc=0, 
origin=xstha2/crmd/47, version=0.43.26)
Dec 14 12:35:27 [673]        cib:     info: cib_process_request:        
Completed cib_modify operation for section status: OK (rc=0, 
origin=xstha2/crmd/49, version=0.43.26)
Dec 14 12:35:27 [673]        cib:     info: cib_process_request:        
Forwarding cib_modify operation for section nodes to all (origin=local/crmd/52)
Dec 14 12:35:27 [673]        cib:     info: cib_process_request:        
Forwarding cib_modify operation for section status to all (origin=local/crmd/53)
Dec 14 12:35:27 [673]        cib:     info: cib_process_request:        
Completed cib_modify operation for section nodes: OK (rc=0, 
origin=xstha2/crmd/52, version=0.43.26)
Dec 14 12:35:27 [673]        cib:     info: cib_perform_op:     Diff: --- 
0.43.26 2
Dec 14 12:35:27 [673]        cib:     info: cib_perform_op:     Diff: +++ 
0.43.27 (null)
Dec 14 12:35:27 [673]        cib:     info: cib_perform_op:     +  /cib:  
@num_updates=27
Dec 14 12:35:27 [673]        cib:     info: cib_perform_op:     +  
/cib/status/node_state[@id='2']:  @crm-debug-origin=post_cache_update
Dec 14 12:35:27 [673]        cib:     info: cib_perform_op:     +  
/cib/status/node_state[@id='1']:  @in_ccm=false, 
@crm-debug-origin=post_cache_update
Dec 14 12:35:27 [673]        cib:     info: cib_process_request:        
Completed cib_modify operation for section status: OK (rc=0, 
origin=xstha2/crmd/53, version=0.43.27)
Dec 14 12:35:28 [677]    pengine:     info: determine_online_status_fencing:    
Node xstha2 is active
Dec 14 12:35:28 [677]    pengine:     info: determine_online_status:    Node 
xstha2 is online
Dec 14 12:35:28 [677]    pengine:  warning: pe_fence_node:      Node xstha1 
will be fenced because the node is no longer part of the cluster
Dec 14 12:35:28 [677]    pengine:  warning: determine_online_status:    Node 
xstha1 is unclean
Dec 14 12:35:28 [677]    pengine:     info: native_print:       xstha1_san0_IP  
(ocf::heartbeat:IPaddr):        Started xstha1 (UNCLEAN)
Dec 14 12:35:28 [677]    pengine:     info: native_print:       xstha2_san0_IP  
(ocf::heartbeat:IPaddr):        Started xstha2
Dec 14 12:35:28 [677]    pengine:     info: native_print:       zpool_data      
(ocf::heartbeat:ZFS):   Started xstha1 (UNCLEAN)
Dec 14 12:35:28 [677]    pengine:     info: native_print:       xstha1-stonith  
(stonith:external/ipmi):        Started xstha2
Dec 14 12:35:28 [677]    pengine:     info: native_print:       xstha2-stonith  
(stonith:external/ipmi):        Started xstha1 (UNCLEAN)
Dec 14 12:35:28 [677]    pengine:     info: native_color:       Resource 
xstha2-stonith cannot run anywhere
Dec 14 12:35:28 [677]    pengine:  warning: custom_action:      Action 
xstha1_san0_IP_stop_0 on xstha1 is unrunnable (offline)
Dec 14 12:35:28 [677]    pengine:  warning: custom_action:      Action 
zpool_data_stop_0 on xstha1 is unrunnable (offline)
Dec 14 12:35:28 [677]    pengine:  warning: custom_action:      Action 
xstha2-stonith_stop_0 on xstha1 is unrunnable (offline)
Dec 14 12:35:28 [677]    pengine:  warning: custom_action:      Action 
xstha2-stonith_stop_0 on xstha1 is unrunnable (offline)
Dec 14 12:35:28 [677]    pengine:  warning: stage6:     Scheduling Node xstha1 
for STONITH
Dec 14 12:35:28 [677]    pengine:     info: native_stop_constraints:    
xstha1_san0_IP_stop_0 is implicit after xstha1 is fenced
Dec 14 12:35:28 [677]    pengine:     info: native_stop_constraints:    
zpool_data_stop_0 is implicit after xstha1 is fenced
Dec 14 12:35:28 [677]    pengine:     info: native_stop_constraints:    
xstha2-stonith_stop_0 is implicit after xstha1 is fenced
Dec 14 12:35:28 [677]    pengine:   notice: LogActions: Move    xstha1_san0_IP  
(Started xstha1 -> xstha2)
Dec 14 12:35:28 [677]    pengine:     info: LogActions: Leave   xstha2_san0_IP  
(Started xstha2)
Dec 14 12:35:28 [677]    pengine:   notice: LogActions: Move    zpool_data      
(Started xstha1 -> xstha2)
Dec 14 12:35:28 [677]    pengine:     info: LogActions: Leave   xstha1-stonith  
(Started xstha2)
Dec 14 12:35:28 [677]    pengine:   notice: LogActions: Stop    xstha2-stonith  
(xstha1)
Dec 14 12:35:28 [677]    pengine:  warning: process_pe_message: Calculated 
transition 5 (with warnings), saving inputs in 
/sonicle/var/cluster/lib/pacemaker/pengine/pe-warn-4.bz2
Dec 14 12:35:28 [678]       crmd:     info: do_state_transition:        State 
transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE | input=I_PE_SUCCESS 
cause=C_IPC_MESSAGE origin=handle_response
Dec 14 12:35:28 [678]       crmd:     info: do_te_invoke:       Processing 
graph 5 (ref=pe_calc-dc-1607945728-39) derived from 
/sonicle/var/cluster/lib/pacemaker/pengine/pe-warn-4.bz2
Dec 14 12:35:28 [678]       crmd:   notice: te_fence_node:      Requesting 
fencing (poweroff) of node xstha1 | action=13 timeout=60000
Dec 14 12:35:28 [674] stonith-ng:   notice: handle_request:     Client 
crmd.678.61931216 wants to fence (poweroff) 'xstha1' with device '(any)'
Dec 14 12:35:28 [674] stonith-ng:   notice: initiate_remote_stonith_op: 
Requesting peer fencing (poweroff) of xstha1 | 
id=99024844-4616-6720-9063-c1e61cebb4f3 state=0
Dec 14 12:35:30 [674] stonith-ng:     info: process_remote_stonith_query:       
Query result 1 of 1 from xstha2 for xstha1/poweroff (1 devices) 
99024844-4616-6720-9063-c1e61cebb4f3
Dec 14 12:35:30 [674] stonith-ng:     info: call_remote_stonith:        Total 
timeout set to 60 for peer's fencing of xstha1 for 
crmd.678|id=99024844-4616-6720-9063-c1e61cebb4f3
Dec 14 12:35:30 [674] stonith-ng:     info: call_remote_stonith:        
Requesting that 'xstha2' perform op 'xstha1 poweroff' for crmd.678 (72s, 0s)
Dec 14 12:35:31 [674] stonith-ng:     info: stonith_fence_get_devices_cb:       
Found 1 matching devices for 'xstha1'
Dec 14 12:35:32 [674] stonith-ng:   notice: log_operation:      Operation 
'poweroff' [2049] (call 2 from crmd.678) for host 'xstha1' with device 
'xstha1-stonith' returned: 0 (OK)
Dec 14 12:35:32 [674] stonith-ng:   notice: remote_op_done:     Operation 
poweroff of xstha1 by xstha2 for [email protected]: OK
Dec 14 12:35:32 [678]       crmd:   notice: tengine_stonith_callback:   Stonith 
operation 2/13:5:0:cd77f568-3cfc-ee63-da77-90f734d91efd: OK (0)
Dec 14 12:35:32 [678]       crmd:     info: crm_update_peer_expected:   
crmd_peer_down: Node xstha1[1] - expected state is now down (was member)
Dec 14 12:35:32 [678]       crmd:     info: erase_status_tag:   Deleting xpath: 
//node_state[@uname='xstha1']/lrm
Dec 14 12:35:32 [678]       crmd:     info: erase_status_tag:   Deleting xpath: 
//node_state[@uname='xstha1']/transient_attributes
Dec 14 12:35:32 [673]        cib:     info: cib_process_request:        
Forwarding cib_modify operation for section status to all (origin=local/crmd/56)
Dec 14 12:35:32 [673]        cib:     info: cib_process_request:        
Forwarding cib_delete operation for section //node_state[@uname='xstha1']/lrm 
to all (origin=local/crmd/57)
Dec 14 12:35:32 [678]       crmd:   notice: tengine_stonith_notify:     Peer 
xstha1 was terminated (poweroff) by xstha2 for xstha2: OK 
(ref=99024844-4616-6720-9063-c1e61cebb4f3) by client crmd.678
Dec 14 12:35:32 [673]        cib:     info: cib_process_request:        
Forwarding cib_delete operation for section 
//node_state[@uname='xstha1']/transient_attributes to all (origin=local/crmd/58)
Dec 14 12:35:32 [678]       crmd:     info: erase_status_tag:   Deleting xpath: 
//node_state[@uname='xstha1']/lrm
Dec 14 12:35:32 [678]       crmd:     info: erase_status_tag:   Deleting xpath: 
//node_state[@uname='xstha1']/transient_attributes
Dec 14 12:35:32 [678]       crmd:   notice: te_rsc_command:     Initiating 
start operation zpool_data_start_0 locally on xstha2 | action 8
Dec 14 12:35:32 [678]       crmd:     info: do_lrm_rsc_op:      Performing 
key=8:5:0:cd77f568-3cfc-ee63-da77-90f734d91efd op=zpool_data_start_0
Dec 14 12:35:32 [673]        cib:     info: cib_perform_op:     Diff: --- 
0.43.27 2
Dec 14 12:35:32 [673]        cib:     info: cib_perform_op:     Diff: +++ 
0.43.28 (null)
Dec 14 12:35:32 [673]        cib:     info: cib_perform_op:     +  /cib:  
@num_updates=28
Dec 14 12:35:32 [673]        cib:     info: cib_perform_op:     +  
/cib/status/node_state[@id='1']:  @crm-debug-origin=send_stonith_update, 
@join=down, @expected=down
Dec 14 12:35:32 [675]       lrmd:     info: log_execute:        executing - 
rsc:zpool_data action:start call_id:25
Dec 14 12:35:32 [673]        cib:     info: cib_process_request:        
Completed cib_modify operation for section status: OK (rc=0, 
origin=xstha2/crmd/56, version=0.43.28)
Dec 14 12:35:32 [678]       crmd:     info: cib_fencing_updated:        Fencing 
update 56 for xstha1: complete
Dec 14 12:35:32 [673]        cib:     info: cib_perform_op:     Diff: --- 
0.43.28 2
Dec 14 12:35:32 [673]        cib:     info: cib_perform_op:     Diff: +++ 
0.43.29 (null)
Dec 14 12:35:32 [673]        cib:     info: cib_perform_op:     -- 
/cib/status/node_state[@id='1']/lrm[@id='1']
Dec 14 12:35:32 [673]        cib:     info: cib_perform_op:     +  /cib:  
@num_updates=29
Dec 14 12:35:32 [673]        cib:     info: cib_process_request:        
Completed cib_delete operation for section //node_state[@uname='xstha1']/lrm: 
OK (rc=0, origin=xstha2/crmd/57, version=0.43.29)
Dec 14 12:35:32 [678]       crmd:  warning: match_down_event:   No reason to 
expect node 1 to be down
Dec 14 12:35:32 [678]       crmd:   notice: abort_transition_graph:     
Transition aborted by deletion of lrm[@id='1']: Resource state removal | 
cib=0.43.29 source=abort_unless_down:343 
path=/cib/status/node_state[@id='1']/lrm[@id='1'] complete=false
Dec 14 12:35:32 [673]        cib:     info: cib_process_request:        
Completed cib_delete operation for section 
//node_state[@uname='xstha1']/transient_attributes: OK (rc=0, 
origin=xstha2/crmd/58, version=0.43.29)
Dec 14 12:35:32 [673]        cib:     info: cib_process_request:        
Forwarding cib_modify operation for section status to all (origin=local/crmd/59)
Dec 14 12:35:32 [673]        cib:     info: cib_process_request:        
Forwarding cib_delete operation for section //node_state[@uname='xstha1']/lrm 
to all (origin=local/crmd/60)
Dec 14 12:35:32 [673]        cib:     info: cib_process_request:        
Forwarding cib_delete operation for section 
//node_state[@uname='xstha1']/transient_attributes to all (origin=local/crmd/61)
Dec 14 12:35:32 [673]        cib:     info: cib_process_request:        
Completed cib_modify operation for section status: OK (rc=0, 
origin=xstha2/crmd/59, version=0.43.29)
Dec 14 12:35:32 [678]       crmd:     info: cib_fencing_updated:        Fencing 
update 59 for xstha1: complete
Dec 14 12:35:32 [673]        cib:     info: cib_process_request:        
Completed cib_delete operation for section //node_state[@uname='xstha1']/lrm: 
OK (rc=0, origin=xstha2/crmd/60, version=0.43.29)
Dec 14 12:35:32 [673]        cib:     info: cib_process_request:        
Completed cib_delete operation for section 
//node_state[@uname='xstha1']/transient_attributes: OK (rc=0, 
origin=xstha2/crmd/61, version=0.43.29)
Dec 14 12:35:34 [675]       lrmd:   notice: operation_finished: 
zpool_data_start_0:2062:stderr [ cannot open 'test': no such pool ]
Dec 14 12:35:34 [675]       lrmd:   notice: operation_finished: 
zpool_data_start_0:2062:stderr [ /usr/lib/ocf/resource.d/heartbeat/ZFS: line 
35: [: : integer expression expected ]
Dec 14 12:35:34 [675]       lrmd:   notice: operation_finished: 
zpool_data_start_0:2062:stderr [ /usr/lib/ocf/resource.d/heartbeat/ZFS: line 
35: [: : integer expression expected ]
Dec 14 12:35:34 [675]       lrmd:     info: log_finished:       finished - 
rsc:zpool_data action:start call_id:25 pid:2062 exit-code:0 exec-time:1555ms 
queue-time:1ms
Dec 14 12:35:34 [678]       crmd:     info: action_synced_wait: Managed 
ZFS_meta-data_0 process 2069 exited with rc=0
Dec 14 12:35:34 [678]       crmd:   notice: process_lrm_event:  Result of start 
operation for zpool_data on xstha2: 0 (ok) | call=25 key=zpool_data_start_0 
confirmed=true cib-update=62
Dec 14 12:35:34 [673]        cib:     info: cib_process_request:        
Forwarding cib_modify operation for section status to all (origin=local/crmd/62)
Dec 14 12:35:34 [673]        cib:     info: cib_perform_op:     Diff: --- 
0.43.29 2
Dec 14 12:35:34 [673]        cib:     info: cib_perform_op:     Diff: +++ 
0.43.30 (null)
Dec 14 12:35:34 [673]        cib:     info: cib_perform_op:     +  /cib:  
@num_updates=30
Dec 14 12:35:34 [673]        cib:     info: cib_perform_op:     +  
/cib/status/node_state[@id='2']:  @crm-debug-origin=do_update_resource
Dec 14 12:35:34 [673]        cib:     info: cib_perform_op:     +  
/cib/status/node_state[@id='2']/lrm[@id='2']/lrm_resources/lrm_resource[@id='zpool_data']/lrm_rsc_op[@id='zpool_data_last_0']:
  @operation_key=zpool_data_start_0, @operation=start, 
@transition-key=8:5:0:cd77f568-3cfc-ee63-da77-90f734d91efd, 
@transition-magic=0:0;8:5:0:cd77f568-3cfc-ee63-da77-90f734d91efd, @call-id=25, 
@rc-code=0, @last-run=1607945732, @last-rc-change=1607945732, @exec-time=1555, 
@queue-time=1
Dec 14 12:35:34 [673]        cib:     info: cib_process_request:        
Completed cib_modify operation for section status: OK (rc=0, 
origin=xstha2/crmd/62, version=0.43.30)
Dec 14 12:35:34 [678]       crmd:     info: match_graph_event:  Action 
zpool_data_start_0 (8) confirmed on xstha2 (rc=0)
Dec 14 12:35:34 [678]       crmd:   notice: run_graph:  Transition 5 
(Complete=7, Pending=0, Fired=0, Skipped=1, Incomplete=1, 
Source=/sonicle/var/cluster/lib/pacemaker/pengine/pe-warn-4.bz2): Stopped
Dec 14 12:35:34 [678]       crmd:     info: do_state_transition:        State 
transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE | input=I_PE_CALC 
cause=C_FSA_INTERNAL origin=notify_crmd
Dec 14 12:35:34 [677]    pengine:     info: determine_online_status_fencing:    
Node xstha2 is active
Dec 14 12:35:34 [677]    pengine:     info: determine_online_status:    Node 
xstha2 is online
Dec 14 12:35:34 [677]    pengine:     info: native_print:       xstha1_san0_IP  
(ocf::heartbeat:IPaddr):        Stopped
Dec 14 12:35:34 [677]    pengine:     info: native_print:       xstha2_san0_IP  
(ocf::heartbeat:IPaddr):        Started xstha2
Dec 14 12:35:34 [677]    pengine:     info: native_print:       zpool_data      
(ocf::heartbeat:ZFS):   Started xstha2
Dec 14 12:35:34 [677]    pengine:     info: native_print:       xstha1-stonith  
(stonith:external/ipmi):        Started xstha2
Dec 14 12:35:34 [677]    pengine:     info: native_print:       xstha2-stonith  
(stonith:external/ipmi):        Stopped
Dec 14 12:35:34 [677]    pengine:     info: native_color:       Resource 
xstha2-stonith cannot run anywhere
Dec 14 12:35:34 [677]    pengine:   notice: LogActions: Start   xstha1_san0_IP  
(xstha2)
Dec 14 12:35:34 [677]    pengine:     info: LogActions: Leave   xstha2_san0_IP  
(Started xstha2)
Dec 14 12:35:34 [677]    pengine:     info: LogActions: Leave   zpool_data      
(Started xstha2)
Dec 14 12:35:34 [677]    pengine:     info: LogActions: Leave   xstha1-stonith  
(Started xstha2)
Dec 14 12:35:34 [677]    pengine:     info: LogActions: Leave   xstha2-stonith  
(Stopped)
Dec 14 12:35:34 [677]    pengine:   notice: process_pe_message: Calculated 
transition 6, saving inputs in 
/sonicle/var/cluster/lib/pacemaker/pengine/pe-input-74.bz2
Dec 14 12:35:34 [678]       crmd:     info: do_state_transition:        State 
transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE | input=I_PE_SUCCESS 
cause=C_IPC_MESSAGE origin=handle_response
Dec 14 12:35:34 [678]       crmd:     info: do_te_invoke:       Processing 
graph 6 (ref=pe_calc-dc-1607945734-41) derived from 
/sonicle/var/cluster/lib/pacemaker/pengine/pe-input-74.bz2
Dec 14 12:35:34 [678]       crmd:   notice: te_rsc_command:     Initiating 
start operation xstha1_san0_IP_start_0 locally on xstha2 | action 3
Dec 14 12:35:34 [678]       crmd:     info: do_lrm_rsc_op:      Performing 
key=3:6:0:cd77f568-3cfc-ee63-da77-90f734d91efd op=xstha1_san0_IP_start_0
Dec 14 12:35:34 [675]       lrmd:     info: log_execute:        executing - 
rsc:xstha1_san0_IP action:start call_id:26
IPaddr(xstha1_san0_IP)[2072]:   2020/12/14_12:35:34 INFO: eval ifconfig san0:7 
inet 10.10.10.1 && ifconfig san0:7 netmask 255.255.255.0 && ifconfig san0:7 up
Dec 14 12:35:34 [675]       lrmd:   notice: operation_finished: 
xstha1_san0_IP_start_0:2072:stderr [ Converted dotted-quad netmask to CIDR as: 
24 ]
Dec 14 12:35:34 [675]       lrmd:     info: log_finished:       finished - 
rsc:xstha1_san0_IP action:start call_id:26 pid:2072 exit-code:0 exec-time:389ms 
queue-time:0ms
Dec 14 12:35:34 [678]       crmd:     info: action_synced_wait: Managed 
IPaddr_meta-data_0 process 2199 exited with rc=0
Dec 14 12:35:34 [678]       crmd:   notice: process_lrm_event:  Result of start 
operation for xstha1_san0_IP on xstha2: 0 (ok) | call=26 
key=xstha1_san0_IP_start_0 confirmed=true cib-update=64
Dec 14 12:35:34 [673]        cib:     info: cib_process_request:        
Forwarding cib_modify operation for section status to all (origin=local/crmd/64)
Dec 14 12:35:34 [673]        cib:     info: cib_perform_op:     Diff: --- 
0.43.30 2
Dec 14 12:35:34 [673]        cib:     info: cib_perform_op:     Diff: +++ 
0.43.31 (null)
Dec 14 12:35:34 [673]        cib:     info: cib_perform_op:     +  /cib:  
@num_updates=31
Dec 14 12:35:34 [673]        cib:     info: cib_perform_op:     +  
/cib/status/node_state[@id='2']/lrm[@id='2']/lrm_resources/lrm_resource[@id='xstha1_san0_IP']/lrm_rsc_op[@id='xstha1_san0_IP_last_0']:
  @operation_key=xstha1_san0_IP_start_0, @operation=start, 
@transition-key=3:6:0:cd77f568-3cfc-ee63-da77-90f734d91efd, 
@transition-magic=0:0;3:6:0:cd77f568-3cfc-ee63-da77-90f734d91efd, @call-id=26, 
@rc-code=0, @last-run=1607945734, @last-rc-change=1607945734, @exec-time=389
Dec 14 12:35:34 [673]        cib:     info: cib_process_request:        
Completed cib_modify operation for section status: OK (rc=0, 
origin=xstha2/crmd/64, version=0.43.31)
Dec 14 12:35:34 [678]       crmd:     info: match_graph_event:  Action 
xstha1_san0_IP_start_0 (3) confirmed on xstha2 (rc=0)
Dec 14 12:35:34 [678]       crmd:   notice: run_graph:  Transition 6 
(Complete=1, Pending=0, Fired=0, Skipped=0, Incomplete=0, 
Source=/sonicle/var/cluster/lib/pacemaker/pengine/pe-input-74.bz2): Complete
Dec 14 12:35:34 [678]       crmd:     info: do_log:     Input I_TE_SUCCESS 
received in state S_TRANSITION_ENGINE from notify_crmd
Dec 14 12:35:34 [678]       crmd:   notice: do_state_transition:        State 
transition S_TRANSITION_ENGINE -> S_IDLE | input=I_TE_SUCCESS 
cause=C_FSA_INTERNAL origin=notify_crmd