On 25 Mar 2014, at 4:22 am, K Mehta <[email protected]> wrote:
> Hi,
>
> I created a cloned resource vha-46cd52eb-fecc-49f8-bbe8-bc4157672b7b on
> systems vsanqa11/12.
> service pacemaker stop is called on vsanqa12 at 6:40:19 and completes
> approx at 6:41:18
> service pacemaker start is called on vsanqa12 at 6:41:20 and completes
> at 6:42:30
>
> I see that on vsanqa11, a stop on resource gets called (for vsanqa12)
> at 6:42:29. Why does pacemaker invoke a stop on resource
> vha-46cd52eb-fecc-49f8-bbe8-bc4157672b7b:1 ?

Could be a bug in an old version. Also:
http://blog.clusterlabs.org/blog/2014/potential-for-data-corruption-in-pacemaker-1-dot-1-6-through-1-dot-1-9/

> and why are there so many processor joined/left messages during this
> period on vsanqa12

That would be one for the corosync folks.

> Configuration
> ==========
>
> [root@vsanqa12 ~]# crm configure show
> node vsanqa11
> node vsanqa12
> primitive vha-46cd52eb-fecc-49f8-bbe8-bc4157672b7b ocf:heartbeat:vgc-cm-agent.ocf \
>         params cluster_uuid="46cd52eb-fecc-49f8-bbe8-bc4157672b7b" \
>         op monitor interval="30s" role="Master" timeout="100s" \
>         op monitor interval="31s" role="Slave" timeout="100s"
> ms ms-46cd52eb-fecc-49f8-bbe8-bc4157672b7b vha-46cd52eb-fecc-49f8-bbe8-bc4157672b7b \
>         meta clone-max="2" globally-unique="false" target-role="Started"
> location ms-46cd52eb-fecc-49f8-bbe8-bc4157672b7b-nodes ms-46cd52eb-fecc-49f8-bbe8-bc4157672b7b \
>         rule $id="ms-46cd52eb-fecc-49f8-bbe8-bc4157672b7b-nodes-rule" -inf: #uname ne vsanqa11 and #uname ne vsanqa12
> property $id="cib-bootstrap-options" \
>         dc-version="1.1.8-7.el6-394e906" \
>         cluster-infrastructure="cman" \
>         stonith-enabled="false" \
>         no-quorum-policy="ignore"
> rsc_defaults $id="rsc-options" \
>         resource-stickiness="100"
>
> Logs..
>
> vsanqa11
>
> Mar 24 06:37:38 vsanqa11 kernel: VGC: [000000650fed1b03:I] Instance "VHA" connected with peer "vsanqa12" (status 0xc, 1, 0)
> Mar 24 06:37:58 vsanqa11 attrd[24424]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-vha-46cd52eb-fecc-49f8-bbe8-bc4157672b7b (2)
> Mar 24 06:37:58 vsanqa11 attrd[24424]: notice: attrd_perform_update: Sent update 11: master-vha-46cd52eb-fecc-49f8-bbe8-bc4157672b7b=2
> Mar 24 06:40:17 vsanqa11 attrd[24424]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-vha-46cd52eb-fecc-49f8-bbe8-bc4157672b7b (4)
> Mar 24 06:40:17 vsanqa11 attrd[24424]: notice: attrd_perform_update: Sent update 17: master-vha-46cd52eb-fecc-49f8-bbe8-bc4157672b7b=4
> Mar 24 06:40:17 vsanqa11 crmd[24426]: notice: process_lrm_event: LRM operation vha-46cd52eb-fecc-49f8-bbe8-bc4157672b7b_promote_0 (call=18, rc=0, cib-update=12, confirmed=true) ok
> Mar 24 06:40:17 vsanqa11 crmd[24426]: notice: process_lrm_event: LRM operation vha-46cd52eb-fecc-49f8-bbe8-bc4157672b7b_monitor_30000 (call=21, rc=8, cib-update=13, confirmed=false) master
> Mar 24 06:40:17 vsanqa11 crmd[24426]: notice: peer_update_callback: Got client status callback - our DC is dead
> Mar 24 06:40:17 vsanqa11 crmd[24426]: notice: do_state_transition: State transition S_NOT_DC -> S_ELECTION [ input=I_ELECTION cause=C_CRMD_STATUS_CALLBACK origin=peer_update_callback ]
> Mar 24 06:40:17 vsanqa11 crmd[24426]: notice: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_FSA_INTERNAL origin=do_election_check ]
> Mar 24 06:40:17 vsanqa11 corosync[24211]: [TOTEM ] Retransmit List: fa fb
> Mar 24 06:40:17 vsanqa11 attrd[24424]: notice: attrd_local_callback: Sending full refresh (origin=crmd)
> Mar 24 06:40:17 vsanqa11 attrd[24424]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-vha-46cd52eb-fecc-49f8-bbe8-bc4157672b7b (4)
> Mar 24 06:40:17 vsanqa11 attrd[24424]: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
> Mar 24 06:40:18 vsanqa11 pengine[24425]: notice: unpack_config: On loss of CCM Quorum: Ignore
> Mar 24 06:40:18 vsanqa11 pengine[24425]: notice: process_pe_message: Calculated Transition 0: /var/lib/pacemaker/pengine/pe-input-359.bz2
> Mar 24 06:40:18 vsanqa11 crmd[24426]: notice: run_graph: Transition 0 (Complete=0, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-359.bz2): Complete
> Mar 24 06:40:18 vsanqa11 crmd[24426]: notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
> Mar 24 06:40:19 vsanqa11 corosync[24211]: [CMAN ] quorum lost, blocking activity
> Mar 24 06:40:19 vsanqa11 corosync[24211]: [QUORUM] This node is within the non-primary component and will NOT provide any services.
> Mar 24 06:40:19 vsanqa11 corosync[24211]: [QUORUM] Members[1]: 1
> Mar 24 06:40:19 vsanqa11 corosync[24211]: [TOTEM ] A processor joined or left the membership and a new membership was formed.
> Mar 24 06:40:19 vsanqa11 crmd[24426]: notice: cman_event_callback: Membership 4047912: quorum lost
> Mar 24 06:40:19 vsanqa11 crmd[24426]: notice: crm_update_peer_state: cman_event_callback: Node vsanqa12[2] - state is now lost
> Mar 24 06:40:19 vsanqa11 corosync[24211]: [CPG ] chosen downlist: sender r(0) ip(172.16.68.123) ; members(old:2 left:1)
> Mar 24 06:40:19 vsanqa11 corosync[24211]: [MAIN ] Completed service synchronization, ready to provide service.
> Mar 24 06:40:19 vsanqa11 kernel: dlm: closing connection to node 2
> Mar 24 06:42:01 vsanqa11 kernel: doing a send with ctx_id 1
> Mar 24 06:42:07 vsanqa11 kernel: VGC: [000000650fed1b03:I] Instance "VHA" connected with peer "vsanqa12" (status 0xc, 1, 0)
> Mar 24 06:42:26 vsanqa11 corosync[24211]: [TOTEM ] A processor joined or left the membership and a new membership was formed.
> Mar 24 06:42:26 vsanqa11 corosync[24211]: [CMAN ] quorum regained, resuming activity
> Mar 24 06:42:26 vsanqa11 corosync[24211]: [QUORUM] This node is within the primary component and will provide service.
> Mar 24 06:42:26 vsanqa11 corosync[24211]: [QUORUM] Members[2]: 1 2
> Mar 24 06:42:26 vsanqa11 corosync[24211]: [QUORUM] Members[2]: 1 2
> Mar 24 06:42:26 vsanqa11 crmd[24426]: notice: cman_event_callback: Membership 4047980: quorum acquired
> Mar 24 06:42:26 vsanqa11 crmd[24426]: notice: crm_update_peer_state: cman_event_callback: Node vsanqa12[2] - state is now member
> Mar 24 06:42:26 vsanqa11 corosync[24211]: [CPG ] chosen downlist: sender r(0) ip(172.16.68.123) ; members(old:1 left:0)
> Mar 24 06:42:26 vsanqa11 corosync[24211]: [MAIN ] Completed service synchronization, ready to provide service.
> Mar 24 06:42:27 vsanqa11 crmd[24426]: warning: match_down_event: No match for shutdown action on vsanqa12
> Mar 24 06:42:27 vsanqa11 crmd[24426]: warning: crmd_ha_msg_filter: Another DC detected: vsanqa12 (op=noop)
> Mar 24 06:42:27 vsanqa11 crmd[24426]: warning: crmd_ha_msg_filter: Another DC detected: vsanqa12 (op=noop)
> Mar 24 06:42:27 vsanqa11 crmd[24426]: notice: do_state_transition: State transition S_IDLE -> S_ELECTION [ input=I_ELECTION cause=C_FSA_INTERNAL origin=crmd_ha_msg_filter ]
> Mar 24 06:42:27 vsanqa11 crmd[24426]: warning: do_log: FSA: Input I_NODE_JOIN from peer_update_callback() received in state S_ELECTION
> Mar 24 06:42:27 vsanqa11 cib[24421]: warning: cib_process_diff: Diff 0.11071.16 -> 0.11071.17 from vsanqa12 not applied to 0.11071.93: current "num_updates" is greater than required
> Mar 24 06:42:27 vsanqa11 cib[24421]: warning: cib_process_diff: Diff 0.11071.17 -> 0.11071.18 from vsanqa12 not applied to 0.11071.93: current "num_updates" is greater than required
> Mar 24 06:42:27 vsanqa11 cib[24421]: warning: cib_process_diff: Diff 0.11071.18 -> 0.11071.19 from vsanqa12 not applied to 0.11071.93: current "num_updates" is greater than required
> Mar 24 06:42:27 vsanqa11 cib[24421]: warning: cib_process_diff: Diff 0.11071.19 -> 0.11071.20 from vsanqa12 not applied to 0.11071.93: current "num_updates" is greater than required
> Mar 24 06:42:27 vsanqa11 cib[24421]: warning: cib_process_diff: Diff 0.11071.20 -> 0.11071.21 from vsanqa12 not applied to 0.11071.93: current "num_updates" is greater than required
> Mar 24 06:42:27 vsanqa11 cib[24421]: warning: cib_process_diff: Diff 0.11071.21 -> 0.11071.22 from vsanqa12 not applied to 0.11071.93: current "num_updates" is greater than required
> Mar 24 06:42:27 vsanqa11 crmd[24426]: notice: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_FSA_INTERNAL origin=do_election_check ]
> Mar 24 06:42:28 vsanqa11 attrd[24424]: notice: attrd_local_callback: Sending full refresh (origin=crmd)
> Mar 24 06:42:28 vsanqa11 attrd[24424]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-vha-46cd52eb-fecc-49f8-bbe8-bc4157672b7b (4)
> Mar 24 06:42:28 vsanqa11 attrd[24424]: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
> Mar 24 06:42:28 vsanqa11 corosync[24211]: [TOTEM ] Retransmit List: aa
> Mar 24 06:42:29 vsanqa11 pengine[24425]: notice: unpack_config: On loss of CCM Quorum: Ignore
> Mar 24 06:42:29 vsanqa11 pengine[24425]: notice: LogActions: Stop    vha-46cd52eb-fecc-49f8-bbe8-bc4157672b7b:1#011(vsanqa12)   ===<<< ***STOP***>>>====
> Mar 24 06:42:29 vsanqa11 pengine[24425]: notice: process_pe_message: Calculated Transition 1: /var/lib/pacemaker/pengine/pe-input-360.bz2
> Mar 24 06:42:35 vsanqa11 crmd[24426]: notice: run_graph: Transition 1 (Complete=3, Pending=0, Fired=0, Skipped=1, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-360.bz2): Stopped
> Mar 24 06:42:35 vsanqa11 pengine[24425]: notice: unpack_config: On loss of CCM Quorum: Ignore
> Mar 24 06:42:35 vsanqa11 pengine[24425]: notice: process_pe_message: Calculated Transition 2: /var/lib/pacemaker/pengine/pe-input-361.bz2
> Mar 24 06:42:35 vsanqa11 crmd[24426]: notice: run_graph: Transition 2 (Complete=0, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-361.bz2): Complete
> Mar 24 06:42:35 vsanqa11 crmd[24426]: notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
>
>
> vsanqa12
>
> Mar 24 06:40:19 vsanqa12 corosync[15344]: [SERV ] Unloading all Corosync service engines.
> Mar 24 06:40:19 vsanqa12 corosync[15344]: [SERV ] Service engine unloaded: corosync extended virtual synchrony service
> Mar 24 06:40:19 vsanqa12 corosync[15344]: [SERV ] Service engine unloaded: corosync configuration service
> Mar 24 06:40:19 vsanqa12 corosync[15344]: [SERV ] Service engine unloaded: corosync cluster closed process group service v1.01
> Mar 24 06:40:19 vsanqa12 corosync[15344]: [SERV ] Service engine unloaded: corosync cluster config database access v1.01
> Mar 24 06:40:19 vsanqa12 corosync[15344]: [SERV ] Service engine unloaded: corosync profile loading service
> Mar 24 06:40:19 vsanqa12 corosync[15344]: [SERV ] Service engine unloaded: openais checkpoint service B.01.01
> Mar 24 06:40:19 vsanqa12 corosync[15344]: [SERV ] Service engine unloaded: corosync CMAN membership service 2.90
> Mar 24 06:40:19 vsanqa12 corosync[15344]: [SERV ] Service engine unloaded: corosync cluster quorum service v0.1
> Mar 24 06:40:19 vsanqa12 corosync[15344]: [MAIN ] Corosync Cluster Engine exiting with status 0 at main.c:1894.
> Mar 24 06:41:22 vsanqa12 kernel: DLM (built Nov 9 2011 08:04:11) installed
> Mar 24 06:41:22 vsanqa12 corosync[17159]: [MAIN ] Corosync Cluster Engine ('1.4.1'): started and ready to provide service.
> Mar 24 06:41:22 vsanqa12 corosync[17159]: [MAIN ] Corosync built-in features: nss dbus rdma snmp
> Mar 24 06:41:22 vsanqa12 corosync[17159]: [MAIN ] Successfully read config from /etc/cluster/cluster.conf
> Mar 24 06:41:22 vsanqa12 corosync[17159]: [MAIN ] Successfully parsed cman config
> Mar 24 06:41:22 vsanqa12 corosync[17159]: [TOTEM ] Initializing transport (UDP/IP Multicast).
> Mar 24 06:41:22 vsanqa12 corosync[17159]: [TOTEM ] Initializing transmit/receive security: libtomcrypt SOBER128/SHA1HMAC (mode 0).
> Mar 24 06:41:23 vsanqa12 corosync[17159]: [TOTEM ] The network interface [172.16.68.124] is now up.
> Mar 24 06:41:23 vsanqa12 corosync[17159]: [QUORUM] Using quorum provider quorum_cman
> Mar 24 06:41:23 vsanqa12 corosync[17159]: [SERV ] Service engine loaded: corosync cluster quorum service v0.1
> Mar 24 06:41:23 vsanqa12 corosync[17159]: [CMAN ] CMAN 3.0.12.1 (built Feb 23 2013 10:25:47) started
> Mar 24 06:41:23 vsanqa12 corosync[17159]: [SERV ] Service engine loaded: corosync CMAN membership service 2.90
> Mar 24 06:41:23 vsanqa12 corosync[17159]: [SERV ] Service engine loaded: openais checkpoint service B.01.01
> Mar 24 06:41:23 vsanqa12 corosync[17159]: [SERV ] Service engine loaded: corosync extended virtual synchrony service
> Mar 24 06:41:23 vsanqa12 corosync[17159]: [SERV ] Service engine loaded: corosync configuration service
> Mar 24 06:41:23 vsanqa12 corosync[17159]: [SERV ] Service engine loaded: corosync cluster closed process group service v1.01
> Mar 24 06:41:23 vsanqa12 corosync[17159]: [SERV ] Service engine loaded: corosync cluster config database access v1.01
> Mar 24 06:41:23 vsanqa12 corosync[17159]: [SERV ] Service engine loaded: corosync profile loading service
> Mar 24 06:41:23 vsanqa12 corosync[17159]: [QUORUM] Using quorum provider quorum_cman
> Mar 24 06:41:23 vsanqa12 corosync[17159]: [SERV ] Service engine loaded: corosync cluster quorum service v0.1
> Mar 24 06:41:23 vsanqa12 corosync[17159]: [MAIN ] Compatibility mode set to whitetank. Using V1 and V2 of the synchronization engine.
> Mar 24 06:41:23 vsanqa12 corosync[17159]: [TOTEM ] A processor joined or left the membership and a new membership was formed.
> Mar 24 06:41:23 vsanqa12 corosync[17159]: [QUORUM] Members[1]: 2
> Mar 24 06:41:23 vsanqa12 corosync[17159]: [QUORUM] Members[1]: 2
> Mar 24 06:41:23 vsanqa12 corosync[17159]: [CPG ] chosen downlist: sender r(0) ip(172.16.68.124) ; members(old:0 left:0)
> Mar 24 06:41:23 vsanqa12 corosync[17159]: [MAIN ] Completed service synchronization, ready to provide service.
> Mar 24 06:41:25 vsanqa12 corosync[17159]: [TOTEM ] A processor joined or left the membership and a new membership was formed.
> Mar 24 06:41:25 vsanqa12 corosync[17159]: [CPG ] chosen downlist: sender r(0) ip(172.16.68.124) ; members(old:1 left:0)
> Mar 24 06:41:25 vsanqa12 corosync[17159]: [MAIN ] Completed service synchronization, ready to provide service.
> Mar 24 06:41:27 vsanqa12 fenced[17213]: fenced 3.0.12.1 started
> Mar 24 06:41:27 vsanqa12 dlm_controld[17239]: dlm_controld 3.0.12.1 started
> Mar 24 06:41:28 vsanqa12 gfs_controld[17288]: gfs_controld 3.0.12.1 started
> Mar 24 06:41:29 vsanqa12 corosync[17159]: [TOTEM ] A processor joined or left the membership and a new membership was formed.
> Mar 24 06:41:29 vsanqa12 corosync[17159]: [MAIN ] Completed service synchronization, ready to provide service.
> Mar 24 06:41:30 vsanqa12 pacemakerd[17363]: notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log
> Mar 24 06:41:30 vsanqa12 pacemakerd[17363]: notice: main: Starting Pacemaker 1.1.8-7.el6 (Build: 394e906): generated-manpages agent-manpages ascii-docs publican-docs ncurses libqb-logging libqb-ipc corosync-plugin cman
> Mar 24 06:41:30 vsanqa12 pacemakerd[17363]: notice: update_node_processes: 0x125af80 Node 2 now known as vsanqa12, was:
> Mar 24 06:41:30 vsanqa12 stonith-ng[17370]: notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log
> Mar 24 06:41:30 vsanqa12 cib[17369]: notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log
> Mar 24 06:41:30 vsanqa12 stonith-ng[17370]: notice: crm_cluster_connect: Connecting to cluster infrastructure: cman
> Mar 24 06:41:30 vsanqa12 lrmd[17371]: notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log
> Mar 24 06:41:30 vsanqa12 cib[17369]: notice: main: Using legacy config location: /var/lib/heartbeat/crm
> Mar 24 06:41:30 vsanqa12 attrd[17372]: notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log
> Mar 24 06:41:30 vsanqa12 attrd[17372]: notice: crm_cluster_connect: Connecting to cluster infrastructure: cman
> Mar 24 06:41:30 vsanqa12 pengine[17373]: notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log
> Mar 24 06:41:30 vsanqa12 crmd[17374]: notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log
> Mar 24 06:41:30 vsanqa12 crmd[17374]: notice: main: CRM Git Version: 394e906
> Mar 24 06:41:30 vsanqa12 attrd[17372]: notice: main: Starting mainloop...
> Mar 24 06:41:30 vsanqa12 cib[17369]: notice: crm_cluster_connect: Connecting to cluster infrastructure: cman
> Mar 24 06:41:31 vsanqa12 stonith-ng[17370]: notice: setup_cib: Watching for stonith topology changes
> Mar 24 06:41:31 vsanqa12 crmd[17374]: notice: crm_cluster_connect: Connecting to cluster infrastructure: cman
> Mar 24 06:41:31 vsanqa12 crmd[17374]: notice: crm_update_peer_state: cman_event_callback: Node vsanqa11[1] - state is now lost
> Mar 24 06:41:31 vsanqa12 crmd[17374]: notice: crm_update_peer_state: cman_event_callback: Node vsanqa12[2] - state is now member
> Mar 24 06:41:31 vsanqa12 crmd[17374]: notice: do_started: The local CRM is operational
> Mar 24 06:41:33 vsanqa12 corosync[17159]: [TOTEM ] A processor joined or left the membership and a new membership was formed.
> Mar 24 06:41:33 vsanqa12 corosync[17159]: [CPG ] chosen downlist: sender r(0) ip(172.16.68.124) ; members(old:1 left:0)
> Mar 24 06:41:33 vsanqa12 corosync[17159]: [MAIN ] Completed service synchronization, ready to provide service.
> Mar 24 06:41:37 vsanqa12 corosync[17159]: [TOTEM ] A processor joined or left the membership and a new membership was formed.
> Mar 24 06:41:37 vsanqa12 corosync[17159]: [CPG ] chosen downlist: sender r(0) ip(172.16.68.124) ; members(old:1 left:0)
> Mar 24 06:41:37 vsanqa12 corosync[17159]: [MAIN ] Completed service synchronization, ready to provide service.
> Mar 24 06:41:40 vsanqa12 corosync[17159]: [TOTEM ] A processor joined or left the membership and a new membership was formed.
> Mar 24 06:41:40 vsanqa12 corosync[17159]: [CPG ] chosen downlist: sender r(0) ip(172.16.68.124) ; members(old:1 left:0)
> Mar 24 06:41:40 vsanqa12 corosync[17159]: [MAIN ] Completed service synchronization, ready to provide service.
> Mar 24 06:41:44 vsanqa12 corosync[17159]: [TOTEM ] A processor joined or left the membership and a new membership was formed.
> Mar 24 06:41:44 vsanqa12 corosync[17159]: [CPG ] chosen downlist: sender r(0) ip(172.16.68.124) ; members(old:1 left:0)
> Mar 24 06:41:44 vsanqa12 corosync[17159]: [MAIN ] Completed service synchronization, ready to provide service.
> Mar 24 06:41:48 vsanqa12 corosync[17159]: [TOTEM ] A processor joined or left the membership and a new membership was formed.
> Mar 24 06:41:48 vsanqa12 corosync[17159]: [CPG ] chosen downlist: sender r(0) ip(172.16.68.124) ; members(old:1 left:0)
> Mar 24 06:41:48 vsanqa12 corosync[17159]: [MAIN ] Completed service synchronization, ready to provide service.
> Mar 24 06:41:52 vsanqa12 corosync[17159]: [TOTEM ] A processor joined or left the membership and a new membership was formed.
> Mar 24 06:41:52 vsanqa12 corosync[17159]: [CPG ] chosen downlist: sender r(0) ip(172.16.68.124) ; members(old:1 left:0)
> Mar 24 06:41:52 vsanqa12 corosync[17159]: [MAIN ] Completed service synchronization, ready to provide service.
> Mar 24 06:41:52 vsanqa12 crmd[17374]: warning: do_log: FSA: Input I_DC_TIMEOUT from crm_timer_popped() received in state S_PENDING
> Mar 24 06:41:52 vsanqa12 crmd[17374]: notice: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_FSA_INTERNAL origin=do_election_check ]
> Mar 24 06:41:52 vsanqa12 attrd[17372]: notice: attrd_local_callback: Sending full refresh (origin=crmd)
> Mar 24 06:41:53 vsanqa12 pengine[17373]: notice: unpack_config: On loss of CCM Quorum: Ignore
> Mar 24 06:41:53 vsanqa12 pengine[17373]: notice: LogActions: Start   vha-46cd52eb-fecc-49f8-bbe8-bc4157672b7b:0#011(vsanqa12)
> Mar 24 06:41:53 vsanqa12 pengine[17373]: notice: process_pe_message: Calculated Transition 0: /var/lib/pacemaker/pengine/pe-input-1494.bz2
> Mar 24 06:41:54 vsanqa12 crmd[17374]: notice: process_lrm_event: LRM operation vha-46cd52eb-fecc-49f8-bbe8-bc4157672b7b_monitor_0 (call=6, rc=7, cib-update=24, confirmed=true) not running
> Mar 24 06:41:54 vsanqa12 attrd[17372]: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
> Mar 24 06:41:54 vsanqa12 attrd[17372]: notice: attrd_perform_update: Sent update 4: probe_complete=true
> Mar 24 06:41:54 vsanqa12 kernel: VGC: [0000006711331b03:I] Started vHA/vShare instance /dev/vgca0_VHA
> Mar 24 06:41:55 vsanqa12 crmd[17374]: notice: process_lrm_event: LRM operation vha-46cd52eb-fecc-49f8-bbe8-bc4157672b7b_start_0 (call=9, rc=0, cib-update=25, confirmed=true) ok
> Mar 24 06:41:55 vsanqa12 crmd[17374]: notice: process_lrm_event: LRM operation vha-46cd52eb-fecc-49f8-bbe8-bc4157672b7b_monitor_31000 (call=12, rc=0, cib-update=26, confirmed=false) ok
> Mar 24 06:41:55 vsanqa12 crmd[17374]: notice: run_graph: Transition 0 (Complete=7, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-1494.bz2): Complete
> Mar 24 06:41:55 vsanqa12 crmd[17374]: notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
> Mar 24 06:41:56 vsanqa12 corosync[17159]: [TOTEM ] A processor joined or left the membership and a new membership was formed.
> Mar 24 06:41:56 vsanqa12 corosync[17159]: [CPG ] chosen downlist: sender r(0) ip(172.16.68.124) ; members(old:1 left:0)
> Mar 24 06:41:56 vsanqa12 corosync[17159]: [MAIN ] Completed service synchronization, ready to provide service.
> Mar 24 06:42:00 vsanqa12 corosync[17159]: [TOTEM ] A processor joined or left the membership and a new membership was formed.
> Mar 24 06:42:00 vsanqa12 corosync[17159]: [CPG ] chosen downlist: sender r(0) ip(172.16.68.124) ; members(old:1 left:0)
> Mar 24 06:42:00 vsanqa12 corosync[17159]: [MAIN ] Completed service synchronization, ready to provide service.
> Mar 24 06:42:01 vsanqa12 kernel: doing a send with ctx_id 1
> Mar 24 06:42:03 vsanqa12 corosync[17159]: [TOTEM ] A processor joined or left the membership and a new membership was formed.
> Mar 24 06:42:03 vsanqa12 corosync[17159]: [CPG ] chosen downlist: sender r(0) ip(172.16.68.124) ; members(old:1 left:0)
> Mar 24 06:42:03 vsanqa12 corosync[17159]: [MAIN ] Completed service synchronization, ready to provide service.
> Mar 24 06:42:06 vsanqa12 kernel: vgca0_VHA: unknown partition table
> Mar 24 06:42:07 vsanqa12 kernel: doing a send with ctx_id 1
> Mar 24 06:42:07 vsanqa12 kernel: VGC: [000000650fed1b03:I] Instance "VHA" connected with peer "vsanqa11" (status 0xc, 1, 0)
> Mar 24 06:42:07 vsanqa12 corosync[17159]: [TOTEM ] A processor joined or left the membership and a new membership was formed.
> Mar 24 06:42:07 vsanqa12 corosync[17159]: [CPG ] chosen downlist: sender r(0) ip(172.16.68.124) ; members(old:1 left:0)
> Mar 24 06:42:07 vsanqa12 corosync[17159]: [MAIN ] Completed service synchronization, ready to provide service.
> Mar 24 06:42:11 vsanqa12 corosync[17159]: [TOTEM ] A processor joined or left the membership and a new membership was formed.
> Mar 24 06:42:11 vsanqa12 corosync[17159]: [CPG ] chosen downlist: sender r(0) ip(172.16.68.124) ; members(old:1 left:0)
> Mar 24 06:42:11 vsanqa12 corosync[17159]: [MAIN ] Completed service synchronization, ready to provide service.
> Mar 24 06:42:15 vsanqa12 corosync[17159]: [TOTEM ] A processor joined or left the membership and a new membership was formed.
> Mar 24 06:42:15 vsanqa12 corosync[17159]: [CPG ] chosen downlist: sender r(0) ip(172.16.68.124) ; members(old:1 left:0)
> Mar 24 06:42:15 vsanqa12 corosync[17159]: [MAIN ] Completed service synchronization, ready to provide service.
> Mar 24 06:42:19 vsanqa12 corosync[17159]: [TOTEM ] A processor joined or left the membership and a new membership was formed.
> Mar 24 06:42:19 vsanqa12 corosync[17159]: [CPG ] chosen downlist: sender r(0) ip(172.16.68.124) ; members(old:1 left:0)
> Mar 24 06:42:19 vsanqa12 corosync[17159]: [MAIN ] Completed service synchronization, ready to provide service.
> Mar 24 06:42:22 vsanqa12 corosync[17159]: [TOTEM ] A processor joined or left the membership and a new membership was formed.
> Mar 24 06:42:22 vsanqa12 corosync[17159]: [TOTEM ] A processor joined or left the membership and a new membership was formed.
> Mar 24 06:42:22 vsanqa12 corosync[17159]: [CPG ] chosen downlist: sender r(0) ip(172.16.68.124) ; members(old:1 left:0)
> Mar 24 06:42:22 vsanqa12 corosync[17159]: [MAIN ] Completed service synchronization, ready to provide service.
> Mar 24 06:42:26 vsanqa12 corosync[17159]: [TOTEM ] A processor joined or left the membership and a new membership was formed.
> Mar 24 06:42:26 vsanqa12 corosync[17159]: [CMAN ] quorum regained, resuming activity
> Mar 24 06:42:26 vsanqa12 corosync[17159]: [QUORUM] This node is within the primary component and will provide service.
> Mar 24 06:42:26 vsanqa12 corosync[17159]: [QUORUM] Members[2]: 1 2
> Mar 24 06:42:26 vsanqa12 corosync[17159]: [QUORUM] Members[2]: 1 2
> Mar 24 06:42:26 vsanqa12 crmd[17374]: notice: cman_event_callback: Membership 4047980: quorum acquired
> Mar 24 06:42:26 vsanqa12 crmd[17374]: notice: crm_update_peer_state: cman_event_callback: Node vsanqa11[1] - state is now member
> Mar 24 06:42:26 vsanqa12 corosync[17159]: [CPG ] chosen downlist: sender r(0) ip(172.16.68.123) ; members(old:1 left:0)
> Mar 24 06:42:26 vsanqa12 corosync[17159]: [MAIN ] Completed service synchronization, ready to provide service.
> Mar 24 06:42:26 vsanqa12 fenced[17213]: fencing deferred to vsanqa11
> Mar 24 06:42:27 vsanqa12 crmd[17374]: warning: match_down_event: No match for shutdown action on vsanqa11
> Mar 24 06:42:27 vsanqa12 cib[17369]: warning: cib_server_process_diff: Not requesting full refresh in R/W mode
> Mar 24 06:42:27 vsanqa12 crmd[17374]: warning: crmd_ha_msg_filter: Another DC detected: vsanqa11 (op=noop)
> Mar 24 06:42:27 vsanqa12 crmd[17374]: warning: crmd_ha_msg_filter: Another DC detected: vsanqa11 (op=noop)
> Mar 24 06:42:27 vsanqa12 crmd[17374]: notice: do_state_transition: State transition S_IDLE -> S_ELECTION [ input=I_ELECTION cause=C_FSA_INTERNAL origin=crmd_ha_msg_filter ]
> Mar 24 06:42:27 vsanqa12 crmd[17374]: warning: do_log: FSA: Input I_NODE_JOIN from peer_update_callback() received in state S_ELECTION
> Mar 24 06:42:27 vsanqa12 crmd[17374]: notice: do_state_transition: State transition S_ELECTION -> S_RELEASE_DC [ input=I_RELEASE_DC cause=C_FSA_INTERNAL origin=do_election_count_vote ]
> Mar 24 06:42:27 vsanqa12 pacemakerd[17363]: notice: update_node_processes: 0x1271260 Node 1 now known as vsanqa11, was:
> Mar 24 06:42:27 vsanqa12 cib[17369]: warning: cib_server_process_diff: Not requesting full refresh in R/W mode
> Mar 24 06:42:27 vsanqa12 cib[17369]: warning: cib_server_process_diff: Not requesting full refresh in R/W mode
> Mar 24 06:42:27 vsanqa12 cib[17369]: warning: cib_server_process_diff: Not requesting full refresh in R/W mode
> Mar 24 06:42:27 vsanqa12 cib[17369]: warning: cib_server_process_diff: Not requesting full refresh in R/W mode
> Mar 24 06:42:27 vsanqa12 cib[17369]: warning: cib_server_process_diff: Not requesting full refresh in R/W mode
> Mar 24 06:42:27 vsanqa12 crmd[17374]: warning: do_log: FSA: Input I_RELEASE_DC from do_election_count_vote() received in state S_RELEASE_DC
> Mar 24 06:42:27 vsanqa12 attrd[17372]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-vha-46cd52eb-fecc-49f8-bbe8-bc4157672b7b (3)
> Mar 24 06:42:27 vsanqa12 attrd[17372]: notice: attrd_perform_update: Sent update 7: master-vha-46cd52eb-fecc-49f8-bbe8-bc4157672b7b=3
> Mar 24 06:42:27 vsanqa12 cib[17369]: notice: cib_server_process_diff: Not applying diff 0.11071.94 -> 0.11071.95 (sync in progress)
> Mar 24 06:42:27 vsanqa12 cib[17369]: notice: cib_server_process_diff: Not applying diff 0.11071.95 -> 0.11071.96 (sync in progress)
> Mar 24 06:42:28 vsanqa12 crmd[17374]: notice: do_state_transition: State transition S_PENDING -> S_NOT_DC [ input=I_NOT_DC cause=C_HA_MESSAGE origin=do_cl_join_finalize_respond ]
> Mar 24 06:42:28 vsanqa12 attrd[17372]: notice: attrd_local_callback: Sending full refresh (origin=crmd)
> Mar 24 06:42:28 vsanqa12 attrd[17372]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-vha-46cd52eb-fecc-49f8-bbe8-bc4157672b7b (3)
> Mar 24 06:42:28 vsanqa12 attrd[17372]: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
> Mar 24 06:42:29 vsanqa12 kernel: VGC: [0000006711341b03:I] Stopped vHA/vShare instance /dev/vgca0_VHA
> Mar 24 06:42:35 vsanqa12 attrd[17372]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-vha-46cd52eb-fecc-49f8-bbe8-bc4157672b7b (<null>)
> Mar 24 06:42:35 vsanqa12 attrd[17372]: notice: attrd_perform_update: Sent delete 30: node=vsanqa12, attr=master-vha-46cd52eb-fecc-49f8-bbe8-bc4157672b7b, id=<n/a>, set=(null), section=status
>
> Regards,
> kiran
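
A couple of things you can check on your side. First, confirm the installed
build falls in the affected range before planning an upgrade (a quick
sketch; the rpm query assumes the el6 packaging your logs show):

    # Both should report 1.1.8 here; anything from 1.1.6 through 1.1.9
    # is in the range the blog post above covers.
    rpm -q pacemaker
    crm_mon --version

Second, for this kind of restart testing, consider putting the node in
standby first so its resources are demoted and stopped cleanly before
pacemaker exits. This is only a suggested sequence using the crm shell you
already have, not a fix for the underlying bug, but it avoids stop/start
racing against a freshly restarted instance:

    crm node standby vsanqa12
    service pacemaker stop
    # ... restart testing / maintenance ...
    service pacemaker start
    crm node online vsanqa12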
_______________________________________________
Pacemaker mailing list: [email protected]
http://oss.clusterlabs.org/mailman/listinfo/pacemaker

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org
