[ClusterLabs] DC marks itself as OFFLINE, continues orchestrating the other nodes

Ken Gaillot kgaillot at redhat.com
Thu Sep 29 18:37:06 EDT 2022


I suspect this is fixed in newer versions. It's not a join timing issue
but some sort of peer state bug, and there's been a good bit of change
in that area since the version this code comes from.

A few comments inline ...

On Wed, 2022-09-14 at 12:40 +0200, Lars Ellenberg wrote:
> On Thu, Sep 08, 2022 at 10:11:46AM -0500, Ken Gaillot wrote:
> > On Thu, 2022-09-08 at 15:01 +0200, Lars Ellenberg wrote:
> > > Scenario:
> > > three nodes, no fencing (I know)
> > > break network, isolating nodes
> > > unbreak network, see how cluster partitions rejoin and resume
> > > service
> > 
> > I'm guessing the CIB changed during the break, with more changes in
> > one
> > of the other partitions than mqhavm24 ...
> 
> quite likely.
> 
> > Reconciling CIB differences in different partitions is inherently
> > lossy. Basically we gotta pick one side to win, and the current
> > algorithm just looks at the number of changes. (An "admin epoch"
> > can
> > also be bumped manually to override that.)
> 
> Yes.

That turned out to be unrelated; the CIBs re-synced after the rejoin
without a problem.
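
For reference, the version comparison behind that rule: the CIB version
is the tuple admin_epoch.epoch.num_updates, compared lexicographically,
which is also why the 0.169.51 replacement in the logs below loses to
0.172.0. A rough sketch of that precedence (illustrative field names,
not the actual cib daemon code):

    struct cib_version {
        int admin_epoch;   /* only ever bumped by an administrator */
        int epoch;         /* bumped on configuration changes */
        int num_updates;   /* bumped on status updates */
    };

    /* <0 means a loses to b, >0 means a wins, 0 means equal */
    static int
    cib_version_cmp(const struct cib_version *a, const struct cib_version *b)
    {
        if (a->admin_epoch != b->admin_epoch) {
            return a->admin_epoch - b->admin_epoch;
        }
        if (a->epoch != b->epoch) {
            return a->epoch - b->epoch;
        }
        return a->num_updates - b->num_updates;
    }

A manually bumped admin_epoch wins before the change counters are even
looked at, which is the override mentioned above.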

> 
> > > I have full crm_reports and some context knowledge about the
> > > setup.
> > > 
> > > For now I'd like to know: has anyone seen this before,
> > > is that a known bug in corner cases/races during re-join,
> > > has it even been fixed meanwhile?
> > 
> > No, yes, no

Probably no, no, yes :)

> 
> Thank you.
> That's what I thought :-|
> 
> > It does seem we could handle the specific case of the local node's
> > state being overwritten a little better. We can't just override the
> > join state if the other nodes think it is different, but we could
> > release DC and restart the join process. How did it handle the
> > situation in this case?
> 
> I think these are the most interesting lines:
> 
> -----------------
> Aug 11 12:32:45 mqhavm24 corosync[13296]:  [QUORUM] Members[1]: 1
>    stopping stuff
> 
> Aug 11 12:33:36 mqhavm24 corosync[13296]:  [QUORUM] Members[3]: 1 3 2
> 
> Aug 11 12:33:36 [13310] mqhavm24       crmd:  warning:
> crmd_ha_msg_filter:	Another DC detected: mqhavm37 (op=noop)
> Aug 11 12:33:36 [13310] mqhavm24       crmd:     info: update_dc:	
> Set DC to mqhavm24 (3.0.14)
> 
> Aug 11 12:33:36 [13308] mqhavm24      attrd:   notice:
> attrd_check_for_new_writer:	Detected another attribute writer
> (mqhavm37), starting new election
> Aug 11 12:33:36 [13308] mqhavm24      attrd:   notice:
> attrd_declare_winner:	Recorded local node as attribute writer (was
> unset)
> 
> plan to start stuff on all three nodes
> Aug 11 12:33:36 [13309] mqhavm24    pengine:   notice:
> process_pe_message:	Calculated transition 161, saving inputs in
> /var/lib/pacemaker/pengine/pe-input-688.bz2
> 
> but then
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	+  /cib/status/node_state[@id='1']:
> @crm-debug-origin=do_cib_replaced, @join=down
> 
> and we now keep stuff stopped locally, but continue to manage the
> other two nodes.
> -----------------
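
That last update is the crux: a cib_replace from a peer rewrote the
DC's own node_state to @join=down, and nothing re-checked it
afterwards. Following the suggestion above of releasing DC and
restarting the join process, a guard could look roughly like this
(pseudo-C, names modeled loosely on the crmd internals, not actual
pacemaker code):

    /* Hypothetical guard: if a cib_replace rewrites our own node_state
     * while we are DC, and the replacement claims we are not joined,
     * stop orchestrating on a stale self-view -- release DC and rejoin. */
    static void
    check_own_join_after_replace(const char *own_join_state)
    {
        if (AM_I_DC && strcmp(own_join_state, "member") != 0) {
            crm_warn("Own join state overwritten to '%s' while DC; "
                     "releasing DC and restarting the join process",
                     own_join_state);
            register_fsa_input(C_FSA_INTERNAL, I_RELEASE_DC, NULL);
        }
    }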
> 
> 
> commented log of the most interesting node below,
> starting at the point when communication goes down.
> maybe you see something that gives you an idea how to handle this
> better.
> 
> If it helps, I have the full crm_report of all nodes,
> should you feel the urge to have a look.
> 
> Aug 11 12:32:45 mqhavm24 corosync[13296]:  [TOTEM ] Failed to receive
> the leave message. failed: 3 2
> Aug 11 12:32:45 mqhavm24 corosync[13296]:  [QUORUM] This node is
> within the non-primary component and will NOT provide any services.
> Aug 11 12:32:45 mqhavm24 corosync[13296]:  [QUORUM] Members[1]: 1
> Aug 11 12:32:45 mqhavm24 corosync[13296]:  [MAIN  ] Completed service
> synchronization, ready to provide service.
>     [stripping most info level for now]
> Aug 11 12:32:45 [13306] mqhavm24 stonith-ng:   notice:
> crm_update_peer_state_iter:	Node mqhavm37 state is now lost |
> nodeid=2 previous=member source=crm_update_peer_proc
> Aug 11 12:32:45 [13306] mqhavm24 stonith-ng:   notice:
> reap_crm_member:	Purged 1 peer with id=2 and/or uname=mqhavm37
> from the membership cache
> Aug 11 12:32:45 [13306] mqhavm24 stonith-ng:   notice:
> crm_update_peer_state_iter:	Node mqhavm34 state is now lost |
> nodeid=3 previous=member source=crm_update_peer_proc
> Aug 11 12:32:45 [13306] mqhavm24 stonith-ng:   notice:
> reap_crm_member:	Purged 1 peer with id=3 and/or uname=mqhavm34
> from the membership cache
> Aug 11 12:32:45 [13303] mqhavm24 pacemakerd:  warning:
> pcmk_quorum_notification:	Quorum lost | membership=3112546
> members=1
> Aug 11 12:32:45 [13303] mqhavm24 pacemakerd:   notice:
> crm_update_peer_state_iter:	Node mqhavm34 state is now lost |
> nodeid=3 previous=member source=crm_reap_unseen_nodes
> Aug 11 12:32:45 [13303] mqhavm24 pacemakerd:   notice:
> crm_update_peer_state_iter:	Node mqhavm37 state is now lost |
> nodeid=2 previous=member source=crm_reap_unseen_nodes
> Aug 11 12:32:45 [13310] mqhavm24       crmd:  warning:
> pcmk_quorum_notification:	Quorum lost | membership=3112546
> members=1
> Aug 11 12:32:45 [13310] mqhavm24       crmd:   notice:
> crm_update_peer_state_iter:	Node mqhavm34 state is now lost |
> nodeid=3 previous=member source=crm_reap_unseen_nodes
> Aug 11 12:32:45 [13308] mqhavm24      attrd:   notice:
> crm_update_peer_state_iter:	Node mqhavm37 state is now lost |
> nodeid=2 previous=member source=crm_update_peer_proc
> Aug 11 12:32:45 [13305] mqhavm24        cib:   notice:
> crm_update_peer_state_iter:	Node mqhavm37 state is now lost |
> nodeid=2 previous=member source=crm_update_peer_proc
> Aug 11 12:32:45 [13308] mqhavm24      attrd:   notice:
> attrd_peer_remove:	Removing all mqhavm37 attributes for peer loss
> Aug 11 12:32:45 [13305] mqhavm24        cib:   notice:
> reap_crm_member:	Purged 1 peer with id=2 and/or uname=mqhavm37
> from the membership cache
> Aug 11 12:32:45 [13308] mqhavm24      attrd:   notice:
> reap_crm_member:	Purged 1 peer with id=2 and/or uname=mqhavm37
> from the membership cache
> Aug 11 12:32:45 [13305] mqhavm24        cib:   notice:
> crm_update_peer_state_iter:	Node mqhavm34 state is now lost |
> nodeid=3 previous=member source=crm_update_peer_proc
> Aug 11 12:32:45 [13308] mqhavm24      attrd:   notice:
> crm_update_peer_state_iter:	Node mqhavm34 state is now lost |
> nodeid=3 previous=member source=crm_update_peer_proc
> Aug 11 12:32:45 [13305] mqhavm24        cib:   notice:
> reap_crm_member:	Purged 1 peer with id=3 and/or uname=mqhavm34
> from the membership cache
> Aug 11 12:32:45 [13308] mqhavm24      attrd:   notice:
> attrd_peer_remove:	Removing all mqhavm34 attributes for peer loss
> Aug 11 12:32:45 [13308] mqhavm24      attrd:   notice:
> reap_crm_member:	Purged 1 peer with id=3 and/or uname=mqhavm34
> from the membership cache
> Aug 11 12:32:45 [13310] mqhavm24       crmd:  warning:
> match_down_event:	No reason to expect node 3 to be down
> Aug 11 12:32:45 [13310] mqhavm24       crmd:   notice:
> peer_update_callback:	Stonith/shutdown of mqhavm34 not matched
> Aug 11 12:32:45 [13310] mqhavm24       crmd:   notice:
> crm_update_peer_state_iter:	Node mqhavm37 state is now lost |
> nodeid=2 previous=member source=crm_reap_unseen_nodes
> Aug 11 12:32:45 [13310] mqhavm24       crmd:  warning:
> match_down_event:	No reason to expect node 2 to be down
> Aug 11 12:32:45 [13310] mqhavm24       crmd:   notice:
> peer_update_callback:	Stonith/shutdown of mqhavm37 not matched
> Aug 11 12:32:45 [13310] mqhavm24       crmd:   notice:
> do_state_transition:	State transition S_IDLE -> S_POLICY_ENGINE |
> input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph
> Aug 11 12:32:45 [13310] mqhavm24       crmd:  warning:
> match_down_event:	No reason to expect node 2 to be down
> Aug 11 12:32:45 [13310] mqhavm24       crmd:   notice:
> peer_update_callback:	Stonith/shutdown of mqhavm37 not matched
> Aug 11 12:32:45 [13310] mqhavm24       crmd:  warning:
> match_down_event:	No reason to expect node 3 to be down
> Aug 11 12:32:45 [13310] mqhavm24       crmd:   notice:
> peer_update_callback:	Stonith/shutdown of mqhavm34 not matched
> Aug 11 12:32:45 [13309] mqhavm24    pengine:  warning:
> cluster_status:	Fencing and resource management disabled due to
> lack of quorum
> Aug 11 12:32:45 [13309] mqhavm24    pengine:   notice: LogAction:
>  * Stop       drgxrde_rdqma               (        mqhavm24 )   due to no quorum
> Aug 11 12:32:45 [13309] mqhavm24    pengine:   notice: LogAction:
>  * Stop       p_fs_drgxrde_rdqma          (        mqhavm24 )   due to no quorum
> Aug 11 12:32:45 [13309] mqhavm24    pengine:   notice: LogAction:
>  * Stop       p_rdqmx_drgxrde_rdqma       (        mqhavm24 )   due to no quorum
> Aug 11 12:32:45 [13309] mqhavm24    pengine:   notice: LogAction:
>  * Stop       p_drbd_dr_drgxrde_rdqma:0   ( Master mqhavm24 )   due to no quorum
> Aug 11 12:32:45 [13309] mqhavm24    pengine:   notice: LogAction:
>  * Stop       p_drbd_drgxrde_rdqma:0      ( Master mqhavm24 )   due to no quorum
> Aug 11 12:32:45 [13309] mqhavm24    pengine:   notice: LogAction:
>  * Stop       drgxrde_rdqmb               (        mqhavm24 )   due to no quorum
> Aug 11 12:32:45 [13309] mqhavm24    pengine:   notice: LogAction:
>  * Stop       p_fs_drgxrde_rdqmb          (        mqhavm24 )   due to no quorum
> Aug 11 12:32:45 [13309] mqhavm24    pengine:   notice: LogAction:
>  * Stop       p_rdqmx_drgxrde_rdqmb       (        mqhavm24 )   due to no quorum
> Aug 11 12:32:45 [13309] mqhavm24    pengine:   notice: LogAction:
>  * Stop       p_drbd_dr_drgxrde_rdqmb:0   ( Master mqhavm24 )   due to no quorum
> Aug 11 12:32:45 [13309] mqhavm24    pengine:   notice: LogAction:
>  * Stop       p_drbd_drgxrde_rdqmb:0      ( Master mqhavm24 )   due to no quorum
> Aug 11 12:32:45 [13309] mqhavm24    pengine:   notice: LogAction:
>  * Stop       p_ip_drgxrde_rdqma          (        mqhavm24 )   due to no quorum
> Aug 11 12:32:45 [13309] mqhavm24    pengine:   notice: LogAction:
>  * Stop       p_ip_drgxrde_rdqmb          (        mqhavm24 )   due to no quorum
> Aug 11 12:32:45 [13309] mqhavm24    pengine:   notice:
> process_pe_message:	Calculated transition 154, saving inputs in
> /var/lib/pacemaker/pengine/pe-input-682.bz2
> Aug 11 12:32:45 [13310] mqhavm24       crmd:   notice:
> te_rsc_command:	Initiating stop operation drgxrde_rdqma_stop_0
> locally ...
> 
> boring stopping stuff stripped ...
> 
> Aug 11 12:32:48 [13310] mqhavm24       crmd:   notice:
> do_state_transition:	State transition S_TRANSITION_ENGINE -> S_IDLE
> | input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd
> Aug 11 12:33:02 [13310] mqhavm24       crmd:   notice:
> do_state_transition:	State transition S_IDLE -> S_POLICY_ENGINE |
> input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph
> Aug 11 12:33:02 [13309] mqhavm24    pengine:  warning:
> cluster_status:	Fencing and resource management disabled due to
> lack of quorum
> Aug 11 12:33:02 [13309] mqhavm24    pengine:   notice: LogAction:
>  * Start      p_drbd_drgxrde_rdqma:0      (        mqhavm24 )   due to no quorum (blocked)
> Aug 11 12:33:02 [13309] mqhavm24    pengine:   notice: LogAction:
>  * Start      p_drbd_drgxrde_rdqmb:0      (        mqhavm24 )   due to no quorum (blocked)
> Aug 11 12:33:02 [13309] mqhavm24    pengine:   notice:
> process_pe_message:	Calculated transition 158, saving inputs in
> /var/lib/pacemaker/pengine/pe-input-686.bz2
> Aug 11 12:33:02 [13310] mqhavm24       crmd:   notice: run_graph:	
> Transition 158 (Complete=0, Pending=0, Fired=0, Skipped=0,
> Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-686.bz2):
> Complete
> Aug 11 12:33:02 [13310] mqhavm24       crmd:   notice:
> do_state_transition:	State transition S_TRANSITION_ENGINE -> S_IDLE
> | input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd
> Aug 11 12:33:02 [13305] mqhavm24        cib:     info:
> cib_process_request:	Forwarding cib_modify operation for section
> resources to all (origin=local/crm_resource/6)
> Aug 11 12:33:02 [13305] mqhavm24        cib:     info:
> cib_perform_op:	Diff: --- 0.171.0 2
> Aug 11 12:33:02 [13305] mqhavm24        cib:     info:
> cib_perform_op:	Diff: +++ 0.172.0 (null)
> Aug 11 12:33:02 [13305] mqhavm24        cib:     info:
> cib_perform_op:	+  /cib:  @epoch=172
> Aug 11 12:33:02 [13305] mqhavm24        cib:     info:
> cib_perform_op:	+  /cib/configuration/resources/master[@id='ms_
> drbd_dr_drgxrde_rdqmb']/meta_attributes[@id='ms_drbd_dr_drgxrde_rdqmb
> -meta_attributes']/nvpair[@id='ms_drbd_dr_drgxrde_rdqmb-
> meta_attributes-target-role']:  @value=Slave
> Aug 11 12:33:02 [13305] mqhavm24        cib:     info:
> cib_process_request:	Completed cib_modify operation for section
> resources: OK (rc=0, origin=mqhavm24/crm_resource/6, version=0.172.0)
> Aug 11 12:33:02 [13310] mqhavm24       crmd:     info:
> abort_transition_graph:	Transition aborted by
> ms_drbd_dr_drgxrde_rdqmb-meta_attributes-target-role doing modify
> target-role=Slave: Configuration change | cib=0.172.0
> source=te_update_diff_v2:522
> path=/cib/configuration/resources/master[@id='ms_drbd_dr_drgxrde_rdqm
> b']/meta_attributes[@id='ms_drbd_dr_drgxrde_rdqmb-
> meta_attributes']/nvpair[@id='ms_drbd_dr_drgxrde_rdqmb-
> meta_attributes-target-role'] complete=true
> Aug 11 12:33:02 [13310] mqhavm24       crmd:   notice:
> do_state_transition:	State transition S_IDLE -> S_POLICY_ENGINE |
> input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph
> Aug 11 12:33:02 [13305] mqhavm24        cib:     info:
> cib_file_backup:	Archived previous version as
> /var/lib/pacemaker/cib/cib-78.raw
> Aug 11 12:33:02 [13309] mqhavm24    pengine:  warning:
> cluster_status:	Fencing and resource management disabled due to
> lack of quorum
> Aug 11 12:33:02 [13309] mqhavm24    pengine:   notice: LogAction:
>  * Start      p_drbd_drgxrde_rdqma:0      (        mqhavm24 )   due to no quorum (blocked)
> Aug 11 12:33:02 [13309] mqhavm24    pengine:   notice: LogAction:
>  * Start      p_drbd_drgxrde_rdqmb:0      (        mqhavm24 )   due to no quorum (blocked)
> Aug 11 12:33:02 [13309] mqhavm24    pengine:   notice:
> process_pe_message:	Calculated transition 159, saving inputs in
> /var/lib/pacemaker/pengine/pe-input-687.bz2
> Aug 11 12:33:02 [13310] mqhavm24       crmd:   notice: run_graph:	
> Transition 159 (Complete=0, Pending=0, Fired=0, Skipped=0,
> Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-687.bz2):
> Complete
> Aug 11 12:33:02 [13310] mqhavm24       crmd:   notice:
> do_state_transition:	State transition S_TRANSITION_ENGINE -> S_IDLE
> | input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd
> 
> nothing, until connectivity is back.
> I leave the info level in below:
> 
> Aug 11 12:33:36 mqhavm24 corosync[13296]:  [TOTEM ] A new membership
> (192.168.101.24:3112554) was formed. Members joined: 3 2
> Aug 11 12:33:36 mqhavm24 corosync[13296]:  [QUORUM] This node is
> within the primary component and will provide service.
> Aug 11 12:33:36 mqhavm24 corosync[13296]:  [QUORUM] Members[3]: 1 3 2
> Aug 11 12:33:36 mqhavm24 corosync[13296]:  [MAIN  ] Completed service
> synchronization, ready to provide service.
> Aug 11 12:33:36 [13308] mqhavm24      attrd:     info:
> pcmk_cpg_membership:	Group attrd event 11: node 3 pid 2553 joined
> via cluster join
> Aug 11 12:33:36 [13303] mqhavm24 pacemakerd:     info:
> pcmk_cpg_membership:	Group pacemakerd event 11: node 3 pid 2546
> joined via cluster join
> Aug 11 12:33:36 [13308] mqhavm24      attrd:     info:
> pcmk_cpg_membership:	Group attrd event 11: mqhavm24 (node 1 pid
> 13308) is member
> Aug 11 12:33:36 [13303] mqhavm24 pacemakerd:     info:
> pcmk_cpg_membership:	Group pacemakerd event 11: mqhavm24 (node 1 pid
> 13303) is member
> Aug 11 12:33:36 [13303] mqhavm24 pacemakerd:     info:
> pcmk_cpg_membership:	Group pacemakerd event 11: mqhavm34 (node 3 pid
> 2546) is member
> Aug 11 12:33:36 [13303] mqhavm24 pacemakerd:     info:
> crm_update_peer_proc:	pcmk_cpg_membership: Node mqhavm34[3] -
> corosync-cpg is now online
> Aug 11 12:33:36 [13303] mqhavm24 pacemakerd:  warning:
> pcmk_cpg_membership:	Node 3 is member of group pacemakerd but was
> believed offline

Warnings like this one are due to receiving corosync's CPG messages
before its membership messages. I'm pretty sure they would go away
with current corosync+pacemaker.
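
What the surrounding log lines show is the daemons tolerating that
ordering: warn, then promote the peer-cache entry to member anyway.
Roughly (an illustrative sketch using the peer-cache calls visible in
the logs, not the actual pcmk_cpg_membership code):

    /* A CPG join arrives for a node the peer cache still considers
     * lost, so the callback warns and updates the cache itself
     * instead of dropping the event. */
    static void
    on_cpg_join(crm_node_t *peer, const char *group)
    {
        if (!crm_is_peer_active(peer)) {
            crm_warn("Node %u is member of group %s but was believed offline",
                     peer->id, group);
            crm_update_peer_state(__func__, peer, CRM_NODE_MEMBER, 0);
        }
    }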

> Aug 11 12:33:36 [13303] mqhavm24 pacemakerd:   notice:
> crm_update_peer_state_iter:	Node mqhavm34 state is now member |
> nodeid=3 previous=lost source=pcmk_cpg_membership
> Aug 11 12:33:36 [13303] mqhavm24 pacemakerd:     info: crm_cs_flush:	
> Sent 0 CPG messages  (1 remaining, last=17): Try again (6)
> Aug 11 12:33:36 [13306] mqhavm24 stonith-ng:     info:
> pcmk_cpg_membership:	Group stonith-ng event 11: node 3 pid 2551
> joined via cluster join
> Aug 11 12:33:36 [13306] mqhavm24 stonith-ng:     info:
> pcmk_cpg_membership:	Group stonith-ng event 11: mqhavm24 (node 1 pid
> 13306) is member
> Aug 11 12:33:36 [13303] mqhavm24 pacemakerd:     info:
> pcmk_cpg_membership:	Group pacemakerd event 12: node 2 pid 41735
> joined via cluster join
> Aug 11 12:33:36 [13303] mqhavm24 pacemakerd:     info:
> pcmk_cpg_membership:	Group pacemakerd event 12: mqhavm24 (node 1 pid
> 13303) is member
> Aug 11 12:33:36 [13303] mqhavm24 pacemakerd:     info:
> pcmk_cpg_membership:	Group pacemakerd event 12: mqhavm37 (node 2 pid
> 41735) is member
> Aug 11 12:33:36 [13303] mqhavm24 pacemakerd:     info:
> crm_update_peer_proc:	pcmk_cpg_membership: Node mqhavm37[2] -
> corosync-cpg is now online
> Aug 11 12:33:36 [13303] mqhavm24 pacemakerd:  warning:
> pcmk_cpg_membership:	Node 2 is member of group pacemakerd but was
> believed offline
> Aug 11 12:33:36 [13303] mqhavm24 pacemakerd:   notice:
> crm_update_peer_state_iter:	Node mqhavm37 state is now member |
> nodeid=2 previous=lost source=pcmk_cpg_membership
> Aug 11 12:33:36 [13303] mqhavm24 pacemakerd:     info:
> pcmk_cpg_membership:	Group pacemakerd event 12: mqhavm34 (node 3 pid
> 2546) is member
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> pcmk_cpg_membership:	Group cib event 11: node 3 pid 2550 joined via
> cluster join
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> pcmk_cpg_membership:	Group cib event 11: mqhavm24 (node 1 pid 13305)
> is member
> Aug 11 12:33:36 [13310] mqhavm24       crmd:     info:
> pcmk_cpg_membership:	Group crmd event 11: node 3 pid 2555 joined via
> cluster join
> Aug 11 12:33:36 [13310] mqhavm24       crmd:     info:
> pcmk_cpg_membership:	Group crmd event 11: mqhavm24 (node 1 pid
> 13310) is member
> Aug 11 12:33:36 [13310] mqhavm24       crmd:     info:
> pcmk_cpg_membership:	Group crmd event 11: mqhavm34 (node 3 pid 2555)
> is member
> Aug 11 12:33:36 [13310] mqhavm24       crmd:     info:
> crm_update_peer_proc:	pcmk_cpg_membership: Node mqhavm34[3] -
> corosync-cpg is now online
> Aug 11 12:33:36 [13310] mqhavm24       crmd:     info:
> peer_update_callback:	Client mqhavm34/peer now has status [online]
> (DC=true, changed=4000000)
> Aug 11 12:33:36 [13310] mqhavm24       crmd:     info:
> te_trigger_stonith_history_sync:	Fence history will be
> synchronized cluster-wide within 5 seconds
> Aug 11 12:33:36 [13310] mqhavm24       crmd:  warning:
> pcmk_cpg_membership:	Node 3 is member of group crmd but was believed
> offline
> Aug 11 12:33:36 [13310] mqhavm24       crmd:   notice:
> crm_update_peer_state_iter:	Node mqhavm34 state is now member |
> nodeid=3 previous=lost source=pcmk_cpg_membership
> Aug 11 12:33:36 [13310] mqhavm24       crmd:     info:
> peer_update_callback:	Cluster node mqhavm34 is now member (was lost)
> Aug 11 12:33:36 [13310] mqhavm24       crmd:     info:
> exec_alert_list:	Sending node alert via rdqm-alert to (null)
> Aug 11 12:33:36 [13307] mqhavm24       lrmd:     info:
> process_lrmd_alert_exec:	Executing alert rdqm-alert for
> 304b95f0-bb72-4697-a1ec-45633c59f62d
> Aug 11 12:33:36 [13310] mqhavm24       crmd:     info:
> pcmk_cpg_membership:	Group crmd event 12: node 2 pid 41742 joined
> via cluster join
> Aug 11 12:33:36 [13310] mqhavm24       crmd:     info:
> pcmk_cpg_membership:	Group crmd event 12: mqhavm24 (node 1 pid
> 13310) is member
> Aug 11 12:33:36 [13310] mqhavm24       crmd:     info:
> pcmk_cpg_membership:	Group crmd event 12: mqhavm37 (node 2 pid
> 41742) is member
> Aug 11 12:33:36 [13310] mqhavm24       crmd:     info:
> crm_update_peer_proc:	pcmk_cpg_membership: Node mqhavm37[2] -
> corosync-cpg is now online
> Aug 11 12:33:36 [13310] mqhavm24       crmd:     info:
> peer_update_callback:	Client mqhavm37/peer now has status [online]
> (DC=true, changed=4000000)
> Aug 11 12:33:36 [13310] mqhavm24       crmd:     info:
> te_trigger_stonith_history_sync:	Fence history will be
> synchronized cluster-wide within 5 seconds
> Aug 11 12:33:36 [13310] mqhavm24       crmd:  warning:
> pcmk_cpg_membership:	Node 2 is member of group crmd but was believed
> offline
> Aug 11 12:33:36 [13310] mqhavm24       crmd:   notice:
> crm_update_peer_state_iter:	Node mqhavm37 state is now member |
> nodeid=2 previous=lost source=pcmk_cpg_membership
> Aug 11 12:33:36 [13310] mqhavm24       crmd:     info:
> peer_update_callback:	Cluster node mqhavm37 is now member (was lost)
> Aug 11 12:33:36 [13310] mqhavm24       crmd:     info:
> exec_alert_list:	Sending node alert via rdqm-alert to (null)
> Aug 11 12:33:36 [13307] mqhavm24       lrmd:     info:
> process_lrmd_alert_exec:	Executing alert rdqm-alert for
> 304b95f0-bb72-4697-a1ec-45633c59f62d
> Aug 11 12:33:36 [13310] mqhavm24       crmd:     info:
> pcmk_cpg_membership:	Group crmd event 12: mqhavm34 (node 3 pid 2555)
> is member
> Aug 11 12:33:36 [13303] mqhavm24 pacemakerd:   notice:
> pcmk_quorum_notification:	Quorum acquired | membership=3112554
> members=3
> Aug 11 12:33:36 [13310] mqhavm24       crmd:   notice:
> pcmk_quorum_notification:	Quorum acquired | membership=3112554
> members=3
> Aug 11 12:33:36 [13308] mqhavm24      attrd:     info:
> corosync_node_name:	Unable to get node name for nodeid 3
> Aug 11 12:33:36 [13308] mqhavm24      attrd:   notice: get_node_name:
> 	Could not obtain a node name for corosync nodeid 3
> Aug 11 12:33:36 [13308] mqhavm24      attrd:     info: crm_get_peer:	
> Created entry 9ee8517a-3318-45e3-9d3a-dc93a8094e87/0x55901ed8de70 for
> node (null)/3 (2 total)
> Aug 11 12:33:36 [13308] mqhavm24      attrd:     info: crm_get_peer:	
> Node 3 has uuid 3
> Aug 11 12:33:36 [13308] mqhavm24      attrd:     info:
> pcmk_cpg_membership:	Group attrd event 11: peer node (node 3 pid
> 2553) is member
> Aug 11 12:33:36 [13308] mqhavm24      attrd:     info:
> crm_update_peer_proc:	pcmk_cpg_membership: Node (null)[3] - corosync-
> cpg is now online
> Aug 11 12:33:36 [13308] mqhavm24      attrd:   notice:
> crm_update_peer_state_iter:	Node (null) state is now member |
> nodeid=3 previous=unknown source=crm_update_peer_proc
> Aug 11 12:33:36 [13308] mqhavm24      attrd:     info:
> pcmk_cpg_membership:	Group attrd event 12: node 2 pid 41740 joined
> via cluster join
> Aug 11 12:33:36 [13308] mqhavm24      attrd:     info:
> pcmk_cpg_membership:	Group attrd event 12: mqhavm24 (node 1 pid
> 13308) is member
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> corosync_node_name:	Unable to get node name for nodeid 3
> Aug 11 12:33:36 [13305] mqhavm24        cib:   notice: get_node_name:
> 	Could not obtain a node name for corosync nodeid 3
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info: crm_get_peer:	
> Created entry f56cdbb5-3019-4ab7-9cce-4afd171fe3b9/0x55d7e0b7f6e0 for
> node (null)/3 (2 total)
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info: crm_get_peer:	
> Node 3 has uuid 3
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> pcmk_cpg_membership:	Group cib event 11: peer node (node 3 pid 2550)
> is member
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> crm_update_peer_proc:	pcmk_cpg_membership: Node (null)[3] - corosync-
> cpg is now online
> Aug 11 12:33:36 [13305] mqhavm24        cib:   notice:
> crm_update_peer_state_iter:	Node (null) state is now member |
> nodeid=3 previous=unknown source=crm_update_peer_proc
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> pcmk_cpg_membership:	Group cib event 12: node 2 pid 41737 joined via
> cluster join
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> pcmk_cpg_membership:	Group cib event 12: mqhavm24 (node 1 pid 13305)
> is member
> Aug 11 12:33:36 [13306] mqhavm24 stonith-ng:     info:
> corosync_node_name:	Unable to get node name for nodeid 3
> Aug 11 12:33:36 [13306] mqhavm24 stonith-ng:   notice: get_node_name:
> 	Could not obtain a node name for corosync nodeid 3
> Aug 11 12:33:36 [13306] mqhavm24 stonith-ng:     info: crm_get_peer:	
> Created entry 12e40f2a-affc-4a30-bdc1-6db3d06f20e6/0x560ff1f04310 for
> node (null)/3 (2 total)
> Aug 11 12:33:36 [13306] mqhavm24 stonith-ng:     info: crm_get_peer:	
> Node 3 has uuid 3
> Aug 11 12:33:36 [13306] mqhavm24 stonith-ng:     info:
> pcmk_cpg_membership:	Group stonith-ng event 11: peer node (node 3
> pid 2551) is member
> Aug 11 12:33:36 [13306] mqhavm24 stonith-ng:     info:
> crm_update_peer_proc:	pcmk_cpg_membership: Node (null)[3] - corosync-
> cpg is now online
> Aug 11 12:33:36 [13306] mqhavm24 stonith-ng:   notice:
> crm_update_peer_state_iter:	Node (null) state is now member |
> nodeid=3 previous=unknown source=crm_update_peer_proc
> Aug 11 12:33:36 [13306] mqhavm24 stonith-ng:     info:
> pcmk_cpg_membership:	Group stonith-ng event 12: node 2 pid 41738
> joined via cluster join
> Aug 11 12:33:36 [13306] mqhavm24 stonith-ng:     info:
> pcmk_cpg_membership:	Group stonith-ng event 12: mqhavm24 (node 1 pid
> 13306) is member
> Aug 11 12:33:36 [13308] mqhavm24      attrd:     info:
> corosync_node_name:	Unable to get node name for nodeid 2
> Aug 11 12:33:36 [13308] mqhavm24      attrd:   notice: get_node_name:
> 	Could not obtain a node name for corosync nodeid 2
> Aug 11 12:33:36 [13308] mqhavm24      attrd:     info: crm_get_peer:	
> Created entry d59e708d-c858-4d3d-a516-842e8f979e37/0x55901ed8dee0 for
> node (null)/2 (3 total)
> Aug 11 12:33:36 [13308] mqhavm24      attrd:     info: crm_get_peer:	
> Node 2 has uuid 2
> Aug 11 12:33:36 [13308] mqhavm24      attrd:     info:
> pcmk_cpg_membership:	Group attrd event 12: peer node (node 2 pid
> 41740) is member
> Aug 11 12:33:36 [13308] mqhavm24      attrd:     info:
> crm_update_peer_proc:	pcmk_cpg_membership: Node (null)[2] - corosync-
> cpg is now online
> Aug 11 12:33:36 [13308] mqhavm24      attrd:   notice:
> crm_update_peer_state_iter:	Node (null) state is now member |
> nodeid=2 previous=unknown source=crm_update_peer_proc
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> corosync_node_name:	Unable to get node name for nodeid 2
> Aug 11 12:33:36 [13305] mqhavm24        cib:   notice: get_node_name:
> 	Could not obtain a node name for corosync nodeid 2
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info: crm_get_peer:	
> Created entry 30120675-3a25-4fe3-91ac-a6f408ef5e30/0x55d7e0b7f750 for
> node (null)/2 (3 total)
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info: crm_get_peer:	
> Node 2 has uuid 2
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> pcmk_cpg_membership:	Group cib event 12: peer node (node 2 pid
> 41737) is member
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> crm_update_peer_proc:	pcmk_cpg_membership: Node (null)[2] - corosync-
> cpg is now online
> Aug 11 12:33:36 [13305] mqhavm24        cib:   notice:
> crm_update_peer_state_iter:	Node (null) state is now member |
> nodeid=2 previous=unknown source=crm_update_peer_proc
> Aug 11 12:33:36 [13308] mqhavm24      attrd:     info:
> corosync_node_name:	Unable to get node name for nodeid 3
> Aug 11 12:33:36 [13308] mqhavm24      attrd:   notice: get_node_name:
> 	Could not obtain a node name for corosync nodeid 3
> Aug 11 12:33:36 [13308] mqhavm24      attrd:     info:
> pcmk_cpg_membership:	Group attrd event 12: peer node (node 3 pid
> 2553) is member
> Aug 11 12:33:36 [13306] mqhavm24 stonith-ng:     info:
> corosync_node_name:	Unable to get node name for nodeid 2
> Aug 11 12:33:36 [13306] mqhavm24 stonith-ng:   notice: get_node_name:
> 	Could not obtain a node name for corosync nodeid 2
> Aug 11 12:33:36 [13306] mqhavm24 stonith-ng:     info: crm_get_peer:	
> Created entry 7cee70c9-7707-4380-9f70-2c6c425cef33/0x560ff1f04380 for
> node (null)/2 (3 total)
> Aug 11 12:33:36 [13306] mqhavm24 stonith-ng:     info: crm_get_peer:	
> Node 2 has uuid 2
> Aug 11 12:33:36 [13306] mqhavm24 stonith-ng:     info:
> pcmk_cpg_membership:	Group stonith-ng event 12: peer node (node 2
> pid 41738) is member
> Aug 11 12:33:36 [13306] mqhavm24 stonith-ng:     info:
> crm_update_peer_proc:	pcmk_cpg_membership: Node (null)[2] - corosync-
> cpg is now online
> Aug 11 12:33:36 [13306] mqhavm24 stonith-ng:   notice:
> crm_update_peer_state_iter:	Node (null) state is now member |
> nodeid=2 previous=unknown source=crm_update_peer_proc
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> corosync_node_name:	Unable to get node name for nodeid 3
> Aug 11 12:33:36 [13305] mqhavm24        cib:   notice: get_node_name:
> 	Could not obtain a node name for corosync nodeid 3
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> pcmk_cpg_membership:	Group cib event 12: peer node (node 3 pid 2550)
> is member
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_process_request:	Forwarding cib_modify operation for section
> status to all (origin=local/crmd/1085)
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_process_request:	Forwarding cib_modify operation for section
> status to all (origin=local/crmd/1086)
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_process_request:	Forwarding cib_modify operation for section
> status to all (origin=local/crmd/1087)
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_process_request:	Forwarding cib_modify operation for section
> status to all (origin=local/crmd/1088)
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_process_request:	Forwarding cib_modify operation for section cib
> to all (origin=local/crmd/1089)
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_process_request:	Forwarding cib_modify operation for section
> nodes to all (origin=local/crmd/1093)
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_process_request:	Forwarding cib_modify operation for section
> status to all (origin=local/crmd/1094)
> Aug 11 12:33:36 [13306] mqhavm24 stonith-ng:     info:
> corosync_node_name:	Unable to get node name for nodeid 3
> Aug 11 12:33:36 [13306] mqhavm24 stonith-ng:   notice: get_node_name:
> 	Could not obtain a node name for corosync nodeid 3
> Aug 11 12:33:36 [13306] mqhavm24 stonith-ng:     info:
> pcmk_cpg_membership:	Group stonith-ng event 12: peer node (node 3
> pid 2551) is member
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info: crm_get_peer:	
> Node 3 is now known as mqhavm34
> Aug 11 12:33:36 [13310] mqhavm24       crmd:     info:
> register_fsa_error_adv:	Resetting the current action list
> 
> Elections:
> 
> Aug 11 12:33:36 [13310] mqhavm24       crmd:  warning:
> crmd_ha_msg_filter:	Another DC detected: mqhavm37 (op=noop)
> Aug 11 12:33:36 [13310] mqhavm24       crmd:   notice:
> do_state_transition:	State transition S_IDLE -> S_ELECTION |
> input=I_ELECTION cause=C_FSA_INTERNAL origin=crmd_ha_msg_filter
> Aug 11 12:33:36 [13310] mqhavm24       crmd:     info: update_dc:	
> Unset DC. Was mqhavm24
> Aug 11 12:33:36 [13310] mqhavm24       crmd:     info:
> election_count_vote:	election-DC round 6 (owner node ID 2) pass:
> vote from mqhavm37 (Uptime)
> Aug 11 12:33:36 [13310] mqhavm24       crmd:     info:
> election_count_vote:	election-DC round 7 (owner node ID 2) pass:
> vote from mqhavm37 (Uptime)
> Aug 11 12:33:36 [13308] mqhavm24      attrd:     info: crm_get_peer:	
> Node 3 is now known as mqhavm34
> Aug 11 12:33:36 [13308] mqhavm24      attrd:     info:
> attrd_peer_message:	Processing sync-response from mqhavm34
> Aug 11 12:33:36 [13308] mqhavm24      attrd:     info:
> attrd_peer_update:	Setting #attrd-protocol[mqhavm34]: (null) -> 2
> from mqhavm34
> Aug 11 12:33:36 [13308] mqhavm24      attrd:     info:
> write_attribute:	Processed 2 private changes for #attrd-
> protocol, id=n/a, set=n/a
> Aug 11 12:33:36 [13308] mqhavm24      attrd:     info:
> attrd_peer_update:	Setting rdqm-transient-attribute[mqhavm34]:
> (null) -> 1 from mqhavm34
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_process_replace:	Digest matched on replace from mqhavm34:
> 3c345690432f9c09a722bbf58085e174
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_process_replace:	Replacement 0.169.51 from mqhavm34 not applied
> to 0.172.0: current epoch is greater than the replacement
> Aug 11 12:33:36 [13308] mqhavm24      attrd:     info:
> write_attribute:	Sent CIB request 138 with 2 changes for rdqm-
> transient-attribute (id n/a, set n/a)
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> __xml_diff_object:	lrm_resource.p_fs_drgxrde_rdqma moved from 1 to
> 0
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> __xml_diff_object:	lrm_resource.p_fs_drgxrde_rdqmb moved from 5 to
> 1
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> __xml_diff_object:	lrm_resource.drgxrde_rdqma moved from 0 to 2
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> __xml_diff_object:	lrm_resource.drgxrde_rdqmb moved from 4 to 2
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> __xml_diff_object:	lrm_resource.p_drbd_drgxrde_rdqmb moved from 6
> to 3
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> __xml_diff_object:	lrm_resource.p_drbd_dr_drgxrde_rdqma moved from
> 2 to 4
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> __xml_diff_object:	lrm_resource.p_drbd_dr_drgxrde_rdqmb moved from
> 5 to 4
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> __xml_diff_object:	lrm_resource.p_drbd_drgxrde_rdqma moved from 3
> to 5
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> __xml_diff_object:	lrm_resource.p_rdqmx_drgxrde_rdqma moved from 1
> to 6
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> __xml_diff_object:	lrm_resource.p_rdqmx_drgxrde_rdqmb moved from 4
> to 6
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	Diff: --- 0.172.0 2
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	Diff: +++ 0.172.1 (null)
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	+  /cib:  @num_updates=1
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	+  /cib/status/node_state[@id='3']:  @crmd=onli
> ne, @crm-debug-origin=peer_update_callback
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_process_request:	Completed cib_modify operation for section
> status: OK (rc=0, origin=mqhavm24/crmd/1085, version=0.172.1)
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_process_request:	Completed cib_modify operation for section
> status: OK (rc=0, origin=mqhavm24/crmd/1086, version=0.172.1)
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	Diff: --- 0.172.1 2
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	Diff: +++ 0.172.2 (null)
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	+  /cib:  @num_updates=2
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	+  /cib/status/node_state[@id='2']:  @crmd=onli
> ne, @crm-debug-origin=peer_update_callback
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_process_request:	Completed cib_modify operation for section
> status: OK (rc=0, origin=mqhavm24/crmd/1087, version=0.172.2)
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_process_request:	Completed cib_modify operation for section
> status: OK (rc=0, origin=mqhavm24/crmd/1088, version=0.172.2)
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	Diff: --- 0.172.2 2
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	Diff: +++ 0.172.3
> 4ee9a15c3183a0db2ce37e6fc5615a57
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	+  /cib:  @num_updates=3, @have-quorum=1
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_process_request:	Completed cib_modify operation for section cib:
> OK (rc=0, origin=mqhavm24/crmd/1089, version=0.172.3)
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_process_request:	Completed cib_modify operation for section
> nodes: OK (rc=0, origin=mqhavm24/crmd/1093, version=0.172.3)
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	Diff: --- 0.172.3 2
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	Diff: +++ 0.172.4 (null)
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	+  /cib:  @num_updates=4
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	+  /cib/status/node_state[@id='1']:  @crm-
> debug-origin=post_cache_update
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	+  /cib/status/node_state[@id='2']:  @in_ccm=tr
> ue, @crm-debug-origin=post_cache_update
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	+  /cib/status/node_state[@id='3']:  @in_ccm=tr
> ue, @crm-debug-origin=post_cache_update
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_process_request:	Completed cib_modify operation for section
> status: OK (rc=0, origin=mqhavm24/crmd/1094, version=0.172.4)
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_process_request:	Forwarding cib_modify operation for section
> status to all (origin=local/attrd/138)
> Aug 11 12:33:36 [13310] mqhavm24       crmd:     info:
> election_check:	election-DC won by local node
> Aug 11 12:33:36 [13310] mqhavm24       crmd:     info: do_log:	
> Input I_ELECTION_DC received in state S_ELECTION from election_win_cb
> Aug 11 12:33:36 [13310] mqhavm24       crmd:   notice:
> do_state_transition:	State transition S_ELECTION -> S_INTEGRATION |
> input=I_ELECTION_DC cause=C_FSA_INTERNAL origin=election_win_cb
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	Diff: --- 0.172.4 2
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	Diff: +++ 0.172.5 (null)
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	+  /cib:  @num_updates=5
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++
> /cib/status/node_state[@id='3']:  <transient_attributes id="3"/>
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                     <instanc
> e_attributes id="status-3">
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                       <nvpai
> r id="status-3-rdqm-transient-attribute" name="rdqm-transient-
> attribute" value="1"/>
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                     </instan
> ce_attributes>
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                   </transien
> t_attributes>
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_process_request:	Completed cib_modify operation for section
> status: OK (rc=0, origin=mqhavm24/attrd/138, version=0.172.5)
> Aug 11 12:33:36 [13308] mqhavm24      attrd:     info:
> attrd_cib_callback:	CIB update 138 result for rdqm-transient-
> attribute: OK | rc=0
> Aug 11 12:33:36 [13308] mqhavm24      attrd:     info:
> attrd_cib_callback:	* rdqm-transient-attribute[mqhavm34]=1
> Aug 11 12:33:36 [13308] mqhavm24      attrd:     info:
> attrd_cib_callback:	* rdqm-transient-attribute[mqhavm24]=1
> Aug 11 12:33:36 [13310] mqhavm24       crmd:     info:
> do_dc_takeover:	Taking over DC status for this partition
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_process_request:	Completed cib_master operation for section
> 'all': OK (rc=0, origin=local/crmd/1095, version=0.172.5)
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_process_request:	Forwarding cib_modify operation for section cib
> to all (origin=local/crmd/1096)
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_process_request:	Forwarding cib_modify operation for section
> crm_config to all (origin=local/crmd/1098)
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_process_request:	Forwarding cib_modify operation for section
> crm_config to all (origin=local/crmd/1100)
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_process_request:	Forwarding cib_modify operation for section
> crm_config to all (origin=local/crmd/1102)
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_process_request:	Completed cib_modify operation for section cib:
> OK (rc=0, origin=mqhavm24/crmd/1096, version=0.172.5)
> Aug 11 12:33:36 [13310] mqhavm24       crmd:     info:
> corosync_cluster_name:	Cannot get totem.cluster_name:
> CS_ERR_NOT_EXIST (12)
> Aug 11 12:33:36 [13310] mqhavm24       crmd:     info:
> join_make_offer:	Making join-4 offers based on membership event
> 3112554
> Aug 11 12:33:36 [13310] mqhavm24       crmd:     info:
> join_make_offer:	Sending join-4 offer to mqhavm34
> Aug 11 12:33:36 [13310] mqhavm24       crmd:     info:
> join_make_offer:	Sending join-4 offer to mqhavm24
> Aug 11 12:33:36 [13310] mqhavm24       crmd:     info:
> join_make_offer:	Sending join-4 offer to mqhavm37
> Aug 11 12:33:36 [13310] mqhavm24       crmd:     info:
> do_dc_join_offer_all:	Waiting on join-4 requests from 3 outstanding
> nodes
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_process_request:	Completed cib_modify operation for section
> crm_config: OK (rc=0, origin=mqhavm24/crmd/1098, version=0.172.5)
> 
> Aug 11 12:33:36 [13310] mqhavm24       crmd:     info: update_dc:	
> Set DC to mqhavm24 (3.0.14)
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_process_request:	Completed cib_modify operation for section
> crm_config: OK (rc=0, origin=mqhavm24/crmd/1100, version=0.172.5)
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_process_request:	Completed cib_modify operation for section
> crm_config: OK (rc=0, origin=mqhavm24/crmd/1102, version=0.172.5)
> Aug 11 12:33:36 [13310] mqhavm24       crmd:     info:
> do_state_transition:	State transition S_INTEGRATION ->
> S_FINALIZE_JOIN | input=I_INTEGRATED cause=C_FSA_INTERNAL
> origin=check_join_state
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_process_request:	Forwarding cib_modify operation for section
> nodes to all (origin=local/crmd/1106)
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_process_request:	Forwarding cib_modify operation for section
> nodes to all (origin=local/crmd/1107)
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_process_request:	Forwarding cib_modify operation for section
> nodes to all (origin=local/crmd/1108)
> Aug 11 12:33:36 [13310] mqhavm24       crmd:     info:
> controld_delete_node_state:	Deleting resource history for node
> mqhavm24 (via CIB call 1109) |
> xpath=//node_state[@uname='mqhavm24']/lrm
> Aug 11 12:33:36 [13310] mqhavm24       crmd:     info:
> controld_delete_node_state:	Deleting resource history for node
> mqhavm37 (via CIB call 1111) |
> xpath=//node_state[@uname='mqhavm37']/lrm
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_process_replace:	Digest matched on replace from mqhavm24:
> f257c75862a5238d3303815722d3205b
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_process_replace:	Replaced 0.172.5 with 0.172.5 from mqhavm24
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_process_request:	Completed cib_replace operation for section
> 'all': OK (rc=0, origin=mqhavm24/crmd/1105, version=0.172.5)
> Aug 11 12:33:36 [13310] mqhavm24       crmd:     info:
> controld_delete_node_state:	Deleting resource history for node
> mqhavm34 (via CIB call 1113) |
> xpath=//node_state[@uname='mqhavm34']/lrm
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_process_request:	Completed cib_modify operation for section
> nodes: OK (rc=0, origin=mqhavm24/crmd/1106, version=0.172.5)
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_process_request:	Completed cib_modify operation for section
> nodes: OK (rc=0, origin=mqhavm24/crmd/1107, version=0.172.5)
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_process_request:	Completed cib_modify operation for section
> nodes: OK (rc=0, origin=mqhavm24/crmd/1108, version=0.172.5)
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_process_request:	Forwarding cib_delete operation for section
> //node_state[@uname='mqhavm24']/lrm to all (origin=local/crmd/1109)
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_process_request:	Forwarding cib_modify operation for section
> status to all (origin=local/crmd/1110)
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_process_request:	Forwarding cib_delete operation for section
> //node_state[@uname='mqhavm37']/lrm to all (origin=local/crmd/1111)
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_process_request:	Forwarding cib_modify operation for section
> status to all (origin=local/crmd/1112)
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_process_request:	Forwarding cib_delete operation for section
> //node_state[@uname='mqhavm34']/lrm to all (origin=local/crmd/1113)
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	Diff: --- 0.172.5 2
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	Diff: +++ 0.172.6 (null)
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	-- /cib/status/node_state[@id='1']/lrm[@id='1']
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	+  /cib:  @num_updates=6
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_process_request:	Completed cib_delete operation for section
> //node_state[@uname='mqhavm24']/lrm: OK (rc=0,
> origin=mqhavm24/crmd/1109, version=0.172.6)
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	Diff: --- 0.172.6 2
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	Diff: +++ 0.172.7 (null)
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	+  /cib:  @num_updates=7
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	+  /cib/status/node_state[@id='1']:  @crm-
> debug-origin=do_lrm_query_internal
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++ /cib/status/node_state[@id='1']:  <lrm
> id="1"/>
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                     <lrm_res
> ources>
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                       <lrm_r
> esource id="p_fs_drgxrde_rdqma" type="Filesystem" class="ocf"
> provider="heartbeat">
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                         <lrm
> _rsc_op id="p_fs_drgxrde_rdqma_last_0"
> operation_key="p_fs_drgxrde_rdqma_stop_0" operation="stop" crm-debug-
> origin="build_active_RAs" crm_feature_set="3.0.14" transition-
> key="5:155:0:f0b9e946-fb53-4805-bea5-05c841b38129" transition-
> magic="0:0;5:155:0:f0b9e946-fb53-4805-bea5-05c841b38129" exit-
> reason="" on_node="mqhavm24" call-id="1095" rc-code="0" op-st
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                       </lrm_
> resource>
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                       <lrm_r
> esource id="p_fs_drgxrde_rdqmb" type="Filesystem" class="ocf"
> provider="heartbeat">
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                         <lrm
> _rsc_op id="p_fs_drgxrde_rdqmb_last_0"
> operation_key="p_fs_drgxrde_rdqmb_stop_0" operation="stop" crm-debug-
> origin="build_active_RAs" crm_feature_set="3.0.14" transition-
> key="61:155:0:f0b9e946-fb53-4805-bea5-05c841b38129" transition-
> magic="0:0;61:155:0:f0b9e946-fb53-4805-bea5-05c841b38129" exit-
> reason="" on_node="mqhavm24" call-id="1096" rc-code="0" op-
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                       </lrm_
> resource>
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                       <lrm_r
> esource id="p_drbd_drgxrde_rdqma" type="drbd" class="ocf"
> provider="linbit">
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                         <lrm
> _rsc_op id="p_drbd_drgxrde_rdqma_last_failure_0"
> operation_key="p_drbd_drgxrde_rdqma_monitor_0" operation="monitor"
> crm-debug-origin="build_active_RAs" crm_feature_set="3.0.14"
> transition-key="5:14:7:f0b9e946-fb53-4805-bea5-05c841b38129"
> transition-magic="0:8;5:14:7:f0b9e946-fb53-4805-bea5-05c841b38129"
> exit-reason="" on_node="mqhavm24" call-id="24" rc-
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                         <lrm
> _rsc_op id="p_drbd_drgxrde_rdqma_last_0"
> operation_key="p_drbd_drgxrde_rdqma_stop_0" operation="stop" crm-
> debug-origin="build_active_RAs" crm_feature_set="3.0.14" transition-
> key="27:157:0:f0b9e946-fb53-4805-bea5-05c841b38129" transition-
> magic="0:0;27:157:0:f0b9e946-fb53-4805-bea5-05c841b38129" exit-
> reason="" on_node="mqhavm24" call-id="1143" rc-code="0"
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                       </lrm_
> resource>
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                       <lrm_r
> esource id="drgxrde_rdqma" type="rdqm" class="ocf" provider="ibm">
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                         <lrm
> _rsc_op id="drgxrde_rdqma_last_0"
> operation_key="drgxrde_rdqma_stop_0" operation="stop" crm-debug-
> origin="build_active_RAs" crm_feature_set="3.0.14" transition-
> key="9:154:0:f0b9e946-fb53-4805-bea5-05c841b38129" transition-
> magic="0:0;9:154:0:f0b9e946-fb53-4805-bea5-05c841b38129" exit-
> reason="" on_node="mqhavm24" call-id="1060" rc-code="0" op-status="0" 
> i
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                       </lrm_
> resource>
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                       <lrm_r
> esource id="drgxrde_rdqmb" type="rdqm" class="ocf" provider="ibm">
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                         <lrm
> _rsc_op id="drgxrde_rdqmb_last_0"
> operation_key="drgxrde_rdqmb_stop_0" operation="stop" crm-debug-
> origin="build_active_RAs" crm_feature_set="3.0.14" transition-
> key="69:154:0:f0b9e946-fb53-4805-bea5-05c841b38129" transition-
> magic="0:0;69:154:0:f0b9e946-fb53-4805-bea5-05c841b38129" exit-
> reason="" on_node="mqhavm24" call-id="1062" rc-code="0" op-status="0"
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                       </lrm_
> resource>
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                       <lrm_r
> esource id="p_drbd_dr_drgxrde_rdqma" type="drbd" class="ocf"
> provider="linbit">
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                         <lrm
> _rsc_op id="p_drbd_dr_drgxrde_rdqma_last_failure_0"
> operation_key="p_drbd_dr_drgxrde_rdqma_monitor_0" operation="monitor"
> crm-debug-origin="build_active_RAs" crm_feature_set="3.0.14"
> transition-key="4:14:7:f0b9e946-fb53-4805-bea5-05c841b38129"
> transition-magic="0:8;4:14:7:f0b9e946-fb53-4805-bea5-05c841b38129"
> exit-reason="" on_node="mqhavm24" call-id="1
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                         <lrm
> _rsc_op id="p_drbd_dr_drgxrde_rdqma_last_0"
> operation_key="p_drbd_dr_drgxrde_rdqma_stop_0" operation="stop" crm-
> debug-origin="build_active_RAs" crm_feature_set="3.0.14" transition-
> key="1:156:0:f0b9e946-fb53-4805-bea5-05c841b38129" transition-
> magic="0:0;1:156:0:f0b9e946-fb53-4805-bea5-05c841b38129" exit-
> reason="" on_node="mqhavm24" call-id="1122" rc-code
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                       </lrm_
> resource>
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                       <lrm_r
> esource id="p_drbd_drgxrde_rdqmb" type="drbd" class="ocf"
> provider="linbit">
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                         <lrm
> _rsc_op id="p_drbd_drgxrde_rdqmb_last_failure_0"
> operation_key="p_drbd_drgxrde_rdqmb_monitor_0" operation="monitor"
> crm-debug-origin="build_active_RAs" crm_feature_set="3.0.14"
> transition-key="10:16:7:f0b9e946-fb53-4805-bea5-05c841b38129"
> transition-magic="0:8;10:16:7:f0b9e946-fb53-4805-bea5-05c841b38129"
> exit-reason="" on_node="mqhavm24" call-id="60" r
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                         <lrm
> _rsc_op id="p_drbd_drgxrde_rdqmb_last_0"
> operation_key="p_drbd_drgxrde_rdqmb_stop_0" operation="stop" crm-
> debug-origin="build_active_RAs" crm_feature_set="3.0.14" transition-
> key="79:157:0:f0b9e946-fb53-4805-bea5-05c841b38129" transition-
> magic="0:0;79:157:0:f0b9e946-fb53-4805-bea5-05c841b38129" exit-
> reason="" on_node="mqhavm24" call-id="1144" rc-code="0"
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                       </lrm_
> resource>
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                       <lrm_r
> esource id="p_drbd_dr_drgxrde_rdqmb" type="drbd" class="ocf"
> provider="linbit">
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                         <lrm
> _rsc_op id="p_drbd_dr_drgxrde_rdqmb_last_failure_0"
> operation_key="p_drbd_dr_drgxrde_rdqmb_monitor_0" operation="monitor"
> crm-debug-origin="build_active_RAs" crm_feature_set="3.0.14"
> transition-key="9:16:7:f0b9e946-fb53-4805-bea5-05c841b38129"
> transition-magic="0:8;9:16:7:f0b9e946-fb53-4805-bea5-05c841b38129"
> exit-reason="" on_node="mqhavm24" call-id="5
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                         <lrm
> _rsc_op id="p_drbd_dr_drgxrde_rdqmb_last_0"
> operation_key="p_drbd_dr_drgxrde_rdqmb_stop_0" operation="stop" crm-
> debug-origin="build_active_RAs" crm_feature_set="3.0.14" transition-
> key="54:156:0:f0b9e946-fb53-4805-bea5-05c841b38129" transition-
> magic="0:0;54:156:0:f0b9e946-fb53-4805-bea5-05c841b38129" exit-
> reason="" on_node="mqhavm24" call-id="1123" rc-co
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                       </lrm_
> resource>
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                       <lrm_r
> esource id="p_ip_drgxrde_rdqma" type="IPaddr2" class="ocf"
> provider="heartbeat">
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                         <lrm
> _rsc_op id="p_ip_drgxrde_rdqma_last_0"
> operation_key="p_ip_drgxrde_rdqma_stop_0" operation="stop" crm-debug-
> origin="build_active_RAs" crm_feature_set="3.0.14" transition-
> key="129:154:0:f0b9e946-fb53-4805-bea5-05c841b38129" transition-
> magic="0:0;129:154:0:f0b9e946-fb53-4805-bea5-05c841b38129" exit-
> reason="" on_node="mqhavm24" call-id="1075" rc-code="0" o
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                       </lrm_
> resource>
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                       <lrm_r
> esource id="p_rdqmx_drgxrde_rdqma" type="rdqmx" class="ocf"
> provider="ibm">
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                         <lrm
> _rsc_op id="p_rdqmx_drgxrde_rdqma_last_0"
> operation_key="p_rdqmx_drgxrde_rdqma_stop_0" operation="stop" crm-
> debug-origin="build_active_RAs" crm_feature_set="3.0.14" transition-
> key="6:155:0:f0b9e946-fb53-4805-bea5-05c841b38129" transition-
> magic="0:0;6:155:0:f0b9e946-fb53-4805-bea5-05c841b38129" exit-
> reason="" on_node="mqhavm24" call-id="1083" rc-code="0"
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                       </lrm_
> resource>
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                       <lrm_r
> esource id="p_rdqmx_drgxrde_rdqmb" type="rdqmx" class="ocf"
> provider="ibm">
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                         <lrm
> _rsc_op id="p_rdqmx_drgxrde_rdqmb_last_0"
> operation_key="p_rdqmx_drgxrde_rdqmb_stop_0" operation="stop" crm-
> debug-origin="build_active_RAs" crm_feature_set="3.0.14" transition-
> key="62:155:0:f0b9e946-fb53-4805-bea5-05c841b38129" transition-
> magic="0:0;62:155:0:f0b9e946-fb53-4805-bea5-05c841b38129" exit-
> reason="" on_node="mqhavm24" call-id="1085" rc-code="
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                       </lrm_
> resource>
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                       <lrm_r
> esource id="p_ip_drgxrde_rdqmb" type="IPaddr2" class="ocf"
> provider="heartbeat">
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                         <lrm
> _rsc_op id="p_ip_drgxrde_rdqmb_last_0"
> operation_key="p_ip_drgxrde_rdqmb_stop_0" operation="stop" crm-debug-
> origin="build_active_RAs" crm_feature_set="3.0.14" transition-
> key="131:154:0:f0b9e946-fb53-4805-bea5-05c841b38129" transition-
> magic="0:0;131:154:0:f0b9e946-fb53-4805-bea5-05c841b38129" exit-
> reason="" on_node="mqhavm24" call-id="1079" rc-code="0" o
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                       </lrm_
> resource>
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                     </lrm_re
> sources>
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                   </lrm>
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_process_request:	Completed cib_modify operation for section
> status: OK (rc=0, origin=mqhavm24/crmd/1110, version=0.172.7)
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	Diff: --- 0.172.7 2
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	Diff: +++ 0.172.8 (null)
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	-- /cib/status/node_state[@id='2']/lrm[@id='2']
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	+  /cib:  @num_updates=8
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_process_request:	Completed cib_delete operation for section
> //node_state[@uname='mqhavm37']/lrm: OK (rc=0,
> origin=mqhavm24/crmd/1111, version=0.172.8)
> Aug 11 12:33:36 [13310] mqhavm24       crmd:     info:
> do_state_transition:	State transition S_FINALIZE_JOIN ->
> S_POLICY_ENGINE | input=I_FINALIZED cause=C_FSA_INTERNAL
> origin=check_join_state
> Aug 11 12:33:36 [13310] mqhavm24       crmd:     info:
> abort_transition_graph:	Transition aborted: Peer Cancelled |
> source=do_te_invoke:143 complete=true
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	Diff: --- 0.172.8 2
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	Diff: +++ 0.172.9 (null)
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	+  /cib:  @num_updates=9
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	+  /cib/status/node_state[@id='2']:  @crm-
> debug-origin=do_lrm_query_internal, @join=member
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++ /cib/status/node_state[@id='2']:  <lrm
> id="2"/>
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                     <lrm_res
> ources>
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                       <lrm_r
> esource id="p_fs_drgxrde_rdqma" type="Filesystem" class="ocf"
> provider="heartbeat">
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                         <lrm
> _rsc_op id="p_fs_drgxrde_rdqma_last_0"
> operation_key="p_fs_drgxrde_rdqma_monitor_0" operation="monitor" crm-
> debug-origin="build_active_RAs" crm_feature_set="3.0.14" transition-
> key="12:14:7:f0b9e946-fb53-4805-bea5-05c841b38129" transition-
> magic="0:7;12:14:7:f0b9e946-fb53-4805-bea5-05c841b38129" exit-
> reason="" on_node="mqhavm37" call-id="9" rc-code="7" op
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                       </lrm_
> resource>
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                       <lrm_r
> esource id="p_fs_drgxrde_rdqmb" type="Filesystem" class="ocf"
> provider="heartbeat">
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                         <lrm
> _rsc_op id="p_fs_drgxrde_rdqmb_last_0"
> operation_key="p_fs_drgxrde_rdqmb_monitor_0" operation="monitor" crm-
> debug-origin="build_active_RAs" crm_feature_set="3.0.14" transition-
> key="17:16:7:f0b9e946-fb53-4805-bea5-05c841b38129" transition-
> magic="0:7;17:16:7:f0b9e946-fb53-4805-bea5-05c841b38129" exit-
> reason="" on_node="mqhavm37" call-id="34" rc-code="7" o
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                       </lrm_
> resource>
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                       <lrm_r
> esource id="p_drbd_drgxrde_rdqma" type="drbd" class="ocf"
> provider="linbit">
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                         <lrm
> _rsc_op id="p_drbd_drgxrde_rdqma_last_failure_0"
> operation_key="p_drbd_drgxrde_rdqma_monitor_0" operation="monitor"
> crm-debug-origin="build_active_RAs" crm_feature_set="3.0.14"
> transition-key="15:14:7:f0b9e946-fb53-4805-bea5-05c841b38129"
> transition-magic="0:0;15:14:7:f0b9e946-fb53-4805-bea5-05c841b38129"
> exit-reason="" on_node="mqhavm37" call-id="23" r
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                         <lrm
> _rsc_op id="p_drbd_drgxrde_rdqma_last_0"
> operation_key="p_drbd_drgxrde_rdqma_stop_0" operation="stop" crm-
> debug-origin="build_active_RAs" crm_feature_set="3.0.14" transition-
> key="30:2:0:1816128c-a49e-4b13-b1b6-ee3672c04867" transition-
> magic="0:0;30:2:0:1816128c-a49e-4b13-b1b6-ee3672c04867" exit-
> reason="" on_node="mqhavm37" call-id="354" rc-code="0" op-s
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                       </lrm_
> resource>
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                       <lrm_r
> esource id="drgxrde_rdqma" type="rdqm" class="ocf" provider="ibm">
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                         <lrm
> _rsc_op id="drgxrde_rdqma_last_0"
> operation_key="drgxrde_rdqma_monitor_0" operation="monitor" crm-
> debug-origin="build_active_RAs" crm_feature_set="3.0.14" transition-
> key="11:14:7:f0b9e946-fb53-4805-bea5-05c841b38129" transition-
> magic="0:7;11:14:7:f0b9e946-fb53-4805-bea5-05c841b38129" exit-
> reason="" on_node="mqhavm37" call-id="5" rc-code="7" op-status="0
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                       </lrm_
> resource>
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                       <lrm_r
> esource id="drgxrde_rdqmb" type="rdqm" class="ocf" provider="ibm">
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                         <lrm
> _rsc_op id="drgxrde_rdqmb_last_0"
> operation_key="drgxrde_rdqmb_monitor_0" operation="monitor" crm-
> debug-origin="build_active_RAs" crm_feature_set="3.0.14" transition-
> key="16:16:7:f0b9e946-fb53-4805-bea5-05c841b38129" transition-
> magic="0:7;16:16:7:f0b9e946-fb53-4805-bea5-05c841b38129" exit-
> reason="" on_node="mqhavm37" call-id="30" rc-code="7" op-status="
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                       </lrm_
> resource>
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                       <lrm_r
> esource id="p_drbd_dr_drgxrde_rdqma" type="drbd" class="ocf"
> provider="linbit">
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                         <lrm
> _rsc_op id="p_drbd_dr_drgxrde_rdqma_last_0"
> operation_key="p_drbd_dr_drgxrde_rdqma_monitor_0" operation="monitor"
> crm-debug-origin="build_active_RAs" crm_feature_set="3.0.14"
> transition-key="14:14:7:f0b9e946-fb53-4805-bea5-05c841b38129"
> transition-magic="0:7;14:14:7:f0b9e946-fb53-4805-bea5-05c841b38129"
> exit-reason="" on_node="mqhavm37" call-id="18" rc-
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                       </lrm_
> resource>
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                       <lrm_r
> esource id="p_drbd_dr_drgxrde_rdqmb" type="drbd" class="ocf"
> provider="linbit">
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                         <lrm
> _rsc_op id="p_drbd_dr_drgxrde_rdqmb_last_0"
> operation_key="p_drbd_dr_drgxrde_rdqmb_monitor_0" operation="monitor"
> crm-debug-origin="build_active_RAs" crm_feature_set="3.0.14"
> transition-key="19:16:7:f0b9e946-fb53-4805-bea5-05c841b38129"
> transition-magic="0:7;19:16:7:f0b9e946-fb53-4805-bea5-05c841b38129"
> exit-reason="" on_node="mqhavm37" call-id="43" rc-
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                       </lrm_
> resource>
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                       <lrm_r
> esource id="p_drbd_drgxrde_rdqmb" type="drbd" class="ocf"
> provider="linbit">
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                         <lrm
> _rsc_op id="p_drbd_drgxrde_rdqmb_last_failure_0"
> operation_key="p_drbd_drgxrde_rdqmb_monitor_0" operation="monitor"
> crm-debug-origin="build_active_RAs" crm_feature_set="3.0.14"
> transition-key="20:16:7:f0b9e946-fb53-4805-bea5-05c841b38129"
> transition-magic="0:0;20:16:7:f0b9e946-fb53-4805-bea5-05c841b38129"
> exit-reason="" on_node="mqhavm37" call-id="48" r
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                         <lrm
> _rsc_op id="p_drbd_drgxrde_rdqmb_last_0"
> operation_key="p_drbd_drgxrde_rdqmb_stop_0" operation="stop" crm-
> debug-origin="build_active_RAs" crm_feature_set="3.0.14" transition-
> key="83:2:0:1816128c-a49e-4b13-b1b6-ee3672c04867" transition-
> magic="0:0;83:2:0:1816128c-a49e-4b13-b1b6-ee3672c04867" exit-
> reason="" on_node="mqhavm37" call-id="356" rc-code="0" op-s
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                       </lrm_
> resource>
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                       <lrm_r
> esource id="p_ip_drgxrde_rdqma" type="IPaddr2" class="ocf"
> provider="heartbeat">
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                         <lrm
> _rsc_op id="p_ip_drgxrde_rdqma_last_0"
> operation_key="p_ip_drgxrde_rdqma_monitor_0" operation="monitor" crm-
> debug-origin="build_active_RAs" crm_feature_set="3.0.14" transition-
> key="13:35:7:f0b9e946-fb53-4805-bea5-05c841b38129" transition-
> magic="0:7;13:35:7:f0b9e946-fb53-4805-bea5-05c841b38129" exit-
> reason="" on_node="mqhavm37" call-id="55" rc-code="7" o
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                       </lrm_
> resource>
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                       <lrm_r
> esource id="p_rdqmx_drgxrde_rdqma" type="rdqmx" class="ocf"
> provider="ibm">
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                         <lrm
> _rsc_op id="p_rdqmx_drgxrde_rdqma_last_0"
> operation_key="p_rdqmx_drgxrde_rdqma_monitor_0" operation="monitor"
> crm-debug-origin="build_active_RAs" crm_feature_set="3.0.14"
> transition-key="13:14:7:f0b9e946-fb53-4805-bea5-05c841b38129"
> transition-magic="0:7;13:14:7:f0b9e946-fb53-4805-bea5-05c841b38129"
> exit-reason="" on_node="mqhavm37" call-id="13" rc-code
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                       </lrm_
> resource>
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                       <lrm_r
> esource id="p_rdqmx_drgxrde_rdqmb" type="rdqmx" class="ocf"
> provider="ibm">
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                         <lrm
> _rsc_op id="p_rdqmx_drgxrde_rdqmb_last_0"
> operation_key="p_rdqmx_drgxrde_rdqmb_monitor_0" operation="monitor"
> crm-debug-origin="build_active_RAs" crm_feature_set="3.0.14"
> transition-key="18:16:7:f0b9e946-fb53-4805-bea5-05c841b38129"
> transition-magic="0:7;18:16:7:f0b9e946-fb53-4805-bea5-05c841b38129"
> exit-reason="" on_node="mqhavm37" call-id="38" rc-code
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                       </lrm_
> resource>
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                       <lrm_r
> esource id="p_ip_drgxrde_rdqmb" type="IPaddr2" class="ocf"
> provider="heartbeat">
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                         <lrm
> _rsc_op id="p_ip_drgxrde_rdqmb_last_0"
> operation_key="p_ip_drgxrde_rdqmb_monitor_0" operation="monitor" crm-
> debug-origin="build_active_RAs" crm_feature_set="3.0.14" transition-
> key="14:39:7:f0b9e946-fb53-4805-bea5-05c841b38129" transition-
> magic="0:7;14:39:7:f0b9e946-fb53-4805-bea5-05c841b38129" exit-
> reason="" on_node="mqhavm37" call-id="59" rc-code="7" o
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                       </lrm_
> resource>
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                     </lrm_re
> sources>
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                   </lrm>
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_process_request:	Completed cib_modify operation for section
> status: OK (rc=0, origin=mqhavm24/crmd/1112, version=0.172.9)
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	Diff: --- 0.172.9 2
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	Diff: +++ 0.172.10 (null)
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	-- /cib/status/node_state[@id='3']/lrm[@id='3']
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	+  /cib:  @num_updates=10
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_process_request:	Completed cib_delete operation for section
> //node_state[@uname='mqhavm34']/lrm: OK (rc=0,
> origin=mqhavm24/crmd/1113, version=0.172.10)
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_process_request:	Forwarding cib_modify operation for section
> status to all (origin=local/crmd/1114)
> Aug 11 12:33:36 [13310] mqhavm24       crmd:     info:
> abort_transition_graph:	Transition aborted: LRM Refresh |
> source=process_resource_updates:294 complete=true
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_process_request:	Forwarding cib_modify operation for section
> nodes to all (origin=local/crmd/1118)
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_process_request:	Forwarding cib_modify operation for section
> status to all (origin=local/crmd/1119)
> Aug 11 12:33:36 [13310] mqhavm24       crmd:     info:
> abort_transition_graph:	Transition aborted by deletion of
> lrm[@id='3']: Resource state removal | cib=0.172.10
> source=abort_unless_down:370
> path=/cib/status/node_state[@id='3']/lrm[@id='3'] complete=true
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_process_request:	Forwarding cib_modify operation for section cib
> to all (origin=local/crmd/1120)
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	Diff: --- 0.172.10 2
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	Diff: +++ 0.172.11 (null)
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	+  /cib:  @num_updates=11
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	+  /cib/status/node_state[@id='3']:  @crm-
> debug-origin=do_lrm_query_internal, @join=member
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++ /cib/status/node_state[@id='3']:  <lrm
> id="3"/>
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                     <lrm_res
> ources>
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                       <lrm_r
> esource id="p_fs_drgxrde_rdqma" type="Filesystem" class="ocf"
> provider="heartbeat">
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                         <lrm
> _rsc_op id="p_fs_drgxrde_rdqma_last_0"
> operation_key="p_fs_drgxrde_rdqma_monitor_0" operation="monitor" crm-
> debug-origin="build_active_RAs" crm_feature_set="3.0.14" transition-
> key="8:147:7:f0b9e946-fb53-4805-bea5-05c841b38129" transition-
> magic="0:7;8:147:7:f0b9e946-fb53-4805-bea5-05c841b38129" exit-
> reason="" on_node="mqhavm34" call-id="10" rc-code="7" o
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                       </lrm_
> resource>
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                       <lrm_r
> esource id="p_fs_drgxrde_rdqmb" type="Filesystem" class="ocf"
> provider="heartbeat">
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                         <lrm
> _rsc_op id="p_fs_drgxrde_rdqmb_last_0"
> operation_key="p_fs_drgxrde_rdqmb_monitor_0" operation="monitor" crm-
> debug-origin="build_active_RAs" crm_feature_set="3.0.14" transition-
> key="13:147:7:f0b9e946-fb53-4805-bea5-05c841b38129" transition-
> magic="0:7;13:147:7:f0b9e946-fb53-4805-bea5-05c841b38129" exit-
> reason="" on_node="mqhavm34" call-id="32" rc-code="7"
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                       </lrm_
> resource>
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                       <lrm_r
> esource id="drgxrde_rdqma" type="rdqm" class="ocf" provider="ibm">
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                         <lrm
> _rsc_op id="drgxrde_rdqma_last_0"
> operation_key="drgxrde_rdqma_monitor_0" operation="monitor" crm-
> debug-origin="build_active_RAs" crm_feature_set="3.0.14" transition-
> key="7:147:7:f0b9e946-fb53-4805-bea5-05c841b38129" transition-
> magic="0:7;7:147:7:f0b9e946-fb53-4805-bea5-05c841b38129" exit-
> reason="" on_node="mqhavm34" call-id="6" rc-code="7" op-status="0
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                       </lrm_
> resource>
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                       <lrm_r
> esource id="drgxrde_rdqmb" type="rdqm" class="ocf" provider="ibm">
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                         <lrm
> _rsc_op id="drgxrde_rdqmb_last_0"
> operation_key="drgxrde_rdqmb_monitor_0" operation="monitor" crm-
> debug-origin="build_active_RAs" crm_feature_set="3.0.14" transition-
> key="12:147:7:f0b9e946-fb53-4805-bea5-05c841b38129" transition-
> magic="0:7;12:147:7:f0b9e946-fb53-4805-bea5-05c841b38129" exit-
> reason="" on_node="mqhavm34" call-id="28" rc-code="7" op-status
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                       </lrm_
> resource>
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                       <lrm_r
> esource id="p_drbd_drgxrde_rdqmb" type="drbd" class="ocf"
> provider="linbit">
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                         <lrm
> _rsc_op id="p_drbd_drgxrde_rdqmb_last_0"
> operation_key="p_drbd_drgxrde_rdqmb_stop_0" operation="stop" crm-
> debug-origin="build_active_RAs" crm_feature_set="3.0.14" transition-
> key="83:0:0:18b7675e-eee2-4abb-8521-a55663441465" transition-
> magic="0:0;83:0:0:18b7675e-eee2-4abb-8521-a55663441465" exit-
> reason="" on_node="mqhavm34" call-id="77" rc-code="0" op-st
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                       </lrm_
> resource>
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                       <lrm_r
> esource id="p_drbd_dr_drgxrde_rdqma" type="drbd" class="ocf"
> provider="linbit">
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                         <lrm
> _rsc_op id="p_drbd_dr_drgxrde_rdqma_last_0"
> operation_key="p_drbd_dr_drgxrde_rdqma_monitor_0" operation="monitor"
> crm-debug-origin="build_active_RAs" crm_feature_set="3.0.14"
> transition-key="10:147:7:f0b9e946-fb53-4805-bea5-05c841b38129"
> transition-magic="0:7;10:147:7:f0b9e946-fb53-4805-bea5-05c841b38129"
> exit-reason="" on_node="mqhavm34" call-id="19" r
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                       </lrm_
> resource>
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                       <lrm_r
> esource id="p_drbd_dr_drgxrde_rdqmb" type="drbd" class="ocf"
> provider="linbit">
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                         <lrm
> _rsc_op id="p_drbd_dr_drgxrde_rdqmb_last_0"
> operation_key="p_drbd_dr_drgxrde_rdqmb_monitor_0" operation="monitor"
> crm-debug-origin="build_active_RAs" crm_feature_set="3.0.14"
> transition-key="15:147:7:f0b9e946-fb53-4805-bea5-05c841b38129"
> transition-magic="0:7;15:147:7:f0b9e946-fb53-4805-bea5-05c841b38129"
> exit-reason="" on_node="mqhavm34" call-id="41" r
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                       </lrm_
> resource>
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                       <lrm_r
> esource id="p_drbd_drgxrde_rdqma" type="drbd" class="ocf"
> provider="linbit">
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                         <lrm
> _rsc_op id="p_drbd_drgxrde_rdqma_last_0"
> operation_key="p_drbd_drgxrde_rdqma_stop_0" operation="stop" crm-
> debug-origin="build_active_RAs" crm_feature_set="3.0.14" transition-
> key="30:0:0:18b7675e-eee2-4abb-8521-a55663441465" transition-
> magic="0:0;30:0:0:18b7675e-eee2-4abb-8521-a55663441465" exit-
> reason="" on_node="mqhavm34" call-id="79" rc-code="0" op-st
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                       </lrm_
> resource>
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                       <lrm_r
> esource id="p_ip_drgxrde_rdqma" type="IPaddr2" class="ocf"
> provider="heartbeat">
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                         <lrm
> _rsc_op id="p_ip_drgxrde_rdqma_last_0"
> operation_key="p_ip_drgxrde_rdqma_monitor_0" operation="monitor" crm-
> debug-origin="build_active_RAs" crm_feature_set="3.0.14" transition-
> key="17:147:7:f0b9e946-fb53-4805-bea5-05c841b38129" transition-
> magic="0:7;17:147:7:f0b9e946-fb53-4805-bea5-05c841b38129" exit-
> reason="" on_node="mqhavm34" call-id="50" rc-code="7"
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                       </lrm_
> resource>
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                       <lrm_r
> esource id="p_rdqmx_drgxrde_rdqma" type="rdqmx" class="ocf"
> provider="ibm">
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                         <lrm
> _rsc_op id="p_rdqmx_drgxrde_rdqma_last_0"
> operation_key="p_rdqmx_drgxrde_rdqma_monitor_0" operation="monitor"
> crm-debug-origin="build_active_RAs" crm_feature_set="3.0.14"
> transition-key="9:147:7:f0b9e946-fb53-4805-bea5-05c841b38129"
> transition-magic="0:7;9:147:7:f0b9e946-fb53-4805-bea5-05c841b38129"
> exit-reason="" on_node="mqhavm34" call-id="14" rc-code
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                       </lrm_
> resource>
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                       <lrm_r
> esource id="p_rdqmx_drgxrde_rdqmb" type="rdqmx" class="ocf"
> provider="ibm">
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                         <lrm
> _rsc_op id="p_rdqmx_drgxrde_rdqmb_last_0"
> operation_key="p_rdqmx_drgxrde_rdqmb_monitor_0" operation="monitor"
> crm-debug-origin="build_active_RAs" crm_feature_set="3.0.14"
> transition-key="14:147:7:f0b9e946-fb53-4805-bea5-05c841b38129"
> transition-magic="0:7;14:147:7:f0b9e946-fb53-4805-bea5-05c841b38129"
> exit-reason="" on_node="mqhavm34" call-id="36" rc-co
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                       </lrm_
> resource>
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                       <lrm_r
> esource id="p_ip_drgxrde_rdqmb" type="IPaddr2" class="ocf"
> provider="heartbeat">
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                         <lrm
> _rsc_op id="p_ip_drgxrde_rdqmb_last_0"
> operation_key="p_ip_drgxrde_rdqmb_monitor_0" operation="monitor" crm-
> debug-origin="build_active_RAs" crm_feature_set="3.0.14" transition-
> key="18:147:7:f0b9e946-fb53-4805-bea5-05c841b38129" transition-
> magic="0:7;18:147:7:f0b9e946-fb53-4805-bea5-05c841b38129" exit-
> reason="" on_node="mqhavm34" call-id="54" rc-code="7"
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                       </lrm_
> resource>
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                     </lrm_re
> sources>
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++                                   </lrm>
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_process_request:	Completed cib_modify operation for section
> status: OK (rc=0, origin=mqhavm24/crmd/1114, version=0.172.11)
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_process_request:	Completed cib_modify operation for section
> nodes: OK (rc=0, origin=mqhavm24/crmd/1118, version=0.172.11)
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	Diff: --- 0.172.11 2
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	Diff: +++ 0.172.12 (null)
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	+  /cib:  @num_updates=12
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	+  /cib/status/node_state[@id='1']:  @crm-
> debug-origin=do_state_transition
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	+  /cib/status/node_state[@id='2']:  @crm-
> debug-origin=do_state_transition
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	+  /cib/status/node_state[@id='3']:  @crm-
> debug-origin=do_state_transition
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_process_request:	Completed cib_modify operation for section
> status: OK (rc=0, origin=mqhavm24/crmd/1119, version=0.172.12)
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_process_request:	Completed cib_modify operation for section cib:
> OK (rc=0, origin=mqhavm24/crmd/1120, version=0.172.12)

By this time, the local node had been re-elected DC, and all nodes had
completed the join process, including syncing their resource history. At
this point the CIB is synchronized with good information.
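
If you want to double-check that from a crm_report or a live cluster, the
join state is recorded as attributes on each node_state element. A minimal
sketch, assuming a CIB dump saved as "cib.xml" (e.g. via "cibadmin -Q >
cib.xml"); the element and attribute names match the diffs quoted above:

    # check_join.py -- print each node's join/crmd state from a CIB dump
    import xml.etree.ElementTree as ET

    # <cib><status><node_state uname=... join=... crmd=... in_ccm=.../>
    for ns in ET.parse("cib.xml").getroot().findall("./status/node_state"):
        print("%s: join=%s crmd=%s in_ccm=%s" % (
            ns.get("uname"), ns.get("join", "unset"),
            ns.get("crmd", "unset"), ns.get("in_ccm", "unset")))

A healthy, fully joined node should show join=member with crmd online.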

> Aug 11 12:33:36 [13310] mqhavm24       crmd:     info:
> abort_transition_graph:	Transition aborted: LRM Refresh |
> source=process_resource_updates:294 complete=true
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_file_backup:	Archived previous version as
> /var/lib/pacemaker/cib/cib-79.raw
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_file_write_with_digest:	Wrote version 0.172.0 of the CIB to
> disk (digest: e499c1e040d16e4a97fce7f8b0c5bf32)
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_file_write_with_digest:	Reading cluster configuration file
> /var/lib/pacemaker/cib/cib.Jrr9mZ (digest:
> /var/lib/pacemaker/cib/cib.7vPm5x)
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info:
> determine_online_status:	Node mqhavm24 is online
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info:
> determine_online_status:	Node mqhavm37 is online
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info:
> determine_online_status:	Node mqhavm34 is online
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info:
> determine_op_status:	Operation monitor found resource
> p_drbd_drgxrde_rdqma:0 active in master mode on mqhavm24
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info:
> determine_op_status:	Operation monitor found resource
> p_drbd_dr_drgxrde_rdqma:0 active in master mode on mqhavm24
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info:
> determine_op_status:	Operation monitor found resource
> p_drbd_drgxrde_rdqmb:0 active in master mode on mqhavm24
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info:
> determine_op_status:	Operation monitor found resource
> p_drbd_dr_drgxrde_rdqmb:0 active in master mode on mqhavm24
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info:
> determine_op_status:	Operation monitor found resource
> p_drbd_drgxrde_rdqma:0 active on mqhavm37
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info:
> determine_op_status:	Operation monitor found resource
> p_drbd_drgxrde_rdqmb:0 active on mqhavm37
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info:
> unpack_node_loop:	Node 1 is already processed
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info:
> unpack_node_loop:	Node 2 is already processed
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info:
> unpack_node_loop:	Node 3 is already processed
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info:
> unpack_node_loop:	Node 1 is already processed
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info:
> unpack_node_loop:	Node 2 is already processed
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info:
> unpack_node_loop:	Node 3 is already processed
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info: common_print:	
> drgxrde_rdqma	(ocf::ibm:rdqm):	Stopped
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info: common_print:	
> p_fs_drgxrde_rdqma	(ocf::heartbeat:Filesystem):	Stopped
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info: common_print:	
> p_rdqmx_drgxrde_rdqma	(ocf::ibm:rdqmx):	Stopped
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info: clone_print:	
>  Master/Slave Set: ms_drbd_dr_drgxrde_rdqma [p_drbd_dr_drgxrde_rdqma]
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info: short_print:	
>      Stopped: [ mqhavm24 mqhavm34 mqhavm37 ]
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info: clone_print:	
>  Master/Slave Set: ms_drbd_drgxrde_rdqma [p_drbd_drgxrde_rdqma]
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info: short_print:	
>      Stopped: [ mqhavm24 mqhavm34 mqhavm37 ]
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info: common_print:	
> drgxrde_rdqmb	(ocf::ibm:rdqm):	Stopped
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info: common_print:	
> p_fs_drgxrde_rdqmb	(ocf::heartbeat:Filesystem):	Stopped
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info: common_print:	
> p_rdqmx_drgxrde_rdqmb	(ocf::ibm:rdqmx):	Stopped
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info: clone_print:	
>  Master/Slave Set: ms_drbd_dr_drgxrde_rdqmb [p_drbd_dr_drgxrde_rdqmb]
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info: short_print:	
>      Stopped: [ mqhavm24 mqhavm34 mqhavm37 ]
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info: clone_print:	
>  Master/Slave Set: ms_drbd_drgxrde_rdqmb [p_drbd_drgxrde_rdqmb]
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info: short_print:	
>      Stopped: [ mqhavm24 mqhavm34 mqhavm37 ]
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info: common_print:	
> p_ip_drgxrde_rdqma	(ocf::heartbeat:IPaddr2):	Stopped
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info: common_print:	
> p_ip_drgxrde_rdqmb	(ocf::heartbeat:IPaddr2):	Stopped
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info:
> pcmk__set_instance_roles:	ms_drbd_drgxrde_rdqma: Promoted 0
> instances of a possible 1 to master
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info:
> pcmk__native_merge_weights:	p_drbd_dr_drgxrde_rdqma:0: Rolling back
> optional scores from p_fs_drgxrde_rdqma
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info:
> pcmk__native_allocate:	Resource p_drbd_dr_drgxrde_rdqma:0
> cannot run anywhere
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info:
> pcmk__set_instance_roles:	ms_drbd_dr_drgxrde_rdqma: Promoted 0
> instances of a possible 1 to master
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info:
> pcmk__native_merge_weights:	p_fs_drgxrde_rdqma: Rolling back
> optional scores from p_rdqmx_drgxrde_rdqma
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info:
> pcmk__native_allocate:	Resource p_fs_drgxrde_rdqma cannot run
> anywhere
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info:
> pcmk__native_merge_weights:	p_rdqmx_drgxrde_rdqma: Rolling back
> optional scores from p_ip_drgxrde_rdqma
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info:
> pcmk__native_allocate:	Resource p_rdqmx_drgxrde_rdqma cannot
> run anywhere
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info:
> pcmk__native_merge_weights:	p_ip_drgxrde_rdqma: Rolling back
> optional scores from drgxrde_rdqma
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info:
> pcmk__native_allocate:	Resource p_ip_drgxrde_rdqma cannot run
> anywhere
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info:
> pcmk__native_allocate:	Resource drgxrde_rdqma cannot run
> anywhere
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info:
> pcmk__set_instance_roles:	ms_drbd_drgxrde_rdqmb: Promoted 0
> instances of a possible 1 to master
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info:
> pcmk__native_merge_weights:	p_drbd_dr_drgxrde_rdqmb:0: Rolling back
> optional scores from p_fs_drgxrde_rdqmb
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info:
> pcmk__native_allocate:	Resource p_drbd_dr_drgxrde_rdqmb:0
> cannot run anywhere
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info:
> pcmk__set_instance_roles:	ms_drbd_dr_drgxrde_rdqmb: Promoted 0
> instances of a possible 1 to master
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info:
> pcmk__native_merge_weights:	p_fs_drgxrde_rdqmb: Rolling back
> optional scores from p_rdqmx_drgxrde_rdqmb
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info:
> pcmk__native_allocate:	Resource p_fs_drgxrde_rdqmb cannot run
> anywhere
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info:
> pcmk__native_merge_weights:	p_rdqmx_drgxrde_rdqmb: Rolling back
> optional scores from p_ip_drgxrde_rdqmb
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info:
> pcmk__native_allocate:	Resource p_rdqmx_drgxrde_rdqmb cannot
> run anywhere
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info:
> pcmk__native_merge_weights:	p_ip_drgxrde_rdqmb: Rolling back
> optional scores from drgxrde_rdqmb
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info:
> pcmk__native_allocate:	Resource p_ip_drgxrde_rdqmb cannot run
> anywhere
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info:
> pcmk__native_allocate:	Resource drgxrde_rdqmb cannot run
> anywhere
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info: RecurringOp:	
>  Start recurring monitor (20s) for p_drbd_drgxrde_rdqma:0 on mqhavm24
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info: RecurringOp:	
>  Start recurring monitor (20s) for p_drbd_drgxrde_rdqma:1 on mqhavm34
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info: RecurringOp:	
>  Start recurring monitor (20s) for p_drbd_drgxrde_rdqma:2 on mqhavm37
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info: RecurringOp:	
>  Start recurring monitor (20s) for p_drbd_drgxrde_rdqma:0 on mqhavm24
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info: RecurringOp:	
>  Start recurring monitor (20s) for p_drbd_drgxrde_rdqma:1 on mqhavm34
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info: RecurringOp:	
>  Start recurring monitor (20s) for p_drbd_drgxrde_rdqma:2 on mqhavm37
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info: RecurringOp:	
>  Start recurring monitor (20s) for p_drbd_drgxrde_rdqmb:0 on mqhavm24
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info: RecurringOp:	
>  Start recurring monitor (20s) for p_drbd_drgxrde_rdqmb:1 on mqhavm34
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info: RecurringOp:	
>  Start recurring monitor (20s) for p_drbd_drgxrde_rdqmb:2 on mqhavm37
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info: RecurringOp:	
>  Start recurring monitor (20s) for p_drbd_drgxrde_rdqmb:0 on mqhavm24
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info: RecurringOp:	
>  Start recurring monitor (20s) for p_drbd_drgxrde_rdqmb:1 on mqhavm34
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info: RecurringOp:	
>  Start recurring monitor (20s) for p_drbd_drgxrde_rdqmb:2 on mqhavm37
> Aug 11 12:33:36 [13306] mqhavm24 stonith-ng:     info: crm_get_peer:	
> Node 3 is now known as mqhavm34
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info: LogActions:	
> Leave   drgxrde_rdqma	(Stopped)
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info: LogActions:	
> Leave   p_fs_drgxrde_rdqma	(Stopped)
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info: LogActions:	
> Leave   p_rdqmx_drgxrde_rdqma	(Stopped)
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info: LogActions:	
> Leave   p_drbd_dr_drgxrde_rdqma:0	(Stopped)
> Aug 11 12:33:36 [13309] mqhavm24    pengine:   notice: LogAction:	
>  *
> Start      p_drbd_drgxrde_rdqma:0        (                           
>                   mqhavm24 )  
> Aug 11 12:33:36 [13309] mqhavm24    pengine:   notice: LogAction:	
>  *
> Start      p_drbd_drgxrde_rdqma:1        (                           
>                   mqhavm34 )  
> Aug 11 12:33:36 [13309] mqhavm24    pengine:   notice: LogAction:	
>  *
> Start      p_drbd_drgxrde_rdqma:2        (                           
>                   mqhavm37 )  
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info: LogActions:	
> Leave   drgxrde_rdqmb	(Stopped)
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info: LogActions:	
> Leave   p_fs_drgxrde_rdqmb	(Stopped)
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info: LogActions:	
> Leave   p_rdqmx_drgxrde_rdqmb	(Stopped)
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info: LogActions:	
> Leave   p_drbd_dr_drgxrde_rdqmb:0	(Stopped)
> Aug 11 12:33:36 [13309] mqhavm24    pengine:   notice: LogAction:	
>  *
> Start      p_drbd_drgxrde_rdqmb:0        (                           
>                   mqhavm24 )  
> Aug 11 12:33:36 [13309] mqhavm24    pengine:   notice: LogAction:	
>  *
> Start      p_drbd_drgxrde_rdqmb:1        (                           
>                   mqhavm34 )  
> Aug 11 12:33:36 [13309] mqhavm24    pengine:   notice: LogAction:	
>  *
> Start      p_drbd_drgxrde_rdqmb:2        (                           
>                   mqhavm37 )  
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info: LogActions:	
> Leave   p_ip_drgxrde_rdqma	(Stopped)
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info: LogActions:	
> Leave   p_ip_drgxrde_rdqmb	(Stopped)
> Aug 11 12:33:36 [13308] mqhavm24      attrd:     info: crm_get_peer:	
> Node 2 is now known as mqhavm37
> 
> Aug 11 12:33:36 [13308] mqhavm24      attrd:   notice:
> attrd_check_for_new_writer:	Detected another attribute writer
> (mqhavm37), starting new election
> 
> ^^^ This one looks fishy.
> The attrd/CIB writer should have followed the DC.

It does, but attrd conducts its own election, separate from the
controller's. Both elections will always come to the same conclusion,
though.

By contrast, the CIB manager doesn't conduct an election for its
writer. The DC tells the CIB manager to become the writer.

Of course that's not very consistent :) and it would be nice to stick
with one approach, but it works.
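
A minimal sketch of why the two elections can't disagree (illustrative
only, NOT Pacemaker's actual election code; the node names come from the
logs above, the uptimes are invented):

    # election_sketch.py -- two independent elections agree as long as
    # they rank the same peer set by the same deterministic criteria;
    # here, longest uptime wins and the node name breaks ties.
    def elect(peers):
        # peers: list of (name, uptime_seconds)
        return max(peers, key=lambda p: (p[1], p[0]))[0]

    peers = [("mqhavm24", 5400), ("mqhavm34", 5100), ("mqhavm37", 5400)]
    # The order the votes arrive in is irrelevant to the outcome:
    assert elect(peers) == elect(list(reversed(peers)))
    print("winner:", elect(peers))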

> 
> 
> Aug 11 12:33:36 [13308] mqhavm24      attrd:     info:
> attrd_peer_message:	Processing sync-response from mqhavm37
> Aug 11 12:33:36 [13308] mqhavm24      attrd:     info:
> attrd_peer_update:	Setting #attrd-protocol[mqhavm37]: (null) -> 2
> from mqhavm37
> Aug 11 12:33:36 [13308] mqhavm24      attrd:     info:
> attrd_peer_update:	Setting rdqm-transient-attribute[mqhavm37]:
> (null) -> 1 from mqhavm37
> Aug 11 12:33:36 [13308] mqhavm24      attrd:     info:
> election_count_vote:	election-attrd round 10 (owner node ID 2) pass:
> vote from mqhavm37 (Uptime)
> Aug 11 12:33:36 [13308] mqhavm24      attrd:     info:
> attrd_peer_message:	Processing sync-response from mqhavm37
> Aug 11 12:33:36 [13303] mqhavm24 pacemakerd:     info: crm_cs_flush:	
> Sent 2 CPG messages  (0 remaining, last=19): OK (1)
> Aug 11 12:33:36 [13309] mqhavm24    pengine:   notice:
> process_pe_message:	Calculated transition 160, saving inputs in
> /var/lib/pacemaker/pengine/pe-input-688.bz2
> Aug 11 12:33:36 [13310] mqhavm24       crmd:     info:
> handle_response:	pe_calc calculation pe_calc-dc-1660217616-1276
> is obsolete
> Aug 11 12:33:36 [13308] mqhavm24      attrd:     info:
> election_check:	election-attrd won by local node
> Aug 11 12:33:36 [13308] mqhavm24      attrd:   notice:
> attrd_declare_winner:	Recorded local node as attribute writer (was
> unset)
> 
> which it does now: the local node (the DC) ends up as the attribute
> writer.
> 
> Aug 11 12:33:36 [13303] mqhavm24 pacemakerd:     info:
> mcp_cpg_deliver:	Ignoring process list sent by peer for local
> node
> Aug 11 12:33:36 [13308] mqhavm24      attrd:     info:
> write_attribute:	Sent CIB request 139 with 1 change for master-
> p_drbd_dr_drgxrde_rdqma (id n/a, set n/a)
> Aug 11 12:33:36 [13303] mqhavm24 pacemakerd:     info:
> mcp_cpg_deliver:	Ignoring process list sent by peer for local
> node
> Aug 11 12:33:36 [13308] mqhavm24      attrd:     info:
> write_attribute:	Sent CIB request 140 with 1 change for master-
> p_drbd_dr_drgxrde_rdqmb (id n/a, set n/a)
> Aug 11 12:33:36 [13308] mqhavm24      attrd:     info:
> write_attribute:	Processed 3 private changes for #attrd-
> protocol, id=n/a, set=n/a
> Aug 11 12:33:36 [13308] mqhavm24      attrd:     info:
> write_attribute:	Sent CIB request 141 with 3 changes for rdqm-
> transient-attribute (id n/a, set n/a)
> Aug 11 12:33:36 [13308] mqhavm24      attrd:     info:
> write_attribute:	Sent CIB request 142 with 3 changes for master-
> p_drbd_drgxrde_rdqma (id n/a, set n/a)
> Aug 11 12:33:36 [13308] mqhavm24      attrd:     info:
> write_attribute:	Sent CIB request 143 with 3 changes for master-
> p_drbd_drgxrde_rdqmb (id n/a, set n/a)
> Aug 11 12:33:36 [13306] mqhavm24 stonith-ng:     info: crm_get_peer:	
> Node 2 is now known as mqhavm37
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_process_request:	Forwarding cib_modify operation for section
> status to all (origin=local/attrd/139)
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_process_request:	Forwarding cib_modify operation for section
> status to all (origin=local/attrd/140)
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_process_request:	Forwarding cib_modify operation for section
> status to all (origin=local/attrd/141)
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_process_request:	Forwarding cib_modify operation for section
> status to all (origin=local/attrd/142)
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_process_request:	Forwarding cib_modify operation for section
> status to all (origin=local/attrd/143)
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info:
> process_pe_message:	Input has not changed since last time, not
> saving to disk
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info:
> determine_online_status:	Node mqhavm24 is online
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info:
> determine_online_status:	Node mqhavm37 is online
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info:
> determine_online_status:	Node mqhavm34 is online
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info:
> determine_op_status:	Operation monitor found resource
> p_drbd_drgxrde_rdqma:0 active in master mode on mqhavm24
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info:
> determine_op_status:	Operation monitor found resource
> p_drbd_dr_drgxrde_rdqma:0 active in master mode on mqhavm24
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info:
> determine_op_status:	Operation monitor found resource
> p_drbd_drgxrde_rdqmb:0 active in master mode on mqhavm24
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info: crm_get_peer:	
> Node 2 is now known as mqhavm37
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info:
> determine_op_status:	Operation monitor found resource
> p_drbd_dr_drgxrde_rdqmb:0 active in master mode on mqhavm24
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info:
> determine_op_status:	Operation monitor found resource
> p_drbd_drgxrde_rdqma:0 active on mqhavm37
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info:
> determine_op_status:	Operation monitor found resource
> p_drbd_drgxrde_rdqmb:0 active on mqhavm37
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info:
> unpack_node_loop:	Node 1 is already processed
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info:
> unpack_node_loop:	Node 2 is already processed
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info:
> unpack_node_loop:	Node 3 is already processed
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info:
> unpack_node_loop:	Node 1 is already processed
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info:
> unpack_node_loop:	Node 2 is already processed
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info:
> unpack_node_loop:	Node 3 is already processed
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info: common_print:	
> drgxrde_rdqma	(ocf::ibm:rdqm):	Stopped
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info: common_print:	
> p_fs_drgxrde_rdqma	(ocf::heartbeat:Filesystem):	Stopped
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info: common_print:	
> p_rdqmx_drgxrde_rdqma	(ocf::ibm:rdqmx):	Stopped
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info: clone_print:	
>  Master/Slave Set: ms_drbd_dr_drgxrde_rdqma [p_drbd_dr_drgxrde_rdqma]
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info: short_print:	
>      Stopped: [ mqhavm24 mqhavm34 mqhavm37 ]
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info: clone_print:	
>  Master/Slave Set: ms_drbd_drgxrde_rdqma [p_drbd_drgxrde_rdqma]
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info: short_print:	
>      Stopped: [ mqhavm24 mqhavm34 mqhavm37 ]
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info: common_print:	
> drgxrde_rdqmb	(ocf::ibm:rdqm):	Stopped
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info: common_print:	
> p_fs_drgxrde_rdqmb	(ocf::heartbeat:Filesystem):	Stopped
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info: common_print:	
> p_rdqmx_drgxrde_rdqmb	(ocf::ibm:rdqmx):	Stopped
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info: clone_print:	
>  Master/Slave Set: ms_drbd_dr_drgxrde_rdqmb [p_drbd_dr_drgxrde_rdqmb]
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info: short_print:	
>      Stopped: [ mqhavm24 mqhavm34 mqhavm37 ]
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info: clone_print:	
>  Master/Slave Set: ms_drbd_drgxrde_rdqmb [p_drbd_drgxrde_rdqmb]
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info: short_print:	
>      Stopped: [ mqhavm24 mqhavm34 mqhavm37 ]
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info: common_print:	
> p_ip_drgxrde_rdqma	(ocf::heartbeat:IPaddr2):	Stopped
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info: common_print:	
> p_ip_drgxrde_rdqmb	(ocf::heartbeat:IPaddr2):	Stopped
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	Diff: --- 0.172.12 2
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	Diff: +++ 0.172.13 (null)
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	+  /cib:  @num_updates=13
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	+  /cib/status/node_state[@id='1']:  @crm-
> debug-origin=peer_update_callback
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_process_request:	Completed cib_modify operation for section
> status: OK (rc=0, origin=mqhavm37/crmd/294, version=0.172.13)
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info:
> pcmk__set_instance_roles:	ms_drbd_drgxrde_rdqma: Promoted 0
> instances of a possible 1 to master
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info:
> pcmk__native_merge_weights:	p_drbd_dr_drgxrde_rdqma:0: Rolling back
> optional scores from p_fs_drgxrde_rdqma
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info:
> pcmk__native_allocate:	Resource p_drbd_dr_drgxrde_rdqma:0
> cannot run anywhere
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info:
> pcmk__set_instance_roles:	ms_drbd_dr_drgxrde_rdqma: Promoted 0
> instances of a possible 1 to master
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info:
> pcmk__native_merge_weights:	p_fs_drgxrde_rdqma: Rolling back
> optional scores from p_rdqmx_drgxrde_rdqma
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info:
> pcmk__native_allocate:	Resource p_fs_drgxrde_rdqma cannot run
> anywhere
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info:
> pcmk__native_merge_weights:	p_rdqmx_drgxrde_rdqma: Rolling back
> optional scores from p_ip_drgxrde_rdqma
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info:
> pcmk__native_allocate:	Resource p_rdqmx_drgxrde_rdqma cannot
> run anywhere
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info:
> pcmk__native_merge_weights:	p_ip_drgxrde_rdqma: Rolling back
> optional scores from drgxrde_rdqma
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info:
> pcmk__native_allocate:	Resource p_ip_drgxrde_rdqma cannot run
> anywhere
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info:
> pcmk__native_allocate:	Resource drgxrde_rdqma cannot run
> anywhere
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_process_request:	Completed cib_modify operation for section
> status: OK (rc=0, origin=mqhavm37/crmd/295, version=0.172.13)
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info:
> pcmk__set_instance_roles:	ms_drbd_drgxrde_rdqmb: Promoted 0
> instances of a possible 1 to master
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info:
> pcmk__native_merge_weights:	p_drbd_dr_drgxrde_rdqmb:0: Rolling back
> optional scores from p_fs_drgxrde_rdqmb
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info:
> pcmk__native_allocate:	Resource p_drbd_dr_drgxrde_rdqmb:0
> cannot run anywhere
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info:
> pcmk__set_instance_roles:	ms_drbd_dr_drgxrde_rdqmb: Promoted 0
> instances of a possible 1 to master
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info:
> pcmk__native_merge_weights:	p_fs_drgxrde_rdqmb: Rolling back
> optional scores from p_rdqmx_drgxrde_rdqmb
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info:
> pcmk__native_allocate:	Resource p_fs_drgxrde_rdqmb cannot run
> anywhere
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info:
> pcmk__native_merge_weights:	p_rdqmx_drgxrde_rdqmb: Rolling back
> optional scores from p_ip_drgxrde_rdqmb
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info:
> pcmk__native_allocate:	Resource p_rdqmx_drgxrde_rdqmb cannot
> run anywhere
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info:
> pcmk__native_merge_weights:	p_ip_drgxrde_rdqmb: Rolling back
> optional scores from drgxrde_rdqmb
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info:
> pcmk__native_allocate:	Resource p_ip_drgxrde_rdqmb cannot run
> anywhere
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info:
> pcmk__native_allocate:	Resource drgxrde_rdqmb cannot run
> anywhere
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info: RecurringOp:	
>  Start recurring monitor (20s) for p_drbd_drgxrde_rdqma:0 on mqhavm24
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info: RecurringOp:	
>  Start recurring monitor (20s) for p_drbd_drgxrde_rdqma:1 on mqhavm34
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info: RecurringOp:	
>  Start recurring monitor (20s) for p_drbd_drgxrde_rdqma:2 on mqhavm37
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info: RecurringOp:	
>  Start recurring monitor (20s) for p_drbd_drgxrde_rdqma:0 on mqhavm24
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info: RecurringOp:	
>  Start recurring monitor (20s) for p_drbd_drgxrde_rdqma:1 on mqhavm34
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info: RecurringOp:	
>  Start recurring monitor (20s) for p_drbd_drgxrde_rdqma:2 on mqhavm37
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info: RecurringOp:	
>  Start recurring monitor (20s) for p_drbd_drgxrde_rdqmb:0 on mqhavm24
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info: RecurringOp:	
>  Start recurring monitor (20s) for p_drbd_drgxrde_rdqmb:1 on mqhavm34
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info: RecurringOp:	
>  Start recurring monitor (20s) for p_drbd_drgxrde_rdqmb:2 on mqhavm37
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info: RecurringOp:	
>  Start recurring monitor (20s) for p_drbd_drgxrde_rdqmb:0 on mqhavm24
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info: RecurringOp:	
>  Start recurring monitor (20s) for p_drbd_drgxrde_rdqmb:1 on mqhavm34
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info: RecurringOp:	
>  Start recurring monitor (20s) for p_drbd_drgxrde_rdqmb:2 on mqhavm37
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info: LogActions:	
> Leave   drgxrde_rdqma	(Stopped)
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info: LogActions:	
> Leave   p_fs_drgxrde_rdqma	(Stopped)
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info: LogActions:	
> Leave   p_rdqmx_drgxrde_rdqma	(Stopped)
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info: LogActions:	
> Leave   p_drbd_dr_drgxrde_rdqma:0	(Stopped)
> Aug 11 12:33:36 [13309] mqhavm24    pengine:   notice: LogAction:
>  * Start      p_drbd_drgxrde_rdqma:0        ( mqhavm24 )
> Aug 11 12:33:36 [13309] mqhavm24    pengine:   notice: LogAction:
>  * Start      p_drbd_drgxrde_rdqma:1        ( mqhavm34 )
> Aug 11 12:33:36 [13309] mqhavm24    pengine:   notice: LogAction:
>  * Start      p_drbd_drgxrde_rdqma:2        ( mqhavm37 )
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info: LogActions:	
> Leave   drgxrde_rdqmb	(Stopped)
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info: LogActions:	
> Leave   p_fs_drgxrde_rdqmb	(Stopped)
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info: LogActions:	
> Leave   p_rdqmx_drgxrde_rdqmb	(Stopped)
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info: LogActions:	
> Leave   p_drbd_dr_drgxrde_rdqmb:0	(Stopped)
> Aug 11 12:33:36 [13309] mqhavm24    pengine:   notice: LogAction:
>  * Start      p_drbd_drgxrde_rdqmb:0        ( mqhavm24 )
> Aug 11 12:33:36 [13309] mqhavm24    pengine:   notice: LogAction:
>  * Start      p_drbd_drgxrde_rdqmb:1        ( mqhavm34 )
> Aug 11 12:33:36 [13309] mqhavm24    pengine:   notice: LogAction:
>  * Start      p_drbd_drgxrde_rdqmb:2        ( mqhavm37 )
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info: LogActions:	
> Leave   p_ip_drgxrde_rdqma	(Stopped)
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info: LogActions:	
> Leave   p_ip_drgxrde_rdqmb	(Stopped)
> 
> We have a plan: starting stuff everywhere (including on "local node"
> aka DC aka mqhavm24)
> 
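
(As an aside: the saved scheduler inputs referenced in these logs, e.g.
pe-input-688.bz2 below, can be replayed offline with crm_simulate,
which is usually the quickest way to see why a particular plan was
chosen. For example:

    crm_simulate --simulate --xml-file /var/lib/pacemaker/pengine/pe-input-688.bz2

Adding --show-scores prints the allocation scores behind each
placement decision.)
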
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_process_request:	Completed cib_modify operation for section
> nodes: OK (rc=0, origin=mqhavm37/crmd/299, version=0.172.13)
> Aug 11 12:33:36 [13309] mqhavm24    pengine:   notice:
> process_pe_message:	Calculated transition 161, saving inputs in
> /var/lib/pacemaker/pengine/pe-input-688.bz2
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	Diff: --- 0.172.13 2
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	Diff: +++ 0.172.14 (null)
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	+  /cib:  @num_updates=14
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	+  /cib/status/node_state[@id='1']:  @crm-
> debug-origin=post_cache_update
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	+  /cib/status/node_state[@id='2']:  @crm-
> debug-origin=post_cache_update
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	+  /cib/status/node_state[@id='3']:  @crm-
> debug-origin=post_cache_update
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_process_request:	Completed cib_modify operation for section
> status: OK (rc=0, origin=mqhavm37/crmd/300, version=0.172.14)
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	Diff: --- 0.172.14 2
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	Diff: +++ 0.172.15 (null)
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	+  /cib:  @num_updates=15
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++ /cib/status/node_state[@id='2']:  <transient_attributes id="2"/>
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++   <instance_attributes id="status-2">
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++     <nvpair id="status-2-rdqm-transient-attribute" name="rdqm-transient-attribute" value="1"/>
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++   </instance_attributes>
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	++ </transient_attributes>
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_process_request:	Completed cib_modify operation for section
> status: OK (rc=0, origin=mqhavm37/attrd/19, version=0.172.15)
> Aug 11 12:33:36 [13310] mqhavm24       crmd:     info:
> do_state_transition:	State transition S_POLICY_ENGINE ->
> S_TRANSITION_ENGINE | input=I_PE_SUCCESS cause=C_IPC_MESSAGE
> origin=handle_response
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	Diff: --- 0.172.15 2
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	Diff: +++ 0.172.16 (null)
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	+  /cib:  @num_updates=16
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_process_request:	Completed cib_modify operation for section
> status: OK (rc=0, origin=mqhavm37/attrd/20, version=0.172.16)
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	Diff: --- 0.172.16 2
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	Diff: +++ 0.172.17 (null)
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	+  /cib:  @num_updates=17
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_process_request:	Completed cib_modify operation for section
> status: OK (rc=0, origin=mqhavm37/attrd/21, version=0.172.17)
> Aug 11 12:33:36 [13310] mqhavm24       crmd:     info: do_te_invoke:	
> Processing graph 161 (ref=pe_calc-dc-1660217616-1277) derived from
> /var/lib/pacemaker/pengine/pe-input-688.bz2
> Aug 11 12:33:36 [13310] mqhavm24       crmd:   notice:
> abort_transition_graph:	Transition aborted by
> transient_attributes.2 'create': Transient attribute change |
> cib=0.172.15 source=abort_unless_down:356
> path=/cib/status/node_state[@id='2'] complete=false
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_process_request:	Completed cib_modify operation for section
> nodes: OK (rc=0, origin=mqhavm37/crmd/304, version=0.172.17)
> Aug 11 12:33:36 [13310] mqhavm24       crmd:   notice: run_graph:	
> Transition 161 (Complete=6, Pending=0, Fired=0, Skipped=6,
> Incomplete=24, Source=/var/lib/pacemaker/pengine/pe-input-688.bz2):
> Stopped
> Aug 11 12:33:36 [13310] mqhavm24       crmd:     info:
> do_state_transition:	State transition S_TRANSITION_ENGINE ->
> S_POLICY_ENGINE | input=I_PE_CALC cause=C_FSA_INTERNAL
> origin=notify_crmd
> 
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	Diff: --- 0.172.17 2
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	Diff: +++ 0.172.18 (null)
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	+  /cib:  @num_updates=18
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	+  /cib/status/node_state[@id='1']:  @crm-
> debug-origin=do_cib_replaced, @join=down

Here is the CIB update that messed up the node state. The update came
from the controller on mqhavm37, and it arrived after the join process
had completed, so it's not a join timing issue as originally suspected.

I have no idea why that node decided this one should be marked down,
but I'm pretty sure the current code wouldn't do that.
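
(For anyone comparing against their own cluster: the state that
replace wrote is visible in the live CIB with a standard XPath query,

    cibadmin --query --xpath "//node_state[@id='1']"

which at this point would have shown join="down" for node 1 even
though the node was demonstrably up.)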

Nodes *record* node state in the CIB, but they don't *learn* it from
the CIB; they learn it from the cluster layer. Only the scheduler uses
the node state from the CIB. This is why the local node (which knows
it's up via the cluster layer) treats this as nothing more than an
ordinary CIB update. Ideally, the controller would inspect CIB updates
for node state changes, and send a corrective update with its known
status whenever they don't match. That would recover from bugs like
this, as long as the other node doesn't keep re-sending the faulty
update.
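
Until the controller does that, a rough external check is possible
with stock tools. This is only a sketch (the poll interval and the
mismatch handling are arbitrary, and it only alerts, it doesn't
correct anything), but it compares the join state recorded in the CIB
against what the local controller itself reports:

    #!/bin/sh
    # Sketch: watch for the CIB's recorded join state disagreeing with
    # the local controller's own view. crm_node, cibadmin and crmadmin
    # are standard Pacemaker CLIs; everything else is illustrative.
    me=$(crm_node -n)          # local node name, per the cluster layer
    while sleep 10; do
        # join state as recorded in the CIB status section
        join=$(cibadmin --query --xpath "//node_state[@uname='$me']" \
               | sed -n 's/.*join="\([^"]*\)".*/\1/p')
        # what the controller on this node actually reports (e.g. S_IDLE)
        fsa=$(crmadmin --status "$me" 2>/dev/null)
        if [ "$join" != "member" ]; then
            echo "CIB records join=$join for $me; crmd says: $fsa"
        fi
    done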

> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	+  /cib/status/node_state[@id='2']:  @crm-
> debug-origin=do_cib_replaced
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	+  /cib/status/node_state[@id='3']:  @crm-
> debug-origin=do_cib_replaced
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_process_request:	Completed cib_modify operation for section
> status: OK (rc=0, origin=mqhavm37/crmd/305, version=0.172.18)
> 
> But now we are "join=down" ourselves :-(
> 
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	Diff: --- 0.172.18 2
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	Diff: +++ 0.172.19 (null)
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	+  /cib:  @num_updates=19
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_process_request:	Completed cib_modify operation for section
> status: OK (rc=0, origin=mqhavm24/attrd/139, version=0.172.19)
> Aug 11 12:33:36 [13308] mqhavm24      attrd:     info:
> attrd_cib_callback:	CIB update 139 result for master-
> p_drbd_dr_drgxrde_rdqma: OK | rc=0
> Aug 11 12:33:36 [13308] mqhavm24      attrd:     info:
> attrd_cib_callback:	* master-
> p_drbd_dr_drgxrde_rdqma[mqhavm24]=(null)
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	Diff: --- 0.172.19 2
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	Diff: +++ 0.172.20 (null)
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	+  /cib:  @num_updates=20
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_process_request:	Completed cib_modify operation for section
> status: OK (rc=0, origin=mqhavm24/attrd/140, version=0.172.20)
> Aug 11 12:33:36 [13308] mqhavm24      attrd:     info:
> attrd_cib_callback:	CIB update 140 result for master-
> p_drbd_dr_drgxrde_rdqmb: OK | rc=0
> Aug 11 12:33:36 [13308] mqhavm24      attrd:     info:
> attrd_cib_callback:	* master-
> p_drbd_dr_drgxrde_rdqmb[mqhavm24]=(null)
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_process_request:	Completed cib_modify operation for section
> status: OK (rc=0, origin=mqhavm24/attrd/141, version=0.172.20)
> Aug 11 12:33:36 [13308] mqhavm24      attrd:     info:
> attrd_cib_callback:	CIB update 141 result for rdqm-transient-
> attribute: OK | rc=0
> Aug 11 12:33:36 [13308] mqhavm24      attrd:     info:
> attrd_cib_callback:	* rdqm-transient-attribute[mqhavm34]=1
> Aug 11 12:33:36 [13308] mqhavm24      attrd:     info:
> attrd_cib_callback:	* rdqm-transient-attribute[mqhavm24]=1
> Aug 11 12:33:36 [13308] mqhavm24      attrd:     info:
> attrd_cib_callback:	* rdqm-transient-attribute[mqhavm37]=1
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	Diff: --- 0.172.20 2
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	Diff: +++ 0.172.21 (null)
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	+  /cib:  @num_updates=21
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_process_request:	Completed cib_modify operation for section
> status: OK (rc=0, origin=mqhavm24/attrd/142, version=0.172.21)
> Aug 11 12:33:36 [13308] mqhavm24      attrd:     info:
> attrd_cib_callback:	CIB update 142 result for master-
> p_drbd_drgxrde_rdqma: OK | rc=0
> Aug 11 12:33:36 [13308] mqhavm24      attrd:     info:
> attrd_cib_callback:	* master-p_drbd_drgxrde_rdqma[mqhavm34]=(null)
> Aug 11 12:33:36 [13308] mqhavm24      attrd:     info:
> attrd_cib_callback:	* master-p_drbd_drgxrde_rdqma[mqhavm24]=(null)
> Aug 11 12:33:36 [13308] mqhavm24      attrd:     info:
> attrd_cib_callback:	* master-p_drbd_drgxrde_rdqma[mqhavm37]=(null)
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	Diff: --- 0.172.21 2
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	Diff: +++ 0.172.22 (null)
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	+  /cib:  @num_updates=22
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_process_request:	Completed cib_modify operation for section
> status: OK (rc=0, origin=mqhavm24/attrd/143, version=0.172.22)
> Aug 11 12:33:36 [13308] mqhavm24      attrd:     info:
> attrd_cib_callback:	CIB update 143 result for master-
> p_drbd_drgxrde_rdqmb: OK | rc=0
> Aug 11 12:33:36 [13308] mqhavm24      attrd:     info:
> attrd_cib_callback:	* master-p_drbd_drgxrde_rdqmb[mqhavm34]=(null)
> Aug 11 12:33:36 [13308] mqhavm24      attrd:     info:
> attrd_cib_callback:	* master-p_drbd_drgxrde_rdqmb[mqhavm37]=(null)
> Aug 11 12:33:36 [13308] mqhavm24      attrd:     info:
> attrd_cib_callback:	* master-p_drbd_drgxrde_rdqmb[mqhavm24]=(null)
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	Diff: --- 0.172.22 2
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	Diff: +++ 0.172.23 (null)
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	+  /cib:  @num_updates=23
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_process_request:	Completed cib_modify operation for section
> status: OK (rc=0, origin=mqhavm37/attrd/24, version=0.172.23)
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	Diff: --- 0.172.23 2
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	Diff: +++ 0.172.24 (null)
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	+  /cib:  @num_updates=24
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_process_request:	Completed cib_modify operation for section
> status: OK (rc=0, origin=mqhavm37/attrd/25, version=0.172.24)
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_process_request:	Completed cib_modify operation for section
> status: OK (rc=0, origin=mqhavm37/attrd/26, version=0.172.24)
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	Diff: --- 0.172.24 2
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	Diff: +++ 0.172.25 (null)
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	+  /cib:  @num_updates=25
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_process_request:	Completed cib_modify operation for section
> status: OK (rc=0, origin=mqhavm37/attrd/27, version=0.172.25)
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	Diff: --- 0.172.25 2
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	Diff: +++ 0.172.26 (null)
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_perform_op:	+  /cib:  @num_updates=26
> Aug 11 12:33:36 [13305] mqhavm24        cib:     info:
> cib_process_request:	Completed cib_modify operation for section
> status: OK (rc=0, origin=mqhavm37/attrd/28, version=0.172.26)
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info:
> determine_online_status:	Node mqhavm37 is online
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info:
> determine_online_status:	Node mqhavm34 is online
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info:
> determine_op_status:	Operation monitor found resource
> p_drbd_drgxrde_rdqma:0 active on mqhavm37
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info:
> determine_op_status:	Operation monitor found resource
> p_drbd_drgxrde_rdqmb:0 active on mqhavm37
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info:
> unpack_node_loop:	Node 2 is already processed
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info:
> unpack_node_loop:	Node 3 is already processed
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info:
> unpack_node_loop:	Node 2 is already processed
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info:
> unpack_node_loop:	Node 3 is already processed
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info: common_print:	
> drgxrde_rdqma	(ocf::ibm:rdqm):	Stopped
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info: common_print:	
> p_fs_drgxrde_rdqma	(ocf::heartbeat:Filesystem):	Stopped
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info: common_print:	
> p_rdqmx_drgxrde_rdqma	(ocf::ibm:rdqmx):	Stopped
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info: clone_print:	
>  Master/Slave Set: ms_drbd_dr_drgxrde_rdqma [p_drbd_dr_drgxrde_rdqma]
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info: short_print:	
>      Stopped: [ mqhavm24 mqhavm34 mqhavm37 ]
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info: clone_print:	
>  Master/Slave Set: ms_drbd_drgxrde_rdqma [p_drbd_drgxrde_rdqma]
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info: short_print:	
>      Stopped: [ mqhavm24 mqhavm34 mqhavm37 ]
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info: common_print:	
> drgxrde_rdqmb	(ocf::ibm:rdqm):	Stopped
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info: common_print:	
> p_fs_drgxrde_rdqmb	(ocf::heartbeat:Filesystem):	Stopped
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info: common_print:	
> p_rdqmx_drgxrde_rdqmb	(ocf::ibm:rdqmx):	Stopped
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info: clone_print:	
>  Master/Slave Set: ms_drbd_dr_drgxrde_rdqmb [p_drbd_dr_drgxrde_rdqmb]
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info: short_print:	
>      Stopped: [ mqhavm24 mqhavm34 mqhavm37 ]
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info: clone_print:	
>  Master/Slave Set: ms_drbd_drgxrde_rdqmb [p_drbd_drgxrde_rdqmb]
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info: short_print:	
>      Stopped: [ mqhavm24 mqhavm34 mqhavm37 ]
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info: common_print:	
> p_ip_drgxrde_rdqma	(ocf::heartbeat:IPaddr2):	Stopped
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info: common_print:	
> p_ip_drgxrde_rdqmb	(ocf::heartbeat:IPaddr2):	Stopped
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info:
> pcmk__native_allocate:	Resource p_drbd_drgxrde_rdqma:2 cannot
> run anywhere
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info:
> pcmk__set_instance_roles:	ms_drbd_drgxrde_rdqma: Promoted 0
> instances of a possible 1 to master
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info:
> pcmk__native_merge_weights:	p_drbd_dr_drgxrde_rdqma:0: Rolling back
> optional scores from p_fs_drgxrde_rdqma
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info:
> pcmk__native_allocate:	Resource p_drbd_dr_drgxrde_rdqma:0
> cannot run anywhere
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info:
> pcmk__set_instance_roles:	ms_drbd_dr_drgxrde_rdqma: Promoted 0
> instances of a possible 1 to master
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info:
> pcmk__native_merge_weights:	p_fs_drgxrde_rdqma: Rolling back
> optional scores from p_rdqmx_drgxrde_rdqma
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info:
> pcmk__native_allocate:	Resource p_fs_drgxrde_rdqma cannot run
> anywhere
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info:
> pcmk__native_merge_weights:	p_rdqmx_drgxrde_rdqma: Rolling back
> optional scores from p_ip_drgxrde_rdqma
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info:
> pcmk__native_allocate:	Resource p_rdqmx_drgxrde_rdqma cannot
> run anywhere
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info:
> pcmk__native_merge_weights:	p_ip_drgxrde_rdqma: Rolling back
> optional scores from drgxrde_rdqma
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info:
> pcmk__native_allocate:	Resource p_ip_drgxrde_rdqma cannot run
> anywhere
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info:
> pcmk__native_allocate:	Resource drgxrde_rdqma cannot run
> anywhere
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info:
> pcmk__native_allocate:	Resource p_drbd_drgxrde_rdqmb:2 cannot
> run anywhere
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info:
> pcmk__set_instance_roles:	ms_drbd_drgxrde_rdqmb: Promoted 0
> instances of a possible 1 to master
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info:
> pcmk__native_merge_weights:	p_drbd_dr_drgxrde_rdqmb:0: Rolling back
> optional scores from p_fs_drgxrde_rdqmb
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info:
> pcmk__native_allocate:	Resource p_drbd_dr_drgxrde_rdqmb:0
> cannot run anywhere
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info:
> pcmk__set_instance_roles:	ms_drbd_dr_drgxrde_rdqmb: Promoted 0
> instances of a possible 1 to master
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info:
> pcmk__native_merge_weights:	p_fs_drgxrde_rdqmb: Rolling back
> optional scores from p_rdqmx_drgxrde_rdqmb
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info:
> pcmk__native_allocate:	Resource p_fs_drgxrde_rdqmb cannot run
> anywhere
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info:
> pcmk__native_merge_weights:	p_rdqmx_drgxrde_rdqmb: Rolling back
> optional scores from p_ip_drgxrde_rdqmb
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info:
> pcmk__native_allocate:	Resource p_rdqmx_drgxrde_rdqmb cannot
> run anywhere
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info:
> pcmk__native_merge_weights:	p_ip_drgxrde_rdqmb: Rolling back
> optional scores from drgxrde_rdqmb
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info:
> pcmk__native_allocate:	Resource p_ip_drgxrde_rdqmb cannot run
> anywhere
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info:
> pcmk__native_allocate:	Resource drgxrde_rdqmb cannot run
> anywhere
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info: RecurringOp:	
>  Start recurring monitor (20s) for p_drbd_drgxrde_rdqma:0 on mqhavm34
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info: RecurringOp:	
>  Start recurring monitor (20s) for p_drbd_drgxrde_rdqma:1 on mqhavm37
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info: RecurringOp:	
>  Start recurring monitor (20s) for p_drbd_drgxrde_rdqma:0 on mqhavm34
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info: RecurringOp:	
>  Start recurring monitor (20s) for p_drbd_drgxrde_rdqma:1 on mqhavm37
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info: RecurringOp:	
>  Start recurring monitor (20s) for p_drbd_drgxrde_rdqmb:0 on mqhavm34
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info: RecurringOp:	
>  Start recurring monitor (20s) for p_drbd_drgxrde_rdqmb:1 on mqhavm37
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info: RecurringOp:	
>  Start recurring monitor (20s) for p_drbd_drgxrde_rdqmb:0 on mqhavm34
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info: RecurringOp:	
>  Start recurring monitor (20s) for p_drbd_drgxrde_rdqmb:1 on mqhavm37
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info: LogActions:	
> Leave   drgxrde_rdqma	(Stopped)
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info: LogActions:	
> Leave   p_fs_drgxrde_rdqma	(Stopped)
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info: LogActions:	
> Leave   p_rdqmx_drgxrde_rdqma	(Stopped)
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info: LogActions:	
> Leave   p_drbd_dr_drgxrde_rdqma:0	(Stopped)
> Aug 11 12:33:36 [13309] mqhavm24    pengine:   notice: LogAction:
>  * Start      p_drbd_drgxrde_rdqma:0        ( mqhavm34 )
> Aug 11 12:33:36 [13309] mqhavm24    pengine:   notice: LogAction:
>  * Start      p_drbd_drgxrde_rdqma:1        ( mqhavm37 )
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info: LogActions:	
> Leave   p_drbd_drgxrde_rdqma:2	(Stopped)
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info: LogActions:	
> Leave   drgxrde_rdqmb	(Stopped)
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info: LogActions:	
> Leave   p_fs_drgxrde_rdqmb	(Stopped)
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info: LogActions:	
> Leave   p_rdqmx_drgxrde_rdqmb	(Stopped)
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info: LogActions:	
> Leave   p_drbd_dr_drgxrde_rdqmb:0	(Stopped)
> Aug 11 12:33:36 [13309] mqhavm24    pengine:   notice: LogAction:
>  * Start      p_drbd_drgxrde_rdqmb:0        ( mqhavm34 )
> Aug 11 12:33:36 [13309] mqhavm24    pengine:   notice: LogAction:
>  * Start      p_drbd_drgxrde_rdqmb:1        ( mqhavm37 )
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info: LogActions:	
> Leave   p_drbd_drgxrde_rdqmb:2	(Stopped)
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info: LogActions:	
> Leave   p_ip_drgxrde_rdqma	(Stopped)
> Aug 11 12:33:36 [13309] mqhavm24    pengine:     info: LogActions:	
> Leave   p_ip_drgxrde_rdqmb	(Stopped)
> Aug 11 12:33:36 [13309] mqhavm24    pengine:   notice:
> process_pe_message:	Calculated transition 162, saving inputs in
> /var/lib/pacemaker/pengine/pe-input-689.bz2
> 
> Which made us change the plan: we no longer start anything locally.
> Still, we continue to manage the other nodes, because we are still
> the DC.
> 
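
(That split view is exactly what you would see from the shell at this
point, with the two layers disagreeing about the same node:

    corosync-quorumtool -l    # membership per corosync: all three nodes
    crm_mon -1                # pacemaker's CIB view: mqhavm24 offline

Both are stock tools; the point is just that each reports a different
layer.)
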
> Aug 11 12:33:36 [13310] mqhavm24       crmd:     info:
> do_state_transition:	State transition S_POLICY_ENGINE ->
> S_TRANSITION_ENGINE | input=I_PE_SUCCESS cause=C_IPC_MESSAGE
> origin=handle_response
> Aug 11 12:33:36 [13310] mqhavm24       crmd:     info: do_te_invoke:	
> Processing graph 162 (ref=pe_calc-dc-1660217616-1278) derived from
> /var/lib/pacemaker/pengine/pe-input-689.bz2
> Aug 11 12:33:36 [13310] mqhavm24       crmd:   notice:
> te_rsc_command:	Initiating start operation
> p_drbd_drgxrde_rdqma_start_0 on mqhavm34 | action 25
> Aug 11 12:33:36 [13310] mqhavm24       crmd:   notice:
> te_rsc_command:	Initiating start operation
> p_drbd_drgxrde_rdqma:1_start_0 on mqhavm37 | action 27
> Aug 11 12:33:36 [13310] mqhavm24       crmd:   notice:
> te_rsc_command:	Initiating start operation
> p_drbd_drgxrde_rdqmb_start_0 on mqhavm34 | action 77
> Aug 11 12:33:36 [13310] mqhavm24       crmd:   notice:
> te_rsc_command:	Initiating start operation
> p_drbd_drgxrde_rdqmb:1_start_0 on mqhavm37 | action 79
> Aug 11 12:33:42 [13310] mqhavm24       crmd:   notice:
> te_rsc_command:	Initiating notify operation
> p_drbd_drgxrde_rdqmb_post_notify_start_0 on mqhavm34 | action 133
> Aug 11 12:33:42 [13310] mqhavm24       crmd:   notice:
> te_rsc_command:	Initiating notify operation
> p_drbd_drgxrde_rdqmb:1_post_notify_start_0 on mqhavm37 | action 134
> Aug 11 12:33:42 [13310] mqhavm24       crmd:   notice:
> te_rsc_command:	Initiating notify operation
> p_drbd_drgxrde_rdqma_post_notify_start_0 on mqhavm34 | action 131
> Aug 11 12:33:42 [13310] mqhavm24       crmd:   notice:
> te_rsc_command:	Initiating notify operation
> p_drbd_drgxrde_rdqma:1_post_notify_start_0 on mqhavm37 | action 132
> Aug 11 12:33:42 [13310] mqhavm24       crmd:   notice:
> te_rsc_command:	Initiating monitor operation
> p_drbd_drgxrde_rdqma_monitor_20000 on mqhavm34 | action 26
> Aug 11 12:33:42 [13310] mqhavm24       crmd:   notice:
> te_rsc_command:	Initiating monitor operation
> p_drbd_drgxrde_rdqma:1_monitor_20000 on mqhavm37 | action 28
> Aug 11 12:33:42 [13310] mqhavm24       crmd:   notice:
> te_rsc_command:	Initiating monitor operation
> p_drbd_drgxrde_rdqmb_monitor_20000 on mqhavm34 | action 78
> Aug 11 12:33:42 [13310] mqhavm24       crmd:   notice:
> te_rsc_command:	Initiating monitor operation
> p_drbd_drgxrde_rdqmb:1_monitor_20000 on mqhavm37 | action 80
> Aug 11 12:33:42 [13310] mqhavm24       crmd:   notice: run_graph:	
> Transition 162 (Complete=24, Pending=0, Fired=0, Skipped=0,
> Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-689.bz2):
> Complete
> Aug 11 12:33:42 [13310] mqhavm24       crmd:   notice:
> do_state_transition:	State transition S_TRANSITION_ENGINE -> S_IDLE
> | input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd
> 
> Cluster state remains stable, stuff is started on the other two
> nodes,
> just not locally, because "join=down".
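
For the record, the crude way out of that state (barring anything in
the configuration that prevents it) is to restart pacemaker on the
mis-marked node, which forces a fresh join and DC election and
rebuilds its node_state entry:

    systemctl restart pacemaker    # on mqhavm24
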
-- 
Ken Gaillot <kgaillot at redhat.com>


