<div dir="ltr"><div>I created a multi-state resource ms-2be6c088-a1fa-464a-b00d-f4bccb4f5af2 (vha-2be6c088-a1fa-464a-b00d-f4bccb4f5af2).</div><div><br></div><div>Here is the configuration:</div><div>==========================</div>
<div>[root@vsanqa11 ~]# pcs config</div><div>Cluster Name: vsanqa11_12</div><div>Corosync Nodes:</div><div><br></div><div>Pacemaker Nodes:</div><div> vsanqa11 vsanqa12</div><div><br></div><div>Resources:</div><div> Master: ms-2be6c088-a1fa-464a-b00d-f4bccb4f5af2</div>
<div> Meta Attrs: clone-max=2 globally-unique=false target-role=started</div><div> Resource: vha-2be6c088-a1fa-464a-b00d-f4bccb4f5af2 (class=ocf provider=heartbeat type=vgc-cm-agent.ocf)</div><div> Attributes: cluster_uuid=2be6c088-a1fa-464a-b00d-f4bccb4f5af2</div>
<div> Operations: monitor interval=30s role=Master timeout=100s (vha-2be6c088-a1fa-464a-b00d-f4bccb4f5af2-monitor-interval-30s)</div><div> monitor interval=31s role=Slave timeout=100s (vha-2be6c088-a1fa-464a-b00d-f4bccb4f5af2-monitor-interval-31s)</div>
<div><br></div><div>Stonith Devices:</div><div>Fencing Levels:</div><div><br></div><div>Location Constraints:</div><div> Resource: ms-2be6c088-a1fa-464a-b00d-f4bccb4f5af2</div><div> Enabled on: vsanqa11 (score:INFINITY) (id:location-ms-2be6c088-a1fa-464a-b00d-f4bccb4f5af2-vsanqa11-INFINITY)</div>
<div> Enabled on: vsanqa12 (score:INFINITY) (id:location-ms-2be6c088-a1fa-464a-b00d-f4bccb4f5af2-vsanqa12-INFINITY)</div><div> Resource: vha-2be6c088-a1fa-464a-b00d-f4bccb4f5af2</div><div> Enabled on: vsanqa11 (score:INFINITY) (id:location-vha-2be6c088-a1fa-464a-b00d-f4bccb4f5af2-vsanqa11-INFINITY)</div>
<div> Enabled on: vsanqa12 (score:INFINITY) (id:location-vha-2be6c088-a1fa-464a-b00d-f4bccb4f5af2-vsanqa12-INFINITY)</div><div>Ordering Constraints:</div><div>Colocation Constraints:</div><div><br></div><div>Cluster Properties:</div>
<div> cluster-infrastructure: cman</div><div> dc-version: 1.1.10-14.el6_5.2-368c726</div><div> last-lrm-refresh: 1399466204</div><div> no-quorum-policy: ignore</div><div> stonith-enabled: false</div><div><br></div><div>==============================================</div>
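<div><br></div><div>For reference, the resource was created with commands along these lines (I am reconstructing this from memory, so the exact option syntax may differ slightly from what our scripts run):</div><div><br></div><div># pcs resource create vha-2be6c088-a1fa-464a-b00d-f4bccb4f5af2 ocf:heartbeat:vgc-cm-agent.ocf cluster_uuid=2be6c088-a1fa-464a-b00d-f4bccb4f5af2 op monitor interval=30s role=Master timeout=100s op monitor interval=31s role=Slave timeout=100s</div><div># pcs resource master ms-2be6c088-a1fa-464a-b00d-f4bccb4f5af2 vha-2be6c088-a1fa-464a-b00d-f4bccb4f5af2 clone-max=2 globally-unique=false</div><div><br></div>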
<div>When I try to create and delete this resource in a loop, the delete fails after a few iterations, as shown below. This is easy to reproduce. I make sure to unclone the resource before deleting it, and the unclone succeeds.</div>
<div><br></div><div>Removing Constraint - location-vha-2be6c088-a1fa-464a-b00d-f4bccb4f5af2-vsanqa11-INFINITY</div><div>Removing Constraint - location-vha-2be6c088-a1fa-464a-b00d-f4bccb4f5af2-vsanqa12-INFINITY</div><div>Attempting to stop: vha-2be6c088-a1fa-464a-b00d-f4bccb4f5af2...Error: Unable to stop: vha-2be6c088-a1fa-464a-b00d-f4bccb4f5af2 before deleting (re-run with --force to force deletion)</div>
<div>Failed to delete resource with uuid: 2be6c088-a1fa-464a-b00d-f4bccb4f5af2</div><div><br></div><div>==============================================</div><div><br></div><div>Log file snippet of relevant time</div><div>============================================</div>
<div><br></div><div>May 7 07:20:12 vsanqa12 vgc-vha-config: /usr/bin/vgc-vha-config --stop /dev/vgca0_vha</div><div>May 7 07:20:12 vsanqa12 crmd[4319]: notice: do_state_transition: State transition S_NOT_DC -> S_PENDING [ input=I_PENDING cause=C_FSA_INTERNAL origin=do_election_count_vote ]</div>
<div>May 7 07:20:12 vsanqa12 kernel: VGC: [0000006711341b03:I] Stopped vHA/vShare instance /dev/vgca0_vha</div><div>May 7 07:20:12 vsanqa12 stonith-ng[4315]: notice: unpack_config: On loss of CCM Quorum: Ignore</div><div>
May 7 07:20:12 vsanqa12 vgc-vha-config: Success</div><div>May 7 07:20:13 vsanqa12 stonith-ng[4315]: notice: unpack_config: On loss of CCM Quorum: Ignore</div><div>May 7 07:20:13 vsanqa12 attrd[4317]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-vha-2be6c088-a1fa-464a-b00d-f4bccb4f5af2 (<null>)</div>
<div>May 7 07:20:13 vsanqa12 attrd[4317]: notice: attrd_perform_update: Sent delete 4404: node=vsanqa12, attr=master-vha-2be6c088-a1fa-464a-b00d-f4bccb4f5af2, id=<n/a>, set=(null), section=status</div><div>May 7 07:20:13 vsanqa12 crmd[4319]: notice: process_lrm_event: LRM operation vha-2be6c088-a1fa-464a-b00d-f4bccb4f5af2_stop_0 (call=1379, rc=0, cib-update=1161, confirmed=true) ok</div>
<div>May 7 07:20:13 vsanqa12 attrd[4317]: notice: attrd_perform_update: Sent delete 4406: node=vsanqa12, attr=master-vha-2be6c088-a1fa-464a-b00d-f4bccb4f5af2, id=<n/a>, set=(null), section=status</div><div>May 7 07:20:13 vsanqa12 attrd[4317]: warning: attrd_cib_callback: Update 4404 for master-vha-2be6c088-a1fa-464a-b00d-f4bccb4f5af2=(null) failed: Application of an update diff failed</div>
<div>May 7 07:20:13 vsanqa12 cib[4314]: warning: cib_process_diff: Diff 0.6804.2 -> 0.6804.3 from vsanqa11 not applied to 0.6804.2: Failed application of an update diff</div><div>May 7 07:20:13 vsanqa12 cib[4314]: notice: cib_server_process_diff: Not applying diff 0.6804.3 -> 0.6804.4 (sync in progress)</div>
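<div><br></div><div>The loop that reproduces this is essentially the following (the iteration count is arbitrary; it usually fails well before 50 iterations):</div><div><br></div><div># for i in $(seq 1 50); do</div><div>#     ... (re)create the resource and master as described earlier, then:</div><div>#     pcs resource unclone vha-2be6c088-a1fa-464a-b00d-f4bccb4f5af2</div><div>#     pcs resource delete vha-2be6c088-a1fa-464a-b00d-f4bccb4f5af2</div><div># done</div>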
<div><br></div><div><br></div><div>[root@vsanqa12 ~]# pcs status</div><div>Cluster name: vsanqa11_12</div><div>Last updated: Wed May 7 07:24:29 2014</div><div>Last change: Wed May 7 07:20:13 2014 via crm_resource on vsanqa11</div>
<div>Stack: cman</div><div>Current DC: vsanqa11 - partition with quorum</div><div>Version: 1.1.10-14.el6_5.2-368c726</div><div>2 Nodes configured</div><div>1 Resources configured</div><div><br></div><div><br></div><div>Online: [ vsanqa11 vsanqa12 ]</div>
<div><br></div><div>Full list of resources:</div><div><br></div><div> vha-2be6c088-a1fa-464a-b00d-f4bccb4f5af2 (ocf::heartbeat:vgc-cm-agent.ocf): Stopped</div><div><br></div><div><br></div></div>