[ClusterLabs] Master-Slave resource restarted after configuration change

Ilia Sokolinski ilia at clearskydata.com
Thu Jun 9 23:35:28 EDT 2016


Hi,

We have a custom Master-Slave resource running on a 3-node pcs cluster on CentOS 7.1

As part of what is supposed to be a non-disruptive update (NDU), we update some properties of the resource.
For some reason this causes both the Master and the Slave instances of the resource to be restarted.
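
The update itself is just a parameter change via pcs; a minimal sketch of
what we run ("some_param" is a placeholder, not the real parameter name of
our custom agent):

  # Placeholder parameter update on the master/slave primitive; the
  # actual parameter we change belongs to our custom resource agent.
  pcs resource update L3 some_param=new_value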

Since a restart takes a fairly long time for us, the update becomes very disruptive.

Is this expected? 
We have not seen this behavior with the previous release of pacemaker.
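
My understanding (which may be wrong) is that Pacemaker 1.1 will reload
rather than restart only when the agent advertises a reload action and the
changed parameter is declared unique="0" in its metadata. This is a sketch
of how we inspect agent metadata (path shown for IPaddr2; our custom agent
lives in its own provider directory):

  # Dump the agent metadata and look for a reload action and the
  # unique= flags on its parameters.
  OCF_ROOT=/usr/lib/ocf /usr/lib/ocf/resource.d/heartbeat/IPaddr2 meta-data \
      | grep -E 'action name="reload"|unique='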


Jun 10 02:06:11 dev-ceph02 crmd[30570]: notice: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Jun 10 02:06:11 dev-ceph02 attrd[30568]: notice: Updating all attributes after cib_refresh_notify event
Jun 10 02:06:11 dev-ceph02 crmd[30570]: notice: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_TIMER_POPPED origin=election_timeout_popped ]
Jun 10 02:06:11 dev-ceph02 crmd[30570]: warning: FSA: Input I_ELECTION_DC from do_election_check() received in state S_INTEGRATION

Jun 10 02:06:12 dev-ceph02 pengine[30569]: notice: Restart L3:0	(Master d-l303-a.dev-bos.csdops.net)
Jun 10 02:06:12 dev-ceph02 pengine[30569]: notice: Restart L3:1	(Slave d-l303-b.dev-bos.csdops.net)
Jun 10 02:06:12 dev-ceph02 pengine[30569]: notice: Calculated Transition 4845: /var/lib/pacemaker/pengine/pe-input-2934.bz2
Jun 10 02:06:12 dev-ceph02 crmd[30570]: notice: Initiating action 63: demote L3_demote_0 on d-l303-a.dev-bos.csdops.net
Jun 10 02:06:14 dev-ceph02 crmd[30570]: notice: Initiating action 64: stop L3_stop_0 on d-l303-a.dev-bos.csdops.net
Jun 10 02:06:14 dev-ceph02 crmd[30570]: notice: Initiating action 66: stop L3_stop_0 on d-l303-b.dev-bos.csdops.net
Jun 10 02:06:15 dev-ceph02 crmd[30570]: notice: Initiating action 17: start L3_start_0 on d-l303-a.dev-bos.csdops.net
Jun 10 02:06:15 dev-ceph02 crmd[30570]: notice: Initiating action 18: start L3_start_0 on d-l303-b.dev-bos.csdops.net
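
If it helps, I can replay the saved transition input referenced above to
show why the restarts were chosen; e.g. on the DC:

  # Re-run the scheduler against the saved input from the log;
  # -S simulates the transition, -x points at the pe-input file.
  crm_simulate -S -x /var/lib/pacemaker/pengine/pe-input-2934.bz2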


Here is the cluster configuration:

pcs status
Cluster name: L3_cluster
Last updated: Fri Jun 10 03:17:31 2016		Last change: Fri Jun 10 02:06:11 2016 by root via cibadmin on d-l303-a.dev-bos.csdops.net
Stack: corosync
Current DC: dev-ceph02.dev-bos.csdops.net (version 1.1.13-a14efad) - partition with quorum
3 nodes and 12 resources configured

Online: [ d-l303-a.dev-bos.csdops.net d-l303-b.dev-bos.csdops.net dev-ceph02.dev-bos.csdops.net ]

Full list of resources:

 idrac-d-l303-b.dev-bos.csdops.net	(stonith:fence_idrac):	Started dev-ceph02.dev-bos.csdops.net
 idrac-d-l303-a.dev-bos.csdops.net	(stonith:fence_idrac):	Started d-l303-b.dev-bos.csdops.net
 noop-dev-ceph02.dev-bos.csdops.net	(stonith:fence_noop):	Started d-l303-a.dev-bos.csdops.net
 L3-5bb92-0-ip	(ocf::heartbeat:IPaddr2):	Started d-l303-a.dev-bos.csdops.net
 Master/Slave Set: L3-5bb92-0-master [L3-5bb92-0]
     Masters: [ d-l303-a.dev-bos.csdops.net ]
     Slaves: [ d-l303-b.dev-bos.csdops.net ]
 L3-86a2c-1-ip	(ocf::heartbeat:IPaddr2):	Started d-l303-b.dev-bos.csdops.net
 Master/Slave Set: L3-86a2c-1-master [L3-86a2c-1]
     Masters: [ d-l303-b.dev-bos.csdops.net ]
     Slaves: [ d-l303-a.dev-bos.csdops.net ]
 L3-ip	(ocf::heartbeat:IPaddr2):	Started d-l303-a.dev-bos.csdops.net
 Master/Slave Set: L3-master [L3]
     Masters: [ d-l303-a.dev-bos.csdops.net ]
     Slaves: [ d-l303-b.dev-bos.csdops.net ]

PCSD Status:
  d-l303-b.dev-bos.csdops.net: Online
  d-l303-a.dev-bos.csdops.net: Online
  dev-ceph02.dev-bos.csdops.net: Online

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled

Rpms:

pcs-0.9.137-13.el7_1.4.x86_64
pacemaker-cluster-libs-1.1.12-22.el7_1.4.x86_64
pacemaker-cli-1.1.12-22.el7_1.4.x86_64
pacemaker-libs-1.1.12-22.el7_1.4.x86_64
pacemaker-1.1.12-22.el7_1.4.x86_64
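
One thing I notice: the DC above reports version 1.1.13 while these rpms
are 1.1.12, so this may be a mixed-version cluster mid-upgrade. A quick
check across the nodes, assuming ssh access:

  # Compare the installed pacemaker package on all three nodes.
  for n in d-l303-a d-l303-b dev-ceph02; do
      ssh "$n.dev-bos.csdops.net" rpm -q pacemaker
  done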


Thanks a lot

Ilia Sokolinski