[Pacemaker] Why can't I migrate a group resource colocated with a DRBD resource?

Andrew Beekhof andrew at beekhof.net
Mon Feb 4 21:16:48 EST 2013


On Tue, Feb 5, 2013 at 1:03 PM, Andrew Beekhof <andrew at beekhof.net> wrote:
> On Tue, Feb 5, 2013 at 1:03 PM, Andrew Beekhof <andrew at beekhof.net> wrote:
>> On Wed, Jan 23, 2013 at 10:04 PM, and k <not4mad at gmail.com> wrote:
>>> Hello Everybody,
>>>
>>> I've got a problem (though I'm not quite sure whether it is in fact a feature of
>>> Pacemaker), which is why I decided to write to this mailing list.
>>>
>>> It concerns migrating a resource that is colocated with a DRBD resource.
>>>
>>> I've got a group containing a virtual IP and a filesystem, colocated with an ms
>>> DRBD resource in a master-slave configuration.
>>>
>>> ============
>>> Last updated: Wed Jan 23 03:40:55 2013
>>> Stack: openais
>>> Current DC: drbd01 - partition with quorum
>>> Version: 1.0.9-74392a28b7f31d7ddc86689598bd23114f58978b
>>> ============
>>>
>>> Online: [ drbd01 drbd02 ]
>>>
>>>  Master/Slave Set: ms_drbd
>>>      Masters: [ drbd01 ]
>>>      Slaves: [ drbd02 ]
>>>  Resource Group: IP-AND-FS
>>>      fs_r1      (ocf::heartbeat:Filesystem):    Started drbd01
>>>      VIRTUAL-IP (ocf::heartbeat:IPaddr):        Started drbd01
>>>
>>> I would like to migrate that group manually to the other node, which is the slave,
>>> so I type in: crm resource migrate IP-AND-FS drbd02
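>>>
>>> (According to the log further down, the shell translates this into the low-level
>>> call: crm_resource -M -r IP-AND-FS --node=drbd02, so both forms should behave the
>>> same.)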
>>>
>>> After that, the configuration includes this additional constraint:
>>>
>>> location cli-prefer-IP-AND-FS IP-AND-FS \
>>>         rule $id="cli-prefer-rule-IP-AND-FS" inf: #uname eq drbd02
>>
>> That location constraint is incomplete; there needs to be a score (how
>> much do you prefer drbd02) there.
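>> For comparison, the plain node-preference form with an explicit score, written by
>> hand in crm syntax, would be something like:
>>
>>     location cli-prefer-IP-AND-FS IP-AND-FS inf: drbd02
>>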
>> Which command did you run to (try to) initiate the resource move?
>
> Ah, I see a later reply has the answer to this.
> Looks like a bug in crm_resource :(

Looking at the source code, it seems to have been fixed since then.
Could you try something more recent?
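
In the meantime, adding the preference by hand should also do it (an untested
sketch; substitute your own resource and node names):

    crm configure location cli-prefer-IP-AND-FS IP-AND-FS inf: drbd02
    # ... wait until ms_drbd is promoted and IP-AND-FS is started on drbd02 ...
    crm resource unmigrate IP-AND-FS

unmigrate should remove the cli-prefer constraint again, handing placement back
to the normal stickiness/colocation rules.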

>
>>
>>>
>>> and in logs i see:
>>>
>>> Jan 23 11:30:12 drbd02 cibadmin: [1126]: info: Invoked: cibadmin -Ql -o resources
>>> Jan 23 11:30:12 drbd02 cibadmin: [1129]: info: Invoked: cibadmin -Ql -o nodes
>>> Jan 23 11:30:12 drbd02 cibadmin: [1131]: info: Invoked: cibadmin -Ql -o resources
>>> Jan 23 11:30:12 drbd02 cibadmin: [1133]: info: Invoked: cibadmin -Ql -o nodes
>>> Jan 23 11:30:12 drbd02 cibadmin: [1135]: info: Invoked: cibadmin -Ql -o resources
>>> Jan 23 11:30:12 drbd02 cibadmin: [1137]: info: Invoked: cibadmin -Ql -o nodes
>>> Jan 23 11:30:14 drbd02 cibadmin: [1166]: info: Invoked: cibadmin -Ql -o resources
>>> Jan 23 11:30:14 drbd02 cibadmin: [1168]: info: Invoked: cibadmin -Ql -o nodes
>>> Jan 23 11:30:14 drbd02 cibadmin: [1170]: info: Invoked: cibadmin -Ql -o resources
>>> Jan 23 11:30:14 drbd02 cibadmin: [1172]: info: Invoked: cibadmin -Ql -o nodes
>>> Jan 23 11:30:16 drbd02 cibadmin: [1174]: info: Invoked: cibadmin -Ql -o resources
>>> Jan 23 11:30:16 drbd02 cibadmin: [1176]: info: Invoked: cibadmin -Ql -o nodes
>>> Jan 23 11:30:16 drbd02 cibadmin: [1178]: info: Invoked: cibadmin -Ql -o resources
>>> Jan 23 11:30:16 drbd02 cibadmin: [1180]: info: Invoked: cibadmin -Ql -o nodes
>>> Jan 23 11:30:40 drbd02 cibadmin: [1211]: info: Invoked: cibadmin -Ql -o nodes
>>> Jan 23 11:30:40 drbd02 crm_resource: [1213]: info: Invoked: crm_resource -M -r IP-AND-FS --node=drbd02
>>> Jan 23 11:30:40 drbd02 cib: [1214]: info: write_cib_contents: Archived previous version as /var/lib/heartbeat/crm/cib-73.raw
>>> Jan 23 11:30:40 drbd02 cib: [1214]: info: write_cib_contents: Wrote version 0.225.0 of the CIB to disk (digest: 166251193cbe1e0b9314ab07358accca)
>>> Jan 23 11:30:40 drbd02 cib: [1214]: info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.tk72Ft (digest: /var/lib/heartbeat/crm/cib.hF2UsS)
>>> Jan 23 11:30:44 drbd02 cib: [30098]: info: cib_stats: Processed 153 operations (1437.00us average, 0% utilization) in the last 10min
>>>
>>> but nothing happened: the resource group is still active on the drbd01 node, and
>>> there was no new master promotion.
>>>
>>> Shouldn't Pacemaker automatically promote the second node to master and move my
>>> resource group?
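>>>
>>> (My reasoning: the FS_WITH_DRBD colocation ties the group to ms_drbd:Master with
>>> an inf: score, and DRBD_BEF_FS orders the promote before the group start, so I
>>> would expect the move to trigger roughly: demote drbd on drbd01, promote it on
>>> drbd02, then start IP-AND-FS on drbd02.)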
>>>
>>>
>>> Below is my test configuration; I would appreciate any help:
>>>
>>> crm(live)# configure show
>>> node drbd01 \
>>>         attributes standby="off"
>>> node drbd02 \
>>>         attributes standby="off"
>>> primitive VIRTUAL-IP ocf:heartbeat:IPaddr \
>>>         params ip="10.11.11.111"
>>> primitive drbd ocf:linbit:drbd \
>>>         params drbd_resource="r1" \
>>>         op start interval="0" timeout="240" \
>>>         op stop interval="0" timeout="100" \
>>>         op monitor interval="59s" role="Master" timeout="30s" \
>>>         op monitor interval="60s" role="Slave" timeout="30s"
>>> primitive fs_r1 ocf:heartbeat:Filesystem \
>>>         params device="/dev/drbd1" directory="/mnt" fstype="ext3" \
>>>         op start interval="0" timeout="60" \
>>>         op stop interval="0" timeout="120" \
>>>         meta allow-migrate="true"
>>> group IP-AND-FS fs_r1 VIRTUAL-IP \
>>>         meta target-role="Started"
>>> ms ms_drbd drbd \
>>>         meta master-node-max="1" clone-max="2" clone-node-max="1" \
>>>         globally-unique="false" notify="true" target-role="Master"
>>> location cli-prefer-IP-AND-FS IP-AND-FS \
>>>         rule $id="cli-prefer-rule-IP-AND-FS" inf: #uname eq drbd02
>>> colocation FS_WITH_DRBD inf: IP-AND-FS ms_drbd:Master
>>> order DRBD_BEF_FS inf: ms_drbd:promote IP-AND-FS:start
>>> property $id="cib-bootstrap-options" \
>>>         dc-version="1.0.9-74392a28b7f31d7ddc86689598bd23114f58978b" \
>>>         cluster-infrastructure="openais" \
>>>         expected-quorum-votes="2" \
>>>         stonith-enabled="false" \
>>>         no-quorum-policy="ignore" \
>>>         last-lrm-refresh="1358868655" \
>>>         default-resource-stickiness="1"
>>>
>>> Regards
>>> Andrew
>>>
>>>
>>> _______________________________________________
>>> Pacemaker mailing list: Pacemaker at oss.clusterlabs.org
>>> http://oss.clusterlabs.org/mailman/listinfo/pacemaker
>>>
>>> Project Home: http://www.clusterlabs.org
>>> Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
>>> Bugs: http://bugs.clusterlabs.org
>>>



