[Pacemaker] Colocation constraint to External Managed Resource

Robert H. pacemaker@elconas.de
Tue Oct 15 17:28:14 EDT 2013


Hi,

I finally got it working.

I had to set cluster-recheck-interval="5m" (or some other value) and set 
the resource's failure-timeout to the same value (failure-timeout="5m"). 
This causes a "probe" after 5 minutes, after which the cluster shows the 
correct state and the policy engine re-evaluates it.
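(To watch that happen, crm_mon's failcount display is handy, assuming 
your crm_mon supports the -f/--failcounts option:

[root@NODE2 ~]# crm_mon -1 -f     # one-shot status including failcounts

Once the failure-timeout has expired and the recheck has run, the 
failcount for mysql-percona should be gone and the displayed state 
should be current again.)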

So the very stripped-down config looks like this:

primitive mysql-percona lsb:mysql \
         op start enabled="false" interval="0" \
         op stop enabled="false" interval="0" \
         op monitor enabled="true" timeout="20s" interval="10s" \
         op monitor enabled="true" timeout="20s" interval="11s" role="Stopped" \
         meta migration-threshold="2" failure-timeout="5m" is-managed="false"
clone CLONE-percona mysql-percona \
         meta clone-max="2" clone-node-max="1" is-managed="false"
property $id="cib-bootstrap-options" \
         .. many more options ...
         cluster-recheck-interval="5m"

With this config and some location constraints, the cluster 
automatically moves virtual IPs away from nodes that do not have 
Percona running.
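
For illustration only (these are not the exact constraints from this 
cluster): a colocation of a virtual IP with the clone could look roughly 
like this, assuming a hypothetical IPaddr2 primitive named VIP:

# illustrative names and address, adjust to your setup
primitive VIP ocf:heartbeat:IPaddr2 \
         params ip="192.168.0.100" cidr_netmask="24" \
         op monitor interval="10s"
colocation VIP-with-percona inf: VIP CLONE-percona

The inf: score keeps VIP only on nodes where a CLONE-percona instance is 
reported as running, so once a monitor marks percona as stopped on a 
node, the IP is moved away from it.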

Thanks for the tips,
Robert


On 14.10.2013 12:07, Robert H. wrote:
> Hi,
>
> one more note:
>
> When I clean up the resource, the monitor operation is triggered and
> the result is as expected:
>
> [root@NODE2 ~]# crm_resource --resource mysql-percona --cleanup --node NODE2
> Cleaning up mysql-percona:0 on NODE2
> Waiting for 1 replies from the CRMd. OK
>
>  Clone Set: CLONE-percona [mysql-percona] (unmanaged)
>      mysql-percona:0    (lsb:mysql):    Started NODE1 (unmanaged)
>      mysql-percona:1    (lsb:mysql):    Started NODE2 (unmanaged)
>
>
> I assumed that the failure-timeout="xxx" would cause the cleanup to be
> done automatically. Am I wrong?
>
> Can I tell Pacemaker to perform a "cleanup" automatically from time
> to time (I don't want to use cron...)?
>
> Regards,
> Robert
>
>
> On 14.10.2013 11:30, Robert H. wrote:
>>> You probably also want to monitor even if pacemaker thinks this is
>>> supposed to be stopped.
>>>
>>> 	op monitor interval=11s timeout=20s role=Stopped
>>>
>>
>> I added this:
>>
>> primitive mysql-percona lsb:mysql \
>>         op start enabled="false" interval="0" \
>>         op stop enabled="false" interval="0" \
>>         op monitor enabled="true" timeout="20s" interval="10s" \
>>         op monitor enabled="true" timeout="20s" interval="11s" role="Stopped" \
>>         meta migration-threshold="2" failure-timeout="30s" is-managed="false"
>>
>> However, after a reboot of NODE2, the resource stays at:
>>
>>  Clone Set: CLONE-percona [mysql-percona] (unmanaged)
>>      mysql-percona:0    (lsb:mysql):    Started NODE1 (unmanaged)
>>      Stopped: [ mysql-percona:1 ]
>>
>> But mysql is running:
>>
>> [root@NODE2 ~]# /etc/init.d/mysql status
>> MySQL (Percona XtraDB Cluster) running (2619)              [  OK  ]
>> [root@NODE2 ~]# echo $?
>> 0
>>
>> .. hmm, being confused :/
>>
>>
>>> crm_mon reflects what is in the cib.  If no-one re-populates the cib
>>> with the current state of the world, what it shows will be stale.
>>
>> How can I force this?
>>
>> Regards,
>> Robert

-- 
Robert



