[Pacemaker] Colocation constraint to External Managed Resource

Robert H. pacemaker at elconas.de
Thu Oct 10 16:20:54 UTC 2013


Hello,

On 10.10.2013 16:18, Andreas Kurz wrote:

> You configured a monitor operation for this unmanaged resource?

Yes, and some parts work as expected, but some of the behaviour is 
strange.

Config (relevant part only):
----------------------------

primitive mysql-percona lsb:mysql \
         op start enabled="false" interval="0" \
         op stop enabled="false" interval="0" \
         op monitor enabled="true" timeout="20s" interval="10s" \
         meta migration-threshold="2" failure-timeout="30s" is-managed="false"
clone CLONE-percona mysql-percona \
         meta clone-max="2" clone-node-max="1" is-managed="false"
location clone-percona-placement CLONE-percona \
         rule $id="clone-percona-placement-rule" -inf: #uname ne NODE1 and #uname ne NODE2
colocation APP-dev2-private-percona-withip inf: IP CLONE-percona
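
(Side note: to rule out a configuration problem, this is roughly how I
verify that the monitor operation really ends up in the CIB; the exact
command names are from memory and may differ between crmsh/pacemaker
versions:)

shell# crm configure show mysql-percona
shell# crm_resource --resource mysql-percona --query-xml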


Test:
----

I start with Percona XtraDB running on both nodes:

  IP-dev2-privatevip1        (ocf::heartbeat:IPaddr2):       Started NODE2
  Clone Set: CLONE-percona [mysql-percona] (unmanaged)
      mysql-percona:0    (lsb:mysql):    Started NODE1 (unmanaged)
      mysql-percona:1    (lsb:mysql):    Started NODE2 (unmanaged)

On NODE2:

shell# /etc/init.d/mysql stop

... Pacemaker reacts as expected ....

  IP-dev2-privatevip1        (ocf::heartbeat:IPaddr2):       Started NODE1
  Clone Set: CLONE-percona [mysql-percona] (unmanaged)
      mysql-percona:0    (lsb:mysql):    Started NODE1 (unmanaged)
      mysql-percona:1    (lsb:mysql):    Started NODE2 (unmanaged) FAILED

	 .. then I wait ....
	 .. after some time (1 min), the resource is shown as running ...

  IP-dev2-privatevip1        (ocf::heartbeat:IPaddr2):       Started NODE1
  Clone Set: CLONE-percona [mysql-percona] (unmanaged)
      mysql-percona:0    (lsb:mysql):    Started NODE1 (unmanaged)
      mysql-percona:1    (lsb:mysql):    Started NODE2 (unmanaged)
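
(For the record, this is roughly how I would inspect the fail count of
the clone instance at that point; the syntax is assumed from crmsh and
crm_mon and may need adjusting:)

shell# crm_mon -1 -f
shell# crm resource failcount mysql-percona show NODE2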

But it is definitely not running on NODE2:

shell# /etc/init.d/mysql status
MySQL (Percona XtraDB Cluster) is not running              [FAILED]
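
(For comparison, to see where the cluster itself believes the clone is
active, something like the following should work; option names are from
memory:)

shell# crm_resource --resource CLONE-percona --locate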

When I run a probe with "crm resource reprobe", it switches to:

  IP-dev2-privatevip1        (ocf::heartbeat:IPaddr2):       Started NODE1
  Clone Set: CLONE-percona [mysql-percona] (unmanaged)
      mysql-percona:0    (lsb:mysql):    Started NODE1 (unmanaged)
      Stopped: [ mysql-percona:1 ]
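
(I would guess that a cleanup of the clone on that node has the same
effect as the reprobe, i.e. something like the following, although I
have only tested the reprobe:)

shell# crm resource cleanup CLONE-percona NODE2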

Then, when I start it again on NODE2:

shell# /etc/init.d/mysql start

It stays this way:

  IP-dev2-privatevip1        (ocf::heartbeat:IPaddr2):       Started NODE1
  Clone Set: CLONE-percona [mysql-percona] (unmanaged)
      mysql-percona:0    (lsb:mysql):    Started NODE1 (unmanaged)
      Stopped: [ mysql-percona:1 ]

Only a manual "reprobe" helps:

  IP-dev2-privatevip1        (ocf::heartbeat:IPaddr2):       Started NODE1
  Clone Set: CLONE-percona [mysql-percona] (unmanaged)
      mysql-percona:0    (lsb:mysql):    Started NODE1 (unmanaged)
      mysql-percona:1    (lsb:mysql):    Started NODE2 (unmanaged)
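
(The reprobe I run is cluster-wide; I assume it could also be limited to
the affected node, roughly:)

shell# crm resource reprobe
shell# crm resource reprobe NODE2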

The same thing happens when I reboot NODE2 (or the other way around).

---

I would expect crm_mon to ALWAYS reflect the local state; this looks 
like a bug to me.

Any hints on what's missing?



>
> Regards,
> Andreas
>

-- 
Robert



