[Pacemaker] Pacemaker and LDAP (389 Directory Service)

veghead sean at studyblue.com
Mon Jun 27 18:56:38 EDT 2011


Serge Dubrouski <sergeyfd at ...> writes:
> On Mon, Jun 27, 2011 at 3:33 PM, veghead <sean <at> studyblue.com> wrote:
> If I remove the co-location, won't the elastic_ip resource just stay where it
> is? Regardless of what happens to LDAP?
> 
> Right. That's why I think that you don't really want to do it. You have 
> to make sure that your IP is up where your LDAP is up. 
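
(For reference, the constraint we're discussing is the "ldap-with-eip" 
colocation in my config below. If I did drop it, I believe the edit would look 
roughly like this - just a sketch, leaving the order constraint in place:)

---snip---
# Remove only the colocation constraint, by its id; the order
# constraint ldap-after-eip would remain untouched.
crm configure delete ldap-with-eip
---snip---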

Okay. So I took a step back and revamped the configuration to monitor the 
elastic_ip resource less frequently and with a longer timeout. I committed the 
changes, but "crm status" no longer lists the resources in question at all.

Here's the new config:

---snip---
# crm configure show
node $id="d2b294cf-328f-4481-aa2f-cc7b553e6cde" ldap1.example.ec2
node $id="e2a2e42e-1644-4f7d-8e54-71e1f7531e08" ldap2.example.ec2
primitive elastic_ip lsb:elastic-ip \
        op monitor interval="30" timeout="300" on-fail="ignore" requires="nothing"
primitive ldap lsb:dirsrv \
        op monitor interval="15s" on-fail="standby" requires="nothing"
clone ldap-clone ldap
colocation ldap-with-eip inf: elastic_ip ldap-clone
order ldap-after-eip inf: elastic_ip ldap-clone
property $id="cib-bootstrap-options" \
        dc-version="1.0.11-1554a83db0d3c3e546cfd3aaff6af1184f79ee87" \
        cluster-infrastructure="Heartbeat" \
        stonith-enabled="false" \
        no-quorum-policy="ignore" \
        stop-all-resources="true"
rsc_defaults $id="rsc-options" \
        resource-stickiness="100"
---snip---
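
As a sanity check on the above, I believe the live CIB can be validated and the 
cluster properties queried along these lines - a rough sketch using the standard 
Pacemaker tools and the property names from my config:

---snip---
# Validate the live configuration and print any warnings/errors:
crm_verify -L -V

# Query the stop-all-resources cluster property set above; while it
# is "true" the cluster will keep every resource stopped.
crm_attribute --type crm_config --name stop-all-resources --query
---snip---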

And here's the output from "crm status":

---snip---
# crm status
============
Last updated: Mon Jun 27 18:50:14 2011
Stack: Heartbeat
Current DC: ldap2.example.ec2 (e2a2e42e-1644-4f7d-8e54-71e1f7531e08) - partition with quorum
Version: 1.0.11-1554a83db0d3c3e546cfd3aaff6af1184f79ee87
2 Nodes configured, unknown expected votes
2 Resources configured.
============

Online: [ ldap1.example.ec2 ldap2.example.ec2 ]
---snip---
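
To dig a little deeper than "crm status", I think a one-shot status with fail 
counts plus a per-resource location query would look roughly like this - the 
resource names are the ones from my config above:

---snip---
# One-shot cluster status, including fail counts:
crm_mon -1 -f

# Ask where, if anywhere, each resource is currently running:
crm_resource --resource elastic_ip --locate
crm_resource --resource ldap-clone --locate
---snip---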

I restarted the nodes one at a time - first ldap2, then ldap1. When ldap1 went 
down, ldap2 stopped the ldap resource and made no attempt to start the 
elastic_ip resource:

---snip---
pengine: [12910]: notice: unpack_config: On loss of CCM Quorum: Ignore
pengine: [12910]: info: unpack_config: Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
pengine: [12910]: info: determine_online_status: Node ldap2.example.ec2 is online
pengine: [12910]: notice: native_print: elastic_ip       (lsb:elastic-ip):       Stopped
pengine: [12910]: notice: clone_print:  Clone Set: ldap-clone
pengine: [12910]: notice: short_print:      Stopped: [ ldap:0 ldap:1 ]
pengine: [12910]: notice: LogActions: Leave   resource elastic_ip        (Stopped)
pengine: [12910]: notice: LogActions: Leave   resource ldap:0    (Stopped)
pengine: [12910]: notice: LogActions: Leave   resource ldap:1    (Stopped)
---snip---
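
In case I need to nudge the resources by hand, I believe the commands would be 
roughly the following - a sketch; "cleanup" clears a resource's operation 
history so the cluster re-probes it, and "start" just sets target-role=Started:

---snip---
# Clear stale operation/failure history so the cluster re-probes:
crm resource cleanup elastic_ip
crm resource cleanup ldap-clone

# Explicitly request a start (sets the target-role meta attribute):
crm resource start elastic_ip
crm resource start ldap-clone
---snip---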

After Heartbeat/Pacemaker came back up on ldap1, Pacemaker terminated the ldap 
service on ldap1 as well. Now I'm just confused.




