[Pacemaker] Pacemaker 0.7.3: How to use pingd clone resource and constraints

Andrew Beekhof beekhof at gmail.com
Thu Sep 25 06:24:19 EDT 2008


On Sep 24, 2008, at 10:36 PM, Serge Dubrouski wrote:

> There is a problem with attrd that affects pingd in Pacemaker
> 0.7.3/Heartbeat 2.99. I've already created a Bugzilla ticket for it.
> You can add your information there:
>
> http://developerbugs.linux-foundation.org/show_bug.cgi?id=1969

I'm not so sure this is the same thing.
Those "Bad echo" messages look suspicious

>> Sep 24 22:01:46 xen20a pingd: [13142]: WARN: dump_v4_echo: Bad echo (0): 3, code=1, seq=0, id=0, check=24031

In fact, type=3 is ICMP_DEST_UNREACH - so pingd really is having trouble contacting that host.
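
It is worth checking from xen20a itself whether those hosts are
reachable, and over which route. A quick sanity check with standard
tools (nothing pingd-specific here; substitute your own targets):

    # do the targets answer plain ICMP echo at all?
    ping -c 3 172.17.32.23
    ping -c 3 195.244.97.241

    # which route/interface does the kernel pick for a target?
    ip route get 172.17.32.23

    # watch the echoes (and any unreachables) on the wire
    tcpdump -n icmp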


>
>
> On Wed, Sep 24, 2008 at 2:04 PM, Bruno Voigt <Bruno.Voigt at ic3s.de> wrote:
>> I defined two ping clone resources,
>> to be used independently by different resources:
>>
>>     <clone id="clone-pingd-internal">
>>       <primitive id="pingd-internal" provider="pacemaker" class="ocf" type="pingd">
>>         <instance_attributes id="pingd-internal-ia">
>>           <nvpair id="pingd-internal-ia01" name="name" value="pingd-internal"/>
>>           <nvpair id="pingd-internal-ia02" name="dampen" value="5s"/>
>>           <nvpair id="pingd-internal-ia03" name="multiplier" value="1000"/>
>>           <nvpair id="pingd-internal-ia04" name="host_list" value="172.17.32.23 192.168.132.23"/>
>>         </instance_attributes>
>>       </primitive>
>>     </clone>
>>
>>     <clone id="clone-pingd-external">
>>       <primitive id="pingd-external" provider="pacemaker" class="ocf" type="pingd">
>>         <instance_attributes id="pingd-external-ia">
>>           <nvpair id="pingd-external-ia01" name="name" value="pingd-external"/>
>>           <nvpair id="pingd-external-ia02" name="dampen" value="5s"/>
>>           <nvpair id="pingd-external-ia03" name="multiplier" value="1000"/>
>>           <nvpair id="pingd-external-ia04" name="host_list" value="195.244.97.241"/>
>>         </instance_attributes>
>>       </primitive>
>>     </clone>
>>
>> I defined a constraint for a resource so that it depends on pingd-internal:
>>
>> <constraints>
>>   <rsc_location id="hbtest1b-connectivity" rsc="hbtest1b">
>>     <rule id="hbtest1b-connectivity-exclude-rule" score="-INFINITY">
>>       <expression id="hbtest1b-connectivity-exclude" attribute="pingd-internal" operation="not_defined"/>
>>     </rule>
>>   </rsc_location>
>> </constraints>
>>
>> But this causes the resource to be unrunnable on either of my two nodes.
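
That is what not_defined alone will do here: until attrd has written a
pingd-internal value for a node, the expression matches on that node
and the -INFINITY score applies, so the resource is banned everywhere.
Separately, the pingd examples in the Pacemaker docs usually also guard
against a defined-but-zero value; a minimal sketch against your
attribute name (only boolean-op and the second expression are
additions):

    <rsc_location id="hbtest1b-connectivity" rsc="hbtest1b">
      <rule id="hbtest1b-connectivity-exclude-rule" score="-INFINITY" boolean-op="or">
        <expression id="hbtest1b-connectivity-exclude" attribute="pingd-internal" operation="not_defined"/>
        <expression id="hbtest1b-connectivity-zero" attribute="pingd-internal" operation="lte" value="0" type="number"/>
      </rule>
    </rsc_location>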
>>
>>
>> There are, as expected, two pingd daemons running:
>>
>> root      6132     1  0 21:07 ?        00:00:00 /usr/lib/heartbeat/pingd -D -p /var/run/heartbeat/rsctmp/pingd-pingd-internal:0 -a pingd-internal -d 5s -m 1000 -h 172.17.32.23 -h 192.168.132.23
>> root     13142     1  0 21:47 ?        00:00:00 /usr/lib/heartbeat/pingd -D -p /var/run/heartbeat/rsctmp/pingd-pingd-external:0 -a pingd-external -d 5s -m 1000 -h 195.244.97.241
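
The daemons look right, so the next question is whether their updates
actually make it through attrd into the CIB. Two quick checks (cibadmin
-o restricts the query to one section; the [a] in the grep pattern just
hides grep itself):

    # is attrd running at all?
    ps -ef | grep [a]ttrd

    # did the values land in the status section?
    cibadmin -Q -o status | grep pingd

If the second command comes back empty, the updates are being lost
between pingd and the CIB, which is exactly where the attrd bug Serge
mentioned would bite.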
>>
>> The problem is that I can't see anywhere in the cibadmin -Q output
>> that the pingd daemons have stored their results.
>>
>> In the log I see the following output:
>>
>> Sep 24 22:01:46 xen20a pingd: [13142]: WARN: dump_v4_echo: Bad echo (0): 3, code=1, seq=0, id=0, check=24031
>> Sep 24 22:01:47 xen20a pingd: [13142]: WARN: dump_v4_echo: Bad echo (0): 3, code=1, seq=0, id=0, check=24031
>> Sep 24 22:01:48 xen20a pingd: [13142]: WARN: dump_v4_echo: Bad echo (0): 8, code=0, seq=261, id=0, check=22762
>> Sep 24 22:01:48 xen20a pingd: [6132]: WARN: dump_v4_echo: Bad echo (0): 8, code=0, seq=263, id=0, check=22250
>> Sep 24 22:01:50 xen20a pingd: [13142]: info: stand_alone_ping: Node 195.244.97.241 is alive (1)
>> Sep 24 22:01:50 xen20a pingd: [13142]: info: send_update: 1 active ping nodes
>> Sep 24 22:01:51 xen20a pingd: [6132]: WARN: dump_v4_echo: Bad echo (0): 3, code=1, seq=0, id=0, check=24031
>> Sep 24 22:01:52 xen20a pingd: [6132]: info: stand_alone_ping: Node 172.17.32.23 is alive (3)
>> Sep 24 22:01:53 xen20a pingd: [6132]: WARN: dump_v4_echo: Bad echo (0): 3, code=1, seq=0, id=0, check=24031
>> Sep 24 22:01:54 xen20a pingd: [6132]: WARN: dump_v4_echo: Bad echo (0): 3, code=1, seq=0, id=0, check=25823
>> Sep 24 22:01:55 xen20a pingd: [6132]: WARN: dump_v4_echo: Bad echo (0): 3, code=1, seq=0, id=0, check=24031
>> Sep 24 22:01:57 xen20a pingd: [6132]: info: stand_alone_ping: Node 192.168.132.23 is alive (2)
>> Sep 24 22:01:57 xen20a pingd: [6132]: info: send_update: 2 active ping nodes
>>
>> Where should the current pingd status be located in the CIB?
>> What is wrong with my setup?
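
To the first question: pingd publishes its result as a transient node
attribute (the number of reachable hosts times the multiplier), so it
ends up in the status section of the CIB, not in the configuration
section. Roughly what you should see under each node_state once the
updates flow (the ids here are illustrative, and the exact nesting
varies a little between schema versions):

    <node_state uname="xen20a" ...>
      <transient_attributes id="xen20a">
        <instance_attributes id="status-xen20a">
          <nvpair id="status-xen20a-pingd-internal" name="pingd-internal" value="2000"/>
          <nvpair id="status-xen20a-pingd-external" name="pingd-external" value="1000"/>
        </instance_attributes>
      </transient_attributes>
    </node_state>

Your log ends with both daemons reporting all of their hosts alive (2
and 1 active ping nodes respectively), so with multiplier=1000 those
are the values attrd should be writing.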
>>
>> TIA,
>> Bruno
>>
>
>
>
> -- 
> Serge Dubrouski.
>




