[Pacemaker] Pacemaker 0.7.3: How to use pingd clone resource and constraints

Andrew Beekhof beekhof at gmail.com
Fri Sep 26 07:37:33 UTC 2008


Most likely you've found a bug :-(
Would you be able to create a bugzilla entry for this?

On Sep 25, 2008, at 9:33 PM, Bruno Voigt wrote:

> Wow.. these warnings are even shown for 127.0.0.1?!
> Do I need to fine-tune IP stack options somewhere, e.g. in sysctl.conf,
> to get rid of these pingd warnings?
>
> root at xen20b:~# /usr/lib/heartbeat/pingd -V -a pingd-internal -d 5s -m 1000 -h 127.0.0.1
> pingd[26741]: 2008/09/25_21:30:04 debug: main: Adding ping host 127.0.0.1
> pingd[26741]: 2008/09/25_21:30:04 debug: main: attrd registration attempt: 0
> pingd[26741]: 2008/09/25_21:30:09 debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/heartbeat/crm/attrd
> pingd[26741]: 2008/09/25_21:30:09 info: main: Starting pingd
> pingd[26741]: 2008/09/25_21:30:19 debug: stand_alone_ping: Checking connectivity
> pingd[26741]: 2008/09/25_21:30:19 debug: ping_open: Got address 127.0.0.1 for 127.0.0.1
> pingd[26741]: 2008/09/25_21:30:19 debug: ping_open: Opened connection to 127.0.0.1
> pingd[26741]: 2008/09/25_21:30:19 debug: ping_write: Sent 39 bytes to 127.0.0.1
> pingd[26741]: 2008/09/25_21:30:19 WARN: dump_v4_echo: Bad echo (0): 8, code=0, seq=5, id=0, check=22763
> pingd[26741]: 2008/09/25_21:30:20 debug: ping_write: Sent 39 bytes to 127.0.0.1
> pingd[26741]: 2008/09/25_21:30:20 debug: dump_v4_echo: 59 bytes from 127.0.0.1, icmp_seq=5: beekhof-v4
> pingd[26741]: 2008/09/25_21:30:21 debug: ping_write: Sent 39 bytes to 127.0.0.1
> pingd[26741]: 2008/09/25_21:30:21 WARN: dump_v4_echo: Bad echo (0): 8, code=0, seq=13138, id=0, check=3000
> pingd[26741]: 2008/09/25_21:30:22 debug: ping_write: Sent 39 bytes to 127.0.0.1
> pingd[26741]: 2008/09/25_21:30:22 debug: dump_v4_echo: 59 bytes from 10.10.10.167, icmp_seq=13138: beekhof-v4
> pingd[26741]: 2008/09/25_21:30:23 debug: ping_write: Sent 39 bytes to 127.0.0.1
> pingd[26741]: 2008/09/25_21:30:23 WARN: dump_v4_echo: Bad echo (0): 8, code=0, seq=8284, id=0, check=459
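The "Bad echo (0): 8" warnings above report ICMP type 8 (echo request) where type 0 (echo reply) was expected; on loopback a raw ICMP socket also reads back the request it just sent, which would match these warnings. A minimal sketch of decoding the fields that pingd prints (the parsing helper here is illustrative, not pingd's actual code):

```python
import struct

# ICMP types relevant to the dump_v4_echo warnings (RFC 792)
ICMP_ECHO_REPLY = 0    # what a ping expects back
ICMP_DEST_UNREACH = 3  # host/network unreachable
ICMP_ECHO_REQUEST = 8  # an outgoing request, seen again on loopback

def parse_icmp_header(packet: bytes):
    """Return (type, code, checksum, identifier, sequence) from an ICMP echo header."""
    # ICMP echo header: type (1 byte), code (1 byte), checksum (2 bytes),
    # identifier (2 bytes), sequence number (2 bytes), network byte order.
    return struct.unpack("!BBHHH", packet[:8])

# A crafted echo-request header, like what a raw socket on lo reads back:
pkt = struct.pack("!BBHHH", ICMP_ECHO_REQUEST, 0, 22763, 0, 5)
icmp_type, code, check, ident, seq = parse_icmp_header(pkt)
assert icmp_type == ICMP_ECHO_REQUEST  # not a reply -> "Bad echo (0): 8, ..."
```

Since the request and reply with the same seq both arrive on the socket, each sequence number here shows one "Bad echo" warning followed by a successful reply line.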
>
> root at xen20b:~# uname -a
> Linux xen20b.bb.ic3s.de 2.6.18-6-xen-amd64 #1 SMP Tue Aug 19 06:15:09 UTC 2008 x86_64 GNU/Linux
>
> Bruno Voigt wrote:
>>
>> Hi Andrew,
>>
>> is pingd doing alive tests differently compared to the normal ping command?
>> Normal and flood pings of these hosts show 0% packet loss from both of my nodes.
>>
>> In the log below, pingd - apart from the warnings -
>> reports that the node is alive and that it has sent an update,
>> but the value does not show up in the cib.
>>
>> WR,
>> Bruno
>>
>> Andrew Beekhof wrote:
>>
>>> On Sep 24, 2008, at 10:36 PM, Serge Dubrouski wrote:
>>>
>>>
>>>> There is a problem with attrd that affects pingd in Pacemaker
>>>> 0.7.3/Heartbeat 2.99. I've already created a Bugzilla ticket for it.
>>>> You can add your information there:
>>>>
>>>> http://developerbugs.linux-foundation.org/show_bug.cgi?id=1969
>>>>
>>> I'm not so sure this is the same thing.
>>> Those "Bad echo" messages look suspicious.
>>>
>>>
>>>>> Sep 24 22:01:46 xen20a pingd: [13142]: WARN: dump_v4_echo: Bad echo (0): 3, code=1, seq=0, id=0, check=24031
>>>>>
>>> In fact, type=3 is ICMP_DEST_UNREACH - so pingd really is having
>>> trouble contacting that host.
>>>
>>>
>>>
>>>> On Wed, Sep 24, 2008 at 2:04 PM, Bruno Voigt <Bruno.Voigt at ic3s.de>
>>>> wrote:
>>>>
>>>>> I defined two ping clone resources,
>>>>> to be used independently by different resources:
>>>>>
>>>>>     <clone id="clone-pingd-internal">
>>>>>       <primitive id="pingd-internal" provider="pacemaker" class="ocf" type="pingd">
>>>>>         <instance_attributes id="pingd-internal-ia">
>>>>>           <nvpair id="pingd-internal-ia01" name="name" value="pingd-internal"/>
>>>>>           <nvpair id="pingd-internal-ia02" name="dampen" value="5s"/>
>>>>>           <nvpair id="pingd-internal-ia03" name="multiplier" value="1000"/>
>>>>>           <nvpair id="pingd-internal-ia04" name="host_list" value="172.17.32.23 192.168.132.23"/>
>>>>>         </instance_attributes>
>>>>>       </primitive>
>>>>>     </clone>
>>>>>
>>>>>     <clone id="clone-pingd-external">
>>>>>       <primitive id="pingd-external" provider="pacemaker" class="ocf" type="pingd">
>>>>>         <instance_attributes id="pingd-external-ia">
>>>>>           <nvpair id="pingd-external-ia01" name="name" value="pingd-external"/>
>>>>>           <nvpair id="pingd-external-ia02" name="dampen" value="5s"/>
>>>>>           <nvpair id="pingd-external-ia03" name="multiplier" value="1000"/>
>>>>>           <nvpair id="pingd-external-ia04" name="host_list" value="195.244.97.241"/>
>>>>>         </instance_attributes>
>>>>>       </primitive>
>>>>>     </clone>
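For context, the attribute value each pingd instance publishes is the number of reachable hosts times the configured multiplier (a sketch of that arithmetic, assuming pingd's documented behaviour; the helper name is illustrative):

```python
# value written by pingd = reachable hosts x multiplier (assumption based
# on pingd's documented behaviour; helper name is illustrative)
def pingd_score(reachable_hosts: int, multiplier: int = 1000) -> int:
    return reachable_hosts * multiplier

# pingd-internal pings two hosts, pingd-external one:
internal = pingd_score(2)  # 172.17.32.23 and 192.168.132.23 both answering
external = pingd_score(1)  # 195.244.97.241 answering
```

So with the configuration above, a fully connected node should show pingd-internal=2000 and pingd-external=1000.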
>>>>>
>>>>> I defined a constraint for a resource so that it depends on
>>>>> pingd-internal
>>>>>
>>>>> <constraints>
>>>>> <rsc_location id="hbtest1b-connectivity" rsc="hbtest1b">
>>>>>   <rule id="hbtest1b-connectivity-exclude-rule" score="-INFINITY">
>>>>>     <expression id="hbtest1b-connectivity-exclude" attribute="pingd-internal" operation="not_defined"/>
>>>>>   </rule>
>>>>> </rsc_location>
>>>>> </constraints>
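As an aside, a rule like the one above only reacts to the attribute being missing. A common variant (a sketch only; the boolean-op and lte expression syntax is taken from the Pacemaker rules documentation, and the ids here are illustrative) also evicts the resource when the attribute is defined but connectivity has dropped to zero:

```xml
<rsc_location id="hbtest1b-connectivity" rsc="hbtest1b">
  <rule id="hbtest1b-connectivity-exclude-rule" score="-INFINITY" boolean-op="or">
    <expression id="hbtest1b-connectivity-exclude" attribute="pingd-internal" operation="not_defined"/>
    <expression id="hbtest1b-connectivity-zero" attribute="pingd-internal" operation="lte" value="0" type="number"/>
  </rule>
</rsc_location>
```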
>>>>>
>>>>> But this causes the resource to be unrunnable on either of my two
>>>>> nodes.
>>>>>
>>>>>
>>>>> There are as expected 2 pingd daemons running:
>>>>>
>>>>> root      6132     1  0 21:07 ?        00:00:00 /usr/lib/heartbeat/pingd -D -p /var/run/heartbeat/rsctmp/pingd-pingd-internal:0 -a pingd-internal -d 5s -m 1000 -h 172.17.32.23 -h 192.168.132.23
>>>>> root     13142     1  0 21:47 ?        00:00:00 /usr/lib/heartbeat/pingd -D -p /var/run/heartbeat/rsctmp/pingd-pingd-external:0 -a pingd-external -d 5s -m 1000 -h 195.244.97.241
>>>>>
>>>>> The problem is that I can't see anywhere in the cibadmin -Q output
>>>>> that the pingd daemons have stored their results.
>>>>>
>>>>> In the log I see the following output:
>>>>>
>>>>> Sep 24 22:01:46 xen20a pingd: [13142]: WARN: dump_v4_echo: Bad echo (0): 3, code=1, seq=0, id=0, check=24031
>>>>> Sep 24 22:01:47 xen20a pingd: [13142]: WARN: dump_v4_echo: Bad echo (0): 3, code=1, seq=0, id=0, check=24031
>>>>> Sep 24 22:01:48 xen20a pingd: [13142]: WARN: dump_v4_echo: Bad echo (0): 8, code=0, seq=261, id=0, check=22762
>>>>> Sep 24 22:01:48 xen20a pingd: [6132]: WARN: dump_v4_echo: Bad echo (0): 8, code=0, seq=263, id=0, check=22250
>>>>> Sep 24 22:01:50 xen20a pingd: [13142]: info: stand_alone_ping: Node 195.244.97.241 is alive (1)
>>>>> Sep 24 22:01:50 xen20a pingd: [13142]: info: send_update: 1 active ping nodes
>>>>> Sep 24 22:01:51 xen20a pingd: [6132]: WARN: dump_v4_echo: Bad echo (0): 3, code=1, seq=0, id=0, check=24031
>>>>> Sep 24 22:01:52 xen20a pingd: [6132]: info: stand_alone_ping: Node 172.17.32.23 is alive (3)
>>>>> Sep 24 22:01:53 xen20a pingd: [6132]: WARN: dump_v4_echo: Bad echo (0): 3, code=1, seq=0, id=0, check=24031
>>>>> Sep 24 22:01:54 xen20a pingd: [6132]: WARN: dump_v4_echo: Bad echo (0): 3, code=1, seq=0, id=0, check=25823
>>>>> Sep 24 22:01:55 xen20a pingd: [6132]: WARN: dump_v4_echo: Bad echo (0): 3, code=1, seq=0, id=0, check=24031
>>>>> Sep 24 22:01:57 xen20a pingd: [6132]: info: stand_alone_ping: Node 192.168.132.23 is alive (2)
>>>>> Sep 24 22:01:57 xen20a pingd: [6132]: info: send_update: 2 active ping nodes
>>>>>
>>>>> Where should the current pingd status be located in the cib?
>>>>> What is wrong with my setup?
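For reference: pingd publishes its value through attrd, which (when it is working) writes it into the status section of the CIB as a transient attribute on each node. It would look roughly like the fragment below; the exact ids and nesting are illustrative for this Pacemaker version.

```xml
<status>
  <node_state id="xen20a" uname="xen20a" crmd="online">
    <transient_attributes id="xen20a">
      <instance_attributes id="status-xen20a">
        <nvpair id="status-xen20a-pingd-internal" name="pingd-internal" value="2000"/>
        <nvpair id="status-xen20a-pingd-external" name="pingd-external" value="1000"/>
      </instance_attributes>
    </transient_attributes>
  </node_state>
</status>
```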
>>>>>
>>>>> TIA,
>>>>> Bruno
>>>>>
>>>>>
>>>> -- 
>>>> Serge Dubrouski.
>>>>
>>
> _______________________________________________
> Pacemaker mailing list
> Pacemaker at clusterlabs.org
> http://list.clusterlabs.org/mailman/listinfo/pacemaker
