I don't know, but the failure is in the operation lx0_monitor_0, so I'll ask someone with more experience than me: does Pacemaker run a monitor operation before a start?

Maybe when you restarted the resource something went wrong, the resource failed, and after that it is blocked because of:

================
on-fail="block"
================
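
If an earlier failure really left lx0 blocked, the fail count should show
it, and clearing the resource's operation history would make the cluster
probe and start it again. A minimal sketch, assuming the standard
pacemaker/crmsh command line tools on a cluster node:

# crm_mon -1 -f
# crm_failcount -G -r lx0 -N atlas4
# crm resource cleanup lx0

(crm_mon -f prints fail counts in the one-shot status, crm_failcount
queries the count recorded for lx0 on atlas4, and the cleanup erases
lx0's operation history so the policy engine re-probes it.)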
2012/6/20 Kadlecsik József <kadlecsik.jozsef@wigner.mta.hu>:
On Wed, 20 Jun 2012, emmanuel segura wrote:

> Why do you say there is no error in the messages?
> =========================================================
> Jun 20 11:57:25 atlas4 lrmd: [17568]: info: operation monitor[35] on lx0
> for client 17571: pid 30179 exited with return code 7
> Jun 20 11:57:25 atlas4 crmd: [17571]: debug: create_operation_update:
> do_update_resource: Updating resouce lx0 after complete monitor op
> (interval=0)
> Jun 20 11:57:25 atlas4 crmd: [17571]: info: process_lrm_event: LRM
> operation lx0_monitor_0 (call=35, rc=7, cib-update=61, confirmed=true) not
> running

I interpreted those lines as a check that the resource hasn't been
started yet (confirmed=true). And indeed, it's not running, so the
return code is OCF_NOT_RUNNING.

There's no log message about an attempt to start the resource.
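
The probe result can also be reproduced by hand, outside the cluster; a
minimal sketch, assuming the agent is installed under the usual
/usr/lib/ocf path (the exact OCF_ROOT may differ):

# OCF_ROOT=/usr/lib/ocf \
  OCF_RESKEY_config=/etc/libvirt/crm/lx0.xml \
  OCF_RESKEY_hypervisor=qemu:///system \
  /usr/lib/ocf/resource.d/heartbeat/VirtualDomain monitor; echo $?
7

A shut-off domain should return 7 (OCF_NOT_RUNNING), which is exactly
what the probe reported.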

Best regards,
Jozsef
<div class="HOEnZb"><div class="h5"><br>
> 2012/6/20 Kadlecsik József <<a href="mailto:kadlecsik.jozsef@wigner.mta.hu">kadlecsik.jozsef@wigner.mta.hu</a>><br>
> Hello,<br>
><br>
> Somehow, after a "crm resource restart" which did *not* start the
> resource but only stopped it, a VirtualDomain resource cannot be started
> anymore. The most baffling part is that I do not see any error message.
> The resource in question, named 'lx0', can be started directly via
> virsh/libvirt, and libvirtd is running on all cluster nodes.
>
> We run corosync 1.4.2-1~bpo60+1 and pacemaker 1.1.6-2~bpo60+1 (Debian).
>
> # crm status
> ============
> Last updated: Wed Jun 20 15:14:44 2012
> Last change: Wed Jun 20 14:07:40 2012 via cibadmin on atlas0
> Stack: openais
> Current DC: atlas0 - partition with quorum
> Version: 1.1.6-9971ebba4494012a93c03b40a2c58ec0eb60f50c
> 7 Nodes configured, 7 expected votes
> 18 Resources configured.
> ============
>
> Online: [ atlas0 atlas1 atlas2 atlas3 atlas4 atlas5 atlas6 ]
>
> kerberos        (ocf::heartbeat:VirtualDomain): Started atlas0
> stonith-atlas3  (stonith:ipmilan):              Started atlas4
> stonith-atlas1  (stonith:ipmilan):              Started atlas4
> stonith-atlas2  (stonith:ipmilan):              Started atlas4
> stonith-atlas0  (stonith:ipmilan):              Started atlas4
> stonith-atlas4  (stonith:ipmilan):              Started atlas3
> mailman         (ocf::heartbeat:VirtualDomain): Started atlas6
> indico          (ocf::heartbeat:VirtualDomain): Started atlas0
> papi            (ocf::heartbeat:VirtualDomain): Started atlas1
> wwwd            (ocf::heartbeat:VirtualDomain): Started atlas2
> webauth         (ocf::heartbeat:VirtualDomain): Started atlas3
> caladan         (ocf::heartbeat:VirtualDomain): Started atlas4
> radius          (ocf::heartbeat:VirtualDomain): Started atlas5
> mail0           (ocf::heartbeat:VirtualDomain): Started atlas6
> stonith-atlas5  (stonith:apcmastersnmp):        Started atlas4
> stonith-atlas6  (stonith:apcmastersnmp):        Started atlas4
> w0              (ocf::heartbeat:VirtualDomain): Started atlas2
>
> # crm resource show
> kerberos        (ocf::heartbeat:VirtualDomain) Started
> stonith-atlas3  (stonith:ipmilan) Started
> stonith-atlas1  (stonith:ipmilan) Started
> stonith-atlas2  (stonith:ipmilan) Started
> stonith-atlas0  (stonith:ipmilan) Started
> stonith-atlas4  (stonith:ipmilan) Started
> mailman         (ocf::heartbeat:VirtualDomain) Started
> indico          (ocf::heartbeat:VirtualDomain) Started
> papi            (ocf::heartbeat:VirtualDomain) Started
> wwwd            (ocf::heartbeat:VirtualDomain) Started
> webauth         (ocf::heartbeat:VirtualDomain) Started
> caladan         (ocf::heartbeat:VirtualDomain) Started
> radius          (ocf::heartbeat:VirtualDomain) Started
> mail0           (ocf::heartbeat:VirtualDomain) Started
> stonith-atlas5  (stonith:apcmastersnmp) Started
> stonith-atlas6  (stonith:apcmastersnmp) Started
> w0              (ocf::heartbeat:VirtualDomain) Started
> lx0             (ocf::heartbeat:VirtualDomain) Stopped
>
> # crm configure show
> node atlas0 \
>         attributes standby="false" \
>         utilization memory="24576"
> node atlas1 \
>         attributes standby="false" \
>         utilization memory="24576"
> node atlas2 \
>         attributes standby="false" \
>         utilization memory="24576"
> node atlas3 \
>         attributes standby="false" \
>         utilization memory="24576"
> node atlas4 \
>         attributes standby="false" \
>         utilization memory="24576"
> node atlas5 \
>         attributes standby="off" \
>         utilization memory="20480"
> node atlas6 \
>         attributes standby="off" \
>         utilization memory="20480"
> primitive caladan ocf:heartbeat:VirtualDomain \
>         params config="/etc/libvirt/crm/caladan.xml" hypervisor="qemu:///system" \
>         meta allow-migrate="true" target-role="Started" is-managed="true" \
>         op start interval="0" timeout="120s" \
>         op stop interval="0" timeout="120s" \
>         op monitor interval="10s" timeout="40s" depth="0" \
>         op migrate_to interval="0" timeout="240s" on-fail="block" \
>         op migrate_from interval="0" timeout="240s" on-fail="block" \
>         utilization memory="4608"
> primitive indico ocf:heartbeat:VirtualDomain \
>         params config="/etc/libvirt/crm/indico.xml" hypervisor="qemu:///system" \
>         meta allow-migrate="true" target-role="Started" is-managed="true" \
>         op start interval="0" timeout="120s" \
>         op stop interval="0" timeout="120s" \
>         op monitor interval="10s" timeout="40s" depth="0" \
>         op migrate_to interval="0" timeout="240s" on-fail="block" \
>         op migrate_from interval="0" timeout="240s" on-fail="block" \
>         utilization memory="5120"
> primitive kerberos ocf:heartbeat:VirtualDomain \
>         params config="/etc/libvirt/qemu/kerberos.xml" hypervisor="qemu:///system" \
>         meta allow-migrate="true" target-role="Started" is-managed="true" \
>         op start interval="0" timeout="120s" \
>         op stop interval="0" timeout="120s" \
>         op monitor interval="10s" timeout="40s" depth="0" \
>         op migrate_to interval="0" timeout="240s" on-fail="block" \
>         op migrate_from interval="0" timeout="240s" on-fail="block" \
>         utilization memory="4608"
> primitive lx0 ocf:heartbeat:VirtualDomain \
>         params config="/etc/libvirt/crm/lx0.xml" hypervisor="qemu:///system" \
>         meta allow-migrate="true" target-role="Started" is-managed="true" \
>         op start interval="0" timeout="120s" \
>         op stop interval="0" timeout="120s" \
>         op monitor interval="10s" timeout="40s" depth="0" \
>         op migrate_to interval="0" timeout="240s" on-fail="block" \
>         op migrate_from interval="0" timeout="240s" on-fail="block" \
>         utilization memory="4608"
> primitive mail0 ocf:heartbeat:VirtualDomain \
>         params config="/etc/libvirt/crm/mail0.xml" hypervisor="qemu:///system" \
>         meta allow-migrate="true" target-role="Started" is-managed="true" \
>         op start interval="0" timeout="120s" \
>         op stop interval="0" timeout="120s" \
>         op monitor interval="10s" timeout="40s" depth="0" \
>         op migrate_to interval="0" timeout="240s" on-fail="block" \
>         op migrate_from interval="0" timeout="240s" on-fail="block" \
>         utilization memory="4608"
> primitive mailman ocf:heartbeat:VirtualDomain \
>         params config="/etc/libvirt/crm/mailman.xml" hypervisor="qemu:///system" \
>         meta allow-migrate="true" target-role="Started" is-managed="true" \
>         op start interval="0" timeout="120s" \
>         op stop interval="0" timeout="120s" \
>         op monitor interval="10s" timeout="40s" depth="0" \
>         op migrate_to interval="0" timeout="240s" on-fail="block" \
>         op migrate_from interval="0" timeout="240s" on-fail="block" \
>         utilization memory="5120"
> primitive papi ocf:heartbeat:VirtualDomain \
>         params config="/etc/libvirt/crm/papi.xml" hypervisor="qemu:///system" \
>         meta allow-migrate="true" target-role="Started" is-managed="true" \
>         op start interval="0" timeout="120s" \
>         op stop interval="0" timeout="120s" \
>         op monitor interval="10s" timeout="40s" depth="0" \
>         op migrate_to interval="0" timeout="240s" on-fail="block" \
>         op migrate_from interval="0" timeout="240s" on-fail="block" \
>         utilization memory="6144"
> primitive radius ocf:heartbeat:VirtualDomain \
>         params config="/etc/libvirt/crm/radius.xml" hypervisor="qemu:///system" \
>         meta allow-migrate="true" target-role="Started" is-managed="true" \
>         op start interval="0" timeout="120s" \
>         op stop interval="0" timeout="120s" \
>         op monitor interval="10s" timeout="40s" depth="0" \
>         op migrate_to interval="0" timeout="240s" on-fail="block" \
>         op migrate_from interval="0" timeout="240s" on-fail="block" \
>         utilization memory="4608"
> primitive stonith-atlas0 stonith:ipmilan \
>         params hostname="atlas0" ipaddr="192.168.40.20" port="623" \
>                auth="md5" priv="admin" login="root" password="XXXXX" \
>         op start interval="0" timeout="120s" \
>         meta target-role="Started"
> primitive stonith-atlas1 stonith:ipmilan \
>         params hostname="atlas1" ipaddr="192.168.40.21" port="623" \
>                auth="md5" priv="admin" login="root" password="XXXX" \
>         op start interval="0" timeout="120s" \
>         meta target-role="Started"
> primitive stonith-atlas2 stonith:ipmilan \
>         params hostname="atlas2" ipaddr="192.168.40.22" port="623" \
>                auth="md5" priv="admin" login="root" password="XXXX" \
>         op start interval="0" timeout="120s" \
>         meta target-role="Started"
> primitive stonith-atlas3 stonith:ipmilan \
>         params hostname="atlas3" ipaddr="192.168.40.23" port="623" \
>                auth="md5" priv="admin" login="root" password="XXXX" \
>         op start interval="0" timeout="120s" \
>         meta target-role="Started"
> primitive stonith-atlas4 stonith:ipmilan \
>         params hostname="atlas4" ipaddr="192.168.40.24" port="623" \
>                auth="md5" priv="admin" login="root" password="XXXX" \
>         op start interval="0" timeout="120s" \
>         meta target-role="Started"
> primitive stonith-atlas5 stonith:apcmastersnmp \
>         params ipaddr="192.168.40.252" port="161" community="XXXX" \
>                pcmk_host_list="atlas5" pcmk_host_check="static-list"
> primitive stonith-atlas6 stonith:apcmastersnmp \
>         params ipaddr="192.168.40.252" port="161" community="XXXX" \
>                pcmk_host_list="atlas6" pcmk_host_check="static-list"
> primitive w0 ocf:heartbeat:VirtualDomain \
>         params config="/etc/libvirt/crm/w0.xml" hypervisor="qemu:///system" \
>         meta allow-migrate="true" target-role="Started" \
>         op start interval="0" timeout="120s" \
>         op stop interval="0" timeout="120s" \
>         op monitor interval="10s" timeout="40s" depth="0" \
>         op migrate_to interval="0" timeout="240s" on-fail="block" \
>         op migrate_from interval="0" timeout="240s" on-fail="block" \
>         utilization memory="4608"
> primitive webauth ocf:heartbeat:VirtualDomain \
>         params config="/etc/libvirt/crm/webauth.xml" hypervisor="qemu:///system" \
>         meta allow-migrate="true" target-role="Started" is-managed="true" \
>         op start interval="0" timeout="120s" \
>         op stop interval="0" timeout="120s" \
>         op monitor interval="10s" timeout="40s" depth="0" \
>         op migrate_to interval="0" timeout="240s" on-fail="block" \
>         op migrate_from interval="0" timeout="240s" on-fail="block" \
>         utilization memory="4608"
> primitive wwwd ocf:heartbeat:VirtualDomain \
>         params config="/etc/libvirt/crm/wwwd.xml" hypervisor="qemu:///system" \
>         meta allow-migrate="true" target-role="Started" is-managed="true" \
>         op start interval="0" timeout="120s" \
>         op stop interval="0" timeout="120s" \
>         op monitor interval="10s" timeout="40s" depth="0" \
>         op migrate_to interval="0" timeout="240s" on-fail="block" \
>         op migrate_from interval="0" timeout="240s" on-fail="block" \
>         utilization memory="5120"
> location location-stonith-atlas0 stonith-atlas0 -inf: atlas0
> location location-stonith-atlas1 stonith-atlas1 -inf: atlas1
> location location-stonith-atlas2 stonith-atlas2 -inf: atlas2
> location location-stonith-atlas3 stonith-atlas3 -inf: atlas3
> location location-stonith-atlas4 stonith-atlas4 -inf: atlas4
> location location-stonith-atlas5 stonith-atlas5 -inf: atlas5
> location location-stonith-atlas6 stonith-atlas6 -inf: atlas6
> property $id="cib-bootstrap-options" \
>         dc-version="1.1.6-9971ebba4494012a93c03b40a2c58ec0eb60f50c" \
>         cluster-infrastructure="openais" \
>         expected-quorum-votes="7" \
>         stonith-enabled="true" \
>         no-quorum-policy="stop" \
>         last-lrm-refresh="1340193431" \
>         symmetric-cluster="true" \
>         maintenance-mode="false" \
>         stop-all-resources="false" \
>         is-managed-default="true" \
>         placement-strategy="balanced"
>
> # crm_verify -L -VV
> [...]
> crm_verify[19320]: 2012/06/20_15:25:50 notice: LogActions: Leave   w0 (Started atlas2)
> crm_verify[19320]: 2012/06/20_15:25:50 notice: LogActions: Leave   stonith-atlas6 (Started atlas4)
> crm_verify[19320]: 2012/06/20_15:25:50 notice: LogActions: Leave   stonith-atlas5 (Started atlas4)
> crm_verify[19320]: 2012/06/20_15:25:50 notice: LogActions: Leave   stonith-atlas4 (Started atlas3)
> crm_verify[19320]: 2012/06/20_15:25:50 notice: LogActions: Leave   stonith-atlas3 (Started atlas4)
> crm_verify[19320]: 2012/06/20_15:25:50 notice: LogActions: Leave   stonith-atlas2 (Started atlas4)
> crm_verify[19320]: 2012/06/20_15:25:50 notice: LogActions: Leave   stonith-atlas1 (Started atlas4)
> crm_verify[19320]: 2012/06/20_15:25:50 notice: LogActions: Leave   stonith-atlas0 (Started atlas4)
> crm_verify[19320]: 2012/06/20_15:25:50 notice: LogActions: Start   lx0 (atlas4)
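>
> Since the transition says it would start lx0, replaying the live CIB
> with scores might tell more; a sketch, assuming crm_simulate from
> pacemaker 1.1 is available:
>
> # crm_simulate -L -s
>
> (-L works against the live cluster state, -s shows the allocation
> scores the policy engine computed for each resource.)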
>
> I have tried to delete the resource and add it again; it did not help.
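> The delete and re-add went roughly like this (a sketch from memory; the
> dump file name lx0.crm is just an example):
>
> # crm configure show lx0 > lx0.crm
> # crm resource stop lx0
> # crm configure delete lx0
> # crm configure load update lx0.crm
>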
> The corresponding log entries:
>
> Jun 20 11:57:25 atlas4 crmd: [17571]: info: delete_resource: Removing
> resource lx0 for 28654_crm_resource (internal) on atlas0
> Jun 20 11:57:25 atlas4 lrmd: [17568]: debug: lrmd_rsc_destroy: removing
> resource lx0
> Jun 20 11:57:25 atlas4 crmd: [17571]: debug: delete_rsc_entry: sync:
> Sending delete op for lx0
> Jun 20 11:57:25 atlas4 crmd: [17571]: info: notify_deleted: Notifying
> 28654_crm_resource on atlas0 that lx0 was deleted
> Jun 20 11:57:25 atlas4 crmd: [17571]: WARN: decode_transition_key: Bad
> UUID (crm-resource-28654) in sscanf result (3) for 0:0:crm-resource-28654
> Jun 20 11:57:25 atlas4 crmd: [17571]: debug: create_operation_update:
> send_direct_ack: Updating resouce lx0 after complete delete op
> (interval=60000)
> Jun 20 11:57:25 atlas4 crmd: [17571]: info: send_direct_ack: ACK'ing
> resource op lx0_delete_60000 from 0:0:crm-resource-28654:
> lrm_invoke-lrmd-1340186245-16
> Jun 20 11:57:25 atlas4 corosync[17530]: [TOTEM ] mcasted message added
> to pending queue
> Jun 20 11:57:25 atlas4 corosync[17530]: [TOTEM ] mcasted message added
> to pending queue
> Jun 20 11:57:25 atlas4 corosync[17530]: [TOTEM ] Delivering 10d5 to 10d7
> Jun 20 11:57:25 atlas4 corosync[17530]: [TOTEM ] Delivering MCAST
> message with seq 10d6 to pending delivery queue
> Jun 20 11:57:25 atlas4 corosync[17530]: [TOTEM ] Delivering MCAST
> message with seq 10d7 to pending delivery queue
> Jun 20 11:57:25 atlas4 corosync[17530]: [TOTEM ] Received
> ringid(192.168.40.60:22264) seq 10d6
> Jun 20 11:57:25 atlas4 corosync[17530]: [TOTEM ] Received
> ringid(192.168.40.60:22264) seq 10d7
> Jun 20 11:57:25 atlas4 crmd: [17571]: debug: notify_deleted: Triggering
> a refresh after 28654_crm_resource deleted lx0 from the LRM
> Jun 20 11:57:25 atlas4 cib: [17567]: debug: cib_process_xpath: Processing
> cib_query op for
> //cib/configuration/crm_config//cluster_property_set//nvpair[@name='last-lrm-refresh']
> (/cib/configuration/crm_config/cluster_property_set/nvpair[6])
>
> Jun 20 11:57:25 atlas4 lrmd: [17568]: debug: on_msg_add_rsc:client
> [17571] adds resource lx0
> Jun 20 11:57:25 atlas4 corosync[17530]: [TOTEM ] Delivering 149e to 149f
> Jun 20 11:57:25 atlas4 corosync[17530]: [TOTEM ] Delivering MCAST
> message with seq 149f to pending delivery queue
> Jun 20 11:57:25 atlas4 corosync[17530]: [TOTEM ] Received
> ringid(192.168.40.60:22264) seq 14a0
> Jun 20 11:57:25 atlas4 corosync[17530]: [TOTEM ] Delivering 149f to 14a0
> Jun 20 11:57:25 atlas4 corosync[17530]: [TOTEM ] Delivering MCAST
> message with seq 14a0 to pending delivery queue
> Jun 20 11:57:25 atlas4 corosync[17530]: [TOTEM ] releasing messages up
> to and including 149e
> Jun 20 11:57:25 atlas4 crmd: [17571]: info: do_lrm_rsc_op: Performing
> key=26:10266:7:e7426ec7-3bae-4a4b-a4ae-c3f80f17e058 op=lx0_monitor_0 )
> Jun 20 11:57:25 atlas4 lrmd: [17568]: debug: on_msg_perform_op:2396:
> copying parameters for rsc lx0
> Jun 20 11:57:25 atlas4 lrmd: [17568]: debug: on_msg_perform_op: add an
> operation operation monitor[35] on lx0 for client 17571, its parameters:
> crm_feature_set=[3.0.5] config=[/etc/libvirt/crm/lx0.xml]
> CRM_meta_timeout=[20000] hypervisor=[qemu:///system] to the operation
> list.
> Jun 20 11:57:25 atlas4 corosync[17530]: [TOTEM ] releasing messages up
> to and including 149f
> Jun 20 11:57:25 atlas4 lrmd: [17568]: info: rsc:lx0 probe[35] (pid 30179)
> Jun 20 11:57:25 atlas4 VirtualDomain[30179]: INFO: Domain name "lx0"
> saved to /var/run/resource-agents/VirtualDomain-lx0.state.
> Jun 20 11:57:25 atlas4 corosync[17530]: [TOTEM ] releasing messages up
> to and including 14bc
> Jun 20 11:57:25 atlas4 VirtualDomain[30179]: DEBUG: Virtual domain lx0
> is currently shut off.
> Jun 20 11:57:25 atlas4 lrmd: [17568]: WARN: Managed lx0:monitor process
> 30179 exited with return code 7.
> Jun 20 11:57:25 atlas4 lrmd: [17568]: info: operation monitor[35] on lx0
> for client 17571: pid 30179 exited with return code 7
> Jun 20 11:57:25 atlas4 crmd: [17571]: debug: create_operation_update:
> do_update_resource: Updating resouce lx0 after complete monitor op
> (interval=0)
> Jun 20 11:57:25 atlas4 crmd: [17571]: info: process_lrm_event: LRM
> operation lx0_monitor_0 (call=35, rc=7, cib-update=61, confirmed=true)
> not running
> Jun 20 11:57:25 atlas4 crmd: [17571]: debug: update_history_cache:
> Appending monitor op to history for 'lx0'
> Jun 20 11:57:25 atlas4 crmd: [17571]: debug: get_xpath_object: No match
> for //cib_update_result//diff-added//crm_config in
> /notify/cib_update_result/diff
>
> What can be wrong in the setup/configuration? And what on earth
> happened?
>
> Best regards,
> Jozsef
--
E-mail : kadlecsik.jozsef@wigner.mta.hu
PGP key: http://www.kfki.hu/~kadlec/pgp_public_key.txt
Address: Wigner Research Centre for Physics, Hungarian Academy of Sciences
         H-1525 Budapest 114, POB. 49, Hungary

--
this is my life and I live it as long as God wills