Hello,
using Pacemaker 1.0.8 on RHEL 5, I have some problems understanding how a ping clone is supposed to be used for gateway monitoring, even after reading the docs.

As soon as I run:

crm configure location nfs-group-with-pinggw nfs-group rule -inf: not_defined pinggw or pinggw lte 0

the resources are stopped and do not restart.

Then, as soon as I run:

crm configure delete nfs-group-with-pinggw

the resources of the group start again.
The config (part of it, actually) that I am trying to apply is this:

group nfs-group ClusterIP lv_drbd0 NfsFS nfssrv \
    meta target-role="Started"
ms NfsData nfsdrbd \
    meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true"
primitive pinggw ocf:pacemaker:ping \
    params host_list="192.168.101.1" multiplier="100" \
    op start interval="0" timeout="90" \
    op stop interval="0" timeout="100"
clone cl-pinggw pinggw \
    meta globally-unique="false"
location nfs-group-with-pinggw nfs-group \
    rule $id="nfs-group-with-pinggw-rule" -inf: not_defined pinggw or pinggw lte 0
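For reference, the attrd lines in the log further down show the ping clone publishing a node attribute named pingd with value 100 (one reachable host times multiplier 100), not pinggw. A minimal sketch of the same rule keyed on that attribute, assuming the agent's default name parameter of pingd, would be:

location nfs-group-with-pinggw nfs-group \
    rule $id="nfs-group-with-pinggw-rule" -inf: not_defined pingd or pingd lte 0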
Is the location constraint supposed to be written against the ping resource or against its clone?
Could it also be part of the problem that I have defined an NFS client on the other node with:

primitive nfsclient ocf:heartbeat:Filesystem \
    params device="nfsha:/nfsdata/web" directory="/nfsdata/web" fstype="nfs" \
    op start interval="0" timeout="60" \
    op stop interval="0" timeout="60"
colocation nfsclient_not_on_nfs-group -inf: nfs-group nfsclient
order nfsclient_after_nfs-group inf: nfs-group nfsclient
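An alternative sketch, not part of the configuration above: the ping agent's name parameter selects which node attribute the clone instances publish, so the original rule could stay keyed on pinggw if the primitive sets that name explicitly:

primitive pinggw ocf:pacemaker:ping \
    params name="pinggw" host_list="192.168.101.1" multiplier="100" \
    op start interval="0" timeout="90" \
    op stop interval="0" timeout="100"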
Thanks in advance,
Gianluca

From the messages of the server running nfs-group at that moment:
May 10 15:18:27 ha1 cibadmin: [29478]: info: Invoked: cibadmin -Ql
May 10 15:18:27 ha1 cibadmin: [29479]: info: Invoked: cibadmin -Ql
May 10 15:18:28 ha1 crm_shadow: [29536]: info: Invoked: crm_shadow -c
__crmshell.29455
May 10 15:18:28 ha1 cibadmin: [29537]: info: Invoked: cibadmin -p -U
May 10 15:18:28 ha1 crm_shadow: [29539]: info: Invoked: crm_shadow -C
__crmshell.29455 --force
May 10 15:18:28 ha1 cib: [8470]: info: cib_replace_notify: Replaced:
0.267.14 -> 0.269.1 from <null>
May 10 15:18:28 ha1 cib: [8470]: info: log_data_element: cib:diff: - <cib
epoch="267" num_updates="14" admin_epoch="0" />
May 10 15:18:28 ha1 cib: [8470]: info: log_data_element: cib:diff: + <cib
epoch="269" num_updates="1" admin_epoch="0" >
May 10 15:18:28 ha1 cib: [8470]: info: log_data_element: cib:diff: +
<configuration >
May 10 15:18:28 ha1 cib: [8470]: info: log_data_element: cib:diff: +
<constraints >
May 10 15:18:28 ha1 cib: [8470]: info: log_data_element: cib:diff: +
<rsc_location id="nfs-group-with-pinggw" rsc="nfs-group"
__crm_diff_marker__="added:top" >
May 10 15:18:28 ha1 cib: [8470]: info: log_data_element: cib:diff: +
<rule boolean-op="or" id="nfs-group-with-pinggw-rule" score="-INFINITY" >
May 10 15:18:28 ha1 cib: [8470]: info: log_data_element: cib:diff: +
  <expression attribute="pinggw" id="nfs-group-with-pinggw-expression"
operation="not_defined" />
May 10 15:18:28 ha1 cib: [8470]: info: log_data_element: cib:diff: +
  <expression attribute="pinggw" id="nfs-group-with-pinggw-expression-0"
operation="lte" value="0" />
May 10 15:18:28 ha1 cib: [8470]: info: log_data_element: cib:diff: +
</rule>
May 10 15:18:28 ha1 cib: [8470]: info: log_data_element: cib:diff: +
</rsc_location>
May 10 15:18:28 ha1 cib: [8470]: info: log_data_element: cib:diff: +
</constraints>
May 10 15:18:28 ha1 crmd: [8474]: info: abort_transition_graph:
need_abort:59 - Triggered transition abort (complete=1) : Non-status change
May 10 15:18:28 ha1 attrd: [8472]: info: do_cib_replaced: Sending full
refresh
May 10 15:18:28 ha1 cib: [8470]: info: log_data_element: cib:diff: +
</configuration>
May 10 15:18:28 ha1 crmd: [8474]: info: need_abort: Aborting on change to
epoch
May 10 15:18:28 ha1 attrd: [8472]: info: attrd_trigger_update: Sending flush
op to all hosts for: master-nfsdrbd:0 (10000)
May 10 15:18:28 ha1 cib: [8470]: info: log_data_element: cib:diff: + </cib>
May 10 15:18:28 ha1 crmd: [8474]: info: do_state_transition: State
transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL
origin=abort_transition_graph ]
May 10 15:18:28 ha1 cib: [8470]: info: cib_process_request: Operation
complete: op cib_replace for section 'all' (origin=local/crm_shadow/2,
version=0.269.1): ok (rc=0)
May 10 15:18:28 ha1 crmd: [8474]: info: do_state_transition: All 2 cluster
nodes are eligible to run resources.
May 10 15:18:28 ha1 cib: [8470]: info: cib_process_request: Operation
complete: op cib_modify for section nodes (origin=local/crmd/203,
version=0.269.1): ok (rc=0)
May 10 15:18:28 ha1 crmd: [8474]: info: do_pe_invoke: Query 205: Requesting
the current CIB: S_POLICY_ENGINE
May 10 15:18:28 ha1 attrd: [8472]: info: attrd_trigger_update: Sending flush
op to all hosts for: probe_complete (true)
May 10 15:18:28 ha1 cib: [29541]: info: write_cib_contents: Archived
previous version as /var/lib/heartbeat/crm/cib-47.raw
May 10 15:18:28 ha1 crmd: [8474]: info: do_state_transition: State
transition S_POLICY_ENGINE -> S_ELECTION [ input=I_ELECTION
cause=C_FSA_INTERNAL origin=do_cib_replaced ]
May 10 15:18:28 ha1 attrd: [8472]: info: attrd_trigger_update: Sending flush
op to all hosts for: terminate (<null>)
May 10 15:18:28 ha1 cib: [29541]: info: write_cib_contents: Wrote version
0.269.0 of the CIB to disk (digest: 8f92c20ff8f96cde0fa0c75cd3207caa)
May 10 15:18:28 ha1 crmd: [8474]: info: update_dc: Unset DC ha1
May 10 15:18:28 ha1 attrd: [8472]: info: attrd_trigger_update: Sending flush
op to all hosts for: master-nfsdrbd:1 (<null>)
May 10 15:18:28 ha1 cib: [29541]: info: retrieveCib: Reading cluster
configuration from: /var/lib/heartbeat/crm/cib.FPnpLz (digest:
/var/lib/heartbeat/crm/cib.EsRWbp)
May 10 15:18:28 ha1 crmd: [8474]: info: do_state_transition: State
transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC
cause=C_FSA_INTERNAL origin=do_election_check ]
May 10 15:18:28 ha1 attrd: [8472]: info: attrd_trigger_update: Sending flush
op to all hosts for: shutdown (<null>)
May 10 15:18:28 ha1 crmd: [8474]: info: do_dc_takeover: Taking over DC
status for this partition
May 10 15:18:28 ha1 attrd: [8472]: info: attrd_trigger_update: Sending flush
op to all hosts for: pingd (100)
May 10 15:18:28 ha1 cib: [8470]: info: cib_process_readwrite: We are now in
R/O mode
May 10 15:18:28 ha1 cib: [8470]: info: cib_process_request: Operation
complete: op cib_slave_all for section 'all' (origin=local/crmd/206,
version=0.269.1): ok (rc=0)
May 10 15:18:28 ha1 cib: [8470]: info: cib_process_readwrite: We are now in
R/W mode
May 10 15:18:28 ha1 cib: [8470]: info: cib_process_request: Operation
complete: op cib_master for section 'all' (origin=local/crmd/207,
version=0.269.1): ok (rc=0)
May 10 15:18:28 ha1 cib: [8470]: info: cib_process_request: Operation
complete: op cib_modify for section cib (origin=local/crmd/208,
version=0.269.1): ok (rc=0)
May 10 15:18:28 ha1 cib: [8470]: info: cib_process_request: Operation
complete: op cib_modify for section crm_config (origin=local/crmd/210,
version=0.269.1): ok (rc=0)
May 10 15:18:28 ha1 cib: [8470]: info: cib_process_request: Operation
complete: op cib_modify for section crm_config (origin=local/crmd/212,
version=0.269.1): ok (rc=0)
May 10 15:18:28 ha1 crmd: [8474]: info: do_dc_join_offer_all: join-6:
Waiting on 2 outstanding join acks
May 10 15:18:28 ha1 crmd: [8474]: info: ais_dispatch: Membership 180: quorum
retained
May 10 15:18:28 ha1 crmd: [8474]: info: crm_ais_dispatch: Setting expected
votes to 2
May 10 15:18:28 ha1 cib: [8470]: info: cib_process_request: Operation
complete: op cib_modify for section crm_config (origin=local/crmd/215,
version=0.269.1): ok (rc=0)
May 10 15:18:28 ha1 crmd: [8474]: info: config_query_callback: Checking for
expired actions every 900000ms
May 10 15:18:28 ha1 crmd: [8474]: info: config_query_callback: Sending
expected-votes=2 to corosync
May 10 15:18:28 ha1 crmd: [8474]: info: update_dc: Set DC to ha1 (3.0.1)
May 10 15:18:28 ha1 crmd: [8474]: info: ais_dispatch: Membership 180: quorum
retained
May 10 15:18:28 ha1 crm_shadow: [29542]: info: Invoked: crm_shadow -D
__crmshell.29455 --force
May 10 15:18:28 ha1 crmd: [8474]: info: crm_ais_dispatch: Setting expected
votes to 2
May 10 15:18:28 ha1 cib: [8470]: info: cib_process_request: Operation
complete: op cib_modify for section crm_config (origin=local/crmd/218,
version=0.269.1): ok (rc=0)
May 10 15:18:28 ha1 crmd: [8474]: info: do_state_transition: State
transition S_INTEGRATION -> S_FINALIZE_JOIN [ input=I_INTEGRATED
cause=C_FSA_INTERNAL origin=check_join_state ]
May 10 15:18:28 ha1 crmd: [8474]: info: do_state_transition: All 2 cluster
nodes responded to the join offer.
May 10 15:18:28 ha1 crmd: [8474]: info: do_dc_join_finalize: join-6: Syncing
the CIB from ha1 to the rest of the cluster
May 10 15:18:28 ha1 cib: [8470]: info: cib_process_request: Operation
complete: op cib_sync for section 'all' (origin=local/crmd/219,
version=0.269.1): ok (rc=0)
May 10 15:18:28 ha1 cib: [8470]: info: cib_process_request: Operation
complete: op cib_modify for section nodes (origin=local/crmd/220,
version=0.269.1): ok (rc=0)
May 10 15:18:29 ha1 crmd: [8474]: info: do_dc_join_ack: join-6: Updating
node state to member for ha2
May 10 15:18:29 ha1 cib: [8470]: info: cib_process_request: Operation
complete: op cib_modify for section nodes (origin=local/crmd/221,
version=0.269.1): ok (rc=0)
May 10 15:18:29 ha1 crmd: [8474]: info: do_dc_join_ack: join-6: Updating
node state to member for ha1
May 10 15:18:29 ha1 cib: [8470]: info: cib_process_request: Operation
complete: op cib_delete for section //node_state[@uname='ha2']/lrm
(origin=local/crmd/222, version=0.269.2): ok (rc=0)
May 10 15:18:29 ha1 crmd: [8474]: info: erase_xpath_callback: Deletion of
"//node_state[@uname='ha2']/lrm": ok (rc=0)
May 10 15:18:29 ha1 cib: [8470]: info: cib_process_request: Operation
complete: op cib_delete for section //node_state[@uname='ha1']/lrm
(origin=local/crmd/224, version=0.269.4): ok (rc=0)
May 10 15:18:29 ha1 crmd: [8474]: info: do_state_transition: State
transition S_FINALIZE_JOIN -> S_POLICY_ENGINE [ input=I_FINALIZED
cause=C_FSA_INTERNAL origin=check_join_state ]
May 10 15:18:29 ha1 crmd: [8474]: info: do_state_transition: All 2 cluster
nodes are eligible to run resources.
May 10 15:18:29 ha1 cib: [8470]: info: cib_process_request: Operation
complete: op cib_modify for section nodes (origin=local/crmd/226,
version=0.269.5): ok (rc=0)
May 10 15:18:29 ha1 crmd: [8474]: info: do_dc_join_final: Ensuring DC,
quorum and node attributes are up-to-date
May 10 15:18:29 ha1 crmd: [8474]: info: crm_update_quorum: Updating quorum
status to true (call=228)
May 10 15:18:29 ha1 attrd: [8472]: info: attrd_local_callback: Sending full
refresh (origin=crmd)
May 10 15:18:29 ha1 cib: [8470]: info: cib_process_request: Operation
complete: op cib_modify for section cib (origin=local/crmd/228,
version=0.269.5): ok (rc=0)
May 10 15:18:29 ha1 crmd: [8474]: info: abort_transition_graph:
do_te_invoke:191 - Triggered transition abort (complete=1) : Peer Cancelled
May 10 15:18:29 ha1 attrd: [8472]: info: attrd_trigger_update: Sending flush
op to all hosts for: master-nfsdrbd:0 (10000)
May 10 15:18:29 ha1 crmd: [8474]: info: do_pe_invoke: Query 229: Requesting
the current CIB: S_POLICY_ENGINE
May 10 15:18:29 ha1 attrd: [8472]: info: attrd_trigger_update: Sending flush
op to all hosts for: probe_complete (true)
May 10 15:18:29 ha1 crmd: [8474]: info: erase_xpath_callback: Deletion of
"//node_state[@uname='ha1']/lrm": ok (rc=0)
May 10 15:18:29 ha1 attrd: [8472]: info: attrd_trigger_update: Sending flush
op to all hosts for: terminate (<null>)
May 10 15:18:29 ha1 crmd: [8474]: info: te_update_diff: Detected LRM refresh
- 8 resources updated: Skipping all resource events
May 10 15:18:29 ha1 attrd: [8472]: info: attrd_trigger_update: Sending flush
op to all hosts for: master-nfsdrbd:1 (<null>)
May 10 15:18:29 ha1 crmd: [8474]: info: abort_transition_graph:
te_update_diff:227 - Triggered transition abort (complete=1, tag=diff,
id=(null), magic=NA, cib=0.269.5) : LRM Refresh
May 10 15:18:29 ha1 attrd: [8472]: info: attrd_trigger_update: Sending flush
op to all hosts for: shutdown (<null>)
May 10 15:18:29 ha1 crmd: [8474]: info: do_pe_invoke_callback: Invoking the
PE: query=229, ref=pe_calc-dc-1273497509-143, seq=180, quorate=1
May 10 15:18:29 ha1 pengine: [8473]: notice: unpack_config: On loss of CCM
Quorum: Ignore
May 10 15:18:29 ha1 attrd: [8472]: info: attrd_trigger_update: Sending flush
op to all hosts for: pingd (100)
May 10 15:18:29 ha1 crmd: [8474]: info: do_pe_invoke: Query 230: Requesting
the current CIB: S_POLICY_ENGINE
May 10 15:18:29 ha1 pengine: [8473]: info: unpack_config: Node scores: 'red'
= -INFINITY, 'yellow' = 0, 'green' = 0
May 10 15:18:29 ha1 crmd: [8474]: info: do_pe_invoke_callback: Invoking the
PE: query=230, ref=pe_calc-dc-1273497509-144, seq=180, quorate=1
May 10 15:18:29 ha1 pengine: [8473]: info: determine_online_status: Node ha1
is online
May 10 15:18:29 ha1 pengine: [8473]: notice: unpack_rsc_op: Operation
nfsdrbd:0_monitor_0 found resource nfsdrbd:0 active in master mode on ha1
May 10 15:18:29 ha1 pengine: [8473]: info: determine_online_status: Node ha2
is online
May 10 15:18:29 ha1 pengine: [8473]: notice: native_print: SitoWeb
 (ocf::heartbeat:apache):        Started ha1
May 10 15:18:29 ha1 pengine: [8473]: notice: clone_print:  Master/Slave Set:
NfsData
May 10 15:18:29 ha1 pengine: [8473]: notice: short_print:      Masters: [
ha1 ]
May 10 15:18:29 ha1 pengine: [8473]: notice: short_print:      Slaves: [ ha2
]
May 10 15:18:29 ha1 pengine: [8473]: notice: group_print:  Resource Group:
nfs-group
May 10 15:18:29 ha1 pengine: [8473]: notice: native_print:      ClusterIP
    (ocf::heartbeat:IPaddr2):       Started ha1
May 10 15:18:29 ha1 pengine: [8473]: notice: native_print:      lv_drbd0
   (ocf::heartbeat:LVM):   Started ha1
May 10 15:18:29 ha1 pengine: [8473]: notice: native_print:      NfsFS
(ocf::heartbeat:Filesystem):    Started ha1
May 10 15:18:29 ha1 pengine: [8473]: notice: native_print:      nfssrv
 (ocf::heartbeat:nfsserver):     Started ha1
May 10 15:18:29 ha1 cibadmin: [29543]: info: Invoked: cibadmin -Ql
May 10 15:18:29 ha1 pengine: [8473]: notice: native_print: nfsclient
 (ocf::heartbeat:Filesystem):    Started ha2
May 10 15:18:29 ha1 pengine: [8473]: notice: clone_print:  Clone Set:
cl-pinggw
May 10 15:18:29 ha1 pengine: [8473]: notice: short_print:      Started: [
ha1 ha2 ]
May 10 15:18:29 ha1 pengine: [8473]: info: native_merge_weights: NfsData:
Rolling back scores from ClusterIP
May 10 15:18:29 ha1 pengine: [8473]: info: native_merge_weights: NfsData:
Rolling back scores from ClusterIP
May 10 15:18:29 ha1 pengine: [8473]: info: master_color: Promoting nfsdrbd:0
(Master ha1)
May 10 15:18:29 ha1 pengine: [8473]: info: master_color: NfsData: Promoted 1
instances of a possible 1 to master
May 10 15:18:29 ha1 pengine: [8473]: info: native_merge_weights: nfsclient:
Rolling back scores from ClusterIP
May 10 15:18:29 ha1 pengine: [8473]: info: native_merge_weights: nfsclient:
Rolling back scores from lv_drbd0
May 10 15:18:29 ha1 pengine: [8473]: info: native_merge_weights: nfsclient:
Rolling back scores from NfsFS
May 10 15:18:29 ha1 pengine: [8473]: info: native_merge_weights: nfsclient:
Rolling back scores from ClusterIP
May 10 15:18:29 ha1 pengine: [8473]: info: native_merge_weights: ClusterIP:
Rolling back scores from lv_drbd0
May 10 15:18:29 ha1 pengine: [8473]: info: native_merge_weights: ClusterIP:
Rolling back scores from SitoWeb
May 10 15:18:29 ha1 pengine: [8473]: WARN: native_color: Resource ClusterIP
cannot run anywhere
May 10 15:18:29 ha1 pengine: [8473]: info: native_merge_weights: lv_drbd0:
Rolling back scores from NfsFS
May 10 15:18:29 ha1 pengine: [8473]: WARN: native_color: Resource lv_drbd0
cannot run anywhere
May 10 15:18:29 ha1 pengine: [8473]: info: native_merge_weights: NfsFS:
Rolling back scores from nfssrv
May 10 15:18:29 ha1 pengine: [8473]: WARN: native_color: Resource NfsFS
cannot run anywhere
May 10 15:18:29 ha1 pengine: [8473]: WARN: native_color: Resource nfssrv
cannot run anywhere
May 10 15:18:29 ha1 pengine: [8473]: WARN: native_color: Resource SitoWeb
cannot run anywhere
May 10 15:18:29 ha1 pengine: [8473]: info: master_color: Promoting nfsdrbd:0
(Master ha1)
May 10 15:18:29 ha1 pengine: [8473]: info: master_color: NfsData: Promoted 1
instances of a possible 1 to master
May 10 15:18:29 ha1 pengine: [8473]: notice: LogActions: Stop resource
SitoWeb  (ha1)
May 10 15:18:29 ha1 pengine: [8473]: notice: LogActions: Leave resource
nfsdrbd:0       (Master ha1)
May 10 15:18:29 ha1 pengine: [8473]: notice: LogActions: Leave resource
nfsdrbd:1       (Slave ha2)
May 10 15:18:29 ha1 pengine: [8473]: notice: LogActions: Stop resource
ClusterIP        (ha1)
May 10 15:18:29 ha1 pengine: [8473]: notice: LogActions: Stop resource
lv_drbd0 (ha1)
May 10 15:18:29 ha1 pengine: [8473]: notice: LogActions: Stop resource NfsFS
   (ha1)
May 10 15:18:29 ha1 pengine: [8473]: notice: LogActions: Stop resource
nfssrv   (ha1)
May 10 15:18:29 ha1 pengine: [8473]: notice: LogActions: Stop resource
nfsclient        (Started ha2)
May 10 15:18:29 ha1 pengine: [8473]: notice: LogActions: Leave resource
pinggw:0        (Started ha1)
May 10 15:18:29 ha1 pengine: [8473]: notice: LogActions: Leave resource
pinggw:1        (Started ha2)
