<div dir="ltr"><div><div>No it is not a typo... I have tried backport but the version is still 1.2.0.<br></div></div><div><br></div><div>I think the easiest way is to upgrade my system.<br><br></div><div>Thank you<br></div></div><div class="gmail_extra"><br><div class="gmail_quote">2017-01-17 9:27 GMT+01:00 Jan Friesse <span dir="ltr"><<a href="mailto:jfriesse@redhat.com" target="_blank">jfriesse@redhat.com</a>></span>:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class=""><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
>> Hi all,
>>
>> I have a two-node cluster with the following details:
>> - Ubuntu 10.04.4 LTS (I know it's old…)
>> - corosync 1.2.0
>
> Isn't this a typo? I mean, 1.2.0 is ... ancient and full of already-fixed bugs.
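>
> (As a quick sanity check -- a minimal way to confirm what is actually
> installed and running; the -v flag has been in corosync for a long time,
> though I have not verified it against a build this old:)
>
>     corosync -v        # version of the corosync binary on the node
>     dpkg -l corosync   # version of the installed Ubuntu package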
>
>> - pacemaker 1.0.8+hg15494-2ubuntu2
>>
>> The following configuration is applied to corosync:
>>
>> totem {
>>     version: 2
>>     token: 3000
>>     token_retransmits_before_loss_const: 10
>>     join: 60
>>     consensus: 5000
>>     vsftype: none
>>     max_messages: 20
>>     clear_node_high_bit: yes
>>     secauth: off
>>     threads: 0
>>     rrp_mode: none
>>     cluster_name: firewall-ha
>>
>>     interface {
>>         ringnumber: 0
>>         bindnetaddr: 192.168.211.1
>>         broadcast: yes
>>         mcastport: 5405
>>         ttl: 1
>>     }
>>
>>     transport: udpu
>> }
>>
>> nodelist {
>>     node {
>>         ring0_addr: 192.168.211.1
>>         name: net1
>>         nodeid: 1
>>     }
>>     node {
>>         ring0_addr: 192.168.211.2
>>         name: net2
>>         nodeid: 2
>>     }
>> }
>>
>> quorum {
>>     provider: corosync_votequorum
>>     two_node: 1
>> }
>>
>> amf {
>>     mode: disabled
>> }
>>
>> service {
>>     ver: 0
>>     name: pacemaker
>> }
>>
>> aisexec {
>>     user: root
>>     group: root
>> }
>>
>> logging {
>>     fileline: off
>>     to_stderr: yes
>>     to_logfile: yes
>>     to_syslog: yes
>>     logfile: /var/log/corosync/corosync.log
>>     syslog_facility: daemon
>>     debug: off
>>     timestamp: on
>>     logger_subsys {
>>         subsys: AMF
>>         debug: off
>>         tags: enter|leave|trace1|trace2|trace3|trace4|trace6
>>     }
>> }
>
> Actually, the config file most likely doesn't work the way you expect. For
> example, nodelist is a 2.x concept and is not supported by 1.x. The same
> applies to corosync_votequorum. The udpu transport is not implemented in
> 1.2.0 (it was added in 1.3.0).
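>
> (For illustration, an untested sketch of a 1.x-style config: multicast
> transport instead of udpu, and no nodelist or quorum sections. The
> multicast address is only an example, and bindnetaddr should be the
> network address, not a host address:)
>
>     totem {
>         version: 2
>         secauth: off
>         threads: 0
>         interface {
>             ringnumber: 0
>             bindnetaddr: 192.168.211.0   # network address, not a host address
>             mcastaddr: 226.94.1.1        # example multicast group
>             mcastport: 5405
>         }
>     }
>
>     service {
>         ver: 0
>         name: pacemaker
>     }
>
> On 1.x, membership comes from the totem protocol itself, and two-node
> quorum behavior is handled on the pacemaker side (your logs already show
> "On loss of CCM Quorum: Ignore"), not by corosync_votequorum.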
>
> I would recommend using some backports repo and upgrading.
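>
> (A quick way to check which corosync versions apt can actually see --
> assuming a backports line is already present in sources.list:)
>
>     apt-cache policy corosync
>
> If the candidate version is still 1.2.0, that repository simply doesn't
> carry a newer build for lucid.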
>
> Regards,
> Honza
>
>> Here is the output of crm status after starting corosync on both nodes:
>> ============
>> Last updated: Mon Jan 16 21:24:18 2017
>> Stack: openais
>> Current DC: net1 - partition with quorum
>> Version: 1.0.8-042548a451fce8400660f6031f4da6f0223dd5dd
>> 2 Nodes configured, 2 expected votes
>> 0 Resources configured.
>> ============
>>
>> Online: [ net1 net2 ]
>>
>> Now if I kill net2 with:
>> killall -9 corosync
>>
>> The primary host doesn't "see" anything; the cluster still appears to be online on net1:
>> ============
>> Last updated: Mon Jan 16 21:25:25 2017
>> Stack: openais
>> Current DC: net1 - partition with quorum
>> Version: 1.0.8-042548a451fce8400660f6031f4da6f0223dd5dd
>> 2 Nodes configured, 2 expected votes
>> 0 Resources configured.
>> ============
>>
>> Online: [ net1 net2 ]
>>
>> I just see this line in the logs:
>> Jan 16 21:35:21 corosync [TOTEM ] A processor failed, forming new configuration.
>>
>> And then, when I start corosync on net2, the cluster stays offline:
>> ============
>> Last updated: Mon Jan 16 21:38:13 2017
>> Stack: openais
>> Current DC: NONE
>> 2 Nodes configured, 2 expected votes
>> 0 Resources configured.
>> ============
>>
>> OFFLINE: [ net1 net2 ]
>>
>> I have to kill corosync on both nodes and start it on both nodes together to get back online.
>>
>> When the two nodes are up, I can see traffic with tcpdump:
>> 21:41:49.653780 IP 192.168.211.1.5404 > 255.255.255.255.5405: UDP, length 82
>> 21:41:49.678846 IP 192.168.211.1.5404 > 192.168.211.2.5405: UDP, length 70
>> 21:41:49.680339 IP 192.168.211.2.5404 > 192.168.211.1.5405: UDP, length 70
>> 21:41:49.889424 IP 192.168.211.1.5404 > 255.255.255.255.5405: UDP, length 82
>> 21:41:49.910492 IP 192.168.211.1.5404 > 192.168.211.2.5405: UDP, length 70
>> 21:41:49.911990 IP 192.168.211.2.5404 > 192.168.211.1.5405: UDP, length 70
>>
>> Here is the ring status on net1:
>> corosync-cfgtool -s
>> Printing ring status.
>> Local node ID 30648512
>> RING ID 0
>>         id      = 192.168.211.1
>>         status  = ring 0 active with no faults
>>
>> And on net2:
>> Printing ring status.
>> Local node ID 47425728
>> RING ID 0
>>         id      = 192.168.211.2
>>         status  = ring 0 active with no faults
>>
>> Here is the log on net1 when I start the cluster on both nodes:
>> Jan 16 21:41:52 net1 crmd: [15288]: info: crm_timer_popped: Election Trigger (I_DC_TIMEOUT) just popped!
>> Jan 16 21:41:52 net1 crmd: [15288]: WARN: do_log: FSA: Input I_DC_TIMEOUT from crm_timer_popped() received in state S_PENDING
>> Jan 16 21:41:52 net1 crmd: [15288]: info: do_state_transition: State transition S_PENDING -> S_ELECTION [ input=I_DC_TIMEOUT cause=C_TIMER_POPPED origin=crm_timer_popped ]
>> Jan 16 21:41:52 net1 crmd: [15288]: info: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_FSA_INTERNAL origin=do_election_check ]
>> Jan 16 21:41:52 net1 crmd: [15288]: info: do_te_control: Registering TE UUID: 53d7e000-3468-4548-b9f9-5bdb9ac9bfc7
>> Jan 16 21:41:52 net1 crmd: [15288]: WARN: cib_client_add_notify_callback: Callback already present
>> Jan 16 21:41:52 net1 crmd: [15288]: info: set_graph_functions: Setting custom graph functions
>> Jan 16 21:41:52 net1 crmd: [15288]: info: unpack_graph: Unpacked transition -1: 0 actions in 0 synapses
>> Jan 16 21:41:52 net1 crmd: [15288]: info: do_dc_takeover: Taking over DC status for this partition
>> Jan 16 21:41:52 net1 cib: [15284]: info: cib_process_readwrite: We are now in R/W mode
>> Jan 16 21:41:52 net1 cib: [15284]: info: cib_process_request: Operation complete: op cib_master for section 'all' (origin=local/crmd/6, version=0.51.0): ok (rc=0)
>> Jan 16 21:41:52 net1 cib: [15284]: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/7, version=0.51.0): ok (rc=0)
>> Jan 16 21:41:53 net1 cib: [15284]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/9, version=0.51.0): ok (rc=0)
>> Jan 16 21:41:53 net1 crmd: [15288]: info: join_make_offer: Making join offers based on membership 36
>> Jan 16 21:41:53 net1 crmd: [15288]: info: do_dc_join_offer_all: join-1: Waiting on 2 outstanding join acks
>> Jan 16 21:41:53 net1 crmd: [15288]: info: ais_dispatch: Membership 36: quorum retained
>> Jan 16 21:41:53 net1 cib: [15284]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/11, version=0.51.0): ok (rc=0)
>> Jan 16 21:41:53 net1 crmd: [15288]: info: crm_ais_dispatch: Setting expected votes to 2
>> Jan 16 21:41:53 net1 crmd: [15288]: info: config_query_callback: Checking for expired actions every 900000ms
>> Jan 16 21:41:53 net1 crmd: [15288]: info: config_query_callback: Sending expected-votes=2 to corosync
>> Jan 16 21:41:53 net1 crmd: [15288]: info: update_dc: Set DC to net1 (3.0.1)
>> Jan 16 21:41:53 net1 crmd: [15288]: info: ais_dispatch: Membership 36: quorum retained
>> Jan 16 21:41:53 net1 cib: [15284]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/14, version=0.51.0): ok (rc=0)
>> Jan 16 21:41:53 net1 crmd: [15288]: info: crm_ais_dispatch: Setting expected votes to 2
>> Jan 16 21:41:53 net1 crmd: [15288]: info: te_connect_stonith: Attempting connection to fencing daemon...
>> Jan 16 21:41:53 net1 cib: [15284]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/17, version=0.51.0): ok (rc=0)
>> Jan 16 21:41:54 net1 crmd: [15288]: info: te_connect_stonith: Connected
>> Jan 16 21:41:54 net1 crmd: [15288]: info: do_state_transition: State transition S_INTEGRATION -> S_FINALIZE_JOIN [ input=I_INTEGRATED cause=C_FSA_INTERNAL origin=check_join_state ]
>> Jan 16 21:41:54 net1 crmd: [15288]: info: do_state_transition: All 2 cluster nodes responded to the join offer.
>> Jan 16 21:41:54 net1 crmd: [15288]: info: do_dc_join_finalize: join-1: Syncing the CIB from net1 to the rest of the cluster
>> Jan 16 21:41:54 net1 cib: [15284]: info: cib_process_request: Operation complete: op cib_sync for section 'all' (origin=local/crmd/18, version=0.51.0): ok (rc=0)
>> Jan 16 21:41:54 net1 cib: [15284]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/19, version=0.51.0): ok (rc=0)
>> Jan 16 21:41:54 net1 cib: [15284]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/20, version=0.51.0): ok (rc=0)
>> Jan 16 21:41:54 net1 crmd: [15288]: info: update_attrd: Connecting to attrd...
>> Jan 16 21:41:54 net1 attrd: [15286]: info: find_hash_entry: Creating hash entry for terminate
>> Jan 16 21:41:54 net1 attrd: [15286]: info: find_hash_entry: Creating hash entry for shutdown
>> Jan 16 21:41:54 net1 cib: [15284]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='net1']/transient_attributes (origin=local/crmd/21, version=0.51.0): ok (rc=0)
>> Jan 16 21:41:54 net1 crmd: [15288]: info: erase_xpath_callback: Deletion of "//node_state[@uname='net1']/transient_attributes": ok (rc=0)
>> Jan 16 21:41:54 net1 attrd: [15286]: info: crm_new_peer: Node net2 now has id: 47425728
>> Jan 16 21:41:54 net1 crmd: [15288]: info: do_dc_join_ack: join-1: Updating node state to member for net2
>> Jan 16 21:41:54 net1 attrd: [15286]: info: crm_new_peer: Node 47425728 is now known as net2
>> Jan 16 21:41:54 net1 crmd: [15288]: info: do_dc_join_ack: join-1: Updating node state to member for net1
>> Jan 16 21:41:54 net1 cib: [15284]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='net2']/transient_attributes (origin=net2/crmd/7, version=0.51.0): ok (rc=0)
>> Jan 16 21:41:54 net1 cib: [15284]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='net2']/lrm (origin=local/crmd/22, version=0.51.0): ok (rc=0)
>> Jan 16 21:41:54 net1 crmd: [15288]: info: erase_xpath_callback: Deletion of "//node_state[@uname='net2']/lrm": ok (rc=0)
>> Jan 16 21:41:54 net1 crmd: [15288]: info: do_state_transition: State transition S_FINALIZE_JOIN -> S_POLICY_ENGINE [ input=I_FINALIZED cause=C_FSA_INTERNAL origin=check_join_state ]
>> Jan 16 21:41:54 net1 crmd: [15288]: info: do_state_transition: All 2 cluster nodes are eligible to run resources.
>> Jan 16 21:41:54 net1 crmd: [15288]: info: do_dc_join_final: Ensuring DC, quorum and node attributes are up-to-date
>> Jan 16 21:41:54 net1 crmd: [15288]: info: crm_update_quorum: Updating quorum status to true (call=28)
>> Jan 16 21:41:54 net1 crmd: [15288]: info: abort_transition_graph: do_te_invoke:191 - Triggered transition abort (complete=1) : Peer Cancelled
>> Jan 16 21:41:54 net1 crmd: [15288]: info: do_pe_invoke: Query 29: Requesting the current CIB: S_POLICY_ENGINE
>> Jan 16 21:41:54 net1 cib: [15284]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='net1']/lrm (origin=local/crmd/24, version=0.51.1): ok (rc=0)
>> Jan 16 21:41:54 net1 crmd: [15288]: info: erase_xpath_callback: Deletion of "//node_state[@uname='net1']/lrm": ok (rc=0)
>> Jan 16 21:41:54 net1 cib: [15284]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/26, version=0.51.2): ok (rc=0)
>> Jan 16 21:41:54 net1 cib: [15284]: info: log_data_element: cib:diff: - <cib admin_epoch="0" epoch="51" num_updates="2" />
>> Jan 16 21:41:54 net1 cib: [15284]: info: log_data_element: cib:diff: + <cib dc-uuid="net1" admin_epoch="0" epoch="52" num_updates="1" />
>> Jan 16 21:41:54 net1 cib: [15284]: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/28, version=0.52.1): ok (rc=0)
>> Jan 16 21:41:54 net1 crmd: [15288]: info: abort_transition_graph: need_abort:59 - Triggered transition abort (complete=1) : Non-status change
>> Jan 16 21:41:54 net1 crmd: [15288]: info: need_abort: Aborting on change to admin_epoch
>> Jan 16 21:41:54 net1 crmd: [15288]: info: do_pe_invoke: Query 30: Requesting the current CIB: S_POLICY_ENGINE
>> Jan 16 21:41:54 net1 crmd: [15288]: info: do_pe_invoke_callback: Invoking the PE: query=30, ref=pe_calc-dc-1484599314-11, seq=36, quorate=1
>> Jan 16 21:41:54 net1 attrd: [15286]: info: attrd_local_callback: Sending full refresh (origin=crmd)
>> Jan 16 21:41:54 net1 attrd: [15286]: info: attrd_trigger_update: Sending flush op to all hosts for: shutdown (<null>)
>> Jan 16 21:41:54 net1 pengine: [15287]: notice: unpack_config: On loss of CCM Quorum: Ignore
>> Jan 16 21:41:54 net1 attrd: [15286]: info: attrd_trigger_update: Sending flush op to all hosts for: terminate (<null>)
>> Jan 16 21:41:54 net1 pengine: [15287]: info: unpack_config: Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
>> Jan 16 21:41:54 net1 pengine: [15287]: info: determine_online_status: Node net2 is online
>> Jan 16 21:41:54 net1 pengine: [15287]: info: determine_online_status: Node net1 is online
>> Jan 16 21:41:54 net1 crmd: [15288]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
>> Jan 16 21:41:54 net1 crmd: [15288]: info: unpack_graph: Unpacked transition 0: 2 actions in 2 synapses
>> Jan 16 21:41:54 net1 crmd: [15288]: info: do_te_invoke: Processing graph 0 (ref=pe_calc-dc-1484599314-11) derived from /var/lib/pengine/pe-input-713.bz2
>> Jan 16 21:41:54 net1 crmd: [15288]: info: te_rsc_command: Initiating action 2: probe_complete probe_complete on net1 (local) - no waiting
>> Jan 16 21:41:54 net1 crmd: [15288]: info: te_rsc_command: Initiating action 3: probe_complete probe_complete on net2 - no waiting
>> Jan 16 21:41:54 net1 crmd: [15288]: info: run_graph: ====================================================
>> Jan 16 21:41:54 net1 crmd: [15288]: notice: run_graph: Transition 0 (Complete=2, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pengine/pe-input-713.bz2): Complete
>> Jan 16 21:41:54 net1 crmd: [15288]: info: te_graph_trigger: Transition 0 is now complete
>> Jan 16 21:41:54 net1 crmd: [15288]: info: notify_crmd: Transition 0 status: done - <null>
>> Jan 16 21:41:54 net1 crmd: [15288]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
>> Jan 16 21:41:54 net1 crmd: [15288]: info: do_state_transition: Starting PEngine Recheck Timer
>> Jan 16 21:41:54 net1 attrd: [15286]: info: find_hash_entry: Creating hash entry for probe_complete
>> Jan 16 21:41:54 net1 attrd: [15286]: info: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
>> Jan 16 21:41:54 net1 attrd: [15286]: info: attrd_perform_update: Sent update 10: probe_complete=true
>> Jan 16 21:41:54 net1 crmd: [15288]: info: abort_transition_graph: te_update_diff:146 - Triggered transition abort (complete=1, tag=transient_attributes, id=net1, magic=NA, cib=0.52.2) : Transient attribute: update
>> Jan 16 21:41:54 net1 crmd: [15288]: info: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
>> Jan 16 21:41:54 net1 crmd: [15288]: info: do_state_transition: All 2 cluster nodes are eligible to run resources.
>> Jan 16 21:41:54 net1 crmd: [15288]: info: do_pe_invoke: Query 31: Requesting the current CIB: S_POLICY_ENGINE
>> Jan 16 21:41:54 net1 crmd: [15288]: info: do_pe_invoke_callback: Invoking the PE: query=31, ref=pe_calc-dc-1484599314-14, seq=36, quorate=1
>> Jan 16 21:41:54 net1 crmd: [15288]: info: abort_transition_graph: te_update_diff:146 - Triggered transition abort (complete=1, tag=transient_attributes, id=net2, magic=NA, cib=0.52.3) : Transient attribute: update
>> Jan 16 21:41:54 net1 crmd: [15288]: info: do_pe_invoke: Query 32: Requesting the current CIB: S_POLICY_ENGINE
>> Jan 16 21:41:54 net1 crmd: [15288]: info: do_pe_invoke_callback: Invoking the PE: query=32, ref=pe_calc-dc-1484599314-15, seq=36, quorate=1
>> Jan 16 21:41:54 net1 pengine: [15287]: info: process_pe_message: Transition 0: PEngine Input stored in: /var/lib/pengine/pe-input-713.bz2
>> Jan 16 21:41:54 net1 pengine: [15287]: notice: unpack_config: On loss of CCM Quorum: Ignore
>> Jan 16 21:41:54 net1 pengine: [15287]: info: unpack_config: Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
>> Jan 16 21:41:54 net1 pengine: [15287]: info: determine_online_status: Node net2 is online
>> Jan 16 21:41:54 net1 pengine: [15287]: info: determine_online_status: Node net1 is online
>> Jan 16 21:41:54 net1 crmd: [15288]: info: handle_response: pe_calc calculation pe_calc-dc-1484599314-14 is obsolete
>> Jan 16 21:41:54 net1 cib: [15311]: info: write_cib_contents: Archived previous version as /var/lib/heartbeat/crm/cib-52.raw
>> Jan 16 21:41:54 net1 pengine: [15287]: info: process_pe_message: Transition 1: PEngine Input stored in: /var/lib/pengine/pe-input-714.bz2
>> Jan 16 21:41:54 net1 pengine: [15287]: notice: unpack_config: On loss of CCM Quorum: Ignore
>> Jan 16 21:41:54 net1 pengine: [15287]: info: unpack_config: Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
>> Jan 16 21:41:54 net1 pengine: [15287]: info: determine_online_status: Node net2 is online
>> Jan 16 21:41:54 net1 pengine: [15287]: info: determine_online_status: Node net1 is online
>> Jan 16 21:41:54 net1 crmd: [15288]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
>> Jan 16 21:41:54 net1 crmd: [15288]: info: unpack_graph: Unpacked transition 2: 0 actions in 0 synapses
>> Jan 16 21:41:54 net1 crmd: [15288]: info: do_te_invoke: Processing graph 2 (ref=pe_calc-dc-1484599314-15) derived from /var/lib/pengine/pe-input-715.bz2
>> Jan 16 21:41:54 net1 crmd: [15288]: info: run_graph: ====================================================
>> Jan 16 21:41:54 net1 crmd: [15288]: notice: run_graph: Transition 2 (Complete=0, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pengine/pe-input-715.bz2): Complete
>> Jan 16 21:41:54 net1 crmd: [15288]: info: te_graph_trigger: Transition 2 is now complete
>> Jan 16 21:41:54 net1 crmd: [15288]: info: notify_crmd: Transition 2 status: done - <null>
>> Jan 16 21:41:54 net1 crmd: [15288]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
>> Jan 16 21:41:54 net1 crmd: [15288]: info: do_state_transition: Starting PEngine Recheck Timer
>> Jan 16 21:41:54 net1 cib: [15311]: info: write_cib_contents: Wrote version 0.52.0 of the CIB to disk (digest: 44f7626e0420b36260b8c67e9e576a7e)
>> Jan 16 21:41:54 net1 pengine: [15287]: info: process_pe_message: Transition 2: PEngine Input stored in: /var/lib/pengine/pe-input-715.bz2
>> Jan 16 21:41:54 net1 cib: [15311]: info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.ZpwuYr (digest: /var/lib/heartbeat/crm/cib.zVG2jb)
>> Jan 16 21:50:50 net1 cib: [15284]: info: cib_stats: Processed 48 operations (2708.00us average, 0% utilization) in the last 10min
>>
>> Any help is appreciated!
>>
>> Thanks

_______________________________________________
Users mailing list: Users@clusterlabs.org
http://lists.clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org