<html><head></head><body>Maybe he's looking for a reason why his stonith is failing. You are basically just repeating to him that his stonith is failing... and he already knows, because it says so about 20 times in the logs he posted. You got too caught up in giving him tutorials on how to post to the mailing list to actually try to help him. If this stonith was failing for you, what would your next move be? <br>
<br>
<br>
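<br>For what it's worth, if this were my cluster the next move would be to prove or disprove the fencing path by hand before anything else. Below is a rough sketch of the checks I'd start with; it assumes the stonith_admin, crm_verify and crm_mon tools that ship with Pacemaker 1.1 on the corosync 1.x plugin stack, and the device name st-fencing and the node names are simply lifted from the logs he posted, so adjust to taste.<br>
<pre>
# Run on the surviving node (lotus-4vm6 in the posted logs).
# Device and node names are taken from those logs; substitute your own.

# 1. Is the stonith device registered, and does it claim it can fence the peer?
stonith_admin --list-registered
stonith_admin --list lotus-4vm5

# 2. Any configuration errors in the live CIB?
crm_verify --live-check -V

# 3. Manually fence the peer and watch the result (this really does reboot it).
stonith_admin --reboot lotus-4vm5

# 4. After the peer comes back, check whether it ever rejoins the membership,
#    since the posted logs show it stuck electing a DC rather than joining.
crm_mon -1
corosync-cfgtool -s
grep -E 'stonith|crmd|pcmk' /var/log/messages | tail -n 50
</pre>
<br>If the manual reboot in step 3 works but the automatic fence during the power-cut test does not, the difference between those two cases is where I'd start digging.<br>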
<br><br><div class="gmail_quote">On April 9, 2014 12:10:12 PM EDT, "Campbell, Gene" <gene.campbell@intel.com> wrote:<blockquote class="gmail_quote" style="margin: 0pt 0pt 0pt 0.8ex; border-left: 1px solid rgb(204, 204, 204); padding-left: 1ex;">
<pre class="k9mail">Thanks for the response. I hope you donĀ¹t mind a couple questions along<br />the way to understanding this issue.<br /><br />We have storage attached to vm5<br />Power is cut to vm5<br />Failover to vm6 happens and storage is made available there<br />vm5 reboots<br /><br />Can you tell Where fencing is happening in this picture? Will keep<br />reading docs, and looking at logs, but anything think you do to help would<br />be much appreciated.<br /><br />Thanks<br />Gene<br /><br /><br /><br />On 4/8/14, 2:29 PM, "Digimer" <lists@alteeve.ca> wrote:<br /><br /><blockquote class="gmail_quote" style="margin: 0pt 0pt 1ex 0.8ex; border-left: 1px solid #729fcf; padding-left: 1ex;">Looks like your fencing (stonith) failed.<br /><br />On 08/04/14 05:25 PM, Campbell, Gene wrote:<br /><blockquote class="gmail_quote" style="margin: 0pt 0pt 1ex 0.8ex; border-left: 1px solid #ad7fa8; padding-left: 1ex;"> Hello fine folks in Pacemaker land. Hopefully you could share
your<br />insight into this little problem for us.<br /><br /> We have an intermittent problem with failover.<br /><br /> two node cluster<br /> first node power is cut<br /> failover begins to second node<br /> first node reboots<br /> crm_mon -1 on the rebooted node is PENDING (never goes to ONLINE)<br /><br /> Example output from vm5<br /> Node lotus-4vm5: pending<br /> Online: [ lotus-4vm6 ]<br /><br /> Example output from vm6<br /> Online: [ lotus-4vm5 lotus-4vm6 ]<br /><br /> Environment<br /> CentOS 6.5 on KVM vms<br /> Pacemaker 1.1.10<br /> Corosync 1.4.1<br /><br /> vm5 /var/log/messages<br /> Apr 8 09:54:07 lotus-4vm5 pacemaker: Starting Pacemaker Cluster Manager<br /> Apr 8 09:54:07 lotus-4vm5 pacemakerd[1783]: notice: main: Starting<br />Pacemaker 1.1.10-14.el6_5.2 (Build: 368c726): generated-manpages<br />agent-manpages ascii-docs publican-docs ncurses libqb-logging libqb-ipc<br />nagios corosync-plugin cman<br /> Apr 8 09:54:07 lotus-4vm5 pacemakerd[1783]:
notice: get_node_name:<br />Defaulting to uname -n for the local classic openais (with plugin) node<br />name<br /> Apr 8 09:54:07 lotus-4vm5 corosync[1364]: [pcmk ] WARN:<br />route_ais_message: Sending message to local.stonith-ng failed: ipc<br />delivery failed (rc=-2)<br /> Apr 8 09:54:07 lotus-4vm5 corosync[1364]: [pcmk ] WARN:<br />route_ais_message: Sending message to local.stonith-ng failed: ipc<br />delivery failed (rc=-2)<br /> Apr 8 09:54:07 lotus-4vm5 corosync[1364]: [pcmk ] WARN:<br />route_ais_message: Sending message to local.stonith-ng failed: ipc<br />delivery failed (rc=-2)<br /> Apr 8 09:54:07 lotus-4vm5 corosync[1364]: [pcmk ] WARN:<br />route_ais_message: Sending message to local.stonith-ng failed: ipc<br />delivery failed (rc=-2)<br /> Apr 8 09:54:07 lotus-4vm5 corosync[1364]: [pcmk ] WARN:<br />route_ais_message: Sending message to local.stonith-ng failed: ipc<br />delivery failed (rc=-2)<br /> Apr 8 09:54:07 lotus-4vm5 corosync[1364]:
[pcmk ] WARN:<br />route_ais_message: Sending message to local.stonith-ng failed: ipc<br />delivery failed (rc=-2)<br /> Apr 8 09:54:07 lotus-4vm5 attrd[1792]: notice: crm_cluster_connect:<br />Connecting to cluster infrastructure: classic openais (with plugin)<br /> Apr 8 09:54:07 lotus-4vm5 crmd[1794]: notice: main: CRM Git Version:<br />368c726<br /> Apr 8 09:54:07 lotus-4vm5 attrd[1792]: notice: get_node_name:<br />Defaulting to uname -n for the local classic openais (with plugin) node<br />name<br /> Apr 8 09:54:07 lotus-4vm5 corosync[1364]: [pcmk ] info: pcmk_ipc:<br />Recorded connection 0x20b6280 for attrd/0<br /> Apr 8 09:54:07 lotus-4vm5 attrd[1792]: notice: get_node_name:<br />Defaulting to uname -n for the local classic openais (with plugin) node<br />name<br /> Apr 8 09:54:07 lotus-4vm5 stonith-ng[1790]: notice:<br />crm_cluster_connect: Connecting to cluster infrastructure: classic<br />openais (with plugin)<br /> Apr 8 09:54:08 lotus-4vm5
cib[1789]: notice: crm_cluster_connect:<br />Connecting to cluster infrastructure: classic openais (with plugin)<br /> Apr 8 09:54:08 lotus-4vm5 corosync[1364]: [pcmk ] WARN:<br />route_ais_message: Sending message to local.stonith-ng failed: ipc<br />delivery failed (rc=-2)<br /> Apr 8 09:54:08 lotus-4vm5 attrd[1792]: notice: main: Starting<br />mainloop...<br /> Apr 8 09:54:08 lotus-4vm5 stonith-ng[1790]: notice: get_node_name:<br />Defaulting to uname -n for the local classic openais (with plugin) node<br />name<br /> Apr 8 09:54:08 lotus-4vm5 corosync[1364]: [pcmk ] info: pcmk_ipc:<br />Recorded connection 0x20ba600 for stonith-ng/0<br /> Apr 8 09:54:08 lotus-4vm5 cib[1789]: notice: get_node_name:<br />Defaulting to uname -n for the local classic openais (with plugin) node<br />name<br /> Apr 8 09:54:08 lotus-4vm5 corosync[1364]: [pcmk ] info: pcmk_ipc:<br />Recorded connection 0x20be980 for cib/0<br /> Apr 8 09:54:08 lotus-4vm5 corosync[1364]: [pcmk ]
info: pcmk_ipc:<br />Sending membership update 24 to cib<br /> Apr 8 09:54:08 lotus-4vm5 stonith-ng[1790]: notice: get_node_name:<br />Defaulting to uname -n for the local classic openais (with plugin) node<br />name<br /> Apr 8 09:54:08 lotus-4vm5 cib[1789]: notice: get_node_name:<br />Defaulting to uname -n for the local classic openais (with plugin) node<br />name<br /> Apr 8 09:54:08 lotus-4vm5 cib[1789]: notice:<br />plugin_handle_membership: Membership 24: quorum acquired<br /> Apr 8 09:54:08 lotus-4vm5 cib[1789]: notice: crm_update_peer_state:<br />plugin_handle_membership: Node lotus-4vm5[3176140298] - state is now<br />member (was (null))<br /> Apr 8 09:54:08 lotus-4vm5 cib[1789]: notice: crm_update_peer_state:<br />plugin_handle_membership: Node lotus-4vm6[3192917514] - state is now<br />member (was (null))<br /> Apr 8 09:54:08 lotus-4vm5 crmd[1794]: notice: crm_cluster_connect:<br />Connecting to cluster infrastructure: classic openais (with plugin)<br
/> Apr 8 09:54:08 lotus-4vm5 crmd[1794]: notice: get_node_name:<br />Defaulting to uname -n for the local classic openais (with plugin) node<br />name<br /> Apr 8 09:54:08 lotus-4vm5 corosync[1364]: [pcmk ] info: pcmk_ipc:<br />Recorded connection 0x20c2d00 for crmd/0<br /> Apr 8 09:54:08 lotus-4vm5 corosync[1364]: [pcmk ] info: pcmk_ipc:<br />Sending membership update 24 to crmd<br /> Apr 8 09:54:08 lotus-4vm5 crmd[1794]: notice: get_node_name:<br />Defaulting to uname -n for the local classic openais (with plugin) node<br />name<br /> Apr 8 09:54:08 lotus-4vm5 crmd[1794]: notice:<br />plugin_handle_membership: Membership 24: quorum acquired<br /> Apr 8 09:54:08 lotus-4vm5 crmd[1794]: notice: crm_update_peer_state:<br />plugin_handle_membership: Node lotus-4vm5[3176140298] - state is now<br />member (was (null))<br /> Apr 8 09:54:08 lotus-4vm5 crmd[1794]: notice: crm_update_peer_state:<br />plugin_handle_membership: Node lotus-4vm6[3192917514] - state is
now<br />member (was (null))<br /> Apr 8 09:54:08 lotus-4vm5 crmd[1794]: notice: do_started: The local<br />CRM is operational<br /> Apr 8 09:54:08 lotus-4vm5 crmd[1794]: notice: do_state_transition:<br />State transition S_STARTING -> S_PENDING [ input=I_PENDING<br />cause=C_FSA_INTERNAL origin=do_started ]<br /> Apr 8 09:54:09 lotus-4vm5 stonith-ng[1790]: notice: setup_cib:<br />Watching for stonith topology changes<br /> Apr 8 09:54:09 lotus-4vm5 stonith-ng[1790]: notice: unpack_config:<br />On loss of CCM Quorum: Ignore<br /> Apr 8 09:54:10 lotus-4vm5 stonith-ng[1790]: notice:<br />stonith_device_register: Added 'st-fencing' to the device list (1 active<br />devices)<br /> Apr 8 09:54:10 lotus-4vm5 cib[1789]: notice:<br />cib_server_process_diff: Not applying diff 0.31.21 -> 0.31.22 (sync in<br />progress)<br /> Apr 8 09:54:29 lotus-4vm5 crmd[1794]: warning: do_log: FSA: Input<br />I_DC_TIMEOUT from crm_timer_popped() received in state S_PENDING<br />
Apr 8 09:56:29 lotus-4vm5 crmd[1794]: error: crm_timer_popped:<br />Election Timeout (I_ELECTION_DC) just popped in state S_ELECTION!<br />(120000ms)<br /> Apr 8 09:56:29 lotus-4vm5 crmd[1794]: notice: do_state_transition:<br />State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC<br />cause=C_TIMER_POPPED origin=crm_timer_popped ]<br /> Apr 8 09:56:29 lotus-4vm5 crmd[1794]: warning: do_log: FSA: Input<br />I_RELEASE_DC from do_election_count_vote() received in state<br />S_INTEGRATION<br /> Apr 8 09:56:29 lotus-4vm5 crmd[1794]: warning: join_query_callback:<br />No DC for join-1<br /><br /><br /> vm6 /var/log/messages<br /> Apr 8 09:52:51 lotus-4vm6 corosync[2442]: [pcmk ] notice:<br />pcmk_peer_update: Transitional membership event on ring 16: memb=1,<br />new=0, lost=0<br /> Apr 8 09:52:51 lotus-4vm6 corosync[2442]: [pcmk ] info:<br />pcmk_peer_update: memb: lotus-4vm6 3192917514<br /> Apr 8 09:52:51 lotus-4vm6 corosync[2442]: [pcmk ]
notice:<br />pcmk_peer_update: Stable membership event on ring 16: memb=2, new=1,<br />lost=0<br /> Apr 8 09:52:51 lotus-4vm6 corosync[2442]: [pcmk ] info:<br />update_member: Node 3176140298/lotus-4vm5 is now: member<br /> Apr 8 09:52:51 lotus-4vm6 corosync[2442]: [pcmk ] info:<br />pcmk_peer_update: NEW: lotus-4vm5 3176140298<br /> Apr 8 09:52:51 lotus-4vm6 corosync[2442]: [pcmk ] info:<br />pcmk_peer_update: MEMB: lotus-4vm5 3176140298<br /> Apr 8 09:52:51 lotus-4vm6 corosync[2442]: [pcmk ] info:<br />pcmk_peer_update: MEMB: lotus-4vm6 3192917514<br /> Apr 8 09:52:51 lotus-4vm6 corosync[2442]: [pcmk ] info:<br />send_member_notification: Sending membership update 16 to 2 children<br /> Apr 8 09:52:51 lotus-4vm6 corosync[2442]: [TOTEM ] A processor<br />joined or left the membership and a new membership was formed.<br /> Apr 8 09:52:51 lotus-4vm6 crmd[2496]: notice:<br />plugin_handle_membership: Membership 16: quorum acquired<br /> Apr 8 09:52:51
lotus-4vm6 crmd[2496]: notice: crm_update_peer_state:<br />plugin_handle_membership: Node lotus-4vm5[3176140298] - state is now<br />member (was lost)<br /> Apr 8 09:52:51 lotus-4vm6 corosync[2442]: [pcmk ] info:<br />update_member: 0x1284140 Node 3176140298 (lotus-4vm5) born on: 16<br /> Apr 8 09:52:51 lotus-4vm6 corosync[2442]: [pcmk ] info:<br />send_member_notification: Sending membership update 16 to 2 children<br /> Apr 8 09:52:51 lotus-4vm6 cib[2491]: notice:<br />plugin_handle_membership: Membership 16: quorum acquired<br /> Apr 8 09:52:51 lotus-4vm6 cib[2491]: notice: crm_update_peer_state:<br />plugin_handle_membership: Node lotus-4vm5[3176140298] - state is now<br />member (was lost)<br /> Apr 8 09:52:51 lotus-4vm6 corosync[2442]: [CPG ] chosen downlist:<br />sender r(0) ip(<a href="http://10.14.80.189">10.14.80.189</a>) r(1) ip(<a href="http://10.128.0.189">10.128.0.189</a>) ; members(old:1<br />left:0)<br /> Apr 8 09:52:51 lotus-4vm6
corosync[2442]: [MAIN ] Completed service<br />synchronization, ready to provide service.<br /> Apr 8 09:52:57 lotus-4vm6 dhclient[1012]: DHCPREQUEST on eth0 to<br /><a href="http://10.14.80.1">10.14.80.1</a> port 67 (xid=0x78d16782)<br /> Apr 8 09:53:14 lotus-4vm6 dhclient[1012]: DHCPREQUEST on eth0 to<br /><a href="http://10.14.80.1">10.14.80.1</a> port 67 (xid=0x78d16782)<br /> Apr 8 09:53:15 lotus-4vm6 stonith-ng[2492]: warning: parse_host_line:<br />Could not parse (38 47): "console"<br /> Apr 8 09:53:20 lotus-4vm6 corosync[2442]: [TOTEM ] A processor<br />failed, forming new configuration.<br /> Apr 8 09:53:21 lotus-4vm6 stonith-ng[2492]: notice: log_operation:<br />Operation 'reboot' [3306] (call 2 from crmd.2496) for host 'lotus-4vm5'<br />with device 'st-fencing' returned: 0 (OK)<br /> Apr 8 09:53:21 lotus-4vm6 crmd[2496]: notice: erase_xpath_callback:<br />Deletion of "//node_state[@uname='lotus-4vm5']/lrm": Timer expired<br />(rc=-62)<br /> Apr 8 09:53:26
lotus-4vm6 corosync[2442]: [pcmk ] notice:<br />pcmk_peer_update: Transitional membership event on ring 20: memb=1,<br />new=0, lost=1<br /> Apr 8 09:53:26 lotus-4vm6 corosync[2442]: [pcmk ] info:<br />pcmk_peer_update: memb: lotus-4vm6 3192917514<br /> Apr 8 09:53:26 lotus-4vm6 corosync[2442]: [pcmk ] info:<br />pcmk_peer_update: lost: lotus-4vm5 3176140298<br /> Apr 8 09:53:26 lotus-4vm6 corosync[2442]: [pcmk ] notice:<br />pcmk_peer_update: Stable membership event on ring 20: memb=1, new=0,<br />lost=0<br /> Apr 8 09:53:26 lotus-4vm6 corosync[2442]: [pcmk ] info:<br />pcmk_peer_update: MEMB: lotus-4vm6 3192917514<br /> Apr 8 09:53:26 lotus-4vm6 corosync[2442]: [pcmk ] info:<br />ais_mark_unseen_peer_dead: Node lotus-4vm5 was not seen in the previous<br />transition<br /> Apr 8 09:53:26 lotus-4vm6 corosync[2442]: [pcmk ] info:<br />update_member: Node 3176140298/lotus-4vm5 is now: lost<br /> Apr 8 09:53:26 lotus-4vm6 corosync[2442]: [pcmk ] info:<br
/>send_member_notification: Sending membership update 20 to 2 children<br /> Apr 8 09:53:26 lotus-4vm6 corosync[2442]: [TOTEM ] A processor<br />joined or left the membership and a new membership was formed.<br /> Apr 8 09:53:26 lotus-4vm6 cib[2491]: notice:<br />plugin_handle_membership: Membership 20: quorum lost<br /> Apr 8 09:53:26 lotus-4vm6 cib[2491]: notice: crm_update_peer_state:<br />plugin_handle_membership: Node lotus-4vm5[3176140298] - state is now<br />lost (was member)<br /> Apr 8 09:53:26 lotus-4vm6 crmd[2496]: notice:<br />plugin_handle_membership: Membership 20: quorum lost<br /> Apr 8 09:53:26 lotus-4vm6 crmd[2496]: notice: crm_update_peer_state:<br />plugin_handle_membership: Node lotus-4vm5[3176140298] - state is now<br />lost (was member)<br /> Apr 8 09:53:34 lotus-4vm6 dhclient[1012]: DHCPREQUEST on eth0 to<br /><a href="http://10.14.80.1">10.14.80.1</a> port 67 (xid=0x78d16782)<br /> Apr 8 09:53:43 lotus-4vm6 dhclient[1012]: DHCPREQUEST on
eth0 to<br /><a href="http://10.14.80.1">10.14.80.1</a> port 67 (xid=0x78d16782)<br /> Apr 8 09:54:01 lotus-4vm6 dhclient[1012]: DHCPREQUEST on eth0 to<br /><a href="http://10.14.80.1">10.14.80.1</a> port 67 (xid=0x78d16782)<br /> Apr 8 09:54:04 lotus-4vm6 corosync[2442]: [pcmk ] notice:<br />pcmk_peer_update: Transitional membership event on ring 24: memb=1,<br />new=0, lost=0<br /> Apr 8 09:54:04 lotus-4vm6 corosync[2442]: [pcmk ] info:<br />pcmk_peer_update: memb: lotus-4vm6 3192917514<br /> Apr 8 09:54:04 lotus-4vm6 corosync[2442]: [pcmk ] notice:<br />pcmk_peer_update: Stable membership event on ring 24: memb=2, new=1,<br />lost=0<br /> Apr 8 09:54:04 lotus-4vm6 corosync[2442]: [pcmk ] info:<br />update_member: Node 3176140298/lotus-4vm5 is now: member<br /> Apr 8 09:54:04 lotus-4vm6 corosync[2442]: [pcmk ] info:<br />pcmk_peer_update: NEW: lotus-4vm5 3176140298<br /> Apr 8 09:54:04 lotus-4vm6 corosync[2442]: [pcmk ] info:<br />pcmk_peer_update: MEMB:
lotus-4vm5 3176140298<br /> Apr 8 09:54:04 lotus-4vm6 corosync[2442]: [pcmk ] info:<br />pcmk_peer_update: MEMB: lotus-4vm6 3192917514<br /> Apr 8 09:54:04 lotus-4vm6 corosync[2442]: [pcmk ] info:<br />send_member_notification: Sending membership update 24 to 2 children<br /> Apr 8 09:54:04 lotus-4vm6 corosync[2442]: [TOTEM ] A processor<br />joined or left the membership and a new membership was formed.<br /> Apr 8 09:54:04 lotus-4vm6 crmd[2496]: notice:<br />plugin_handle_membership: Membership 24: quorum acquired<br /> Apr 8 09:54:04 lotus-4vm6 crmd[2496]: notice: crm_update_peer_state:<br />plugin_handle_membership: Node lotus-4vm5[3176140298] - state is now<br />member (was lost)<br /> Apr 8 09:54:04 lotus-4vm6 corosync[2442]: [pcmk ] info:<br />update_member: 0x1284140 Node 3176140298 (lotus-4vm5) born on: 24<br /> Apr 8 09:54:04 lotus-4vm6 corosync[2442]: [pcmk ] info:<br />send_member_notification: Sending membership update 24 to 2 children<br /> Apr
8 09:54:04 lotus-4vm6 cib[2491]: notice:<br />plugin_handle_membership: Membership 24: quorum acquired<br /> Apr 8 09:54:04 lotus-4vm6 cib[2491]: notice: crm_update_peer_state:<br />plugin_handle_membership: Node lotus-4vm5[3176140298] - state is now<br />member (was lost)<br /> Apr 8 09:54:04 lotus-4vm6 corosync[2442]: [CPG ] chosen downlist:<br />sender r(0) ip(<a href="http://10.14.80.190">10.14.80.190</a>) r(1) ip(<a href="http://10.128.0.190">10.128.0.190</a>) ; members(old:2<br />left:1)<br /> Apr 8 09:54:04 lotus-4vm6 corosync[2442]: [MAIN ] Completed service<br />synchronization, ready to provide service.<br /> Apr 8 09:54:04 lotus-4vm6 stonith-ng[2492]: notice: remote_op_done:<br />Operation reboot of lotus-4vm5 by lotus-4vm6 for<br />crmd.2496@lotus-4vm6.ae82b411<mailto:crmd.2496@lotus-4vm6.ae82b411>: OK<br /> Apr 8 09:54:04 lotus-4vm6 crmd[2496]: notice:<br />tengine_stonith_callback: Stonith operation<br
/>2/13:0:0:f325afae-64b0-4812-a897-70556ab1e806: OK (0)<br /> Apr 8 09:54:04 lotus-4vm6 crmd[2496]: notice:<br />tengine_stonith_notify: Peer lotus-4vm5 was terminated (reboot) by<br />lotus-4vm6 for lotus-4vm6: OK (ref=ae82b411-b07a-4235-be55-5a30a00b323b)<br />by client crmd.2496<br /> Apr 8 09:54:04 lotus-4vm6 crmd[2496]: notice: crm_update_peer_state:<br />send_stonith_update: Node lotus-4vm5[3176140298] - state is now lost<br />(was member)<br /> Apr 8 09:54:04 lotus-4vm6 crmd[2496]: notice: run_graph: Transition<br />0 (Complete=1, Pending=0, Fired=0, Skipped=7, Incomplete=0,<br />Source=/var/lib/pacemaker/pengine/pe-warn-25.bz2): Stopped<br /> Apr 8 09:54:04 lotus-4vm6 attrd[2494]: notice: attrd_local_callback:<br />Sending full refresh (origin=crmd)<br /> Apr 8 09:54:04 lotus-4vm6 attrd[2494]: notice: attrd_trigger_update:<br />Sending flush op to all hosts for: probe_complete (true)<br /> Apr 8 09:54:05 lotus-4vm6 pengine[2495]: notice: unpack_config:
On<br />loss of CCM Quorum: Ignore<br /> Apr 8 09:54:05 lotus-4vm6 pengine[2495]: notice: LogActions: Start<br />st-fencing#011(lotus-4vm6)<br /> Apr 8 09:54:05 lotus-4vm6 pengine[2495]: notice: LogActions: Start<br />MGS_607d26#011(lotus-4vm6)<br /> Apr 8 09:54:05 lotus-4vm6 pengine[2495]: notice: process_pe_message:<br />Calculated Transition 1: /var/lib/pacemaker/pengine/pe-input-912.bz2<br /> Apr 8 09:54:05 lotus-4vm6 crmd[2496]: notice: te_rsc_command:<br />Initiating action 5: start st-fencing_start_0 on lotus-4vm6 (local)<br /> Apr 8 09:54:05 lotus-4vm6 crmd[2496]: notice: te_rsc_command:<br />Initiating action 6: start MGS_607d26_start_0 on lotus-4vm6 (local)<br /> Apr 8 09:54:05 lotus-4vm6 stonith-ng[2492]: notice:<br />stonith_device_register: Device 'st-fencing' already existed in device<br />list (1 active devices)<br /> Apr 8 09:54:05 lotus-4vm6 kernel: LDISKFS-fs warning (device sda):<br />ldiskfs_multi_mount_protect: MMP interval 42 higher than
expected,<br />please wait.<br /> Apr 8 09:54:05 lotus-4vm6 kernel:<br /> Apr 8 09:54:10 lotus-4vm6 dhclient[1012]: DHCPREQUEST on eth0 to<br /><a href="http://10.14.80.1">10.14.80.1</a> port 67 (xid=0x78d16782)<br /> Apr 8 09:54:11 lotus-4vm6 crmd[2496]: warning: get_rsc_metadata: No<br />metadata found for fence_chroma::stonith:heartbeat: Input/output error<br />(-5)<br /> Apr 8 09:54:11 lotus-4vm6 crmd[2496]: notice: process_lrm_event: LRM<br />operation st-fencing_start_0 (call=24, rc=0, cib-update=89,<br />confirmed=true) ok<br /> Apr 8 09:54:11 lotus-4vm6 crmd[2496]: warning: crmd_cs_dispatch:<br />Recieving messages from a node we think is dead: lotus-4vm5[-1118826998]<br /> Apr 8 09:54:24 lotus-4vm6 dhclient[1012]: DHCPREQUEST on eth0 to<br /><a href="http://10.14.80.1">10.14.80.1</a> port 67 (xid=0x78d16782)<br /> Apr 8 09:54:31 lotus-4vm6 crmd[2496]: notice:<br />do_election_count_vote: Election 2 (current: 2, owner: lotus-4vm5):<br />Processed vote from
lotus-4vm5 (Peer is not part of our cluster)<br /> Apr 8 09:54:34 lotus-4vm6 dhclient[1012]: DHCPREQUEST on eth0 to<br /><a href="http://10.14.80.1">10.14.80.1</a> port 67 (xid=0x78d16782)<br /> Apr 8 09:54:46 lotus-4vm6 dhclient[1012]: DHCPREQUEST on eth0 to<br /><a href="http://10.14.80.1">10.14.80.1</a> port 67 (xid=0x78d16782)<br /> Apr 8 09:54:48 lotus-4vm6 kernel: LDISKFS-fs (sda): recovery complete<br /> Apr 8 09:54:48 lotus-4vm6 kernel: LDISKFS-fs (sda): mounted filesystem<br />with ordered data mode. quota=on. Opts:<br /> Apr 8 09:54:48 lotus-4vm6 lrmd[2493]: notice: operation_finished:<br />MGS_607d26_start_0:3444:stderr [ [ ]<br /> Apr 8 09:54:48 lotus-4vm6 lrmd[2493]: notice: operation_finished:<br />MGS_607d26_start_0:3444:stderr [ { ]<br /> Apr 8 09:54:48 lotus-4vm6 lrmd[2493]: notice: operation_finished:<br />MGS_607d26_start_0:3444:stderr [ "args": [ ]<br /> Apr 8 09:54:48 lotus-4vm6 lrmd[2493]: notice: operation_finished:<br
/>MGS_607d26_start_0:3444:stderr [ "mount", ]<br /> Apr 8 09:54:48 lotus-4vm6 lrmd[2493]: notice: operation_finished:<br />MGS_607d26_start_0:3444:stderr [ "-t", ]<br /> Apr 8 09:54:48 lotus-4vm6 lrmd[2493]: notice: operation_finished:<br />MGS_607d26_start_0:3444:stderr [ "lustre", ]<br /> Apr 8 09:54:48 lotus-4vm6 lrmd[2493]: notice: operation_finished:<br />MGS_607d26_start_0:3444:stderr [<br />"/dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_disk1", ]<br /> Apr 8 09:54:48 lotus-4vm6 lrmd[2493]: notice: operation_finished:<br />MGS_607d26_start_0:3444:stderr [ "/mnt/MGS" ]<br /> Apr 8 09:54:48 lotus-4vm6 lrmd[2493]: notice: operation_finished:<br />MGS_607d26_start_0:3444:stderr [ ], ]<br /> Apr 8 09:54:48 lotus-4vm6 lrmd[2493]: notice: operation_finished:<br />MGS_607d26_start_0:3444:stderr [ "rc": 0, ]<br /> Apr 8 09:54:48 lotus-4vm6 lrmd[2493]: notice: operation_finished:<br />MGS_607d26_start_0:3444:stderr [ "stderr":
"", ]<br /> Apr 8 09:54:48 lotus-4vm6 lrmd[2493]: notice: operation_finished:<br />MGS_607d26_start_0:3444:stderr [ "stdout": "" ]<br /> Apr 8 09:54:48 lotus-4vm6 lrmd[2493]: notice: operation_finished:<br />MGS_607d26_start_0:3444:stderr [ } ]<br /> Apr 8 09:54:48 lotus-4vm6 lrmd[2493]: notice: operation_finished:<br />MGS_607d26_start_0:3444:stderr [ ] ]<br /> Apr 8 09:54:48 lotus-4vm6 lrmd[2493]: notice: operation_finished:<br />MGS_607d26_start_0:3444:stderr [ ]<br /> Apr 8 09:54:48 lotus-4vm6 crmd[2496]: notice: process_lrm_event: LRM<br />operation MGS_607d26_start_0 (call=26, rc=0, cib-update=94,<br />confirmed=true) ok<br /> Apr 8 09:54:49 lotus-4vm6 crmd[2496]: notice: run_graph: Transition<br />1 (Complete=2, Pending=0, Fired=0, Skipped=1, Incomplete=0,<br />Source=/var/lib/pacemaker/pengine/pe-input-912.bz2): Stopped<br /> Apr 8 09:54:49 lotus-4vm6 attrd[2494]: notice: attrd_local_callback:<br />Sending full refresh (origin=crmd)<br /> Apr 8
09:54:49 lotus-4vm6 attrd[2494]: notice: attrd_trigger_update:<br />Sending flush op to all hosts for: probe_complete (true)<br /> Apr 8 09:54:50 lotus-4vm6 pengine[2495]: notice: unpack_config: On<br />loss of CCM Quorum: Ignore<br /> Apr 8 09:54:50 lotus-4vm6 pengine[2495]: notice: process_pe_message:<br />Calculated Transition 2: /var/lib/pacemaker/pengine/pe-input-913.bz2<br /> Apr 8 09:54:50 lotus-4vm6 crmd[2496]: notice: te_rsc_command:<br />Initiating action 9: monitor MGS_607d26_monitor_5000 on lotus-4vm6<br />(local)<br /> Apr 8 09:54:51 lotus-4vm6 crmd[2496]: notice: process_lrm_event: LRM<br />operation MGS_607d26_monitor_5000 (call=30, rc=0, cib-update=102,<br />confirmed=false) ok<br /> Apr 8 09:54:51 lotus-4vm6 crmd[2496]: notice: run_graph: Transition<br />2 (Complete=1, Pending=0, Fired=0, Skipped=0, Incomplete=0,<br />Source=/var/lib/pacemaker/pengine/pe-input-913.bz2): Complete<br /> Apr 8 09:54:51 lotus-4vm6 crmd[2496]: notice:
do_state_transition:<br />State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS<br />cause=C_FSA_INTERNAL origin=notify_crmd ]<br /> Apr 8 09:55:07 lotus-4vm6 dhclient[1012]: DHCPREQUEST on eth0 to<br /><a href="http://10.14.80.1">10.14.80.1</a> port 67 (xid=0x78d16782)<br /> Apr 8 09:55:23 lotus-4vm6 dhclient[1012]: DHCPREQUEST on eth0 to<br /><a href="http://10.14.80.1">10.14.80.1</a> port 67 (xid=0x78d16782)<br /> Apr 8 09:55:38 lotus-4vm6 kernel: Lustre: Evicted from MGS (at<br /><a href="http://10.14.80.190">10.14.80.190</a>@tcp) after server handle changed from 0x7acffb201664d0a4 to<br />0x9a6b02eee57f3dba<br /> Apr 8 09:55:38 lotus-4vm6 kernel: Lustre: MGC<a href="http://10.14.80.189">10.14.80.189</a>@tcp:<br />Connection restored to MGS (at 0@lo)<br /> Apr 8 09:55:42 lotus-4vm6 dhclient[1012]: DHCPREQUEST on eth0 to<br /><a href="http://10.14.80.1">10.14.80.1</a> port 67 (xid=0x78d16782)<br /> Apr 8 09:55:58 lotus-4vm6 dhclient[1012]: DHCPREQUEST on
eth0 to<br /><a href="http://10.14.80.1">10.14.80.1</a> port 67 (xid=0x78d16782)<br /> Apr 8 09:56:12 lotus-4vm6 dhclient[1012]: DHCPREQUEST on eth0 to<br /><a href="http://10.14.80.1">10.14.80.1</a> port 67 (xid=0x78d16782)<br /> Apr 8 09:56:26 lotus-4vm6 dhclient[1012]: DHCPREQUEST on eth0 to<br /><a href="http://10.14.80.1">10.14.80.1</a> port 67 (xid=0x78d16782)<br /> Apr 8 09:56:31 lotus-4vm6 crmd[2496]: warning: crmd_ha_msg_filter:<br />Another DC detected: lotus-4vm5 (op=join_offer)<br /> Apr 8 09:56:31 lotus-4vm6 crmd[2496]: notice: do_state_transition:<br />State transition S_IDLE -> S_ELECTION [ input=I_ELECTION<br />cause=C_FSA_INTERNAL origin=crmd_ha_msg_filter ]<br /> Apr 8 09:56:31 lotus-4vm6 crmd[2496]: notice: do_state_transition:<br />State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC<br />cause=C_FSA_INTERNAL origin=do_election_check ]<br /> Apr 8 09:56:31 lotus-4vm6 crmd[2496]: notice:<br />do_election_count_vote: Election 3
(current: 3, owner: lotus-4vm6):<br />Processed no-vote from lotus-4vm5 (Peer is not part of our cluster)<br /> Apr 8 09:56:36 lotus-4vm6 dhclient[1012]: DHCPREQUEST on eth0 to<br /><a href="http://10.14.80.1">10.14.80.1</a> port 67 (xid=0x78d16782)<br /> Apr 8 09:56:37 lotus-4vm6 crmd[2496]: warning: get_rsc_metadata: No<br />metadata found for fence_chroma::stonith:heartbeat: Input/output error<br />(-5)<br /> Apr 8 09:56:37 lotus-4vm6 attrd[2494]: notice: attrd_local_callback:<br />Sending full refresh (origin=crmd)<br /> Apr 8 09:56:37 lotus-4vm6 attrd[2494]: notice: attrd_trigger_update:<br />Sending flush op to all hosts for: probe_complete (true)<br /> Apr 8 09:56:38 lotus-4vm6 pengine[2495]: notice: unpack_config: On<br />loss of CCM Quorum: Ignore<br /> Apr 8 09:56:38 lotus-4vm6 pengine[2495]: notice: process_pe_message:<br />Calculated Transition 3: /var/lib/pacemaker/pengine/pe-input-914.bz2<br /> Apr 8 09:56:38 lotus-4vm6 crmd[2496]: notice: run_graph:
Transition<br />3 (Complete=0, Pending=0, Fired=0, Skipped=0, Incomplete=0,<br />Source=/var/lib/pacemaker/pengine/pe-input-914.bz2): Complete<br /> Apr 8 09:56:38 lotus-4vm6 crmd[2496]: notice: do_state_transition:<br />State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS<br />cause=C_FSA_INTERNAL origin=notify_crmd ]<br /><br /> Thank you very much<br /> Gene<br /></blockquote><br /><br /><br />-- <br />Digimer<br />Papers and Projects: <a href="https://alteeve.ca/w">https://alteeve.ca/w</a>/<br />What if the cure for cancer is trapped in the mind of a person without<br />access to education?<br /></blockquote><br /><br /></pre></blockquote></div></body></html>