<html xmlns:v="urn:schemas-microsoft-com:vml" xmlns:o="urn:schemas-microsoft-com:office:office" xmlns:w="urn:schemas-microsoft-com:office:word" xmlns:m="http://schemas.microsoft.com/office/2004/12/omml" xmlns="http://www.w3.org/TR/REC-html40">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=us-ascii">
<meta name="Generator" content="Microsoft Word 12 (filtered medium)">
<style><!--
/* Font Definitions */
@font-face
{font-family:"Cambria Math";
panose-1:2 4 5 3 5 4 6 3 2 4;}
@font-face
{font-family:Calibri;
panose-1:2 15 5 2 2 2 4 3 2 4;}
/* Style Definitions */
p.MsoNormal, li.MsoNormal, div.MsoNormal
{margin:0cm;
margin-bottom:.0001pt;
font-size:11.0pt;
font-family:"Calibri","sans-serif";}
a:link, span.MsoHyperlink
{mso-style-priority:99;
color:blue;
text-decoration:underline;}
a:visited, span.MsoHyperlinkFollowed
{mso-style-priority:99;
color:purple;
text-decoration:underline;}
p.MsoListParagraph, li.MsoListParagraph, div.MsoListParagraph
{mso-style-priority:34;
margin-top:0cm;
margin-right:0cm;
margin-bottom:0cm;
margin-left:36.0pt;
margin-bottom:.0001pt;
font-size:11.0pt;
font-family:"Calibri","sans-serif";}
span.EmailStyle17
{mso-style-type:personal-compose;
font-family:"Calibri","sans-serif";
color:windowtext;}
.MsoChpDefault
{mso-style-type:export-only;}
@page WordSection1
{size:612.0pt 792.0pt;
margin:72.0pt 72.0pt 72.0pt 72.0pt;}
div.WordSection1
{page:WordSection1;}
/* List Definitions */
@list l0
{mso-list-id:1649095382;
mso-list-type:hybrid;
mso-list-template-ids:833125610 67698703 67698713 67698715 67698703 67698713 67698715 67698703 67698713 67698715;}
@list l0:level1
{mso-level-tab-stop:none;
mso-level-number-position:left;
text-indent:-18.0pt;}
@list l0:level2
{mso-level-tab-stop:72.0pt;
mso-level-number-position:left;
text-indent:-18.0pt;}
@list l0:level3
{mso-level-tab-stop:108.0pt;
mso-level-number-position:left;
text-indent:-18.0pt;}
@list l0:level4
{mso-level-tab-stop:144.0pt;
mso-level-number-position:left;
text-indent:-18.0pt;}
@list l0:level5
{mso-level-tab-stop:180.0pt;
mso-level-number-position:left;
text-indent:-18.0pt;}
@list l0:level6
{mso-level-tab-stop:216.0pt;
mso-level-number-position:left;
text-indent:-18.0pt;}
@list l0:level7
{mso-level-tab-stop:252.0pt;
mso-level-number-position:left;
text-indent:-18.0pt;}
@list l0:level8
{mso-level-tab-stop:288.0pt;
mso-level-number-position:left;
text-indent:-18.0pt;}
@list l0:level9
{mso-level-tab-stop:324.0pt;
mso-level-number-position:left;
text-indent:-18.0pt;}
@list l1
{mso-list-id:1737892318;
mso-list-type:hybrid;
mso-list-template-ids:912822638 -326048276 67698691 67698693 67698689 67698691 67698693 67698689 67698691 67698693;}
@list l1:level1
{mso-level-start-at:0;
mso-level-number-format:bullet;
mso-level-text:-;
mso-level-tab-stop:none;
mso-level-number-position:left;
text-indent:-18.0pt;
font-family:"Calibri","sans-serif";
mso-fareast-font-family:Calibri;}
@list l1:level2
{mso-level-tab-stop:72.0pt;
mso-level-number-position:left;
text-indent:-18.0pt;}
@list l1:level3
{mso-level-tab-stop:108.0pt;
mso-level-number-position:left;
text-indent:-18.0pt;}
@list l1:level4
{mso-level-tab-stop:144.0pt;
mso-level-number-position:left;
text-indent:-18.0pt;}
@list l1:level5
{mso-level-tab-stop:180.0pt;
mso-level-number-position:left;
text-indent:-18.0pt;}
@list l1:level6
{mso-level-tab-stop:216.0pt;
mso-level-number-position:left;
text-indent:-18.0pt;}
@list l1:level7
{mso-level-tab-stop:252.0pt;
mso-level-number-position:left;
text-indent:-18.0pt;}
@list l1:level8
{mso-level-tab-stop:288.0pt;
mso-level-number-position:left;
text-indent:-18.0pt;}
@list l1:level9
{mso-level-tab-stop:324.0pt;
mso-level-number-position:left;
text-indent:-18.0pt;}
ol
{margin-bottom:0cm;}
ul
{margin-bottom:0cm;}
--></style><!--[if gte mso 9]><xml>
<o:shapedefaults v:ext="edit" spidmax="1026" />
</xml><![endif]--><!--[if gte mso 9]><xml>
<o:shapelayout v:ext="edit">
<o:idmap v:ext="edit" data="1" />
</o:shapelayout></xml><![endif]-->
</head>
<body lang="EN-US" link="blue" vlink="purple">
<div class="WordSection1">
<p class="MsoNormal"><span lang="EN-IE">Hi,<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE"><o:p> </o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">pacemaker:1.1.6-2ubuntu3, corosync:1.4.2-2, drbd8-utils 2:8.3.11-0ubuntu1<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE"><o:p> </o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">I have a three-node setup, with two nodes running DRBD, resource-level fencing enabled (‘resource-and-stonith’) and, obviously, stonith configured for each node. In my current test case, I bring down the network interface on the DRBD primary/master node (using ifdown eth0, for example), which sometimes leads to split-brain when the isolated node rejoins the cluster. The serious problem is that upon rejoining, the isolated node is promoted to DRBD primary (despite the original fencing constraint), which opens us up to data loss for updates that occurred while that node was down.<o:p></o:p></span></p>
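<p class="MsoNormal"><span lang="EN-IE">For reference, the DRBD fencing side of this is configured along the following lines (a sketch; the resource name is illustrative, the handler paths are the stock drbd8-utils ones):</span></p>

```
# /etc/drbd.d/r0.res -- 'r0' is an illustrative resource name
resource r0 {
  disk {
    # freeze I/O and call the fence-peer handler on replication loss
    fencing resource-and-stonith;
  }
  handlers {
    # places a constraint in the CIB forbidding promotion elsewhere
    fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
    # removes that constraint once resync has completed
    after-resync-target "/usr/lib/drbd/crm-unfence-peer.sh";
  }
}
```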
<p class="MsoNormal"><span lang="EN-IE"><o:p> </o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">The exact problem scenario is as follows:<o:p></o:p></span></p>
<p class="MsoListParagraph" style="text-indent:-18.0pt;mso-list:l1 level1 lfo1"><![if !supportLists]><span lang="EN-IE"><span style="mso-list:Ignore">-<span style="font:7.0pt "Times New Roman"">
</span></span></span><![endif]><span lang="EN-IE">Alice: DRBD Primary/Master, Bob: Secondary/Slave, Jim: Quorum node, Epoch=100<o:p></o:p></span></p>
<p class="MsoListParagraph" style="text-indent:-18.0pt;mso-list:l1 level1 lfo1"><![if !supportLists]><span lang="EN-IE"><span style="mso-list:Ignore">-<span style="font:7.0pt "Times New Roman"">
</span></span></span><![endif]><span lang="EN-IE">ifdown eth0 on Alice<o:p></o:p></span></p>
<p class="MsoListParagraph" style="text-indent:-18.0pt;mso-list:l1 level1 lfo1"><![if !supportLists]><span lang="EN-IE"><span style="mso-list:Ignore">-<span style="font:7.0pt "Times New Roman"">
</span></span></span><![endif]><span lang="EN-IE">Alice detects loss of the network interface, sets itself up as DC, and carries out some CIB updates (see log snippet below) that raise the epoch, say to Epoch=102<o:p></o:p></span></p>
<p class="MsoListParagraph" style="text-indent:-18.0pt;mso-list:l1 level1 lfo1"><![if !supportLists]><span lang="EN-IE"><span style="mso-list:Ignore">-<span style="font:7.0pt "Times New Roman"">
</span></span></span><![endif]><span lang="EN-IE">Alice is shot via stonith.<o:p></o:p></span></p>
<p class="MsoListParagraph" style="text-indent:-18.0pt;mso-list:l1 level1 lfo1"><![if !supportLists]><span lang="EN-IE"><span style="mso-list:Ignore">-<span style="font:7.0pt "Times New Roman"">
</span></span></span><![endif]><span lang="EN-IE">Bob adds fencing rule to CIB to prevent promotion of DRBD on any other node, Epoch=101<o:p></o:p></span></p>
<p class="MsoListParagraph" style="text-indent:-18.0pt;mso-list:l1 level1 lfo1"><![if !supportLists]><span lang="EN-IE"><span style="mso-list:Ignore">-<span style="font:7.0pt "Times New Roman"">
</span></span></span><![endif]><span lang="EN-IE">When Alice comes back and rejoins the cluster, the DC decides to sync to Alice’s CIB, thereby removing the fencing rule prematurely (i.e. before the DRBD devices have resynced).<o:p></o:p></span></p>
<p class="MsoListParagraph" style="text-indent:-18.0pt;mso-list:l1 level1 lfo1"><![if !supportLists]><span lang="EN-IE"><span style="mso-list:Ignore">-<span style="font:7.0pt "Times New Roman"">
</span></span></span><![endif]><span lang="EN-IE">In some cases, Alice is promoted to Primary/Master and fences the resource to prevent promotion on any other node.<o:p></o:p></span></p>
<p class="MsoListParagraph" style="text-indent:-18.0pt;mso-list:l1 level1 lfo1"><![if !supportLists]><span lang="EN-IE"><span style="mso-list:Ignore">-<span style="font:7.0pt "Times New Roman"">
</span></span></span><![endif]><span lang="EN-IE">We now have split-brain and potential loss of data.<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE"><o:p> </o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">So some questions on the above:<o:p></o:p></span></p>
<p class="MsoListParagraph" style="text-indent:-18.0pt;mso-list:l0 level1 lfo2"><![if !supportLists]><span lang="EN-IE"><span style="mso-list:Ignore">1.<span style="font:7.0pt "Times New Roman"">
</span></span></span><![endif]><span lang="EN-IE">My initial feeling was that the isolated node, Alice (which has no quorum), should not be updating a CIB that could potentially override the sane part of the cluster. Is that a fair comment?<o:p></o:p></span></p>
<p class="MsoListParagraph" style="text-indent:-18.0pt;mso-list:l0 level1 lfo2"><![if !supportLists]><span lang="EN-IE"><span style="mso-list:Ignore">2.<span style="font:7.0pt "Times New Roman"">
</span></span></span><![endif]><span lang="EN-IE">Is this issue specific to my use of ‘ifdown ethX’ to disable the network? This is hinted at here:
<a href="https://github.com/corosync/corosync/wiki/Corosync-and-ifdown-on-active-network-interface">
https://github.com/corosync/corosync/wiki/Corosync-and-ifdown-on-active-network-interface</a>. Has this issue been addressed, or will it be in the future?<o:p></o:p></span></p>
<p class="MsoListParagraph" style="text-indent:-18.0pt;mso-list:l0 level1 lfo2"><![if !supportLists]><span lang="EN-IE"><span style="mso-list:Ignore">3.<span style="font:7.0pt "Times New Roman"">
</span></span></span><![endif]><span lang="EN-IE">If ‘ifdown ethX’ is not valid, what is the best alternative that mimics what might happen in the real world? I have tried blocking connections using iptables rules, dropping all incoming and outgoing packets; initial
testing appears to show different corosync behaviour that would hopefully not lead to my problem scenario, but I’m still in the process of confirming. I have also carried out some cable pulls and not run into issues yet, but this problem can be intermittent,
so it really needs an automated way to test many times.<o:p></o:p></span></p>
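<p class="MsoNormal"><span lang="EN-IE">For the record, the iptables isolation I have been testing is along these lines (a sketch; eth0 is illustrative, and setting DRY_RUN=1 just prints the commands instead of executing them, which would otherwise need root):</span></p>

```shell
#!/bin/sh
# Simulate total network loss without downing the interface, so that
# corosync keeps its bound address (unlike with ifdown).
IFACE="${IFACE:-eth0}"   # illustrative interface name

run() {
    # prefix the command with 'echo' when DRY_RUN is set
    ${DRY_RUN:+echo} "$@"
}

isolate() {
    run iptables -I INPUT  -i "$IFACE" -j DROP
    run iptables -I OUTPUT -o "$IFACE" -j DROP
}

restore() {
    run iptables -D INPUT  -i "$IFACE" -j DROP
    run iptables -D OUTPUT -o "$IFACE" -j DROP
}
```

<p class="MsoNormal"><span lang="EN-IE">Looping isolate / sleep / restore should then give an automated way of hitting the intermittent case repeatedly.</span></p>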
<p class="MsoListParagraph" style="text-indent:-18.0pt;mso-list:l0 level1 lfo2"><![if !supportLists]><span lang="EN-IE"><span style="mso-list:Ignore">4.<span style="font:7.0pt "Times New Roman"">
</span></span></span><![endif]><span lang="EN-IE">The log snippet below from the isolated node shows that it updates the CIB twice sometime after detecting loss of the network interface. Why does this happen? I believe that ultimately it is these CIB updates that
increment the epoch, which leads to this CIB overriding the rest of the cluster later.<o:p></o:p></span></p>
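<p class="MsoNormal"><span lang="EN-IE">My understanding (please correct me if this is wrong) is that on rejoin the CIB copy with the higher version wins, comparing admin_epoch, then epoch, then num_updates, field by field. That would explain why Alice’s 0.78.x copy beats the surviving partition’s 0.77.x copy (the 0.77.5 value below is an illustrative guess for Bob’s side). As a sketch:</span></p>

```shell
# Compare two CIB versions given as "admin_epoch.epoch.num_updates"
# and print the winner, mirroring (as I understand it) Pacemaker's
# field-by-field version comparison.
cib_winner() {
    IFS=. read -r a1 a2 a3 <<EOF
$1
EOF
    IFS=. read -r b1 b2 b3 <<EOF
$2
EOF
    if   [ "$a1" -ne "$b1" ]; then [ "$a1" -gt "$b1" ] && echo "$1" || echo "$2"
    elif [ "$a2" -ne "$b2" ]; then [ "$a2" -gt "$b2" ] && echo "$1" || echo "$2"
    elif [ "$a3" -ne "$b3" ]; then [ "$a3" -gt "$b3" ] && echo "$1" || echo "$2"
    else echo "$1"
    fi
}

# Alice bumped her CIB to 0.78.1 while isolated; suppose the surviving
# partition's copy (carrying the fencing rule) is at 0.77.5:
cib_winner 0.78.1 0.77.5   # -> 0.78.1, Alice's copy wins on the epoch field
```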
<p class="MsoNormal"><span lang="EN-IE"><o:p> </o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">I have also tried a no-quorum-policy of ‘suicide’ in an attempt to prevent CIB updates by Alice, but it didn’t make a difference. Note that to facilitate log collection and analysis, I have added a delay to the stonith
reset operation, and I have also set the timeout on the crm-fence-peer script to ensure that it is greater than this ‘deadtime’.<o:p></o:p></span></p>
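<p class="MsoNormal"><span lang="EN-IE">For completeness, the relevant knobs as I have them set are along these lines (via the crm shell; the 90-second value is illustrative, chosen to exceed the stonith reset delay):</span></p>

```
# cluster options (crm shell, Pacemaker 1.1.x)
crm configure property stonith-enabled=true
crm configure property no-quorum-policy=suicide

# drbd.conf handler with an explicit CIB timeout, kept larger than the
# stonith reset delay ('deadtime'):
fence-peer "/usr/lib/drbd/crm-fence-peer.sh --timeout 90";
```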
<p class="MsoNormal"><span lang="EN-IE"><o:p> </o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Any advice on this would be greatly appreciated.<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE"><o:p> </o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Thanks,<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE"><o:p> </o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Tom<o:p></o:p></span></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal"><span lang="EN-IE">Log snippet showing isolated node updating the CIB, which results in epoch being incremented two times:<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE"><o:p> </o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:54 stratus18 corosync[1268]: [TOTEM ] A processor failed, forming new configuration.<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:54 stratus18 corosync[1268]: [TOTEM ] The network interface is down.<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:54 stratus18 crm-fence-peer.sh[20758]: TOMTEST-DEBUG: modified version<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:54 stratus18 crm-fence-peer.sh[20758]: invoked for tomtest<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:54 stratus18 crm-fence-peer.sh[20761]: TOMTEST-DEBUG: modified version<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:54 stratus18 crm-fence-peer.sh[20761]: invoked for tomtest<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 stonith-ng: [1276]: info: stonith_command: Processed st_execute from lrmd: rc=-1<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 external/ipmi[20806]: [20816]: ERROR: error executing ipmitool: Connect failed: Network is unreachable#015 Unable to get Chassis Power Status#015<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 crm-fence-peer.sh[20758]: Call cib_query failed (-41): Remote node did not respond<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 crm-fence-peer.sh[20761]: Call cib_query failed (-41): Remote node did not respond<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 ntpd[1062]: Deleting interface #7 eth0, 192.168.185.150#123, interface stats: received=0, sent=0, dropped=0, active_time=912 secs<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 ntpd[1062]: Deleting interface #4 eth0, fe80::7ae7:d1ff:fe22:5270#123, interface stats: received=0, sent=0, dropped=0, active_time=6080 secs<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 ntpd[1062]: Deleting interface #3 eth0, 192.168.185.118#123, interface stats: received=52, sent=53, dropped=0, active_time=6080 secs<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 ntpd[1062]: 192.168.8.97 interface 192.168.185.118 -> (none)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 ntpd[1062]: peers refreshed<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 corosync[1268]: [pcmk ] notice: pcmk_peer_update: Transitional membership event on ring 2728: memb=1, new=0, lost=2<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 corosync[1268]: [pcmk ] info: pcmk_peer_update: memb: .unknown. 16777343<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 corosync[1268]: [pcmk ] info: pcmk_peer_update: lost: stratus18 1991878848<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 corosync[1268]: [pcmk ] info: pcmk_peer_update: lost: stratus20 2025433280<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 corosync[1268]: [pcmk ] notice: pcmk_peer_update: Stable membership event on ring 2728: memb=1, new=0, lost=0<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 corosync[1268]: [pcmk ] info: update_member: Creating entry for node 16777343 born on 2728<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 corosync[1268]: [pcmk ] info: update_member: Node 16777343/unknown is now: member<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 corosync[1268]: [pcmk ] info: pcmk_peer_update: MEMB: .pending. 16777343<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 corosync[1268]: [pcmk ] ERROR: pcmk_peer_update: Something strange happened: 1<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 corosync[1268]: [pcmk ] info: ais_mark_unseen_peer_dead: Node stratus17 was not seen in the previous transition<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 corosync[1268]: [pcmk ] info: update_member: Node 1975101632/stratus17 is now: lost<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 corosync[1268]: [pcmk ] info: ais_mark_unseen_peer_dead: Node stratus18 was not seen in the previous transition<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 corosync[1268]: [pcmk ] info: update_member: Node 1991878848/stratus18 is now: lost<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 corosync[1268]: [pcmk ] info: ais_mark_unseen_peer_dead: Node stratus20 was not seen in the previous transition<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 corosync[1268]: [pcmk ] info: update_member: Node 2025433280/stratus20 is now: lost<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 corosync[1268]: [pcmk ] WARN: pcmk_update_nodeid: Detected local node id change: 1991878848 -> 16777343<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 corosync[1268]: [pcmk ] info: destroy_ais_node: Destroying entry for node 1991878848<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 corosync[1268]: [pcmk ] notice: ais_remove_peer: Removed dead peer 1991878848 from the membership list<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 corosync[1268]: [pcmk ] info: ais_remove_peer: Sending removal of 1991878848 to 2 children<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 corosync[1268]: [pcmk ] info: update_member: 0x13d9520 Node 16777343 now known as stratus18 (was: (null))<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 corosync[1268]: [pcmk ] info: update_member: Node stratus18 now has 1 quorum votes (was 0)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 corosync[1268]: [pcmk ] info: update_member: Node stratus18 now has process list: 00000000000000000000000000111312 (1118994)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 corosync[1268]: [pcmk ] info: send_member_notification: Sending membership update 2728 to 2 children<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 corosync[1268]: [pcmk ] info: update_member: 0x13d9520 Node 16777343 ((null)) born on: 2708<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 corosync[1268]: [TOTEM ] A processor joined or left the membership and a new membership was formed.<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 cib: [1277]: info: crm_get_peer: Node stratus18 now has id: 16777343<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 cib: [1277]: info: ais_dispatch_message: Membership 2728: quorum retained<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 cib: [1277]: info: ais_dispatch_message: Removing peer 1991878848/1991878848<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 cib: [1277]: info: reap_crm_member: Peer 1991878848 is unknown<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 cib: [1277]: notice: ais_dispatch_message: Membership 2728: quorum lost<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 cib: [1277]: info: crm_update_peer: Node stratus17: id=1975101632 state=lost (new) addr=r(0) ip(192.168.185.117) votes=1 born=2724 seen=2724 proc=00000000000000000000000000111312<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 cib: [1277]: info: crm_update_peer: Node stratus20: id=2025433280 state=lost (new) addr=r(0) ip(192.168.185.120) votes=1 born=4 seen=2724 proc=00000000000000000000000000111312<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 cib: [1277]: info: crm_get_peer: Node stratus18 now has id: 1991878848<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 corosync[1268]: [CPG ] chosen downlist: sender r(0) ip(127.0.0.1) ; members(old:3 left:3)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 corosync[1268]: [MAIN ] Completed service synchronization, ready to provide service.<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 crmd: [1281]: info: crm_get_peer: Node stratus18 now has id: 16777343<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 crmd: [1281]: info: ais_dispatch_message: Membership 2728: quorum retained<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 crmd: [1281]: info: ais_dispatch_message: Removing peer 1991878848/1991878848<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 crmd: [1281]: info: reap_crm_member: Peer 1991878848 is unknown<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 crmd: [1281]: notice: ais_dispatch_message: Membership 2728: quorum lost<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 crmd: [1281]: info: ais_status_callback: status: stratus17 is now lost (was member)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 crmd: [1281]: info: crm_update_peer: Node stratus17: id=1975101632 state=lost (new) addr=r(0) ip(192.168.185.117) votes=1 born=2724 seen=2724 proc=00000000000000000000000000111312<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 crmd: [1281]: info: ais_status_callback: status: stratus20 is now lost (was member)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 crmd: [1281]: info: crm_update_peer: Node stratus20: id=2025433280 state=lost (new) addr=r(0) ip(192.168.185.120) votes=1 born=4 seen=2724 proc=00000000000000000000000000111312<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 crmd: [1281]: WARN: check_dead_member: Our DC node (stratus20) left the cluster<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 crmd: [1281]: info: do_state_transition: State transition S_NOT_DC -> S_ELECTION [ input=I_ELECTION cause=C_FSA_INTERNAL origin=check_dead_member ]<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 crmd: [1281]: info: update_dc: Unset DC stratus20<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 crmd: [1281]: info: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_FSA_INTERNAL origin=do_election_check ]<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 crmd: [1281]: info: do_te_control: Registering TE UUID: 6e335eff-5e48-4fc1-9003-0537ae948dfd<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 crmd: [1281]: info: set_graph_functions: Setting custom graph functions<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 crmd: [1281]: info: unpack_graph: Unpacked transition -1: 0 actions in 0 synapses<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 crmd: [1281]: info: do_dc_takeover: Taking over DC status for this partition<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 cib: [1277]: info: cib_process_readwrite: We are now in R/W mode<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 cib: [1277]: info: cib_process_request: Operation complete: op cib_master for section 'all' (origin=local/crmd/57, version=0.76.46): ok (rc=0)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 cib: [1277]: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/58, version=0.76.47): ok (rc=0)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 cib: [1277]: info: crm_get_peer: Node stratus18 now has id: 16777343<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 cib: [1277]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/60, version=0.76.48): ok (rc=0)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 crmd: [1281]: info: join_make_offer: Making join offers based on membership 2728<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 crmd: [1281]: info: do_dc_join_offer_all: join-1: Waiting on 1 outstanding join acks<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 crmd: [1281]: info: ais_dispatch_message: Membership 2728: quorum still lost<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 cib: [1277]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/62, version=0.76.49): ok (rc=0)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 crmd: [1281]: info: crmd_ais_dispatch: Setting expected votes to 2<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 crmd: [1281]: info: update_dc: Set DC to stratus18 (3.0.5)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 crmd: [1281]: info: config_query_callback: Shutdown escalation occurs after: 1200000ms<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 crmd: [1281]: info: config_query_callback: Checking for expired actions every 900000ms<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 crmd: [1281]: info: config_query_callback: Sending expected-votes=3 to corosync<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 crmd: [1281]: info: ais_dispatch_message: Membership 2728: quorum still lost<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 corosync[1268]: [pcmk ] info: update_expected_votes: Expected quorum votes 2 -> 3<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 cib: [1277]: info: cib:diff: - <cib admin_epoch="0" epoch="76" num_updates="49" ><o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 cib: [1277]: info: cib:diff: - <configuration ><o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 cib: [1277]: info: cib:diff: - <crm_config ><o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 cib: [1277]: info: cib:diff: - <cluster_property_set id="cib-bootstrap-options" ><o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 cib: [1277]: info: cib:diff: - <nvpair value="3" id="cib-bootstrap-options-expected-quorum-votes" /><o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 cib: [1277]: info: cib:diff: - </cluster_property_set><o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 cib: [1277]: info: cib:diff: - </crm_config><o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 cib: [1277]: info: cib:diff: - </configuration><o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 cib: [1277]: info: cib:diff: - </cib><o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 cib: [1277]: info: cib:diff: + <cib admin_epoch="0" cib-last-written="Wed Jul 10 13:25:58 2013" crm_feature_set="3.0.5" epoch="77" have-quorum="1" num_updates="1" update-client="crmd" update-origin="stratus17"
validate-with="pacemaker-1.2" dc-uuid="stratus20" ><o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 cib: [1277]: info: cib:diff: + <configuration ><o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 cib: [1277]: info: cib:diff: + <crm_config ><o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 cib: [1277]: info: cib:diff: + <cluster_property_set id="cib-bootstrap-options" ><o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 cib: [1277]: info: cib:diff: + <nvpair id="cib-bootstrap-options-expected-quorum-votes" name="expected-quorum-votes" value="2" /><o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 cib: [1277]: info: cib:diff: + </cluster_property_set><o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 cib: [1277]: info: cib:diff: + </crm_config><o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 cib: [1277]: info: cib:diff: + </configuration><o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 cib: [1277]: info: cib:diff: + </cib><o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 cib: [1277]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/65, version=0.77.1): ok (rc=0)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 crmd: [1281]: info: crmd_ais_dispatch: Setting expected votes to 3<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 crmd: [1281]: info: do_state_transition: State transition S_INTEGRATION -> S_FINALIZE_JOIN [ input=I_INTEGRATED cause=C_FSA_INTERNAL origin=check_join_state ]<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 crmd: [1281]: info: do_state_transition: All 1 cluster nodes responded to the join offer.<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 crmd: [1281]: info: do_dc_join_finalize: join-1: Syncing the CIB from stratus18 to the rest of the cluster<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 cib: [1277]: info: cib:diff: - &lt;cib admin_epoch="0" epoch="77" num_updates="1" &gt;<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 cib: [1277]: info: cib:diff: - &lt;configuration &gt;<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 cib: [1277]: info: cib:diff: - &lt;crm_config &gt;<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 cib: [1277]: info: cib:diff: - &lt;cluster_property_set id="cib-bootstrap-options" &gt;<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 cib: [1277]: info: cib:diff: - &lt;nvpair value="2" id="cib-bootstrap-options-expected-quorum-votes" /&gt;<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 cib: [1277]: info: cib:diff: - &lt;/cluster_property_set&gt;<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 cib: [1277]: info: cib:diff: - &lt;/crm_config&gt;<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 cib: [1277]: info: cib:diff: - &lt;/configuration&gt;<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 cib: [1277]: info: cib:diff: - &lt;/cib&gt;<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 cib: [1277]: info: cib:diff: + &lt;cib admin_epoch="0" cib-last-written="Wed Jul 10 13:42:55 2013" crm_feature_set="3.0.5" epoch="78" have-quorum="1" num_updates="1" update-client="crmd" update-origin="stratus18" validate-with="pacemaker-1.2" dc-uuid="stratus20" &gt;<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 cib: [1277]: info: cib:diff: + &lt;configuration &gt;<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 cib: [1277]: info: cib:diff: + &lt;crm_config &gt;<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 cib: [1277]: info: cib:diff: + &lt;cluster_property_set id="cib-bootstrap-options" &gt;<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 cib: [1277]: info: cib:diff: + &lt;nvpair id="cib-bootstrap-options-expected-quorum-votes" name="expected-quorum-votes" value="3" /&gt;<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 cib: [1277]: info: cib:diff: + &lt;/cluster_property_set&gt;<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 cib: [1277]: info: cib:diff: + &lt;/crm_config&gt;<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 cib: [1277]: info: cib:diff: + &lt;/configuration&gt;<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 cib: [1277]: info: cib:diff: + &lt;/cib&gt;<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 cib: [1277]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/68, version=0.78.1): ok (rc=0)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 cib: [1277]: info: cib_process_request: Operation complete: op cib_sync for section 'all' (origin=local/crmd/69, version=0.78.1): ok (rc=0)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 lrmd: [1278]: info: stonith_api_device_metadata: looking up external/ipmi/heartbeat metadata<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 cib: [1277]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/70, version=0.78.2): ok (rc=0)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 crmd: [1281]: info: do_dc_join_ack: join-1: Updating node state to member for stratus18<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 cib: [1277]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='stratus18']/lrm (origin=local/crmd/71, version=0.78.3): ok (rc=0)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 crmd: [1281]: info: erase_xpath_callback: Deletion of "//node_state[@uname='stratus18']/lrm": ok (rc=0)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 crmd: [1281]: info: do_state_transition: State transition S_FINALIZE_JOIN -> S_POLICY_ENGINE [ input=I_FINALIZED cause=C_FSA_INTERNAL origin=check_join_state ]<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 crmd: [1281]: info: do_state_transition: All 1 cluster nodes are eligible to run resources.<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 crmd: [1281]: info: do_dc_join_final: Ensuring DC, quorum and node attributes are up-to-date<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 crmd: [1281]: info: crm_update_quorum: Updating quorum status to false (call=75)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 crmd: [1281]: info: abort_transition_graph: do_te_invoke:167 - Triggered transition abort (complete=1) : Peer Cancelled<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 crmd: [1281]: info: do_pe_invoke: Query 76: Requesting the current CIB: S_POLICY_ENGINE<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 attrd: [1279]: notice: attrd_local_callback: Sending full refresh (origin=crmd)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 attrd: [1279]: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 cib: [1277]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/73, version=0.78.5): ok (rc=0)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 crmd: [1281]: WARN: match_down_event: No match for shutdown action on stratus17<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 crmd: [1281]: info: te_update_diff: Stonith/shutdown of stratus17 not matched<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 crmd: [1281]: info: abort_transition_graph: te_update_diff:215 - Triggered transition abort (complete=1, tag=node_state, id=stratus17, magic=NA, cib=0.78.6) : Node failure<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 crmd: [1281]: WARN: match_down_event: No match for shutdown action on stratus20<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 crmd: [1281]: info: te_update_diff: Stonith/shutdown of stratus20 not matched<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 crmd: [1281]: info: abort_transition_graph: te_update_diff:215 - Triggered transition abort (complete=1, tag=node_state, id=stratus20, magic=NA, cib=0.78.6) : Node failure<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 crmd: [1281]: info: do_pe_invoke: Query 77: Requesting the current CIB: S_POLICY_ENGINE<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 crmd: [1281]: info: do_pe_invoke: Query 78: Requesting the current CIB: S_POLICY_ENGINE<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:55 stratus18 cib: [1277]: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/75, version=0.78.7): ok (rc=0)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:56 stratus18 crmd: [1281]: info: do_pe_invoke_callback: Invoking the PE: query=78, ref=pe_calc-dc-1373460176-49, seq=2728, quorate=0<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:56 stratus18 attrd: [1279]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-drbd_tomtest:0 (10000)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:56 stratus18 pengine: [1280]: WARN: cluster_status: We do not have quorum - fencing and resource management disabled<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:56 stratus18 pengine: [1280]: WARN: pe_fence_node: Node stratus17 will be fenced because it is un-expectedly down<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:56 stratus18 pengine: [1280]: WARN: determine_online_status: Node stratus17 is unclean<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:56 stratus18 pengine: [1280]: WARN: pe_fence_node: Node stratus20 will be fenced because it is un-expectedly down<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:56 stratus18 pengine: [1280]: WARN: determine_online_status: Node stratus20 is unclean<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:56 stratus18 pengine: [1280]: notice: unpack_rsc_op: Hard error - drbd_tomtest:0_last_failure_0 failed with rc=5: Preventing ms_drbd_tomtest from re-starting on stratus20<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-IE">Jul 10 13:42:56 stratus18 pengine: [1280]: notice: unpack_rsc_op: Hard error - tomtest_mysql_SERVICE_last_failure_0 failed with rc=5: Preventing tomtest_mysql_SERVICE from re-starting on stratus20<o:p></o:p></span></p>
<p class="MsoNormal"><o:p> </o:p></p>
</div>
</body>
</html>