<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head>
<meta content="text/html; charset=ISO-8859-1" http-equiv="Content-Type">
</head>
<body text="#000000" bgcolor="#ffffff">
On 02/12/2012 04:55 PM, Andreas Kurz wrote:
<blockquote>
<pre wrap> op monitor role="Master" interval="30s"
op monitor role="Slave" interval="31s"
</pre>
<pre wrap>ipmi fencing device capable of
fencing more than one node?</pre>
</blockquote>
Andreas-<br>
<br>
I applied both changes you mentioned, but the behavior persists.
Here is my current configuration:<br>
<br>
<blockquote><tt>node nodea \<br>
attributes standby="off"<br>
node nodeb \<br>
attributes standby="off"<br>
primitive ClusterIP ocf:heartbeat:IPaddr2 \<br>
params ip="192.168.1.3" cidr_netmask="32" \<br>
op monitor interval="30s"<br>
primitive datafs ocf:heartbeat:Filesystem \<br>
params device="/dev/drbd0" directory="/data" fstype="ext3" \<br>
meta target-role="Started"<br>
primitive drbd0 ocf:linbit:drbd \<br>
params drbd_resource="drbd0" \<br>
op monitor interval="31s" role="Slave" \<br>
op monitor interval="30s" role="Master"<br>
primitive drbd1 ocf:linbit:drbd \<br>
params drbd_resource="drbd1" \<br>
op monitor interval="31s" role="Slave" \<br>
op monitor interval="30s" role="Master"<br>
primitive fence-nodea stonith:fence_ipmilan \<br>
params pcmk_host_list="nodeb" ipaddr="xxx.xxx.xxx.xxx"
login="xxxxxxx" passwd="xxxxxxxx" lanplus="1" timeout="4"
auth="md5" \<br>
op monitor interval="60s"<br>
primitive fence-nodeb stonith:fence_ipmilan \<br>
params pcmk_host_list="nodea" ipaddr="xxx.xxx.xxx.xxx"
login="xxxxxxx" passwd="xxxxxxxx" lanplus="1" timeout="4"
auth="md5" \<br>
op monitor interval="60s"<br>
primitive httpd ocf:heartbeat:apache \<br>
params configfile="/etc/httpd/conf/httpd.conf" \<br>
op monitor interval="1min"<br>
primitive patchfs ocf:heartbeat:Filesystem \<br>
params device="/dev/drbd1" directory="/patch" fstype="ext3"
\<br>
meta target-role="Started"<br>
group web datafs patchfs ClusterIP httpd<br>
ms drbd0clone drbd0 \<br>
meta master-max="1" master-node-max="1" clone-max="2"
clone-node-max="1" notify="true" target-role="Master"<br>
ms drbd1clone drbd1 \<br>
meta master-max="1" master-node-max="1" clone-max="2"
clone-node-max="1" notify="true" target-role="Master"<br>
location fence-on-nodea fence-nodea \<br>
rule $id="fence-on-nodea-rule" -inf: #uname ne nodea<br>
location fence-on-nodeb fence-nodeb \<br>
rule $id="fence-on-nodeb-rule" -inf: #uname ne nodeb<br>
colocation datafs-with-drbd0 inf: web drbd0clone:Master<br>
colocation patchfs-with-drbd1 inf: web drbd1clone:Master<br>
order datafs-after-drbd0 inf: drbd0clone:promote web:start<br>
order patchfs-after-drbd1 inf: drbd1clone:promote web:start<br>
property $id="cib-bootstrap-options" \<br>
dc-version="1.1.6-3.el6-a02c0f19a00c1eb2527ad38f146ebc0834814558"
\<br>
cluster-infrastructure="openais" \<br>
expected-quorum-votes="2" \<br>
stonith-enabled="false" \<br>
no-quorum-policy="ignore" \<br>
last-lrm-refresh="1328556424"<br>
rsc_defaults $id="rsc-options" \<br>
resource-stickiness="100"</tt><br>
</blockquote>
If the cluster is fully down and I start corosync and pacemaker on one
node, the cluster fences the other node, but the services do not come
up until the cluster-recheck-interval timer expires. I have attached the
corosync.log from this latest test.<br>
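<br>
For reference, the timer I am referring to is the cluster-recheck-interval
property; shortening it (example value only, just a sketch to make the delay
easier to reproduce, not a proposed fix) looks like this:<br>
<blockquote><tt>crm configure property cluster-recheck-interval="2min"<br>
crm_simulate -sL &nbsp;# show scores and the pending transition from the live CIB</tt><br>
</blockquote>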
<br>
-Davin<br>
</body>
</html>