[Pacemaker] Problem in Stonith configuration

Andreas Kurz andreas at hastexo.com
Mon Oct 17 09:08:16 EDT 2011


Hello,

On 10/17/2011 12:34 PM, neha chatrath wrote:
> Hello,
> I am configuring a 2-node cluster with the following configuration:
> 
> *[root@MCG1 init.d]# crm configure show
> 
> node $id="16738ea4-adae-483f-9d79-b0ecce8050f4" mcg2 \
> attributes standby="off"
> 
> node $id="3d507250-780f-414a-b674-8c8d84e345cd" mcg1 \
> attributes standby="off"
> 
> primitive ClusterIP ocf:heartbeat:IPaddr \
> params ip="192.168.1.204" cidr_netmask="255.255.255.0" nic="eth0:1" \
> 
> op monitor interval="40s" timeout="20s" \
> meta target-role="Started"
> 
> primitive app1_fencing stonith:suicide \
> op monitor interval="90" \
> meta target-role="Started"
> 
> primitive myapp1 ocf:heartbeat:Redundancy \
> op monitor interval="60s" role="Master" timeout="30s" on-fail="standby" \
> op monitor interval="40s" role="Slave" timeout="40s" on-fail="restart"
> 
> primitive myapp2 ocf:mcg:Redundancy_myapp2 \
> op monitor interval="60" role="Master" timeout="30" on-fail="standby" \
> op monitor interval="40" role="Slave" timeout="40" on-fail="restart"
> 
> primitive myapp3 ocf:mcg:red_app3 \
> op monitor interval="60" role="Master" timeout="30" on-fail="fence" \
> op monitor interval="40" role="Slave" timeout="40" on-fail="restart"
> 
> ms ms_myapp1 myapp1 \
> meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1"
> notify="true"
> 
> ms ms_myapp2 myapp2 \
> meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1"
> notify="true"
> 
> ms ms_myapp3 myapp3 \
> meta master-max="1" master-max-node="1" clone-max="2" clone-node-max="1"
> notify="true"
> 
> colocation myapp1_col inf: ClusterIP ms_myapp1:Master
> 
> colocation myapp2_col inf: ClusterIP ms_myapp2:Master
> 
> colocation myapp3_col inf: ClusterIP ms_myapp3:Master
> 
> order myapp1_order inf: ms_myapp1:promote ClusterIP:start
> 
> order myapp2_order inf: ms_myapp2:promote ms_myapp1:start
> 
> order myapp3_order inf: ms_myapp3:promote ms_myapp2:start
> 
> property $id="cib-bootstrap-options" \
> dc-version="1.0.11-db98485d06ed3fe0fe236509f023e1bd4a5566f1" \
> cluster-infrastructure="Heartbeat" \
> stonith-enabled="true" \
> no-quorum-policy="ignore"
> 
> rsc_defaults $id="rsc-options" \
> resource-stickiness="100" \
> migration-threshold="3"
> *
> I start the Heartbeat daemon on only one of the nodes, e.g. mcg1. But none of the
> resources (myapp1, myapp2 etc.) gets started, even on this node.
> Following is the output of the "*crm_mon -f*" command:
> 
> *Last updated: Mon Oct 17 10:19:22 2011
> Stack: Heartbeat
> Current DC: mcg1 (3d507250-780f-414a-b674-8c8d84e345cd) - partition with quorum
> Version: 1.0.11-db98485d06ed3fe0fe236509f023e1bd4a5566f1
> 2 Nodes configured, unknown expected votes
> 5 Resources configured.
> ============
> Node mcg2 (16738ea4-adae-483f-9d79-b0ecce8050f4): UNCLEAN (offline)

The cluster is waiting for a successful fencing event before starting
any resources ... that is the only way it can be sure the second node is
running no resources.
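
If you are absolutely certain that mcg2 is really down, you can also clear its
state by hand (assuming your crm shell version offers the clearstate command):

crm node clearstate mcg2

Only do that when you know for sure the node is powered off.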

Since you are using the suicide plugin, this will never happen as long as
Heartbeat is not started on that node. If this is only a _test_ setup, go with
the ssh or even the null stonith plugin ... but never use them on production
systems!
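
For such a test setup, something along these lines should do (just a sketch;
you can check the exact parameter names with "stonith -t external/ssh -n"):

primitive app1_fencing stonith:external/ssh \
    params hostlist="mcg1 mcg2" \
    op monitor interval="90"

or, if you do not want any real node resets at all:

primitive app1_fencing stonith:null \
    params hostlist="mcg1 mcg2"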

Regards,
Andreas

-- 
Need help with Pacemaker?
http://www.hastexo.com/now

