[Pacemaker] Help for my first test

Arthur B. Olsen ABO at ft.fo
Sun Feb 27 07:53:39 EST 2011


>-----Original message-----
>To:    pacemaker at oss.clusterlabs.org;
>From:    Arthur B. Olsen <ABO at ft.fo>
>Sent:    Sat 26-02-2011 13:58
>Subject:    [Pacemaker] Help for my first test
>> I'm a Pacemaker newbie. I have read the docs and googled around, but still
>> I can't connect the dots.
>> 
>> I have two nodes.
>> 
>> Node1 (192.168.0.1)
>> Node2 (192.168.0.2)
>> 
>> I want a failover IP address (192.168.0.3), so that when Node1 is up it
>> has eth0:0 with 192.168.0.3.
>> And if Node1 fails, Node2 gets eth0:0 with 192.168.0.3.
>> And when Node1 comes up again, it gets the shared IP back.
>
>
>Arthur,
>
>You need to set up a node preference for auto-failback of a resource.
>Also, when a node in a two-node cluster fails, you lose quorum, and the
>cluster cannot function without quorum.
>It is highly recommended that you add a third node for quorum/voting to
>prevent a "split brain" situation.
>(Especially if there is a possibility of data corruption.)
>
>If this is not possible you have to set the no-quorum-policy to ignore to
>enable the cluster to continue functioning if quorum is lost.
>"crm configure property no-quorum-policy=ignore"
>
>You should also configure fencing devices, like an APC power switch
>(the preferred way) or e.g. an IPMI device (almost all servers support this).
>
>Regards,
>Robert van Leeuwen
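
Robert's suggestions could be sketched in crm shell roughly as follows. This is only an illustrative sketch: the resource name `failover-ip` and the node name `node1` are assumptions, not anything from the thread, and netmask/NIC values must match your network.

```shell
# Sketch: floating IP with a node preference for auto-failback.
# Resource name "failover-ip" and node name "node1" are assumptions.

# The floating IP itself, managed by the IPaddr2 resource agent:
crm configure primitive failover-ip ocf:heartbeat:IPaddr2 \
    params ip=192.168.0.3 cidr_netmask=24 nic=eth0 \
    op monitor interval=10s

# Prefer node1; with the default resource-stickiness of 0, the IP
# moves back to node1 as soon as it rejoins the cluster:
crm configure location prefer-node1 failover-ip 100: node1

# Two-node setups only: keep running when quorum is lost:
crm configure property no-quorum-policy=ignore
```

With a positive stickiness value instead, the IP would stay where it is after a failover rather than failing back automatically.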

Yes, I understand this now with quorum. Of course it's pointless with only
two nodes. So I set up a third node and disabled STONITH for now, and
everything is working just fine. Which leads to my second question.

My final production setup will be:

2 nodes running the Varnish cache load balancer; if one goes down, the other
takes both IP addresses.

2 nodes running NFS and DRBD, with a floating IP address; if nfs01 goes
down, nfs02 takes over the IP address, DRBD and NFS. When nfs01 comes back
up, it takes over again.

N nodes running Apache,

And something running MySQL; I don't know exactly how this will look yet.

The question is: is it possible for all servers to be part of the same
cluster, with completely different roles running completely different
services, all managed by the same Corosync settings? Then I would never
lose quorum.
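
Pacemaker does support this kind of mixed-role cluster: every node votes for quorum, and location constraints decide which resources may run where. A rough sketch (all resource and node names here, such as `varnish`, `nfs-server`, `lb01` and `web01`, are illustrative assumptions):

```shell
# Sketch: one cluster, different roles per node group.
# Resource and node names are assumptions for illustration.

# Option 1: an "opt-in" (asymmetric) cluster, where no resource runs
# anywhere unless a location constraint explicitly allows it:
crm configure property symmetric-cluster=false
crm configure location varnish-on-lb1 varnish 100: lb01
crm configure location varnish-on-lb2 varnish 50: lb02

# Option 2: keep the default symmetric cluster, but forbid a
# resource from unsuitable nodes with a -inf score:
crm configure location nfs-not-on-web nfs-server -inf: web01
```

Either way, all nodes still count toward quorum, even nodes that never run a given resource.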
