[ClusterLabs] Moving Related Servers

Ken Gaillot kgaillot at redhat.com
Wed Apr 20 14:01:06 UTC 2016


On 04/20/2016 12:44 AM, H Yavari wrote:
> You got my situation right, but I couldn't find any way to do this.
> 
> Should I create one cluster with 4 nodes, or 2 clusters with 2 nodes each?
> How do I restrict the cluster nodes to each other?

Your last questions made me think of multi-site clustering using booth.
I think this might be the best solution for you.

You can configure two independent pacemaker clusters of 2 nodes each,
then use booth to ensure that only one cluster runs the resources at any
given time.
See:

http://clusterlabs.org/doc/en-US/Pacemaker/1.1-pcs/html-single/Pacemaker_Explained/index.html#idm140617279413776

This is usually done with clusters at physically separate locations, but
there's no problem using it with two clusters in one location.
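
For example, the booth configuration could look something like this (the
addresses and the ticket name are made up; the same file goes on both
clusters and on a third arbitrator machine):

    # /etc/booth/booth.conf
    transport = UDP
    port = 9929
    site = 192.168.10.10        # cluster 1 (App1 + App3)
    site = 192.168.20.10        # cluster 2 (App2 + App4)
    arbitrator = 192.168.30.10  # tie-breaker outside both clusters
    ticket = "service-x"

Each cluster then ties its resources to the ticket, e.g. with pcs:

    pcs constraint ticket add service-x AppGroup loss-policy=stop

so that only the cluster currently holding the ticket runs them.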

Alternatively, along the more traditional lines that Klaus and I have
mentioned, you could use rules and node attributes to keep the
resources where desired. You could write a custom resource agent that
sets a custom node attribute for the matching node (the start action
should set the attribute to 1, and the stop action should set it to 0;
if the resource was on App 1, you'd set the attribute for App 3, and if
it was on App 2, you'd set the attribute for App 4). Colocate that
resource with your floating IP, and use a rule to locate service X
where the custom node attribute is 1. See:

http://clusterlabs.org/doc/en-US/Pacemaker/1.1-pcs/html-single/Pacemaker_Explained/index.html#ap-ocf

http://clusterlabs.org/doc/en-US/Pacemaker/1.1-pcs/html-single/Pacemaker_Explained/index.html#idm140617279376656

http://clusterlabs.org/doc/en-US/Pacemaker/1.1-pcs/html-single/Pacemaker_Explained/index.html#idm140617356537136
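
As a rough sketch of such an agent (untested; the attribute name and the
use of attrd_updater are just one way to do it):

    #!/bin/sh
    # track-peer: minimal OCF-style agent sketch. When started on App1 it
    # flags App3 as the node for service X; on App2 it flags App4. A real
    # agent would also need a meta-data action, error handling, and a way
    # to clear a stale attribute after a node crash; all omitted here.

    STATE="${HA_RSCTMP:-/var/run}/track-peer.state"

    peer_for_me() {
        case "$(crm_node -n)" in
            App1) echo App3 ;;
            App2) echo App4 ;;
        esac
    }

    case "$1" in
        start)
            attrd_updater -n service_x_active -U 1 -N "$(peer_for_me)" || exit 1
            touch "$STATE"
            ;;
        stop)
            attrd_updater -n service_x_active -U 0 -N "$(peer_for_me)"
            rm -f "$STATE"
            ;;
        monitor)
            # OCF_SUCCESS (0) if started, OCF_NOT_RUNNING (7) otherwise
            [ -f "$STATE" ] && exit 0 || exit 7
            ;;
        *)
            exit 3   # OCF_ERR_UNIMPLEMENTED
            ;;
    esac
    exit 0

The colocation and the rule could then be something like this in pcs
syntax (again, all resource names are made up):

    pcs constraint colocation add TrackPeer with FloatingIP INFINITY
    pcs constraint location ServiceX rule score=-INFINITY \
        not_defined service_x_active or service_x_active ne 1

i.e. service X is banned from any node that is not currently flagged.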

> 
> ------------------------------------------------------------------------
> *From:* Klaus Wenninger <kwenning at redhat.com>
> *To:* users at clusterlabs.org
> *Sent:* Wednesday, 20 April 2016, 9:56:05
> *Subject:* Re: [ClusterLabs] Moving Related Servers
> 
> On 04/19/2016 04:32 PM, Ken Gaillot wrote:
>> On 04/18/2016 10:05 PM, H Yavari wrote:
>>> Hi,
>>>
>>> Here is the server map:
>>>
>>> App 3---------> App 1    (Active)
>>>
>>> App 4 ---------> App 2  (Standby)
>>>
>>>
>>> Now App1 and App2 are in a cluster with IP failover.
>>>
>>> I need that when the IP fails over and App2 becomes the active node,
>>> service "X" on server App3 stops and App4 becomes the active node.
>>> In other words, App1 works only with App3, and App2 works only with App4.
>>>
>>> I have a web application on App1 and some services on App3 (the same
>>> holds for App2 and App4).
>> This is a difficult situation to model. In particular, you could only
>> have a dependency one way -- so if we could get App 3 to fail over if
>> App 1 fails, we couldn't model the other direction (App 1 failing over
>> if App 3 fails). If each is dependent on the other, there's no way to
>> start one first.
>>
>> Is there a technical reason App 3 can work only with App 1?
>>
>> Is it possible for service "X" to stay running on both App 3 and App 4
>> all the time? If so, this becomes easier.
> Just another try to understand what you are aiming for:
> 
> You have a 2-node-cluster at the moment consisting of the nodes
> App1 & App2.
> You configured something like a master/slave-group to realize
> an active/standby scenario.
> 
> To get the servers App3 & App4 into the game we would make
> them additional pacemaker-nodes (App3 & App4).
> You now have a service X that could run on either App3 or App4
> (which is easy, e.g. by making it dependent on a node attribute).
> It should run on App3 when the service-group is active (master in
> pacemaker terms) on App1, and on App4 when the service-group is
> active on App2.
> 
> The standard thing would be to colocate a service with the master role
> (see all the DRBD examples, for instance).
> What we would need here instead is a "locate X on node x when the
> master is on node y" rule rather than a colocation.
> I don't know any way to directly specify this.
> One (admittedly ugly) way around it that I could imagine would be:
> 
> - locate service X1 on App3
> - locate service X2 on App4
> - dummy service Y1 is located on App1 and colocated with the master role
> - dummy service Y2 is located on App2 and colocated with the master role
> - service X1 depends on Y1
> - service X2 depends on Y2
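> 
> A rough sketch of the X1/Y1 half in pcs syntax (untested, all names
> made up; X2/Y2 would be analogous):
> 
>     pcs resource create Y1 ocf:pacemaker:Dummy
>     # Y1 may only ever run on App1 ...
>     pcs constraint location Y1 rule score=-INFINITY '#uname' ne App1
>     # ... and only where the master role is, i.e. only while App1 is master
>     pcs constraint colocation add Y1 with master ms-group INFINITY
>     # X1 lives on App3 and may only start/stay up while Y1 is running
>     pcs constraint location X1 rule score=-INFINITY '#uname' ne App3
>     pcs constraint order start Y1 then start X1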
> 
> If that somehow reflects your situation, the key question would
> probably be whether pengine would make the group on App2 master
> if service X1 fails on App3. I would guess yes, but I'm not sure.
> 
> Regards,
> Klaus
> 
>>> Sorry for the lengthy description.
>>>
>>>
>>> ------------------------------------------------------------------------
>>> *From:* Ken Gaillot <kgaillot at redhat.com>
>>> *To:* users at clusterlabs.org
>>> On 04/18/2016 02:34 AM, H Yavari wrote:
>>>
>>>> Hi,
>>>>
>>>> I have 4 CentOS servers (App1, App2, App3 and App4). I created a cluster
>>>> for App1 and App2 with a floating IP and it works well.
>>>> In our infrastructure, App1 works only with App3 and App2 only works with
>>>> App4. I mean we have 2 server sets: (App1 and App3), (App2 and App4).
>>>> So I want that when server App1 is down and App2 becomes the online node,
>>>> App3 goes offline too and App4 comes online, and vice versa: when App3 is
>>>> down and App4 comes online, App1 goes offline too.
>>>>
>>>>
>>>> How can I do this with Pacemaker? We have our own services on these
>>>> servers, so how can I use Pacemaker to monitor them?
>>>>
>>>> Thanks for reply.
>>>>
>>>> Regards.
>>>> H.Yavari
>>>
>>> I'm not sure I understand your requirements.
>>>
>>> There's no way to tell one node to leave the cluster when another node
>>> is down, and it would be a bad idea if you could: the nodes could never
>>> start up, because each would wait to see the other before starting; and
>>> in your cluster, two nodes shutting down would make the cluster lose
>>> quorum, so the other nodes would refuse to run any resources.
>>>
>>> However, it is usually possible to use constraints to enforce any
>>> desired behavior. So even though the node might not leave the cluster,
>>> you could make the cluster not place any resources on that node.
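>>>
>>> For example, a simple location constraint (names made up) keeps a
>>> resource off a particular node entirely:
>>>
>>>     pcs constraint location ServiceX avoids App4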
>>>
>>> Can you give more information about your resources and what nodes they
>>> are allowed to run on? What makes App1 and App3 dependent on each other?



