[ClusterLabs] Pacemaker Cluster help

Andrei Borzenkov arvidjaar at gmail.com
Tue Jun 1 13:48:33 EDT 2021


On 01.06.2021 18:20, kgaillot at redhat.com wrote:
> On Thu, 2021-05-27 at 20:46 +0300, Andrei Borzenkov wrote:
>> On 27.05.2021 15:36, Nathan Mazarelo wrote:
>>> Is there a way to have Pacemaker resource groups fail over if all
>>> floating IP resources are unavailable?
>>>
>>> I want to have multiple floating IPs in a resource group that will
>>> only fail over if all of the IPs are unusable. Each floating IP is
>>> on a different subnet and can be used by the application I have. If
>>> a floating IP is unavailable, the application will use the next
>>> available floating IP.
>>> Resource Group: floating_IP
>>>     floating-IP
>>>     floating-IP2
>>>     floating-IP3
>>> For example, right now if a floating-IP resource fails, the whole
>>> resource group fails over to a different node. What I want is for
>>> Pacemaker to fail the group over only if all three resources are
>>> unavailable. Is this possible?
>>>
>>
>> Yes. See "Moving Resources Due to Connectivity Changes" in Pacemaker
>> Explained.
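
For reference, the approach in that chapter pairs a cloned
ocf:pacemaker:ping resource with a location rule on the "pingd" node
attribute. A rough pcs sketch (the gateway address is a placeholder,
not something from this thread):

  # Clone a ping resource so every node monitors its connectivity
  pcs resource create ping ocf:pacemaker:ping host_list=192.168.1.1 \
      dampen=5s op monitor interval=10s clone
  # Move the group away from nodes that lose connectivity
  pcs constraint location floating_IP rule score=-INFINITY \
      not_defined pingd or pingd lt 1
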
> 
> I don't think that will work when the IP resources themselves are
> the ones that need to be affected.
> 

I guess this needs a more precise explanation from the OP of what
"floating IP is unavailable" means. Personally, I see no point in
having a local IP without connectivity. If the question is really just
"fail over only if all resources have failed", then the obvious answer
is a resource set with require-all=false, and it does not matter what
type the resources are.
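
In pcs terms that is an ordered set, roughly like the sketch below
(the dependent application resource "app" is hypothetical here, since
the OP did not name one):

  # Let "app" start as long as at least one floating IP is active;
  # require-all=false makes any one member of the set sufficient
  pcs constraint order set floating-IP floating-IP2 floating-IP3 \
      sequential=false require-all=false set app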

> My first thought is that a resource group is probably not the right
> model, since there is not likely to be an ordering relationship among
> the IPs, just colocation. I'd use separate colocations for IP2 and IP3
> with IP1 instead. However, that is not completely symmetrical -- if IP1
> *can't* be assigned to a node for any reason (e.g. meeting its failure
> threshold on all nodes), then the other IPs can't either.
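
With the resource names from the original post, that layout would look
roughly like this in pcs:

  # Keep IP2 and IP3 on the node hosting IP1, with no ordering among them
  pcs constraint colocation add floating-IP2 with floating-IP INFINITY
  pcs constraint colocation add floating-IP3 with floating-IP INFINITY
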
> 
> To keep the IPs from failing over as soon as one of them fails, the
> closest approach I can think of is the new critical resource feature,
> which is just coming out in the 2.1.0 release and so is probably not
> an option here. Marking IP2 and IP3 as noncritical would allow them
> to stop on failure, and only if IP1 also failed would they be started
> elsewhere. However, again it's not completely symmetric: all IPs
> would fail over if IP1 fails.
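
Marking a resource noncritical is just a meta-attribute; on 2.1.0 or
later it would look roughly like this in pcs:

  # A failed noncritical resource stops instead of dragging the
  # resources it is colocated with to another node
  pcs resource meta floating-IP2 critical=false
  pcs resource meta floating-IP3 critical=false
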
> 
> Basically, there's no way to treat a set of resources exactly equally.
> Pacemaker has to assign one of them to a node first, then assign the
> others relative to it.
> 
> There are some feature requests that are related, but no one's
> volunteered to do them yet:
> 
>  https://bugs.clusterlabs.org/show_bug.cgi?id=5052
>  https://bugs.clusterlabs.org/show_bug.cgi?id=5320
> 


