[ClusterLabs] Antw: [EXT] Re: constraining multiple cloned resources to the same node

john tillman johnt at panix.com
Wed Mar 16 10:01:35 EDT 2022


Thank you.  I have considered the dual-master approach.  With a single VIP
controlling the connection point for NFS, I believe a dual-master
configuration would work.  But before I do that I want to make sure I can't
get all my drbd resources to promote on the same node through cluster
configuration alone.  I have yet to try the "set" construct, but the links
I've read (courtesy of Jehan-Guillaume de Rorthais) make me optimistic.
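
For what it's worth, the sort of thing I'm planning to try is a single
colocation set over the Master role of all three clones.  I haven't tested
it yet, so treat the exact pcs syntax below as my best guess rather than a
known-good recipe:

  # untested sketch: colocate the Master instances of all three DRBD
  # clones in one resource set
  pcs constraint colocation set drbdShare-clone drbdShareRead-clone \
      drbdShareWrite-clone role=Master setoptions score=INFINITY

What I still need to find out is whether a set avoids the chained "if one
can't run, none can" behaviour discussed further down.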

Regards,
John


> I wonder:
> Is it possible to write some rule that sets the score to become master
> on a specific node higher than on another node?
> Maybe the solution is to run DRBD in dual-primary configuration ;-)
> So the "master" is always on the right node.
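
(If I experiment with that idea, my untested reading of the pcs docs is
that such a rule would be a location constraint on the clone's Master
role, something like the following, with the score chosen arbitrarily:

  # untested sketch: prefer promoting drbdShare-clone on nas00
  pcs constraint location drbdShare-clone rule role=Master score=100 \
      '#uname' eq nas00

That only biases where one clone gets promoted, though; it doesn't by
itself tie the three clones to each other.)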
>
> Regards,
> Ulrich
>
>
>>>> "john tillman" <johnt at panix.com> schrieb am 15.03.2022 um 19:53 in
> Nachricht
> <8b7bdc8a0fc243d4ceba44415e0b3dc0.squirrel at mail.panix.com>:
>>>  On 15.03.2022 19:35, john tillman wrote:
>>>> Hello,
>>>>
>>>> I'm trying to guarantee that all my cloned drbd resources start on the
>>>> same node and I can't figure out the syntax of the constraint to do
>>>> it.
>>>>
>>>> I could nominate one of the drbd resources as a "leader" and have all
>>>> the others follow it.  But then if something happens to that leader
>>>> the others are without constraint.
>>>>
>>>
>>> Colocation is asymmetric. Resource B is colocated with resource A, so
>>> pacemaker decides placement of resource A first. If resource A cannot
>>> run anywhere (which is probably what you mean by "something happens
>>> to that leader"), resource B cannot run anywhere. This is also true for
>>> resources inside a resource set.
>>>
>>> I do not think pacemaker supports "always run these resources together,
>>> no matter how many resources can run".
>>>
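
(To restate that asymmetry with a made-up pair of resources, a constraint
like

  # hypothetical resources rscA and rscB, for illustration only:
  # rscB follows rscA, so pacemaker places rscA first, and if rscA can
  # run nowhere then rscB can run nowhere either
  pcs constraint colocation add rscB with rscA INFINITY

only constrains rscB; rscA is still free to run even if rscB cannot.)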
>>
>>
>> Huh, no way to get all the masters to start on the same node.
>> Interesting.
>>
>> The set construct has a boolean field "require-all".  I'll try that
>> before I give up.
>>
>> Could I create a resource (some systemd service) that all the masters
>> are colocated with?  Feels like a hack but would it work?
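
(Concretely, I was picturing a dummy "anchor" resource along these lines;
the resource name is made up and I have not tried this:

  # hypothetical anchor resource that all the masters would follow
  pcs resource create masterAnchor ocf:pacemaker:Dummy
  pcs constraint colocation add master drbdShare-clone with masterAnchor INFINITY
  pcs constraint colocation add master drbdShareRead-clone with masterAnchor INFINITY
  pcs constraint colocation add master drbdShareWrite-clone with masterAnchor INFINITY

The obvious catch is that if the anchor itself can't run anywhere, none of
the clones can be promoted.)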
>>
>> Thank you for the response.
>>
>> -John
>>
>>
>>>> I tried adding them to a group but got a syntax error from pcs saying
>>>> that I wasn't allowed to add cloned resources to a group.
>>>>
>>>> If anyone is interested, it started from this example:
>>>>
>>>> https://edmondcck.medium.com/setup-a-highly-available-nfs-cluster-with-disk-encryption-using-luks-drbd-corosync-and-pacemaker-a96a5bdffcf8
>>>> There's a DRBD partition that gets mounted onto a local directory.  The
>>>> local directory is then mounted onto an exported directory (mount --bind).
>>>> Then the nfs service (samba too) gets started and finally the VIP.
>>>>
>>>> Please note that while I have 3 DRBD resources currently, that number
>>>> may increase after the initial configuration is performed.
>>>>
>>>> I would just like to know a mechanism to make sure all the DRBD
>>>> resources are colocated.  Any suggestions welcome.
>>>>
>>>> [root at nas00 ansible]# pcs resource
>>>>   * Clone Set: drbdShare-clone [drbdShare] (promotable):
>>>>     * Masters: [ nas00 ]
>>>>     * Slaves: [ nas01 ]
>>>>   * Clone Set: drbdShareRead-clone [drbdShareRead] (promotable):
>>>>     * Masters: [ nas00 ]
>>>>     * Slaves: [ nas01 ]
>>>>   * Clone Set: drbdShareWrite-clone [drbdShareWrite] (promotable):
>>>>     * Masters: [ nas00 ]
>>>>     * Slaves: [ nas01 ]
>>>>   * localShare    (ocf::heartbeat:Filesystem):     Started nas00
>>>>   * localShareRead    (ocf::heartbeat:Filesystem):     Started nas00
>>>>   * localShareWrite   (ocf::heartbeat:Filesystem):     Started nas00
>>>>   * nfsShare      (ocf::heartbeat:Filesystem):     Started nas00
>>>>   * nfsShareRead      (ocf::heartbeat:Filesystem):     Started nas00
>>>>   * nfsShareWrite     (ocf::heartbeat:Filesystem):     Started nas00
>>>>   * nfsService  (systemd:nfs-server):    Started nas00
>>>>   * smbService  (systemd:smb):   Started nas00
>>>>   * vipN      (ocf::heartbeat:IPaddr2):        Started nas00
>>>>
>>>> [root at nas00 ansible]# pcs constraint show --all
>>>> Location Constraints:
>>>> Ordering Constraints:
>>>>   promote drbdShare-clone then start localShare (kind:Mandatory)
>>>>   promote drbdShareRead-clone then start localShareRead (kind:Mandatory)
>>>>   promote drbdShareWrite-clone then start localShareWrite (kind:Mandatory)
>>>>   start localShare then start nfsShare (kind:Mandatory)
>>>>   start localShareRead then start nfsShareRead (kind:Mandatory)
>>>>   start localShareWrite then start nfsShareWrite (kind:Mandatory)
>>>>   start nfsShare then start nfsService (kind:Mandatory)
>>>>   start nfsShareRead then start nfsService (kind:Mandatory)
>>>>   start nfsShareWrite then start nfsService (kind:Mandatory)
>>>>   start nfsService then start smbService (kind:Mandatory)
>>>>   start nfsService then start vipN (kind:Mandatory)
>>>> Colocation Constraints:
>>>>   localShare with drbdShare-clone (score:INFINITY) (with-rsc-role:Master)
>>>>   localShareRead with drbdShareRead-clone (score:INFINITY) (with-rsc-role:Master)
>>>>   localShareWrite with drbdShareWrite-clone (score:INFINITY) (with-rsc-role:Master)
>>>>   nfsShare with localShare (score:INFINITY)
>>>>   nfsShareRead with localShareRead (score:INFINITY)
>>>>   nfsShareWrite with localShareWrite (score:INFINITY)
>>>>   nfsService with nfsShare (score:INFINITY)
>>>>   nfsService with nfsShareRead (score:INFINITY)
>>>>   nfsService with nfsShareWrite (score:INFINITY)
>>>>   smbService with nfsShare (score:INFINITY)
>>>>   smbService with nfsShareRead (score:INFINITY)
>>>>   smbService with nfsShareWrite (score:INFINITY)
>>>>   vipN with nfsService (score:INFINITY)
>>>> Ticket Constraints:
>>>>
>>>> Thank you for your time and attention.
>>>>
>>>> -John
>>>>
>>>>




More information about the Users mailing list