You can try creating a dummy resource and colocate all clones with it.
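Untested sketch using your resource names; newer pcs versions may want the
role keyword "Promoted" instead of "master":

    # No-op resource whose only job is to pick a node for the whole group
    pcs resource create colocator ocf:pacemaker:Dummy

    # Tie each promoted clone to wherever the dummy runs
    pcs constraint colocation add master drbdShare-clone with colocator INFINITY
    pcs constraint colocation add master drbdShareRead-clone with colocator INFINITY
    pcs constraint colocation add master drbdShareWrite-clone with colocator INFINITY

Since the Dummy resource has no dependencies of its own, it can run on any
node, which avoids the "leader that can fail" problem.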
Best Regards,
Strahil Nikolov

On Tue, Mar 15, 2022 at 20:53, john tillman <johnt@panix.com> wrote:

> On 15.03.2022 19:35, john tillman wrote:
>> Hello,
>>
>> I'm trying to guarantee that all my cloned drbd resources start on the
>> same node and I can't figure out the syntax of the constraint to do it.
>>
>> I could nominate one of the drbd resources as a "leader" and have all
>> the others follow it.  But then if something happens to that leader,
>> the others are without constraint.
>>
>
> Colocation is asymmetric.  Resource B is colocated with resource A, so
> pacemaker decides the placement of resource A first.  If resource A
> cannot run anywhere (which is probably what you mean by "something
> happens to that leader"), resource B cannot run anywhere either.  This
> is also true for resources inside a resource set.
>
> I do not think pacemaker supports "always run these resources together,
> no matter how many of the resources can run".

Huh, no way to get all the masters to start on the same node.  Interesting.

The set construct has a boolean field "require-all".  I'll try that before
I give up.
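Maybe something like this (untested, and I'm guessing at the syntax;
require-all may turn out to be an ordering-set option only):

    # One colocation set holding all the promoted clones together
    pcs constraint colocation set drbdShare-clone drbdShareRead-clone \
        drbdShareWrite-clone role=Master setoptions score=INFINITY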
Could I create a resource (some systemd service) that all the masters are
colocated with?  Feels like a hack, but would it work?

Thank you for the response.

-John

>> I tried adding them to a group but got a syntax error from pcs saying
>> that I wasn't allowed to add cloned resources to a group.
>>
>> If anyone is interested, it started from this example:
>> https://edmondcck.medium.com/setup-a-highly-available-nfs-cluster-with-disk-encryption-using-luks-drbd-corosync-and-pacemaker-a96a5bdffcf8
>> There's a DRBD partition that gets mounted onto a local directory.  The
>> local directory is then mounted onto an exported directory (mount
>> --bind).  Then the nfs service (samba too) gets started, and finally
>> the VIP.
>>
>> Please note that while I have 3 DRBD resources currently, that number
>> may increase after the initial configuration is performed.
>>
>> I would just like to know a mechanism to make sure all the DRBD
>> resources are colocated.  Any suggestions welcome.
>>
>> [root@nas00 ansible]# pcs resource
>>   * Clone Set: drbdShare-clone [drbdShare] (promotable):
>>     * Masters: [ nas00 ]
>>     * Slaves: [ nas01 ]
>>   * Clone Set: drbdShareRead-clone [drbdShareRead] (promotable):
>>     * Masters: [ nas00 ]
>>     * Slaves: [ nas01 ]
>>   * Clone Set: drbdShareWrite-clone [drbdShareWrite] (promotable):
>>     * Masters: [ nas00 ]
>>     * Slaves: [ nas01 ]
>>   * localShare      (ocf::heartbeat:Filesystem):     Started nas00
>>   * localShareRead  (ocf::heartbeat:Filesystem):     Started nas00
>>   * localShareWrite (ocf::heartbeat:Filesystem):     Started nas00
>>   * nfsShare        (ocf::heartbeat:Filesystem):     Started nas00
>>   * nfsShareRead    (ocf::heartbeat:Filesystem):     Started nas00
>>   * nfsShareWrite   (ocf::heartbeat:Filesystem):     Started nas00
>>   * nfsService      (systemd:nfs-server):            Started nas00
>>   * smbService      (systemd:smb):                   Started nas00
>>   * vipN            (ocf::heartbeat:IPaddr2):        Started nas00
>>
>> [root@nas00 ansible]# pcs constraint show --all
>> Location Constraints:
>> Ordering Constraints:
>>   promote drbdShare-clone then start localShare (kind:Mandatory)
>>   promote drbdShareRead-clone then start localShareRead (kind:Mandatory)
>>   promote drbdShareWrite-clone then start localShareWrite (kind:Mandatory)
>>   start localShare then start nfsShare (kind:Mandatory)
>>   start localShareRead then start nfsShareRead (kind:Mandatory)
>>   start localShareWrite then start nfsShareWrite (kind:Mandatory)
>>   start nfsShare then start nfsService (kind:Mandatory)
>>   start nfsShareRead then start nfsService (kind:Mandatory)
>>   start nfsShareWrite then start nfsService (kind:Mandatory)
>>   start nfsService then start smbService (kind:Mandatory)
>>   start nfsService then start vipN (kind:Mandatory)
>> Colocation Constraints:
>>   localShare with drbdShare-clone (score:INFINITY) (with-rsc-role:Master)
>>   localShareRead with drbdShareRead-clone (score:INFINITY) (with-rsc-role:Master)
>>   localShareWrite with drbdShareWrite-clone (score:INFINITY) (with-rsc-role:Master)
>>   nfsShare with localShare (score:INFINITY)
>>   nfsShareRead with localShareRead (score:INFINITY)
>>   nfsShareWrite with localShareWrite (score:INFINITY)
>>   nfsService with nfsShare (score:INFINITY)
>>   nfsService with nfsShareRead (score:INFINITY)
>>   nfsService with nfsShareWrite (score:INFINITY)
>>   smbService with nfsShare (score:INFINITY)
>>   smbService with nfsShareRead (score:INFINITY)
>>   smbService with nfsShareWrite (score:INFINITY)
>>   vipN with nfsService (score:INFINITY)
>> Ticket Constraints:
>>
>> Thank you for your time and attention.
>>
>> -John

_______________________________________________
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/