Also, check what 'drbdadm' has to tell you. Both nodes should be in sync, otherwise pacemaker will prevent the failover.
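For example (using the nfs1 resource name from your config below), run on either node:

    drbdadm status nfs1

Both disks should report UpToDate, roughly like this:

    nfs1 role:Primary
      volume:1 disk:UpToDate
      peer role:Secondary
        volume:1 replication:Established peer-disk:UpToDate

If you see replication:SyncSource/SyncTarget, or a peer-disk state other than UpToDate, the nodes are still resynchronizing and pacemaker will not promote the secondary.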
Best Regards,
Strahil Nikolov

On Sun, Nov 14, 2021 at 20:09, Andrei Borzenkov <arvidjaar@gmail.com> wrote:

On 14.11.2021 19:47, Neil McFadyen wrote:
> I have a Ubuntu 20.04 drbd nfs pacemaker/corosync setup for 2 nodes, it
> was working fine before but now I can't get the 2nd node to show as a slave
> under the Clone Set. So if I do a failover both nodes show as stopped.
>
> root@testnfs30:/etc/drbd.d# crm status
> Cluster Summary:
>   * Stack: corosync
>   * Current DC: testnfs32 (version 2.0.3-4b1f869f0f) - partition with quorum
>   * Last updated: Sun Nov 14 11:35:09 2021
>   * Last change:  Sun Nov 14 10:31:41 2021 by root via cibadmin on testnfs30
>   * 2 nodes configured
>   * 5 resource instances configured
>
> Node List:
>   * Node testnfs32: standby

This means no resource will be started on this node. If this is not
intentional, return the node to online (crm node online testnfs32).

>   * Online: [ testnfs30 ]
>
> Full List of Resources:
>   * Resource Group: HA:
>     * vip       (ocf::heartbeat:IPaddr2):        Started testnfs30
>     * fs_nfs    (ocf::heartbeat:Filesystem):     Started testnfs30
>     * nfs       (ocf::heartbeat:nfsserver):      Started testnfs30
>   * Clone Set: ms_drbd_nfs [drbd_nfs] (promotable):
>     * Masters: [ testnfs30 ]
>     * Stopped: [ testnfs32 ]
>
> This used to show as
> * Slaves: [ testnfs32 ]
>
> testnfs30# cat /proc/drbd
> version: 8.4.11 (api:1/proto:86-101)
> srcversion: FC3433D849E3B88C1E7B55C
>  0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----
>     ns:352 nr:368 dw:720 dr:4221 al:6 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f
> oos:0
>
> testnfs30:/etc/drbd.d# drbdadm status
> nfs1 role:Primary
>   volume:1 disk:UpToDate
>   peer role:Secondary
>     volume:1 replication:Established peer-disk:UpToDate
>
> root@testnfs30:/etc/drbd.d# crm config show
> node 1: testnfs30 \
>         attributes standby=off
> node 2: testnfs32 \
>         attributes standby=on
> primitive drbd_nfs ocf:linbit:drbd \
>         params drbd_resource=nfs1 \
>         op monitor interval=31s timeout=20s role=Slave \
>         op monitor interval=30s timeout=20s role=Master
> primitive fs_nfs Filesystem \
>         params device="/dev/drbd0" directory="/nfs1srv" fstype=ext4 options="noatime,nodiratime" \
>         op start interval=0 timeout=60 \
>         op stop interval=0 timeout=120 \
>         op monitor interval=15s timeout=60s
> primitive nfs nfsserver \
>         params nfs_init_script="/etc/init.d/nfs-kernel-server" nfs_shared_infodir="/nfs1srv/nfs_shared" nfs_ip=172.17.1.35 \
>         op monitor interval=5s
> primitive vip IPaddr2 \
>         params ip=172.17.1.35 cidr_netmask=16 nic=bond0 \
>         op monitor interval=20s timeout=20s \
>         op start interval=0s timeout=20s \
>         op stop interval=0s timeout=20s
> group HA vip fs_nfs nfs \
>         meta target-role=Started
> ms ms_drbd_nfs drbd_nfs \
>         meta master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=true
> order fs-nfs-before-nfs inf: fs_nfs:start nfs:start
> order ip-before-ms-drbd-nfs Mandatory: vip:start ms_drbd_nfs:promote
> location loc ms_drbd_nfs 100: testnfs30
> order ms-drbd-nfs-before-fs-nfs Mandatory: ms_drbd_nfs:promote fs_nfs:start
> colocation ms-drbd-nfs-with-ha inf: ms_drbd_nfs:Master HA
> property cib-bootstrap-options: \
>         have-watchdog=false \
>         dc-version=2.0.3-4b1f869f0f \
>         cluster-infrastructure=corosync \
>         cluster-name=debian \
>         no-quorum-policy=ignore \
>         stonith-enabled=false
>
> I noticed that this line was added since last time I checked, so I removed
> it, but that didn't help:
>
> location drbd-fence-by-handler-nfs1-ms_drbd_nfs ms_drbd_nfs \
>         rule $role=Master -inf: #uname ne testnfs32
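A note on that drbd-fence-by-handler constraint: assuming the usual handler setup in drbd.conf, constraints with that name are created by DRBD's crm-fence-peer.sh fence-peer handler when replication is interrupted, and are normally removed again by crm-unfence-peer.sh (the after-resync-target handler) once the peer is back in sync. If a stale one is left behind, it can be deleted by id, for example:

    crm configure delete drbd-fence-by-handler-nfs1-ms_drbd_nfs

If it keeps reappearing, DRBD itself still considers the peer's data suspect, so check the resync state before forcing a failover.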