[ClusterLabs] CentOS 8 & drbd 9, two drbd devices and colocation
fatcharly at gmx.de
Tue Jan 19 07:06:52 EST 2021
Thanks Ken, I will give it a try.
best regards and stay healthy
fatcharly
> Sent: Monday, 18 January 2021 at 22:49
> From: "Ken Gaillot" <kgaillot at redhat.com>
> To: "Cluster Labs - All topics related to open-source clustering welcomed" <users at clusterlabs.org>
> Subject: Re: [ClusterLabs] CentOS 8 & drbd 9, two drbd devices and colocation
>
> On Mon, 2021-01-18 at 18:43 +0100, fatcharly at gmx.de wrote:
> > Hi again,
> >
> > I need some help figuring out how to make a two-node cluster with two
> > drbd-devices start both master devices on the same node.
> > How can I configure colocation to work that way? I tried to bind one
> > drbd-device to the other, but that didn't work out well.
>
> Once you have the DRBD itself working, you can create a colocation
> constraint specifying just the master role. In pcs it's "pcs constraint
> colocation add Master <rsc1> with Master <rsc2>".
>
> Keep in mind that the dependent resource (<rsc1>'s master role in this
> case) will not be able to start if the other resource is not active.
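> Based on the resource names in the config below, the suggested
> constraint would look something like this (a sketch, not tested against
> this cluster; the ordering constraint is an optional extra so the
> promotions happen in a defined sequence):
>
> ```shell
> # Colocate the promoted (master) role of the log-files DRBD clone
> # with the promoted role of the database DRBD clone, so both masters
> # land on the same node.
> pcs constraint colocation add Master drbd_logsfiles-clone with Master drbd_database-clone
>
> # Optionally, promote the database clone first, then the log-files clone.
> pcs constraint order promote drbd_database-clone then promote drbd_logsfiles-clone
> ```
>
> Note that with this colocation in place, drbd_logsfiles-clone cannot be
> promoted anywhere if drbd_database-clone has no master.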
>
> > This is my config:
> > > > I'm installing a 2 node pacemaker/drbd cluster on a CentOS 8.3.
> > > > I'm using this versions:
> > > >
> > > > kmod-drbd90-9.0.25-2.el8_3.elrepo.x86_64
> > > > drbd90-utils-9.13.1-1.el8.elrepo.x86_64
> > > >
> > > > pacemaker-cluster-libs-2.0.4-6.el8.x86_64
> > > > pacemaker-cli-2.0.4-6.el8.x86_64
> > > > pacemaker-schemas-2.0.4-6.el8.noarch
> > > > pacemaker-2.0.4-6.el8.x86_64
> > > > pacemaker-libs-2.0.4-6.el8.x86_64
> > > >
> > > > clusternode-names are lisbon and susanne
> > > >
> > > > Status of the cluster:
> > > > Cluster Summary:
> > > > * Stack: corosync
> > > > * Current DC: lisbon (version 2.0.4-6.el8-2deceaa3ae) -
> > > > partition with quorum
> > > > * Last updated: Mon Jan 18 16:30:21 2021
> > > > * Last change: Mon Jan 18 16:30:17 2021 by root via cibadmin
> > > > on lisbon
> > > > * 2 nodes configured
> > > > * 7 resource instances configured
> > > >
> > > > Node List:
> > > > * Online: [ lisbon susanne ]
> > > >
> > > > Active Resources:
> > > > * HA-IP_1 (ocf::heartbeat:IPaddr2): Started susanne
> > > > * Clone Set: drbd_database-clone [drbd_database] (promotable):
> > > > * Masters: [ susanne ]
> > > > * Slaves: [ lisbon ]
> > > > * fs_database (ocf::heartbeat:Filesystem): Started susanne
> > > > * Clone Set: drbd_logsfiles-clone [drbd_logsfiles]
> > > > (promotable):
> > > > * Masters: [ susanne ]
> > > > * fs_logfiles (ocf::heartbeat:Filesystem): Started susanne
> > > >
> > > > drbdadm status
> > > >
> > > > [root at susanne ~]# drbdadm status
> > > > drbd1 role:Primary
> > > > disk:UpToDate
> > > > lisbon role:Secondary
> > > > peer-disk:UpToDate
> > > >
> > > > drbd2 role:Primary
> > > > disk:UpToDate
> > > > lisbon connection:Connecting
> > > >
> > > > [root at lisbon ~]# drbdadm status
> > > > drbd1 role:Secondary
> > > > disk:UpToDate
> > > > susanne role:Primary
> > > > peer-disk:UpToDate
> > > >
> > > >
> > > > cluster-config:
> > > > Cluster Name: mysql_cluster
> > > > Corosync Nodes:
> > > > susanne lisbon
> > > > Pacemaker Nodes:
> > > > lisbon susanne
> > > >
> > > > Resources:
> > > > Resource: HA-IP_1 (class=ocf provider=heartbeat type=IPaddr2)
> > > > Attributes: cidr_netmask=24 ip=192.168.18.150
> > > > Operations: monitor interval=15s (HA-IP_1-monitor-interval-15s)
> > > > start interval=0s timeout=20s (HA-IP_1-start-
> > > > interval-0s)
> > > > stop interval=0s timeout=20s (HA-IP_1-stop-
> > > > interval-0s)
> > > > Clone: drbd_database-clone
> > > > Meta Attrs: clone-max=2 clone-node-max=1 notify=true
> > > > promotable=true promoted-max=1 promoted-node-max=1
> > > > Resource: drbd_database (class=ocf provider=linbit type=drbd)
> > > > Attributes: drbd_resource=drbd1
> > > > Operations: demote interval=0s timeout=90 (drbd_database-
> > > > demote-interval-0s)
> > > > monitor interval=20 role=Slave timeout=20
> > > > (drbd_database-monitor-interval-20)
> > > > monitor interval=10 role=Master timeout=20
> > > > (drbd_database-monitor-interval-10)
> > > > notify interval=0s timeout=90 (drbd_database-
> > > > notify-interval-0s)
> > > > promote interval=0s timeout=90 (drbd_database-
> > > > promote-interval-0s)
> > > > reload interval=0s timeout=30 (drbd_database-
> > > > reload-interval-0s)
> > > > start interval=0s timeout=240 (drbd_database-
> > > > start-interval-0s)
> > > > stop interval=0s timeout=100 (drbd_database-stop-
> > > > interval-0s)
> > > > Resource: fs_database (class=ocf provider=heartbeat
> > > > type=Filesystem)
> > > > Attributes: device=/dev/drbd1 directory=/mnt/clusterfs1
> > > > fstype=ext4
> > > > Operations: monitor interval=20s timeout=40s (fs_database-
> > > > monitor-interval-20s)
> > > > start interval=0s timeout=60s (fs_database-start-
> > > > interval-0s)
> > > > stop interval=0s timeout=60s (fs_database-stop-
> > > > interval-0s)
> > > > Clone: drbd_logsfiles-clone
> > > > Meta Attrs: clone-max=2 clone-node-max=1 notify=true
> > > > promotable=true promoted-max=1 promoted-node-max=1
> > > > Resource: drbd_logsfiles (class=ocf provider=linbit type=drbd)
> > > > Attributes: drbd_resource=drbd2
> > > > Operations: demote interval=0s timeout=90 (drbd_logsfiles-
> > > > demote-interval-0s)
> > > > monitor interval=20 role=Slave timeout=20
> > > > (drbd_logsfiles-monitor-interval-20)
> > > > monitor interval=10 role=Master timeout=20
> > > > (drbd_logsfiles-monitor-interval-10)
> > > > notify interval=0s timeout=90 (drbd_logsfiles-
> > > > notify-interval-0s)
> > > > promote interval=0s timeout=90 (drbd_logsfiles-
> > > > promote-interval-0s)
> > > > reload interval=0s timeout=30 (drbd_logsfiles-
> > > > reload-interval-0s)
> > > > start interval=0s timeout=240 (drbd_logsfiles-
> > > > start-interval-0s)
> > > > stop interval=0s timeout=100 (drbd_logsfiles-stop-
> > > > interval-0s)
> > > > Resource: fs_logfiles (class=ocf provider=heartbeat
> > > > type=Filesystem)
> > > > Attributes: device=/dev/drbd2 directory=/mnt/clusterfs2
> > > > fstype=ext4
> > > > Operations: monitor interval=20s timeout=40s (fs_logfiles-
> > > > monitor-interval-20s)
> > > > start interval=0s timeout=60s (fs_logfiles-start-
> > > > interval-0s)
> > > > stop interval=0s timeout=60s (fs_logfiles-stop-
> > > > interval-0s)
> > > >
> > > > Stonith Devices:
> > > > Fencing Levels:
> > > >
> > > > Location Constraints:
> > > > Ordering Constraints:
> > > > start drbd_database-clone then start fs_database
> > > > (kind:Mandatory) (id:order-drbd_database-clone-fs_database-
> > > > mandatory)
> > > > start drbd_logsfiles-clone then start fs_logfiles
> > > > (kind:Mandatory) (id:order-drbd_logsfiles-clone-fs_logfiles-
> > > > mandatory)
> > > > Colocation Constraints:
> > > > fs_database with drbd_database-clone (score:INFINITY) (with-
> > > > rsc-role:Master) (id:colocation-fs_database-drbd_database-clone-
> > > > INFINITY)
> > > > fs_logfiles with drbd_logsfiles-clone (score:INFINITY) (with-
> > > > rsc-role:Master) (id:colocation-fs_logfiles-drbd_logsfiles-clone-
> > > > INFINITY)
> >
> > ERROR:>>>>> drbd_logsfiles-clone with drbd_database-clone
> > (score:INFINITY) (with-rsc-role:Master) (id:colocation-
> > drbd_logsfiles-clone-drbd_database-clone-INFINITY)
> > > > Ticket Constraints:
> > > >
> > > > Alerts:
> > > > No alerts defined
> > > >
> > > > Resources Defaults:
> > > > No defaults set
> > > > Operations Defaults:
> > > > No defaults set
> > > >
> > > > Cluster Properties:
> > > > cluster-infrastructure: corosync
> > > > cluster-name: mysql_cluster
> > > > dc-version: 2.0.4-6.el8-2deceaa3ae
> > > > have-watchdog: false
> > > > last-lrm-refresh: 1610382881
> > > > stonith-enabled: false
> > > >
> > > > Tags:
> > > > No tags defined
> > > >
> > > > Quorum:
> > > > Options:
> >
> >
> >
> > Any suggestions are welcome
> >
> > stay safe and healthy
> >
> > fatcharly
> >
> > _______________________________________________
> > Manage your subscription:
> > https://lists.clusterlabs.org/mailman/listinfo/users
> >
> > ClusterLabs home: https://www.clusterlabs.org/
> >
> --
> Ken Gaillot <kgaillot at redhat.com>
>
>
>