[ClusterLabs] Re: [EXT] CentOS 8 & drbd 9, two drbd devices and colocation
Ulrich Windl
Ulrich.Windl at rz.uni-regensburg.de
Tue Jan 19 02:01:26 EST 2021
Hi!
It should be easy (I guess), but if both masters are required to be on the
same node, can't you make do with one DRBD device (something like putting an
LVM VG on it and providing two LVs)?
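
For example, something like this (a rough sketch only; the VG/LV names and
sizes are made up, and it would have to run on the current Primary):

# put an LVM VG on the single DRBD device
pvcreate /dev/drbd1
vgcreate vg_cluster /dev/drbd1
# one LV per filesystem; sizes are placeholders
lvcreate -L 20G -n lv_database vg_cluster
lvcreate -L 10G -n lv_logfiles vg_cluster
mkfs.ext4 /dev/vg_cluster/lv_database
mkfs.ext4 /dev/vg_cluster/lv_logfiles

The cluster would then activate the VG through an ocf:heartbeat:LVM-activate
resource colocated with (and ordered after) the DRBD master, and the two
Filesystem resources would mount the LVs instead of /dev/drbd1 and /dev/drbd2.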
Regards,
Ulrich
>>> <fatcharly at gmx.de> wrote on 18.01.2021 at 18:43 in message
<trinity-8d5b7455-9df6-4cfa-9c24-931a12d0e322-1610991804216 at 3c-app-gmx-bs17>:
> Hi again,
>
> I need some help figuring out how to make a two-node cluster with two
> DRBD devices start both master devices on the same node.
> How can I configure colocation to work that way? I tried to tie one
> DRBD device to the other, but that didn't work out quite well.
>
> This is my config:
>> > I'm installing a 2-node pacemaker/drbd cluster on CentOS 8.3. I'm
>> > using these versions:
>> >
>> > kmod-drbd90-9.0.25-2.el8_3.elrepo.x86_64
>> > drbd90-utils-9.13.1-1.el8.elrepo.x86_64
>> >
>> > pacemaker-cluster-libs-2.0.4-6.el8.x86_64
>> > pacemaker-cli-2.0.4-6.el8.x86_64
>> > pacemaker-schemas-2.0.4-6.el8.noarch
>> > pacemaker-2.0.4-6.el8.x86_64
>> > pacemaker-libs-2.0.4-6.el8.x86_64
>> >
>> > cluster node names are lisbon and susanne
>> >
>> > Status of the cluster:
>> > Cluster Summary:
>> > * Stack: corosync
>> > * Current DC: lisbon (version 2.0.4-6.el8-2deceaa3ae) - partition with quorum
>> > * Last updated: Mon Jan 18 16:30:21 2021
>> > * Last change: Mon Jan 18 16:30:17 2021 by root via cibadmin on lisbon
>> > * 2 nodes configured
>> > * 7 resource instances configured
>> >
>> > Node List:
>> > * Online: [ lisbon susanne ]
>> >
>> > Active Resources:
>> > * HA-IP_1 (ocf::heartbeat:IPaddr2): Started susanne
>> > * Clone Set: drbd_database-clone [drbd_database] (promotable):
>> > * Masters: [ susanne ]
>> > * Slaves: [ lisbon ]
>> > * fs_database (ocf::heartbeat:Filesystem): Started susanne
>> > * Clone Set: drbd_logsfiles-clone [drbd_logsfiles] (promotable):
>> > * Masters: [ susanne ]
>> > * fs_logfiles (ocf::heartbeat:Filesystem): Started susanne
>> >
>> > drbdadm status
>> >
>> > [root at susanne ~]# drbdadm status
>> > drbd1 role:Primary
>> > disk:UpToDate
>> > lisbon role:Secondary
>> > peer-disk:UpToDate
>> >
>> > drbd2 role:Primary
>> > disk:UpToDate
>> > lisbon connection:Connecting
>> >
>> > [root at lisbon ~]# drbdadm status
>> > drbd1 role:Secondary
>> > disk:UpToDate
>> > susanne role:Primary
>> > peer-disk:UpToDate
>> >
>> >
>> > cluster-config:
>> > Cluster Name: mysql_cluster
>> > Corosync Nodes:
>> > susanne lisbon
>> > Pacemaker Nodes:
>> > lisbon susanne
>> >
>> > Resources:
>> > Resource: HA-IP_1 (class=ocf provider=heartbeat type=IPaddr2)
>> > Attributes: cidr_netmask=24 ip=192.168.18.150
>> > Operations: monitor interval=15s (HA-IP_1-monitor-interval-15s)
>> > start interval=0s timeout=20s (HA-IP_1-start-interval-0s)
>> > stop interval=0s timeout=20s (HA-IP_1-stop-interval-0s)
>> > Clone: drbd_database-clone
>> > Meta Attrs: clone-max=2 clone-node-max=1 notify=true promotable=true promoted-max=1 promoted-node-max=1
>> > Resource: drbd_database (class=ocf provider=linbit type=drbd)
>> > Attributes: drbd_resource=drbd1
>> > Operations: demote interval=0s timeout=90 (drbd_database-demote-interval-0s)
>> > monitor interval=20 role=Slave timeout=20 (drbd_database-monitor-interval-20)
>> > monitor interval=10 role=Master timeout=20 (drbd_database-monitor-interval-10)
>> > notify interval=0s timeout=90 (drbd_database-notify-interval-0s)
>> > promote interval=0s timeout=90 (drbd_database-promote-interval-0s)
>> > reload interval=0s timeout=30 (drbd_database-reload-interval-0s)
>> > start interval=0s timeout=240 (drbd_database-start-interval-0s)
>> > stop interval=0s timeout=100 (drbd_database-stop-interval-0s)
>> > Resource: fs_database (class=ocf provider=heartbeat type=Filesystem)
>> > Attributes: device=/dev/drbd1 directory=/mnt/clusterfs1 fstype=ext4
>> > Operations: monitor interval=20s timeout=40s (fs_database-monitor-interval-20s)
>> > start interval=0s timeout=60s (fs_database-start-interval-0s)
>> > stop interval=0s timeout=60s (fs_database-stop-interval-0s)
>> > Clone: drbd_logsfiles-clone
>> > Meta Attrs: clone-max=2 clone-node-max=1 notify=true promotable=true promoted-max=1 promoted-node-max=1
>> > Resource: drbd_logsfiles (class=ocf provider=linbit type=drbd)
>> > Attributes: drbd_resource=drbd2
>> > Operations: demote interval=0s timeout=90 (drbd_logsfiles-demote-interval-0s)
>> > monitor interval=20 role=Slave timeout=20 (drbd_logsfiles-monitor-interval-20)
>> > monitor interval=10 role=Master timeout=20 (drbd_logsfiles-monitor-interval-10)
>> > notify interval=0s timeout=90 (drbd_logsfiles-notify-interval-0s)
>> > promote interval=0s timeout=90 (drbd_logsfiles-promote-interval-0s)
>> > reload interval=0s timeout=30 (drbd_logsfiles-reload-interval-0s)
>> > start interval=0s timeout=240 (drbd_logsfiles-start-interval-0s)
>> > stop interval=0s timeout=100 (drbd_logsfiles-stop-interval-0s)
>> > Resource: fs_logfiles (class=ocf provider=heartbeat type=Filesystem)
>> > Attributes: device=/dev/drbd2 directory=/mnt/clusterfs2 fstype=ext4
>> > Operations: monitor interval=20s timeout=40s (fs_logfiles-monitor-interval-20s)
>> > start interval=0s timeout=60s (fs_logfiles-start-interval-0s)
>> > stop interval=0s timeout=60s (fs_logfiles-stop-interval-0s)
>> >
>> > Stonith Devices:
>> > Fencing Levels:
>> >
>> > Location Constraints:
>> > Ordering Constraints:
>> > start drbd_database-clone then start fs_database (kind:Mandatory) (id:order-drbd_database-clone-fs_database-mandatory)
>> > start drbd_logsfiles-clone then start fs_logfiles (kind:Mandatory) (id:order-drbd_logsfiles-clone-fs_logfiles-mandatory)
>> > Colocation Constraints:
>> > fs_database with drbd_database-clone (score:INFINITY) (with-rsc-role:Master) (id:colocation-fs_database-drbd_database-clone-INFINITY)
>> > fs_logfiles with drbd_logsfiles-clone (score:INFINITY) (with-rsc-role:Master) (id:colocation-fs_logfiles-drbd_logsfiles-clone-INFINITY)
> ERROR:>>>>> drbd_logsfiles-clone with drbd_database-clone (score:INFINITY) (with-rsc-role:Master) (id:colocation-drbd_logsfiles-clone-drbd_database-clone-INFINITY)
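
Note: this last constraint only colocates the (always started) clone
instances of drbd_logsfiles-clone with the drbd_database master, so it says
nothing about where drbd_logsfiles gets promoted. If you keep two DRBD
devices, a master-with-master colocation should do it; roughly (untested,
pcs 0.10 syntax):

# drop the constraint marked ERROR above
pcs constraint remove colocation-drbd_logsfiles-clone-drbd_database-clone-INFINITY
# colocate the two masters directly (role "master" on both sides)
pcs constraint colocation add master drbd_logsfiles-clone with master drbd_database-clone INFINITY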
>> > Ticket Constraints:
>> >
>> > Alerts:
>> > No alerts defined
>> >
>> > Resources Defaults:
>> > No defaults set
>> > Operations Defaults:
>> > No defaults set
>> >
>> > Cluster Properties:
>> > cluster-infrastructure: corosync
>> > cluster-name: mysql_cluster
>> > dc-version: 2.0.4-6.el8-2deceaa3ae
>> > have-watchdog: false
>> > last-lrm-refresh: 1610382881
>> > stonith-enabled: false
>> >
>> > Tags:
>> > No tags defined
>> >
>> > Quorum:
>> > Options:
>
>
>
> Any suggestions are welcome
>
> stay safe and healthy
>
> fatcharly
>