[ClusterLabs] CentOS 8 & drbd 9 second slave is not started
fatcharly at gmx.de
Tue Jan 19 07:14:42 EST 2021
Hi Brent,
I now use DRBD without starting the drbd service at all; everything is managed by pcs, and everything gets mounted and unmounted as expected.
In the beginning I was dealing with a lot of problems caused by SELinux and a wrongly used constraint.
Now everything works the way I know it from my other clusters, which are built on CentOS 5/6/7.
For now I am also running with the firewall disabled, because the system sits in a secured DMZ.
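For reference, the relevant setup on my side looked roughly like this (a minimal sketch; the drbd_t SELinux type, the firewalld high-availability service and the DRBD port range are assumptions to verify against your own policy and resource files):

# let pcs/pacemaker manage DRBD; never start the service at boot
systemctl disable --now drbd

# keep SELinux enforcing, but make only the DRBD domain permissive
semanage permissive -a drbd_t

# when the firewall comes back: cluster traffic plus the DRBD ports
firewall-cmd --permanent --add-service=high-availability
firewall-cmd --permanent --add-port=7788-7789/tcp
firewall-cmd --reload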
Is there anything I can do for you?
best regards and stay healthy
fatcharly
> Sent: Monday, 18 January 2021 at 22:16
> From: "Brent Jensen" <jeneral9 at gmail.com>
> To: fatcharly at gmx.de
> Subject: Re: [ClusterLabs] CentOS 8 & drbd 9 second slave is not started
>
> Are you getting the cluster to switch over when doing a 'pcs node
> standby' on the promoted node? I have a super basic config with the same
> software versions as you, and I cannot get the slave to promote (constantly
> looping errors such as "Refusing to be Primary while peer is not
> outdated", until I do a drbdadm up <resource> on the standby node). I
> appreciate your input. Brent
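>
> For what it's worth, a minimal sketch of the resource-level fencing that
> DRBD 9 usually wants under Pacemaker, which lets a node outdate the peer
> through the CIB instead of looping on "Refusing to be Primary while peer
> is not outdated" (handler paths are the stock drbd-utils 9 scripts; verify
> them and the section layout against your drbd.conf man page):
>
> resource <resource> {
>     net {
>         fencing resource-only;
>     }
>     handlers {
>         fence-peer "/usr/lib/drbd/crm-fence-peer.9.sh";
>         unfence-peer "/usr/lib/drbd/crm-unfence-peer.9.sh";
>     }
> }
>
> followed by a drbdadm adjust <resource> on both nodes to apply it.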
>
> On 1/18/2021 9:49 AM, fatcharly at gmx.de wrote:
> > Sorry Guys,
> >
> > Problem found: it was a colocation constraint on the drbd devices.
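> >
> > For the archives, a sketch of the fix (the constraint id is the one from
> > the config quoted below; the with-rsc-role:Master on the clone-with-clone
> > colocation pinned both drbd_logsfiles instances to the master node, so
> > the second slave had nowhere to run):
> >
> > # list all constraints together with their ids
> > pcs constraint --full
> > # drop the offending colocation
> > pcs constraint remove colocation-drbd_logsfiles-clone-drbd_database-clone-INFINITY
> >
> > If the two masters really have to sit on the same node, a Master-with-Master
> > colocation would express that without blocking the slaves.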
> >
> > best regards
> >
> > fatcharly
> >
> >
> >
> >> Sent: Monday, 18 January 2021 at 16:52
> >> From: fatcharly at gmx.de
> >> To: "clusterlabs" <users at clusterlabs.org>
> >> Subject: [ClusterLabs] CentOS 8 & drbd 9 second slave is not started
> >>
> >> Hi,
> >>
> >> I'm trying to set up a 2-node pacemaker/drbd cluster on CentOS 8.3, using these versions:
> >>
> >> kmod-drbd90-9.0.25-2.el8_3.elrepo.x86_64
> >> drbd90-utils-9.13.1-1.el8.elrepo.x86_64
> >>
> >> pacemaker-cluster-libs-2.0.4-6.el8.x86_64
> >> pacemaker-cli-2.0.4-6.el8.x86_64
> >> pacemaker-schemas-2.0.4-6.el8.noarch
> >> pacemaker-2.0.4-6.el8.x86_64
> >> pacemaker-libs-2.0.4-6.el8.x86_64
> >>
> >> The cluster node names are lisbon and susanne.
> >>
> >> There are two DRBD resources configured, each paired with a filesystem resource. Both work in a simple master/slave configuration.
> >> When I start up the cluster, one resource is promoted to master with its slave running; for the other, only the master is started and the slave never comes up.
> >>
> >> Status of the cluster:
> >> Cluster Summary:
> >>   * Stack: corosync
> >>   * Current DC: lisbon (version 2.0.4-6.el8-2deceaa3ae) - partition with quorum
> >>   * Last updated: Mon Jan 18 16:30:21 2021
> >>   * Last change: Mon Jan 18 16:30:17 2021 by root via cibadmin on lisbon
> >>   * 2 nodes configured
> >>   * 7 resource instances configured
> >>
> >> Node List:
> >>   * Online: [ lisbon susanne ]
> >>
> >> Active Resources:
> >>   * HA-IP_1 (ocf::heartbeat:IPaddr2): Started susanne
> >>   * Clone Set: drbd_database-clone [drbd_database] (promotable):
> >>     * Masters: [ susanne ]
> >>     * Slaves: [ lisbon ]
> >>   * fs_database (ocf::heartbeat:Filesystem): Started susanne
> >>   * Clone Set: drbd_logsfiles-clone [drbd_logsfiles] (promotable):
> >>     * Masters: [ susanne ]
> >>   * fs_logfiles (ocf::heartbeat:Filesystem): Started susanne
> >>
> >> drbdadm status
> >>
> >> [root at susanne ~]# drbdadm status
> >> drbd1 role:Primary
> >>   disk:UpToDate
> >>   lisbon role:Secondary
> >>     peer-disk:UpToDate
> >>
> >> drbd2 role:Primary
> >>   disk:UpToDate
> >>   lisbon connection:Connecting
> >>
> >> [root at lisbon ~]# drbdadm status
> >> drbd1 role:Secondary
> >>   disk:UpToDate
> >>   susanne role:Primary
> >>     peer-disk:UpToDate
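> >>
> >> Note that drbd2 does not show up on lisbon at all. One way to narrow
> >> something like this down (hypothetical commands, run on the node where
> >> the resource is missing) is to bring the device up by hand, outside of
> >> cluster control, and watch whether it connects:
> >>
> >> # on lisbon
> >> drbdadm up drbd2
> >> drbdadm status drbd2
> >>
> >> If it reaches UpToDate/Connected this way, the problem is on the
> >> Pacemaker side (constraints or clone settings), not in DRBD itself.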
> >>
> >>
> >> cluster-config:
> >> Cluster Name: mysql_cluster
> >> Corosync Nodes:
> >>  susanne lisbon
> >> Pacemaker Nodes:
> >>  lisbon susanne
> >>
> >> Resources:
> >>  Resource: HA-IP_1 (class=ocf provider=heartbeat type=IPaddr2)
> >>   Attributes: cidr_netmask=24 ip=192.168.18.150
> >>   Operations: monitor interval=15s (HA-IP_1-monitor-interval-15s)
> >>               start interval=0s timeout=20s (HA-IP_1-start-interval-0s)
> >>               stop interval=0s timeout=20s (HA-IP_1-stop-interval-0s)
> >>  Clone: drbd_database-clone
> >>   Meta Attrs: clone-max=2 clone-node-max=1 notify=true promotable=true promoted-max=1 promoted-node-max=1
> >>   Resource: drbd_database (class=ocf provider=linbit type=drbd)
> >>    Attributes: drbd_resource=drbd1
> >>    Operations: demote interval=0s timeout=90 (drbd_database-demote-interval-0s)
> >>                monitor interval=20 role=Slave timeout=20 (drbd_database-monitor-interval-20)
> >>                monitor interval=10 role=Master timeout=20 (drbd_database-monitor-interval-10)
> >>                notify interval=0s timeout=90 (drbd_database-notify-interval-0s)
> >>                promote interval=0s timeout=90 (drbd_database-promote-interval-0s)
> >>                reload interval=0s timeout=30 (drbd_database-reload-interval-0s)
> >>                start interval=0s timeout=240 (drbd_database-start-interval-0s)
> >>                stop interval=0s timeout=100 (drbd_database-stop-interval-0s)
> >>  Resource: fs_database (class=ocf provider=heartbeat type=Filesystem)
> >>   Attributes: device=/dev/drbd1 directory=/mnt/clusterfs1 fstype=ext4
> >>   Operations: monitor interval=20s timeout=40s (fs_database-monitor-interval-20s)
> >>               start interval=0s timeout=60s (fs_database-start-interval-0s)
> >>               stop interval=0s timeout=60s (fs_database-stop-interval-0s)
> >>  Clone: drbd_logsfiles-clone
> >>   Meta Attrs: clone-max=2 clone-node-max=1 notify=true promotable=true promoted-max=1 promoted-node-max=1
> >>   Resource: drbd_logsfiles (class=ocf provider=linbit type=drbd)
> >>    Attributes: drbd_resource=drbd2
> >>    Operations: demote interval=0s timeout=90 (drbd_logsfiles-demote-interval-0s)
> >>                monitor interval=20 role=Slave timeout=20 (drbd_logsfiles-monitor-interval-20)
> >>                monitor interval=10 role=Master timeout=20 (drbd_logsfiles-monitor-interval-10)
> >>                notify interval=0s timeout=90 (drbd_logsfiles-notify-interval-0s)
> >>                promote interval=0s timeout=90 (drbd_logsfiles-promote-interval-0s)
> >>                reload interval=0s timeout=30 (drbd_logsfiles-reload-interval-0s)
> >>                start interval=0s timeout=240 (drbd_logsfiles-start-interval-0s)
> >>                stop interval=0s timeout=100 (drbd_logsfiles-stop-interval-0s)
> >>  Resource: fs_logfiles (class=ocf provider=heartbeat type=Filesystem)
> >>   Attributes: device=/dev/drbd2 directory=/mnt/clusterfs2 fstype=ext4
> >>   Operations: monitor interval=20s timeout=40s (fs_logfiles-monitor-interval-20s)
> >>               start interval=0s timeout=60s (fs_logfiles-start-interval-0s)
> >>               stop interval=0s timeout=60s (fs_logfiles-stop-interval-0s)
> >>
> >> Stonith Devices:
> >> Fencing Levels:
> >>
> >> Location Constraints:
> >> Ordering Constraints:
> >>   start drbd_database-clone then start fs_database (kind:Mandatory) (id:order-drbd_database-clone-fs_database-mandatory)
> >>   start drbd_logsfiles-clone then start fs_logfiles (kind:Mandatory) (id:order-drbd_logsfiles-clone-fs_logfiles-mandatory)
> >> Colocation Constraints:
> >>   fs_database with drbd_database-clone (score:INFINITY) (with-rsc-role:Master) (id:colocation-fs_database-drbd_database-clone-INFINITY)
> >>   fs_logfiles with drbd_logsfiles-clone (score:INFINITY) (with-rsc-role:Master) (id:colocation-fs_logfiles-drbd_logsfiles-clone-INFINITY)
> >>   drbd_logsfiles-clone with drbd_database-clone (score:INFINITY) (with-rsc-role:Master) (id:colocation-drbd_logsfiles-clone-drbd_database-clone-INFINITY)
> >> Ticket Constraints:
> >>
> >> Alerts:
> >> No alerts defined
> >>
> >> Resources Defaults:
> >> No defaults set
> >> Operations Defaults:
> >> No defaults set
> >>
> >> Cluster Properties:
> >>  cluster-infrastructure: corosync
> >>  cluster-name: mysql_cluster
> >>  dc-version: 2.0.4-6.el8-2deceaa3ae
> >>  have-watchdog: false
> >>  last-lrm-refresh: 1610382881
> >>  stonith-enabled: false
> >>
> >> Tags:
> >> No tags defined
> >>
> >> Quorum:
> >> Options:
> >>
> >>
> >> Any suggestions are welcome
> >>
> >> stay safe and healthy
> >>
> >> fatcharly