[ClusterLabs] Mysql upgrade in DRBD setup

Attila Megyeri amegyeri at minerva-soft.com
Mon Oct 16 02:39:21 EDT 2017


Hi Ken, 

My problem with the scenario you described is the following:

On the central side, if I use M-S replication, the master binlog information will differ between the master and the slave. Therefore, if a failover occurs, the remote sites will have difficulty with the "change master" operation (the binlog file and position differ on the two hosts). That was the reason for choosing DRBD for the central master.
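To illustrate the problem (host names and output values below are invented), the same logical replication position maps to different file/offset pairs on the two hosts, so a replica's saved coordinates are useless after a failover:

```shell
# Invented output -- binlog coordinates are local to each server, so a
# replica pointed at the old master cannot simply reuse its saved
# file/position after failing over to the new master.
mysql -h master1 -e "SHOW MASTER STATUS;"
#   File: mysql-bin.000312   Position: 73411982
mysql -h master2 -e "SHOW MASTER STATUS;"
#   File: mysql-bin.000089   Position: 1077244
```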

Yes, Galera could be an option, but that would require some redesign, and we also lack experience with it...

What about the following, for the DRBD upgrade:

- I would upgrade the active node normally, causing a small downtime (cluster in maintenance mode).
- Then, when the master is up and running again, I would mount a local dummy mysql dir on the slave (content does not matter) and perform the upgrade of the secondary node. (The program files would be upgraded, along with the dummy database, which I don't care about.)
- Then, finally, I would attempt a failover to the secondary, just to test that all is fine.
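A rough sketch of those steps, assuming crmsh and Ubuntu packaging; the resource name, package name, and paths are placeholders, not taken from our actual configuration:

```shell
# 1) Upgrade the active node (short downtime, cluster unmanaged)
crm configure property maintenance-mode=true
apt-get update && apt-get install --only-upgrade mysql-server
mysql_upgrade                                      # runs against the real data dir on DRBD
crm configure property maintenance-mode=false

# 2) On the passive node, satisfy the package scripts with a throwaway
#    data directory so the program files can be upgraded there too
mysql_install_db --datadir=/var/lib/mysql-dummy    # dummy data, discarded afterwards
mount --bind /var/lib/mysql-dummy /var/lib/mysql   # only while upgrading
apt-get install --only-upgrade mysql-server
umount /var/lib/mysql

# 3) Test a failover to the freshly upgraded secondary
crm resource move ms_drbd_mysql node2              # resource/node names are examples
```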

Besides the small downtime, I don't see any significant risks in this approach, do you?

Thanks,
Attila


-----Original Message-----
From: Ken Gaillot [mailto:kgaillot at redhat.com] 
Sent: Friday, October 13, 2017 9:03 PM
To: Cluster Labs - All topics related to open-source clustering welcomed <users at clusterlabs.org>
Subject: Re: [ClusterLabs] Mysql upgrade in DRBD setup

On Fri, 2017-10-13 at 17:35 +0200, Attila Megyeri wrote:
> Hi Ken, Kristián,
> 
> 
> Thanks - I am familiar with the native replication, and we use that as 
> well.
> But in this scenario I have to use DRBD. (There is a DRBD Mysql 
> cluster that is a central site, which is replicated to many sites 
> using native replication, and all sites have DRBD clusters as well - 
> In this setup I have to use DRBD for high availability).
> 
> 
> Anyway - I thought there is a better approach for the DRBD-replicated 
> Mysql than what I outlined.
> What I am concerned about, is what will happen if I upgrade the active 
> node (let's say I'm okay with the downtime) - when I fail over to the 
> other node, where the program files and the data files are on 
> different versions...And when I start upgrading that.
> 
> Any experience anyone?
> 
> @Kristián: my experience shows that if I try to update mysql without a 
> mounted data fs, it will fail terribly... So the only option is to 
> upgrade the mounted, active instance - but the issue is the version 
> difference (prog vs. data)

Exactly -- which is why I'd still go with native replication for this, too. It just adds a step in the upgrade process I outlined earlier:
repoint all the other sites' mysql instances to the second central server after it is upgraded (before or after it is made master, doesn't matter). I'm assuming only the master is allowed to write.
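For illustration, that repoint step might look like the following at each remote site (the site list, host name, and binlog coordinates are placeholders; the real coordinates have to be read with SHOW MASTER STATUS on the upgraded server first):

```shell
# Hypothetical loop over the remote sites, repointing each one's slave
# at the upgraded second central server. All names/values are examples.
for site in site-a site-b site-c; do
  mysql -h "$site" -e "STOP SLAVE;
    CHANGE MASTER TO MASTER_HOST='central2.example.com',
                     MASTER_LOG_FILE='mysql-bin.000001',
                     MASTER_LOG_POS=4;
    START SLAVE;"
done
```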

Another alternative would be to use Galera for multi-master (at least for the two servers at the central site).

Also, it's still possible to use DRBD beneath a native replication setup, but you'd have to replicate both the master and slave data (using only one at a time on any given server). This makes more sense if the mysql servers are running inside VMs or containers that can migrate between the physical machines.
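As a rough illustration of that layout (DRBD resource names, devices, and mount points are invented), each mysql instance gets its own DRBD resource, promoted only on whichever physical host currently runs that instance:

```shell
# Sketch only: two DRBD resources, one per mysql instance's data dir,
# so either instance (e.g. in a VM) can migrate between the hosts.
# Each resource is primary on exactly one host at a time.
drbdadm up mysql-master-data
drbdadm primary mysql-master-data        # on the host running the master instance
mount /dev/drbd0 /var/lib/mysql-master   # master instance's datadir

drbdadm up mysql-slave-data
drbdadm primary mysql-slave-data         # on the host running the slave instance
mount /dev/drbd1 /var/lib/mysql-slave    # slave instance's datadir
```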

> 
> Thanks!
> 
> 
> 
> -----Original Message-----
> From: Ken Gaillot [mailto:kgaillot at redhat.com]
> Sent: Thursday, October 12, 2017 9:22 PM
> To: Cluster Labs - All topics related to open-source clustering 
> welcomed <users at clusterlabs.org>
> Subject: Re: [ClusterLabs] Mysql upgrade in DRBD setup
> 
> On Thu, 2017-10-12 at 18:51 +0200, Attila Megyeri wrote:
> > Hi all,
> >  
> > What is the recommended mysql server upgrade methodology in case of 
> > an active/passive DRBD storage?
> > (Ubuntu is the platform)
> 
> If you want to minimize downtime in a MySQL upgrade, your best bet is 
> to use MySQL native replication rather than replicate the storage.
> 
> 1. Starting point: node1 = master, node2 = slave
> 2. Stop mysql on node2, upgrade, start mysql again, ensure OK
> 3. Switch master to node2 and slave to node1, ensure OK
> 4. Stop mysql on node1, upgrade, start mysql again, ensure OK
> 
> You might have a small window where the database is read-only while 
> you switch masters (you can keep it to a few seconds if you arrange 
> things well), but other than that, you won't have any downtime, even 
> if some part of the upgrade gives you trouble.
> 
> >  
> > 1)      On the passive node the mysql data directory is not mounted, 
> > so the backup fails (some postinstall jobs will attempt to perform 
> > manipulations on certain files in the data directory).
> > 2)      If the upgrade is done on the active node, it will restart 
> > the service (with a plain service restart, not in a crm-managed 
> > fashion…), which is not a very good option (downtime in a HA 
> > solution). Not to mention that it will update some files in the 
> > mysql data directory, which can cause strange issues if the A/P pair 
> > is changed – since on the other node the program code will still be 
> > the old one, while the data dir is already upgraded.
> >  
> > Any hints are welcome!
> >  
> > Thanks,
> > Attila
> >  
> > _______________________________________________
> > Users mailing list: Users at clusterlabs.org 
> > http://lists.clusterlabs.org/mailman/listinfo/users
> > 
> > Project Home: http://www.clusterlabs.org Getting started: 
> > http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
> > Bugs: http://bugs.clusterlabs.org
> 
> --
> Ken Gaillot <kgaillot at redhat.com>
> 
--
Ken Gaillot <kgaillot at redhat.com>
