[ClusterLabs] NFS in different subnets

Strahil Nikolov hunter86_bg at yahoo.com
Sat Apr 18 02:48:46 EDT 2020

On April 18, 2020 8:43:51 AM GMT+03:00, Digimer <lists at alteeve.ca> wrote:
>For what it's worth: a lot of HA specialists have spent a lot of time
>trying to find the simplest _reliable_ way to do multi-site/geo-replicated
>HA. I am certain you'll find a simpler solution, but I would also wager
>that when it counts, it's going to let you down.
>The only way to make things simpler is to start making assumptions, and
>if you do that, at some point you will end up with a split-brain (both
>sites thinking the other is gone and trying to take the primary role),
>or both sites will think the other is running, and neither will be. Add
>shared storage to the mix, and there's a high chance you will corrupt
>data when you need it most.
>Of course, there's always a chance you'll come up with a system no one
>else has thought of, just be aware of what you know and what you don't.
>HA is fun, in big part, because it's a challenge to get right.
>On 2020-04-17 4:43 p.m., Daniel Smith wrote:
>> We only have 1 cluster per site so adding additional hardware is not
>> optimal. I feel like I'm trying to use a saw where an axe would be the
>> proper tool. I thank you for your time, but it appears that it may be
>> best for me to write something from scratch for the monitoring and
>> controlling of the failover rather than try and force pacemaker to do
>> something it was not built for.
>> **Daniel Smith
>> Network Engineer
>> **15894 Diplomatic Plaza Dr | Houston, TX 77032
>> P: 281-233-8487 | M: 832-301-1087
>> Daniel.Smith at craneww.com
>> <https://craneww.com/>
>> -----Original Message-----
>> From: Digimer [mailto:lists at alteeve.ca]
>> Sent: Friday, April 17, 2020 2:38 PM
>> To: Daniel Smith <Daniel.Smith at craneww.com>; Cluster Labs - Users
>> <users at clusterlabs.org>
>> Subject: Re: NFS in different subnets
>> On 2020-04-17 3:20 p.m., Daniel Smith wrote:
>>> Thank you digimer, and I apologize for getting the wrong email.
>>> Booth was the piece I was missing. I have been researching setting
>>> it up and finding a third location for quorum. From what I have
>>> found, I believe I will need to set up single node pacemaker clusters at each
>> No, each site needs to be a proper cluster (2 nodes minimum). The
>> reason is that, if the link to the building is lost, the cluster at the
>> lost site will shut down. With only one node, a hung node could recover
>> later and think it could still do things before it realized it
>> shouldn't. Booth is "a cluster of clusters".
>> The nodes at each site should be on different hardware, for the same
>> reason. It is very much NOT a waste of resources (and, of course, use
>> proper, tested STONITH/fencing).
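To make the "cluster of clusters" idea concrete, a minimal booth configuration for two sites plus a third-location arbitrator could look roughly like this. This is only a sketch: the addresses, port, and ticket name are invented placeholders, not anything from this thread.

```
# /etc/booth/booth.conf -- illustrative sketch; all values are placeholders.
transport = UDP
port      = 9929

# One address per site cluster, plus a lightweight arbitrator at a
# third location (the arbitrator runs only boothd, no pacemaker).
site       = 192.0.2.10
site       = 198.51.100.10
arbitrator = 203.0.113.10

# The ticket grants exactly one site the right to run the NFS resources;
# pacemaker constraints then tie those resources to ticket ownership.
ticket = "nfs-ticket"
    expire = 600
```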
>>> datacenter to use with booth. Since we have ESX clusters at each
>>> site, which have their own redundancies built in, building redundant
>>> nodes at each site is pretty much a waste of resources imho. I have 2
>>> questions about this setup though:
>>> 1.       If I setup pacemaker with a single node and no virtual IP, are
>>> there any problems I need to be aware of?
>> Yes, see above.
>>> 2.       Is drbd the best tool for the data sync between the sites? I
>>> looked at drbd proxy, but I get the sense that it's not open source,
>>> or would rsync with incrond be a better option?
>> DRBD would work, but you have to make a choice; If you run synchronously,
>> so that data is never lost (writes are confirmed when they hit both
>> sites), then your disk latency/bandwidth is your network
>> latency/bandwidth. Otherwise, you run asynchronously, but you'll lose any
>> data that didn't get transmitted before a site is lost.
>> As for proxy; Yes, it's a commercial add-on. If protocol A (async)
>> replication can't buffer the data to be transmitted (because the data is
>> changing faster than it can be flushed out), DRBD Proxy provides a
>> system with a MUCH larger send cache. It's specifically designed for
>> long-throw asynchronous replication.
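The protocol trade-off described above can be sketched in a DRBD resource file. This is illustrative only: the resource name, backing devices, hostnames, and addresses are all invented placeholders.

```
# /etc/drbd.d/nfsdata.res -- sketch only; names and addresses are invented.
resource nfsdata {
    # protocol C: a write completes only after BOTH sites confirm it
    #   (no data loss, but disk latency becomes WAN latency).
    # protocol A: a write completes once it hits the local disk and the
    #   local send buffer (fast, but in-flight writes are lost with the site).
    protocol A;

    device    /dev/drbd0;
    disk      /dev/vg0/nfsdata;
    meta-disk internal;

    on nodeA { address 192.0.2.10:7789; }
    on nodeB { address 198.51.100.10:7789; }
}
```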
>>> I already made a script that executes with the network startup that
>>> updates DNS using nsupdate, so it should be easy to create a resource
>>> agent based on it, I would think.
>> Yes, RAs are fairly simple to write. See:
>> digimer
>> --
>> Digimer
>> Papers and Projects: https://alteeve.com/w/
>> "I am, somehow, less interested in the weight and convolutions of
>> Einstein's brain than in the near certainty that people of equal talent
>> have lived and died in cotton fields and sweatshops." - Stephen Jay Gould

I don't get something.

Why can't this be done?

One node is in siteA, one in siteB, qnetd on a third location. Routing between the 2 subnets is established and symmetrical.
Fencing via IPMI or SBD (for example from a HA iSCSI cluster) is configured.
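The quorum side of that layout could be sketched in corosync.conf roughly as follows. The qnetd hostname is an invented placeholder; the point is just that a two-node cluster plus qdevice keeps quorum decidable when the sites split.

```
# corosync.conf fragment -- sketch only; the host name is a placeholder.
quorum {
    provider: corosync_votequorum
    device {
        model: net
        votes: 1
        net {
            host: qnetd.example.com   # corosync-qnetd at the third location
            algorithm: ffsplit        # on a 50/50 split, exactly one
                                      # partition is granted the vote
        }
    }
}
```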

The NFS resource is started on 1 node and a special RA is used for the DNS records. If node1 dies, the cluster will fence it and node2 will power up the NFS and update the records.
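A hedged sketch of the DNS-update step such an RA could run on failover, built around nsupdate. The record name, TTL, server address, and key path are all invented placeholders, and the real nsupdate call is left commented out since it needs a reachable DNS server that accepts dynamic updates.

```shell
#!/bin/sh
# Sketch of the core DNS-update step a failover RA could perform.
# All names and addresses below are invented placeholders.
NEW_IP="192.0.2.10"            # address of the node now running NFS
RECORD="nfs.example.com."      # name clients use to mount the export
DNS_SERVER="203.0.113.53"      # server accepting dynamic updates

# Build an nsupdate batch file: drop the old A record, add the new one.
BATCH="$(mktemp)"
cat > "$BATCH" <<EOF
server ${DNS_SERVER}
update delete ${RECORD} A
update add ${RECORD} 60 A ${NEW_IP}
send
EOF

# Real invocation (needs a TSIG key and a live server), commented out:
# nsupdate -k /etc/named/ddns.key "$BATCH"
cat "$BATCH"
```

A production RA would additionally implement the usual OCF actions (start, stop, monitor) around this step and verify the update actually propagated.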

Of course, updating the DNS records from only 1 side must work for both sites.

Best Regards,
Strahil Nikolov

More information about the Users mailing list