[ClusterLabs] Question on sharing data with DRDB

Digimer lists at alteeve.ca
Wed Mar 20 13:34:52 EDT 2019


Depending on your fail-over tolerances, I might add NFS to the mix and
have the NFS server run on one node or the other, exporting your ext4 FS
that sits on DRBD in single-primary mode.

The failover (if the NFS host died) would look like this:

1. Lost node is fenced.
2. DRBD is promoted from Secondary to Primary.
3. ext4 FS is mounted.
4. Virtual IP (used for NFS) is brought up.
5. NFS starts.

Startup and graceful migration would be the same, minus the fence.
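For reference, that stack could be wired together in pcs along these lines. This is only a sketch: the resource names, device path, export directory, and IP are illustrative, and the exact syntax (e.g. "promotable" vs. the older master/slave keywords) varies between pcs versions.

```shell
# Illustrative only -- adjust names, paths, the IP, and syntax
# (promotable vs. the older master/slave keywords) to your pcs version.

# DRBD resource r0, promotable, single Primary.
pcs resource create drbd_data ocf:linbit:drbd drbd_resource=r0 \
    promotable meta notify=true

# ext4 on the DRBD device; mounted only where DRBD is Primary.
pcs resource create fs_data Filesystem device=/dev/drbd1 \
    directory=/srv/export fstype=ext4

# Floating IP for the NFS clients, and the NFS server itself.
pcs resource create nfs_ip IPaddr2 ip=192.168.122.200 cidr_netmask=24
pcs resource create nfs_daemon nfsserver \
    nfs_shared_infodir=/srv/export/nfsinfo

# Start order mirrors the failover steps above: promote DRBD,
# mount the FS, bring up the IP, then start NFS -- all on one node.
pcs constraint colocation add fs_data with drbd_data-clone \
    INFINITY with-rsc-role=Master
pcs constraint order promote drbd_data-clone then start fs_data
pcs constraint colocation add nfs_ip with fs_data INFINITY
pcs constraint order fs_data then nfs_ip
pcs constraint colocation add nfs_daemon with nfs_ip INFINITY
pcs constraint order nfs_ip then nfs_daemon
```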

digimer

On 2019-03-20 12:53 p.m., JCA wrote:
> Thanks for the feedback. Based on what I am learning, I am not sure how
> to proceed. My ultimate goal is the following:
> I would like to have a two-node cluster, each node running exactly the
> same application A. I looked into what Pacemaker has to offer in this
> respect, and I believe that the OCF resource agent paradigm will allow
> me to integrate A with Pacemaker painlessly. Now the instances of A
> running on each of the nodes will need to have access to the same data
> set S, which can (and will) change regularly during the operation of A.
> That's why I thought that DRBD was what I needed here. I therefore need
> both nodes to have access to S at all times. Now it would seem to be
> the case that, in order to use DRBD that way, I can't use an ext4
> filesystem - or any other "common" filesystem, for that matter - I have
> to use GFS2, or something similarly specialized. While not necessarily a
> showstopper (and, based on what you wrote, logically inevitable), this
> does change things somewhat for me, which makes me wonder what other
> approaches might be available for deploying the scenario above,
> integrated with Pacemaker.
> 
> On Wed, Mar 20, 2019 at 10:37 AM Digimer <lists at alteeve.ca
> <mailto:lists at alteeve.ca>> wrote:
> 
>     Note:
> 
>       Cluster filesystems are amazing if you need them, and to be avoided if
>     at all possible. The overhead from the cluster locking hurts performance
>     quite a lot, and adds a non-trivial layer of complexity.
> 
>       I say this as someone who has used dual-primary DRBD with GFS2 for
>     many years.
> 
>       To expand on why you can't use something like ext4: non-cluster-aware
>     file systems expect all changes to the backing device to go through them.
>     So there's no mechanism to tell the FS on one node that blocks have
>     changed because of actions on another node. Likewise, they have no
>     mechanism to coordinate sane and safe access to blocks. These mechanisms
>     are exactly what makes a cluster FS what it is.
> 
>     digimer
> 
>     On 2019-03-20 11:52 a.m., Emmanuel Gelati wrote:
>     > If you need to access from both nodes, you need to use primary/primary
>     > mode in drbd
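>     > A dual-primary setup also has to be enabled in the DRBD resource
>     > configuration itself. A minimal, illustrative fragment (DRBD
>     > 8.4-style syntax; the resource name is made up and the other
>     > sections are abbreviated):
>     >
>     > ```
>     > resource r0 {
>     >   net {
>     >     allow-two-primaries yes;       # required for Primary/Primary
>     >   }
>     >   disk {
>     >     fencing resource-and-stonith;  # strongly advised with dual-primary
>     >   }
>     > }
>     > ```
>     >
>     > Dual-primary is only safe with working fencing and a cluster-aware
>     > filesystem (GFS2/OCFS2) on top.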
>     >
>     > Il giorno mer 20 mar 2019 alle ore 16:51 JCA <1.41421 at gmail.com
>     <mailto:1.41421 at gmail.com>
>     > <mailto:1.41421 at gmail.com <mailto:1.41421 at gmail.com>>> ha scritto:
>     >
>     >     OK, thanks. Yet another thing I was not aware of in the clustering
>     >     world :-(
>     >
>     >     On Wed, Mar 20, 2019 at 9:41 AM Valentin Vidic
>     >     <Valentin.Vidic at carnet.hr <mailto:Valentin.Vidic at carnet.hr>
>     <mailto:Valentin.Vidic at carnet.hr <mailto:Valentin.Vidic at carnet.hr>>>
>     wrote:
>     >
>     >         On Wed, Mar 20, 2019 at 09:36:58AM -0600, JCA wrote:
>     >         >      # pcs -f fs_cfg resource create TestFS Filesystem \
>     >         >          device="/dev/drbd1" directory="/tmp/Testing" \
>     >         >          fstype="ext4"
>     >
>     >         ext4 can only be mounted on one node at a time. If you need to
>     >         access files on both nodes at the same time, then a cluster
>     >         filesystem should be used (GFS2, OCFS2).
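>     >         A cluster filesystem also changes the Pacemaker side: the
>     >         Filesystem resource becomes a clone, and the DLM must run on
>     >         every node. An illustrative pcs sketch (resource names,
>     >         device, and mount point are made up):
>     >
>     >         ```shell
>     >         # DLM, cloned so it runs on all nodes.
>     >         pcs resource create dlm ocf:pacemaker:controld \
>     >             op monitor interval=30s on-fail=fence \
>     >             clone interleave=true ordered=true
>     >         # GFS2 mounted on all nodes at once.
>     >         pcs resource create clusterfs Filesystem device=/dev/drbd1 \
>     >             directory=/mnt/shared fstype=gfs2 options=noatime \
>     >             op monitor interval=10s on-fail=fence \
>     >             clone interleave=true
>     >         # DLM must be up before the filesystem, on the same node.
>     >         pcs constraint order start dlm-clone then clusterfs-clone
>     >         pcs constraint colocation add clusterfs-clone with dlm-clone
>     >         ```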
>     >
>     >         --
>     >         Valentin
>     >         _______________________________________________
>     >         Manage your subscription:
>     >         https://lists.clusterlabs.org/mailman/listinfo/users
>     >
>     >         ClusterLabs home: https://www.clusterlabs.org/
>     >
>     >
>     >
>     >
>     > --
>     >   .~.
>     >   /V\
>     >  //  \\
>     > /(   )\
>     > ^`~'^
>     >
>     >
> 
> 
>     -- 
>     Digimer
>     Papers and Projects: https://alteeve.com/w/
>     "I am, somehow, less interested in the weight and convolutions of
>     Einstein’s brain than in the near certainty that people of equal talent
>     have lived and died in cotton fields and sweatshops." - Stephen Jay
>     Gould
> 
> 
> 



