[ClusterLabs] copy file

Mevo Govo govomevo at gmail.com
Fri Mar 9 04:35:02 EST 2018


Hi,
Thanks for the advice; I'm thinking about an optimal config for us. While
the DB is running, it would use native DB replication. But Oracle needs
synchronized controlfiles when it starts normally. I can save the file
before overwriting it. Currently I have this in mind (c1, c2, c3, c4, c5,
c6 are control files):

c1: on node A, local file system
c2: on node A, on DRBD device1
c3: on node A, on DRBD device2 (FRA)
c4: on node B, on DRBD device2 (FRA)
c5: on node B, on DRBD device1
c6: on node B, local file system

c2+c3 is a "standard" Oracle config: c2 is replicated into the FRA (fast
recovery area of Oracle). c1 (and c6) exist only in case all data on DRBD
is lost. c1, c2, c3, c4, c5 (but not c6) are in sync while the DB runs on
node A (c1, c2, c3: native DB replication; c2-c5 and c3-c4: DRBD
replication, protocol C).
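On node A, the c1+c2+c3 multiplexing described above would be declared
through Oracle's CONTROL_FILES initialization parameter, roughly like the
following pfile fragment (the paths are hypothetical, not from the thread):

```
# pfile fragment -- three multiplexed controlfile locations on node A
control_files=('/u01/local/oradata/control01.ctl',   # c1: local FS
               '/drbd1/oradata/control02.ctl',       # c2: DRBD device1
               '/drbd2/fra/control03.ctl')           # c3: FRA, DRBD device2
```

Oracle writes all listed copies in lockstep while the instance runs, which
is what keeps c1, c2 and c3 in sync.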
When I switch from node A to node B, c6 is out of sync (an older
version). I can (and will) save it before overwriting it with c5. But if
c5 is corrupt, manual repair is needed, and there are other replicas to
repair it from (c4, c3, c2, c1).
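The "save it before overwriting" step on node B could be sketched like
this; temp files stand in for the real controlfiles here, and the .ctl
names are hypothetical:

```shell
#!/bin/sh
set -e
# Demo of the save-before-overwrite step at switchover, using temp
# files in place of the real controlfiles.
workdir=$(mktemp -d)
local_cf="$workdir/control06.ctl"   # c6: local FS on node B
drbd_cf="$workdir/control05.ctl"    # c5: DRBD device1 on node B
printf 'stale'   > "$local_cf"
printf 'current' > "$drbd_cf"

# Keep the old local controlfile before overwriting it, so a
# corrupt c5 cannot silently destroy the last good c6.
cp -p "$local_cf" "$local_cf.bak"
cp -p "$drbd_cf" "$local_cf"
```

In production the two `cp` lines would run with the real paths, before
Oracle is started on node B.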
If c1 and c6 were the same file on an NFS filesystem, there would be
replication outside of DRBD without this "copy sync" problem. But in that
case, the failure of a single component (the NFS server) would make the
Oracle DB unavailable on both nodes. (The Oracle DB stops if any of its
controlfiles is lost or corrupted; no automatic repair happens.)
I think the above considerations are similar to a 3-node setup. If we
trusted DRBD, c1 and c6 would not be needed, but we are new users of
DRBD.
Thanks: lados.





2018-03-08 20:12 GMT+01:00 Ken Gaillot <kgaillot at redhat.com>:

> On Thu, 2018-03-08 at 18:49 +0100, Mevo Govo wrote:
> > Hi,
> > thanks for the advice and your interest.
> > We would use an Oracle database over DRBD. Datafiles (and control
> > and redo files) will be on DRBD, and so will the FRA (on another
> > DRBD device). But we are new to DRBD, and DRBD is also a component
> > that can fail. We plan a scenario to recover the database without
> > DRBD (without data loss, if possible). We would use NFS or a local
> > filesystem for this. If we use a local FS, the control file is out
> > of sync on node B when we switch over (from A to B). We would copy
> > the controlfile (and redo files) from DRBD to the local FS. After
> > this, Oracle can start, and it keeps the controlfiles synchronized.
> > If other backup-related files (archlog, backup) are also available
> > on the local FS of either node, we can recover the DB without DRBD
> > (without data loss).
> > (I know it is a worst-case scenario, because if DRBD fails, the FS
> > on it should still be available on at least one node.)
> > Thanks: lados.
>
> Why not use native database replication instead of copying files?
>
> Any method getting files from a DRBD cluster to a non-DRBD node will
> have some inherent problems: it would have to be periodic, losing some
> data since the last run; it would still fail if some DRBD issue
> corrupted the on-disk data, because you would be copying the corrupted
> data; and databases generally have in-memory state information that
> makes files copied from a live server insufficient for data integrity.
>
> Native replication would avoid all that.
>
> > 2018-03-07 10:20 GMT+01:00 Klaus Wenninger <kwenning at redhat.com>:
> > > On 03/07/2018 10:03 AM, Mevo Govo wrote:
> > > > Thanks for the advice, I will try!
> > > > lados.
> > > >
> > > > 2018-03-05 23:29 GMT+01:00 Ken Gaillot <kgaillot at redhat.com>:
> > > >
> > > >     On Mon, 2018-03-05 at 15:09 +0100, Mevo Govo wrote:
> > > >     > Hi,
> > > >     > I am new to Pacemaker. I think I should use DRBD instead
> > > >     > of copying a file. But in this case, I would copy a file
> > > >     > from DRBD to an external device. Is there a builtin way
> > > >     > to copy a file before a resource is started (and after
> > > >     > the DRBD is promoted)? For example, a "copy" resource? I
> > > >     > did not find it.
> > > >     > Thanks: lados.
> > > >     >
> > > >
> > > >     There's no stock way of doing that, but you could easily
> > > >     write an agent that simply copies a file. You could use
> > > >     ocf:pacemaker:Dummy as a template, and add the copy to the
> > > >     start action. You can use standard ordering and colocation
> > > >     constraints to make sure everything happens in the right
> > > >     sequence.
> > > >
> > > >     I don't know what capabilities your external device has,
> > > >     but another approach would be to use an NFS server to share
> > > >     the DRBD file system, and mount it from the device, if you
> > > >     want direct access to the original file rather than a copy.
> > > >
> > >
> > > csync2 & rsync might be considered as well, although without
> > > knowing your scenario in detail it is hard to tell whether it
> > > would be overkill.
> > >
> > > Regards,
> > > Klaus
> > >
> > > >     --
> > > >     Ken Gaillot <kgaillot at redhat.com>
> > > >
> > > >     _______________________________________________
> > > >     Users mailing list: Users at clusterlabs.org
> > > >     https://lists.clusterlabs.org/mailman/listinfo/users
> > > >
> > > >     Project Home: http://www.clusterlabs.org
> > > >     Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
> > > >     Bugs: http://bugs.clusterlabs.org
> > > >
> > > >
> > > >
> > > >
> > >
> >
> --
> Ken Gaillot <kgaillot at redhat.com>
>

