[ClusterLabs] CephFS virtual IP

Oscar Segarra oscar.segarra at gmail.com
Tue Mar 6 10:31:27 EST 2018


Hi Ken,

Thanks a lot for your quick response. As you guessed, the interesting part
is the port check.

As I'm planning to work with a Ceph cluster, I don't think there is any
resource agent for the Ceph Monitor implemented yet.

Regarding your suggestions:

*If, for whatever reason, you can't put the service under cluster control*
That would be perfect for me! But I don't know how to do it... :(

*(1) write a dummy agent whose monitor action checks the port, and colocate
the IP with that;*
How can I do that, is there any tutorial?

*(2) have a script check the port and set a node attribute appropriately,
and use a rule-based location constraint for the IP (this approach would
be useful mainly if you already have some script doing a check).*
How can I do that, is there any tutorial?

Sorry for my simple questions; I'm a basic pacemaker/corosync user and I
don't have experience developing new resource agents.

Thanks a lot!


2018-03-06 16:05 GMT+01:00 Ken Gaillot <kgaillot at redhat.com>:

> On Tue, 2018-03-06 at 10:11 +0100, Oscar Segarra wrote:
> > Hi,
> >
> > I'd like to revive this thread in order to know if there is any way to
> > achieve this kind of simple HA setup.
> >
> > Thanks a lot.
> >
> > 2017-08-28 4:10 GMT+02:00 Oscar Segarra <oscar.segarra at gmail.com>:
> > > Hi,
> > >
> > > In Ceph, by design there is no single point of failure in terms of
> > > server roles, nevertheless, from the client point of view, it might
> > > exist.
> > >
> > > In my environment:
> > > Mon1: 192.168.100.101:6789
> > > Mon2: 192.168.100.102:6789
> > > Mon3: 192.168.100.103:6789
> > >
> > > Client: 192.168.100.104
> > >
> > > I have created a line in /etc/fstab referencing Mon1 but, of course,
> > > if Mon1 fails, the mount point gets stuck.
> > >
> > > I'd like to create a VIP assigned to any host with TCP port 6789 up
> > > and, in the client, mount the CephFS using that VIP.
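> > >
> > > For illustration, the client's /etc/fstab entry would then point at
> > > the VIP instead of a single monitor (just a sketch; the VIP
> > > 192.168.100.100, the mount point and the secret file path are
> > > made-up examples):
> > >
> > >     # CephFS kernel mount through the floating VIP
> > >     192.168.100.100:6789:/  /mnt/cephfs  ceph  name=admin,secretfile=/etc/ceph/admin.secret,_netdev  0  0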
> > >
> > > Is there any way to achieve this?
> > >
> > > Thanks a lot in advance!
>
> The IP itself would be a standard floating IP address using the
> ocf:heartbeat:IPaddr2 resource agent. "Clusters from Scratch" has an
> example, though I'm sure you're familiar with that:
>
> http://clusterlabs.org/pacemaker/doc/en-US/Pacemaker/1.1/html/Clusters_from_Scratch/_add_a_resource.html
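>
> A minimal sketch (the resource name, IP address and netmask here are
> placeholders for your own values):
>
>     # Floating IP managed by the cluster
>     pcs resource create CephVIP ocf:heartbeat:IPaddr2 \
>         ip=192.168.100.100 cidr_netmask=24 \
>         op monitor interval=30s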
>
> The interesting part is making sure that port 6789 is responding. The
> usual design in these cases is to put the service that provides that
> port under cluster control; its monitor action would ensure the port is
> responding, and a standard colocation constraint would ensure the IP
> can only run when the service is up.
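>
> For example, if the monitor daemon were cluster-managed as a resource
> named CephMon (an illustrative name; how you create that resource
> depends on how the daemon is started on your nodes), the constraint
> would look like:
>
>     # Keep the VIP only where the monitor resource is running
>     pcs constraint colocation add CephVIP with CephMon INFINITY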
>
> If, for whatever reason, you can't put the service under cluster
> control, I see two approaches: (1) write a dummy agent whose monitor
> action checks the port, and colocate the IP with that; or (2) have a
> script check the port and set a node attribute appropriately, and use a
> rule-based location constraint for the IP (this approach would be
> useful mainly if you already have some script doing a check).
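>
> A sketch of (1): the dummy agent's monitor action can be little more
> than a TCP probe (assuming the agent sources the usual
> /usr/lib/ocf/lib/heartbeat/ocf-shellfuncs, which defines the OCF exit
> codes):
>
>     # monitor: report "running" only while something answers on 6789
>     monitor() {
>         timeout 2 bash -c 'exec 3<>/dev/tcp/127.0.0.1/6789' 2>/dev/null \
>             && return $OCF_SUCCESS
>         return $OCF_NOT_RUNNING
>     }
>
> A sketch of (2), run periodically on every node, e.g. from cron (the
> attribute name mon_port_up is made up):
>
>     # Record the result of the port check as a transient node attribute
>     if timeout 2 bash -c 'exec 3<>/dev/tcp/127.0.0.1/6789' 2>/dev/null; then
>         attrd_updater -n mon_port_up -U 1
>     else
>         attrd_updater -n mon_port_up -U 0
>     fi
>
>     # Ban the VIP from any node whose attribute is not 1
>     pcs constraint location CephVIP rule score=-INFINITY mon_port_up ne 1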
> --
> Ken Gaillot <kgaillot at redhat.com>
> _______________________________________________
> Users mailing list: Users at clusterlabs.org
> https://lists.clusterlabs.org/mailman/listinfo/users
>
> Project Home: http://www.clusterlabs.org
> Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
> Bugs: http://bugs.clusterlabs.org
>