[ClusterLabs] [EXT] Re: Re: Clarification on resource groups
Eugen Block
eblock at nde.ag
Tue Feb 4 14:41:48 UTC 2025
I just noticed that with the crm command, I'm not even able to create
resource groups; at least I don't see any options for that. I
installed pcs in my lab environment, where I do have the option, but
it doesn't allow me to create groups from cloned resources:
root at controller01:~# pcs resource group add openstack-services
cl-httpd cl-neutron-server cl-nova-api
Error: 'cl-httpd' is a clone resource, clone resources cannot be put
into a group
Error: 'cl-neutron-server' is a clone resource, clone resources cannot
be put into a group
Error: 'cl-nova-api' is a clone resource, clone resources cannot be
put into a group
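For reference, I assume the usual way around this would be to group
the primitives first and then clone the group, rather than grouping
existing clones. A rough, untested sketch with pcs (the primitive
names here are hypothetical):

  pcs resource group add openstack-services httpd neutron-server nova-api
  pcs resource clone openstack-services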
I guess this answers my question then. But to answer your questions
for completeness' sake:
> As I understand it the virtual IP is a floating resource, but who needs that?
Correct, it can move between the two controller nodes. The OpenStack
endpoints are configured with a "virtual" hostname pointing to this
VIP, so when one controller goes down, the services still get a
response from the other one after the VIP has moved.
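For illustration, the VIP is a plain ocf:heartbeat:IPaddr2 primitive;
a minimal sketch (address, netmask and monitor interval are
placeholders):

  pcs resource create virtual-ip ocf:heartbeat:IPaddr2 ip=192.0.2.10 cidr_netmask=24 op monitor interval=10s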
> Are galera and rabbit also "floating" resources?
They are not floating resources but multi-state ones. Galera
forms a highly available MariaDB cluster; the clients (the OpenStack
services) rely on the VIP for database access.
RabbitMQ is also a multi-state resource and forms an HA cluster on
its own. It doesn't actually require the VIP, because the services
know about all rabbit nodes:
transport_url =
rabbit://user:pw@controller01.fqdn:5672,user:pw@controller02.fqdn:5672/virtual_host
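For context, galera runs as a promotable (multi-state) clone; roughly
like this sketch (agent parameters abbreviated, and the promotable
syntax depends on the pcs version):

  pcs resource create galera ocf:heartbeat:galera wsrep_cluster_address="gcomm://controller01.fqdn,controller02.fqdn" promotable promoted-max=2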
> As I understand it cinder, nova, and neutron run on both nodes
> independently; is that right?
Correct, all of those systemd resources are stateless services and are
active on both nodes, except for cinder-volume. That one is stateful,
and only one active instance is allowed.
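Roughly, the stateless services are systemd clones while cinder-volume
stays a plain primitive; a sketch (the unit names are examples):

  pcs resource create nova-api systemd:openstack-nova-api clone
  pcs resource create cinder-volume systemd:openstack-cinder-volume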
Should I need to set up a fresh cluster, I will keep this in mind;
maybe I can make use of these groups somehow.
Thanks again for taking the time, I appreciate it! I believe we can
consider this thread closed.
Best regards,
Eugen
Quoting "Windl, Ulrich" <u.windl at ukr.de>:
> Hi!
>
> So did you configure anything in the cluster yet?
> As I understand it the virtual IP is a floating resource, but who
> needs that? Are galera and rabbit also "floating" resources?
> As I understand it cinder, nova, and neutron run on both nodes
> independently; is that right?
>
> I was expecting to see a configuration with primitives, colocations
> and orderings, etc.
> Basically a group is just some syntactic sugar for colocation and
> ordering, while the latter are more flexible.
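> For two hypothetical resources A and B, a group of (A, B) would be
> roughly equivalent to this pair of constraints (pcs syntax):
>
> pcs constraint order start A then start B
> pcs constraint colocation add B with A INFINITY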
>
> Kind regards,
> Ulrich Windl
>
>> -----Original Message-----
>> From: Users <users-bounces at clusterlabs.org> On Behalf Of Eugen Block
>> Sent: Friday, January 31, 2025 3:16 PM
>> To: Cluster Labs - All topics related to open-source clustering welcomed
>> <users at clusterlabs.org>
>> Subject: [EXT] Re: [ClusterLabs] Re: Clarification on resource groups
>>
>> Hi,
>>
>> thanks again for taking the time to look into this, I appreciate it!
>> Currently, I don't have any resource groups. I wanted to understand if
>> resource groups can achieve what I currently do with a script. But I
>> have the impression that they cannot. I can still try to clarify more;
>> I'll only list a few services to keep it brief.
>>
>> controller01:
>> - virtual-ip
>> - galera
>> - rabbit
>> - clone-cinder (systemd)
>> - clone-nova (systemd)
>> - clone-neutron (systemd)
>>
>> controller02:
>> - galera
>> - rabbit
>> - clone-cinder (systemd)
>> - clone-nova (systemd)
>> - clone-neutron (systemd)
>>
>> The systemd services are multi-active. Now I would like to put only
>> the cloned resources into maintenance mode, not the entire node. I
>> thought I could define a resource group containing only the systemd
>> services, so I could put only them into maintenance mode. That way,
>> if controller01 lost the virtual-ip, it could be moved to
>> controller02. But the cloned resources would still be unmanaged, so I
>> could upgrade them safely. I don't even want to move the group, just
>> put it into maintenance mode since the remaining node still has an
>> active group of systemd resources.
>> But from my current understanding, I would have had to create those
>> groups during cluster bootstrap; maybe I misunderstood, though. I'm
>> fine with my script solution for the time being, I was just curious
>> whether this scenario was already covered by pacemaker.
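>> The script is essentially a loop like this sketch (assuming crmsh's
>> "resource maintenance" subcommand; the clone names are the ones
>> from the list above):
>>
>> for rsc in clone-cinder clone-nova clone-neutron; do
>>     crm resource maintenance "$rsc" on
>> done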
>>
>> Thanks,
>> Eugen
>>
>> Quoting "Windl, Ulrich" <u.windl at ukr.de>:
>>
>> > Hi!
>> >
>> > Maybe you should show the resources and their dependencies. If the
>> > VIP is in a group, what else is in that group?
>> > On my first reading I thought you wanted to manage one resource in a
>> > group, but still wanted the group to move. Is that right?
>> >
>> > Kind regards,
>> > Ulrich Windl
>> >
>> >> -----Original Message-----
>> >> From: Users <users-bounces at clusterlabs.org> On Behalf Of Eugen Block
>> >> Sent: Monday, January 27, 2025 3:41 PM
>> >> To: Cluster Labs - All topics related to open-source clustering welcomed
>> >> <users at clusterlabs.org>
>> >> Subject: [EXT] Re: [ClusterLabs] Clarification on resource groups
>> >>
>> >> Thanks for your response.
>> >>
>> >> Maybe I didn't explain myself well enough, I can go into more detail
>> >> if necessary. I wanted to focus on resource groups first. But let me
>> >> give an example from a recent upgrade that didn't go as smoothly as
>> >> planned.
>> >>
>> >> We have a virtual IP colocated with haproxy, which redirects
>> >> (OpenStack) API calls to the backend servers. Since there are multiple
>> >> instances of those API services running, the services are still
>> >> responsive and functioning properly.
>> >> So I put one node in maintenance mode to be able to stop all the
>> >> OpenStack-related systemd units, but not Galera and RabbitMQ. This node
>> >> didn't have the VIP at the time. When I was done with the first node,
>> >> everything was still good. But putting the node with the VIP into
>> >> maintenance mode was a mistake:
>> >>
>> >> During the system update there were also updates for network-related
>> >> packages (I hadn't noticed that on the first node), leading to a
>> >> restart of the network service, causing the VIP to vanish. But since
>> >> pacemaker couldn't move the resource, we had an API outage for
>> >> OpenStack. To avoid that in the future, I will only put some of the
>> >> resources into maintenance mode, not all of them. That way pacemaker
>> >> will be able to move the VIP and HAProxy to the already upgraded node,
>> >> so there will only be a tiny disruption, but most clients won't
>> >> notice. I successfully tested that behavior on a test cloud, hence I'm
>> >> confident that this is the right approach in my case.
>> >>
>> >> If I put the entire cluster into maintenance mode, the VIP wouldn't be
>> >> moved away when the network gets interrupted, and that would cause a
>> >> client disruption.
>> >>
>> >> Thanks!
>> >> Eugen
>> >>
>> >>
>> >> Quoting "Windl, Ulrich" <u.windl at ukr.de>:
>> >>
>> >> > Hi!
>> >> >
>> >> > Assuming you know what "location" refers to, what you are doing
>> >> > makes very little sense IMHO:
>> >> > When working on a service that needs some IP, it makes little sense
>> >> > to put the service in maintenance mode, but not the IP:
>> >> > Imagine something bad happens to the network and the cluster wants
>> >> > to move the IP while you are working on the service. Would you want
>> >> > that to happen? Also (as I see it), to move the group, the cluster
>> >> > would stop the service first, but when it's in maintenance mode, it
>> >> > can't. So it can't move the group (and thus not the IP either).
>> >> > Personally, I put the cluster in maintenance mode as a whole,
>> >> > preferably for a rather short time. Unless services fail very
>> >> > frequently, this seems to be a safe mode of operation to me.
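>> >> > Cluster-wide, that is just a single property, e.g. with crmsh:
>> >> >
>> >> > crm configure property maintenance-mode=true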
>> >> >
>> >> > Kind regards,
>> >> > Ulrich Windl
>> >> >
>> >> >> -----Original Message-----
>> >> >> From: Users <users-bounces at clusterlabs.org> On Behalf Of Eugen Block
>> >> >> Sent: Monday, January 13, 2025 1:17 PM
>> >> >> To: users at clusterlabs.org
>> >> >> Subject: [EXT] [ClusterLabs] Clarification on resource groups
>> >> >>
>> >> >> Hi,
>> >> >>
>> >> >> I'm hoping to get some clarification on my understanding of resource
>> >> >> groups [0]. It states:
>> >> >>
>> >> >> > One of the most common elements of a cluster is a set of resources
>> >> >> > that need to be located together, start sequentially, and stop in
>> >> >> > the reverse order.
>> >> >>
>> >> >> Especially the "located together" attribute confuses me.
>> >> >>
>> >> >> I'll try to provide some context:
>> >> >> I have a couple of systemd services as clones and some multi-state
>> >> >> resources such as galera and rabbitmq, running on two pacemaker nodes.
>> >> >> In case of an upgrade or any kind of maintenance, I want to use the
>> >> >> maintenance mode for some resources, but not all of them. For example,
>> >> >> I want the virtual IP, galera and rabbitmq to be still managed while
>> >> >> the rest is in maintenance mode. So currently, I would run a for loop
>> >> >> on the systemd services only, putting them into maintenance. This way,
>> >> >> if the network stack is updated or something, the virtual IP would be
>> >> >> moved to the other node. IIUC, this is not covered by the resource
>> >> >> groups, is it?
>> >> >>
>> >> >> Or should I have used it when building the cluster from scratch,
>> >> >> creating groups containing my systemd services as primitives? And then
>> >> >> clone a group?
>> >> >>
>> >> >> Is there another way of achieving that? I'd appreciate any comments!
>> >> >>
>> >> >> Thanks!
>> >> >> Eugen
>> >> >>
>> >> >> [0]
>> >> >> https://clusterlabs.org/projects/pacemaker/doc/2.1/Pacemaker_Explained/singlehtml/index.html#group-resources