[ClusterLabs] Locate resource with functioning member of clone set?
Israel Brewster
israel at ravnalaska.net
Tue Nov 22 21:28:37 CET 2016
On Nov 17, 2016, at 4:04 PM, Ken Gaillot <kgaillot at redhat.com> wrote:
>
> On 11/17/2016 11:37 AM, Israel Brewster wrote:
>> I have a resource that is set up as a clone set across my cluster,
>> partly for pseudo-load balancing (if someone wants to perform an action
>> that will take a lot of resources, I can have them do it on a different
>> node than the primary one), but also simply because the resource can
>> take several seconds to start, and by having it already running as a
>> clone set, I can fail over in the time it takes to move an IP resource -
>> essentially zero downtime.
>>
>> This is all well and good, but I ran into a problem the other day where
>> the process on one of the nodes stopped working properly. Pacemaker
>> caught the issue, and tried to fix it by restarting the resource, but
>> was unable to because the old instance hadn't actually exited completely
>> and was still tying up the TCP port, thereby preventing the new instance
>> that Pacemaker launched from being able to start.
>>
>> So this leaves me with two questions:
>>
>> 1) is there a way to set up a "kill script", such that before trying to
>> launch a new copy of a process, Pacemaker will run this script, which
>> would be responsible for making sure that there are no other instances
>> of the process running?
>
> Sure, it's called a resource agent :)
>
> When recovering a failed resource, Pacemaker will call the resource
> agent's stop action first, then start. The stop should make sure the
> service has exited completely. If it doesn't, the agent should be fixed
> to do so.
Ah, gotcha. I wasn't thinking along those lines in this case because the resource in question doesn't have a dedicated resource agent - it's a basic system-service resource. So the proper approach would be to modify the init.d script so that when "stop" is called, it completely cleans up any associated processes - even if the PID file has disappeared or been changed.
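Something along these lines is what I'm picturing for the stop function (a minimal sketch only - the daemon name, binary path, and pidfile path are placeholders for my actual service):

    DAEMON=/usr/local/bin/mydaemon
    PIDFILE=/var/run/mydaemon.pid

    stop() {
        # Normal shutdown via the recorded PID, if the pidfile is still intact
        if [ -f "$PIDFILE" ]; then
            kill "$(cat "$PIDFILE")" 2>/dev/null
        fi
        # Give the process a few seconds to exit on its own
        for i in 1 2 3 4 5; do
            pgrep -f "$DAEMON" >/dev/null || break
            sleep 1
        done
        # Force-kill anything still running, even if the pidfile was stale or
        # missing, so the TCP port is actually freed before Pacemaker's "start".
        # Matching on the full binary path avoids catching this init script itself.
        pgrep -f "$DAEMON" >/dev/null && pkill -9 -f "$DAEMON"
        rm -f "$PIDFILE"
        # LSB semantics: "stop" on an already-stopped service must still return 0
        return 0
    }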
>
>> 2) Even in the above situation, where pacemaker couldn't launch a good
>> copy of the resource on the one node, the situation could have been
>> easily "resolved" by pacemaker moving the virtual IP resource to another
>> node where the cloned resource was running correctly, and notifying me
>> of the problem. I know how to make colocation constraints in general,
>> but how do I do a colocation constraint with a cloned resource where I
>> just need the virtual IP running on *any* node where the clone is
>> working properly? Or is it the same as any other colocation constraint,
>> and Pacemaker is simply smart enough to both try to restart the failed
>> resource and move the virtual IP resource at the same time?
>
> Correct, a simple colocation constraint of "resource R with clone C"
> will make sure R runs with a working instance of C.
>
> There is a catch: if *any* instance of C restarts, R will also restart
> (even if it stays in the same place), because it depends on the clone as
> a whole. Also, in the case you described, Pacemaker would first try to
> restart both C and R on the same node, rather than move R to another
> node (although you could set on-fail=stop on C to force R to move).
It *looked* like Pacemaker was continually trying to restart the cloned resource in this case - I think the issue is that from Pacemaker's perspective the service *did* start successfully; it just failed again moments later, when it tried to bind to the port, couldn't, and bailed out. So under the "default" configuration, Pacemaker would keep restarting the service for quite a while before marking it as failed on that node. As such, it sounds like under the current configuration the IP resource would never move (at least not in a reasonable time frame), since Pacemaker would simply keep trying to restart on the same node.
So to get around this, I'm thinking I could set the migration-threshold property on the cloned resource to something low, like two or three, perhaps combined with a failure-timeout so that occasional failures followed by successful restarts won't eventually ban the service from a node - only repeated failures to restart and stay running would. Does that sound right?
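For concreteness, here is roughly what I'm picturing in pcs terms (the resource names "virtual_ip" and "myservice-clone" are placeholders for my actual IP and clone resources, and the numbers are just a first guess):

    # Keep the virtual IP with a working instance of the clone
    pcs constraint colocation add virtual_ip with myservice-clone INFINITY
    # Move off a node after 2 local failures, but expire old failures after
    # 10 minutes so an occasional recovered hiccup doesn't ban the node forever
    pcs resource meta myservice-clone migration-threshold=2 failure-timeout=10min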
>
> If that's not sufficient, you could try some magic with node attributes
> and rules. The new ocf:pacemaker:attribute resource in 1.1.16 could help
> there.
Unfortunately, as I am running CentOS 6.8, the newest version available to me is 1.1.14. I haven't yet developed an implementation plan for moving to CentOS 7, so unless I build from source or someone has made packages of a later release available for CentOS 6, I'm stuck at the moment. That said, between this and the alerts mentioned below, it might be worth spending more time looking into upgrading.
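In the meantime, a cloned ocf:pacemaker:ClusterMon resource pointing at a notification script might tide me over on 1.1.14 - a rough sketch, where the script path is just a placeholder and I'd still need to confirm that crm_mon's --external-agent (-E) option is present in this build:

    # crm_mon runs in daemon mode and calls the external script on cluster events
    pcs resource create cluster-notify ocf:pacemaker:ClusterMon \
        extra_options="-E /usr/local/bin/cluster_notify.sh" --clone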
Thanks for the info!
-----------------------------------------------
Israel Brewster
Systems Analyst II
Ravn Alaska
5245 Airport Industrial Rd
Fairbanks, AK 99709
(907) 450-7293
-----------------------------------------------
>
>> As an addendum to question 2, I'd be interested in any methods for being
>> notified of changes in the cluster state, specifically things like when a
>> resource fails on a node - my current nagios/icinga setup doesn't catch
>> that when Pacemaker properly moves the resource to a different node,
>> because the resource remains up (which, of course, is the whole point),
>> but it would still be good to know something happened so I could look
>> into it and see if something needs to be fixed on the failed node to
>> allow the resource to run there properly.
>
> Since 1.1.15, Pacemaker has alerts:
>
> http://clusterlabs.org/doc/en-US/Pacemaker/1.1-pcs/html-single/Pacemaker_Explained/index.html#idm47975782138832
>
> Before 1.1.15, you can use the ocf:pacemaker:ClusterMon resource to do
> something similar.
>
>>
>> Thanks!
>> -----------------------------------------------
>> Israel Brewster
>> Systems Analyst II
>> Ravn Alaska
>> 5245 Airport Industrial Rd
>> Fairbanks, AK 99709
>> (907) 450-7293
>> -----------------------------------------------