[ClusterLabs] failcount is not getting reset after failure_timeout if monitoring is disabled
Ken Gaillot
kgaillot at redhat.com
Tue May 23 12:15:18 EDT 2017
On 05/23/2017 08:00 AM, ashutosh tiwari wrote:
> Hi,
>
> We are running a two-node cluster (active (X) / passive (Y)) with
> multiple resources of type IPaddr2.
> Running monitor operations for multiple IPaddr2 resources is actually
> hogging the CPU,
> as we have configured a very low value for the monitor interval (200 msec).
That is very low. Although times can be specified in msec in the
pacemaker configuration, pacemaker generally has one-second granularity
in the implementation, so this is probably treated the same as a 1s
interval.
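For comparison, a one-second monitor is about the practical floor. A
minimal pcs sketch (the resource name and IP address here are
hypothetical; crm shell syntax would differ):

    # Hypothetical IPaddr2 resource with a 1-second monitor interval
    pcs resource create vip1 ocf:heartbeat:IPaddr2 \
        ip=192.0.2.10 cidr_netmask=24 \
        op monitor interval=1s timeout=20s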
>
> To avoid this problem, we are trying to use netlink notifications for
> monitoring the floating IP and updating the fail count for the
> corresponding IPaddr2 resource using crm_failcount. Along with this, we
> have disabled the IPaddr2 monitoring.
There is a better approach.
Directly modifying fail counts is not a good idea. Fail counts are being
overhauled in pacemaker 1.1.17 and later, and crm_failcount will only be
able to query or delete a fail count, not set or increment one. There
won't be a convenient way to modify a fail count, because we want to
discourage relying on an implementation detail that can change.
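For illustration, the operations that remain supported in 1.1.17 and
later look like this (the resource and node names are hypothetical):

    # Query the current fail count for a resource on a given node
    crm_failcount --query --resource vip1 --node nodeX

    # Clear the fail count (the supported way to reset it by hand)
    crm_failcount --delete --resource vip1 --node nodeX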
> Things work fine up to this point, as the IPaddr2 resource migrates to
> the other node (Y) once the fail count equals the migration threshold
> (1), and Y becomes active due to resource colocation constraints.
>
> We have configured a failure timeout of 3 seconds and expected it to
> clear the fail count on the initially active node (X).
> The problem is that the fail count never gets reset on X, and thus the
> cluster fails to move back to X.
Technically, it's not the fail count that expires, but a particular
failed operation. Even though manually increasing the fail count will
result in recovery actions, if there is no failed operation in the
resource history, then there's nothing to expire.
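For reference, the threshold and expiration window come from resource
meta attributes. A hypothetical pcs invocation matching your settings:

    # Move after one failure; expire recorded failures after 3 seconds
    pcs resource meta vip1 migration-threshold=1 failure-timeout=3s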
However, pacemaker does provide a way to do what you want: see the
crm_resource(8) man page for the -F/--fail option. It records a fake
operation failure in the resource history and processes it as if it
were a real failure, so the failure timeout can expire it normally.
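A minimal sketch of how a netlink watcher could call it instead of
crm_failcount (the IP address and resource name are hypothetical, and
the event matching is simplified):

    # Watch netlink address events; when the floating IP disappears,
    # record a failure so pacemaker recovers it and can expire it later.
    ip monitor address | while read -r line; do
        case "$line" in
            *Deleted*192.0.2.10/24*)
                crm_resource --fail --resource vip1 --node "$(uname -n)"
                ;;
        esac
    done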
> However, if we enable the monitoring, everything works fine and the
> fail count gets reset, allowing failback.
>
>
> Regards,
> Ashutosh T
FYI, there's an idea for a future feature that could also be helpful
here. We're thinking of creating a new ocf:pacemaker:IP resource agent
that would be based on systemd's networking support. This would allow
pacemaker to be notified by systemd of IP failures without having to
poll. I'm not sure how systemd itself detects the failures. No timeline
on when this might be available, though.