[ClusterLabs] Re: Unexpected Resource movement after failover

Nikhil Utane nikhil.subscribed at gmail.com
Tue Oct 25 04:19:42 EDT 2016


I think it was a silly mistake: "placement-strategy" was not enabled.
We have enabled it now and are testing it out. A sketch of the relevant
settings is below.
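
For anyone who hits the same issue: utilization attributes have no
effect on placement until the placement-strategy cluster property is
set to something other than "default". A minimal sketch, assuming pcs
(crmsh and raw CIB XML have equivalents); the node and resource names
are placeholders:

    # make the scheduler honor utilization attributes
    pcs property set placement-strategy=utilization

    # each node advertises one unit of capacity
    pcs node utilization node1 capacity=1

    # each resource instance consumes one unit, so no node can ever
    # be allocated two instances at the same time
    pcs resource utilization my_rsc capacity=1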

Thanks

On Mon, Oct 24, 2016 at 7:35 PM, Ulrich Windl <
Ulrich.Windl at rz.uni-regensburg.de> wrote:

> >>> Nikhil Utane <nikhil.subscribed at gmail.com> wrote on 24.10.2016 at
> >>> 13:22 in message
> >>> <CAGNWmJXMLYnJjsJiPRmrKhcAPb3rNqX9xuWucqmWL5Qa8WwDgw at mail.gmail.com>:
> > I had set the resource utilization to 1. Even then it scheduled 2
> > resources on one node. Doesn't it honor utilization attributes when
> > it can't find a free node?
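> >
> > A quick way to verify whether utilization is actually in effect
> > (standard Pacemaker tools):
> >
> >     # utilization is ignored unless this prints something other
> >     # than "default"
> >     crm_attribute --type crm_config --name placement-strategy --query
> >
> >     # list the utilization sections currently in the CIB
> >     cibadmin --query | grep -A2 '<utilization'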
>
> Show us the config and the logs, please!
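>
> (For the archives: cibadmin --query dumps the live configuration, and
> crm_report bundles the configuration, status, and logs into a tarball;
> both ship with Pacemaker. The time window here is hypothetical:)
>
>     crm_report --from "2016-10-24 00:00" --to "2016-10-24 23:59" outage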
>
>
> >
> > -Nikhil
> >
> > On Mon, Oct 24, 2016 at 4:43 PM, Vladislav Bogdanov <
> > bubble at hoster-ok.com> wrote:
> >
> >> 24.10.2016 14:04, Nikhil Utane wrote:
> >>
> >>> That is what happened here :(.
> >>> When 2 nodes went down, two resources got scheduled on a single node.
> >>> Isn't there any way to stop this from happening? The colocation
> >>> constraint is not helping.
> >>>
> >>
> >> If it is OK to have some instances not running in such outage cases,
> >> you can limit them to one per node with utilization attributes (as
> >> was suggested earlier; see the sketch below). Then, when the nodes
> >> return, the resource instances will return with (and on!) them.
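> >>
> >> A sketch of that arrangement for a 5-node cluster, assuming pcs and
> >> hypothetical node/resource names: every node advertises capacity=1
> >> and every instance requires capacity=1, so after a two-node outage
> >> two instances simply stay stopped instead of doubling up:
> >>
> >>     pcs property set placement-strategy=utilization
> >>     for n in node1 node2 node3 node4 node5; do
> >>         pcs node utilization "$n" capacity=1
> >>     done
> >>     for r in rsc1 rsc2 rsc3 rsc4 rsc5; do
> >>         pcs resource utilization "$r" capacity=1
> >>     done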
> >>
> >>
> >>
> >>> -Regards
> >>> Nikhil
> >>>
> >>> On Sat, Oct 22, 2016 at 12:57 AM, Vladislav Bogdanov
> >>> <bubble at hoster-ok.com> wrote:
> >>>
> >>>     21.10.2016 19:34, Andrei Borzenkov wrote:
> >>>
> >>>         14.10.2016 10:39, Vladislav Bogdanov wrote:
> >>>
> >>>
> >>>             use of utilization (the balanced strategy) has one
> >>>             caveat: resources are not moved just because one node's
> >>>             utilization is lower, when the nodes have the same
> >>>             allocation score for the resource. So, after a
> >>>             simultaneous outage of two nodes in a 5-node cluster,
> >>>             it may turn out that one node runs two resources while
> >>>             the two recovered nodes run nothing.
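> >>>
> >>>             (The allocation scores behind this can be inspected
> >>>             with the stock Pacemaker tool, run against the live
> >>>             cluster:)
> >>>
> >>>                 # -L: use the live CIB, -s: print allocation scores
> >>>                 crm_simulate -L -s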
> >>>
> >>>
> >>>         I call this a feature. Every resource move potentially
> >>>         means a service outage, so it should not happen without
> >>>         explicit action.
> >>>
> >>>
> >>>     In the case I describe, those moves could easily be prevented
> >>>     by using stickiness (it increases the allocation score on the
> >>>     current node). The issue is that it is then impossible to
> >>>     "re-balance" resources during the time-frames when stickiness
> >>>     is zero (an overnight maintenance window); see the rule sketch
> >>>     below.
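> >>>
> >>>     Such a window is typically created with a time-based rule on
> >>>     the resource defaults; a sketch adapted from the documented
> >>>     Pacemaker pattern (ids and hours are arbitrary), which can be
> >>>     pasted in via crm configure edit or loaded with cibadmin:
> >>>
> >>>         <rsc_defaults>
> >>>           <!-- by day (06:00-23:59), pin resources in place -->
> >>>           <meta_attributes id="daytime-stickiness" score="2">
> >>>             <rule id="daytime-rule" score="0">
> >>>               <date_expression id="daytime-hours"
> >>>                                operation="date_spec">
> >>>                 <date_spec id="daytime-hours-spec" hours="6-23"/>
> >>>               </date_expression>
> >>>             </rule>
> >>>             <nvpair id="daytime-stickiness-value"
> >>>                     name="resource-stickiness" value="INFINITY"/>
> >>>           </meta_attributes>
> >>>           <!-- overnight, fall back to zero stickiness -->
> >>>           <meta_attributes id="overnight-stickiness" score="1">
> >>>             <nvpair id="overnight-stickiness-value"
> >>>                     name="resource-stickiness" value="0"/>
> >>>           </meta_attributes>
> >>>         </rsc_defaults>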
> >>>
> >>>
> >>>
> >>>             The original 'utilization' strategy only limits
> >>>             resource placement; utilization is not considered when
> >>>             choosing a node for a resource (see the note below).
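> >>>
> >>>             For reference, the four documented values of the
> >>>             placement-strategy property:
> >>>
> >>>                 # default     - utilization is ignored entirely
> >>>                 # utilization - only gates whether a node has
> >>>                 #               enough free capacity to be eligible
> >>>                 # balanced    - also prefers the eligible node with
> >>>                 #               the most free capacity
> >>>                 # minimal     - concentrates resources onto as few
> >>>                 #               nodes as possible
> >>>                 pcs property set placement-strategy=balanced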