[ClusterLabs] Colocation of a primitive resource with a clone with limited copies
Ken Gaillot
kgaillot at redhat.com
Fri Apr 21 10:37:06 EDT 2017
On 04/21/2017 07:14 AM, Vladislav Bogdanov wrote:
> 20.04.2017 23:16, Jan Wrona wrote:
>> On 20.4.2017 19:33, Ken Gaillot wrote:
>>> On 04/20/2017 10:52 AM, Jan Wrona wrote:
>>>> Hello,
>>>>
>>>> my problem is closely related to the thread [1], but I didn't find a
>>>> solution there. I have a resource that is set up as a clone C
>>>> restricted to two copies (using the clone-max=2 meta attribute),
>>>> because the resource takes a long time to get ready (it starts
>>>> immediately though),
>>> A resource agent must not return from "start" until a "monitor"
>>> operation would return success.
>>>
>>> Beyond that, the cluster doesn't care what "ready" means, so it's OK if
>>> it's not fully operational by some measure. However, that raises the
>>> question of what you're accomplishing with your monitor.
>> I know all that and my RA respects that. I didn't want to go into
>> details about the service I'm running, but maybe it will help you
>> understand. Its a data collector which receives and processes data from
>> understand. It's a data collector which receives and processes data from
>> a UDP stream. To understand these data, it needs templates which
>> periodically occur in the stream (every five minutes or so). After
>> "start" the service is up and running, "monitor" operations are
>> successful, but until the templates arrive the service is not "ready". I
>> basically need to somehow simulate this "ready" state.
>
> If you are able to detect that your application is ready (it has already
> received its templates) in your RA's monitor, you may want to use
> transient node attributes to indicate that to the cluster, and tie your
> vip to such an attribute (with a location constraint that uses rules).
That would be a good approach.
I'd combine it with stickiness so the IP doesn't immediately move
when a "not ready" node becomes "ready".
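Something like this should do it with pcs (untested, and the resource
name "vip" is just a placeholder for your IP resource):

    # per-resource stickiness on the IP
    pcs resource meta vip resource-stickiness=100
    # or set it cluster-wide via resource defaults
    pcs resource defaults resource-stickiness=100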
I'd also keep the colocation constraint with the application. That helps
if the application fails on a "ready" node: nothing clears the attribute
in that case until the application is started there again, so the
attribute alone could be stale. The colocation constraint ensures the IP
only runs where the application is actually active.
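Roughly, with pcs (the attribute name "ready", the IP resource "vip",
and the clone "collector-clone" are placeholders, adjust to your
configuration):

    # keep the IP off any node whose "ready" attribute is unset or not 1
    pcs constraint location vip rule score=-INFINITY not_defined ready or ready ne 1
    # and keep the IP with a running instance of the clone
    pcs constraint colocation add vip with collector-clone INFINITY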
> http://clusterlabs.org/doc/en-US/Pacemaker/1.1/html/Pacemaker_Explained/_using_rules_to_determine_resource_location.html#_location_rules_based_on_other_node_properties
>
>
> Look at the pacemaker/ping RA for an example of attribute management.
>
> [...]
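For reference, the ping RA maintains its attribute with attrd_updater.
A rough sketch of how the collector's monitor action could do the same
(the attribute name "ready" and the two helper functions are
hypothetical, and the usual OCF shell functions are assumed to be
loaded for the return-code variables):

    collector_monitor() {
        # placeholders: collector_running / check_templates_received
        # stand in for whatever checks the real agent performs
        collector_running || return $OCF_NOT_RUNNING
        if check_templates_received; then
            attrd_updater -n ready -U 1    # transient node attribute
        else
            attrd_updater -n ready -U 0
        fi
        return $OCF_SUCCESS
    }

Deleting the attribute in the stop action (attrd_updater -n ready -D)
keeps it from going stale across clean stops.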