[ClusterLabs Developers] RA as a systemd wrapper -- the right way?

Adam Spiers aspiers at suse.com
Fri Oct 13 09:07:33 EDT 2017


Lars Ellenberg <lars.ellenberg at linbit.com> wrote: 
>On Mon, May 22, 2017 at 12:26:36PM -0500, Ken Gaillot wrote: 
>>Resurrecting an old thread, because I stumbled on something relevant ... 
>
>/me too :-) 
>
>>There had been some discussion about having the ability to run a more 
>>useful monitor operation on an otherwise systemd-based resource. We had 
>>talked about a couple approaches with advantages and disadvantages. 
>>
>>I had completely forgotten about an older capability of pacemaker that 
>>could be repurposed here: the (undocumented) "container" meta-attribute. 
>
>Which is nice to know. 
>
>The wrapper approach is appealing as well, though. 
>
>I have just implemented a PoC ocf:pacemaker:systemd "wrapper" RA, 
>to give my brain something different to do for a change. 

Cool!  Really nice to see a PoC for this, which BTW I mentioned at the 
recent ClusterLabs Summit, for those who missed the event: 

https://aspiers.github.io/clusterlabs-summit-2017-openstack-ha/#/control-plane-api-5 

>Takes two parameters, 
>unit=(systemd unit), and 
>monitor_hook=(some executable) 
>
>The monitor_hook has access to the environment, obviously, 
>in case it needs that.  For monitor, it will only be called 
>if "systemctl is-active" thinks the thing is active. 
>
>It is expected to return 0 (OCF_SUCCESS) for "running", 
>and 7 (OCF_NOT_RUNNING) for "not running".  It can return anything else; 
>all exit codes are propagated directly for the "monitor" action. 
>"Unexpected" exit codes will be logged with ocf_exit_reason 
>(does that make sense?). 
>
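
Makes sense to me.  For the archives, a minimal hook honouring that 
contract might look something like this (purely a sketch; the service, 
port and health endpoint are hypothetical, not from Lars's PoC): 

    #!/bin/sh
    # Hypothetical monitor_hook: a deeper health check for a web
    # service whose unit is managed by the wrapper RA.  Only reached
    # once "systemctl is-active" already reports the unit as active.
    if curl -sf -o /dev/null http://localhost:8080/healthcheck; then
        exit 0    # OCF_SUCCESS: really up and serving requests
    else
        exit 7    # OCF_NOT_RUNNING: unit active, but service dead
    fi

Anything other than 0 or 7 would then be propagated (and logged via 
ocf_exit_reason) exactly as described above. 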
>systemctl start and stop commands apparently are "synchronous" 
>(have always been? only nowadays? is that relevant?) 

I think it depends on exactly what you mean by "synchronous" here. 
You can start up a daemon, or a process which is responsible for 
forking into a daemon, but how can you know for sure that a service is 
really up and running?  Even if the daemon has run for a few seconds, 
it might still die soon after.  At what point do you draw the line and 
say "OK, start-up is now over; any failures after this are failures of 
a running service"?  In that light, "systemctl start" could return at 
any of a number of points in the startup process, but there's probably 
always an element of asynchronicity in there.  I'd be interested to 
hear other opinions on this. 
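
One concrete illustration of where that line can move (unit names 
invented, obviously): with Type=simple, "systemctl start" returns more 
or less as soon as the main process has been spawned, whereas with 
Type=notify it blocks until the daemon itself reports readiness via 
sd_notify(READY=1): 

    $ time systemctl start simple-daemon.service
    # returns almost immediately, whether or not the daemon survives

    $ time systemctl start notify-daemon.service
    # blocks until the daemon sends READY=1 (or the start times out)

And even then, READY=1 only tells you the daemon *thought* it was 
ready at that instant. 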

>but to be so, they need properly written unit files. 
>If an ExecStop command is defined which only triggers stopping 
>but does not wait for it to complete, then systemd cannot wait either 
>(it has no way of knowing what to wait for in that case), 
>and no-one should blame systemd for that. 

Exactly.
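
(For anyone who hasn't hit this: the typical offender is a unit with 
no main process for systemd to track, along the lines of this invented 
fragment: 

    [Service]
    Type=oneshot
    RemainAfterExit=yes
    ExecStart=/usr/bin/mydaemon-ctl start
    ExecStop=/usr/bin/mydaemon-ctl stop-async

where mydaemon-ctl is made up, and its stop-async command returns 
before the daemon has actually gone away.  systemd has no PID to 
watch, so "systemctl stop" returns early too.) 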

>That's why you would need to fix such systemd units, 
>but that's also why I added the additional _monitor loops 
>after systemctl start / stop. 

Yes, those loops sound critical to the success of this 
approach.
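
Something along these lines, I assume (a sketch, not Lars's actual 
code; systemd_wrapper_monitor stands in for the RA's own monitor 
function, OCF_RESKEY_unit is the standard OCF naming for the "unit" 
parameter above, and I'm leaving the overall deadline to Pacemaker's 
action timeout): 

    # Hypothetical post-start loop inside the wrapper's start action:
    systemctl start "$OCF_RESKEY_unit" || return $OCF_ERR_GENERIC
    while :; do
        systemd_wrapper_monitor; rc=$?
        case $rc in
            $OCF_SUCCESS)     return $OCF_SUCCESS ;;
            $OCF_NOT_RUNNING) sleep 2 ;;  # not up yet; keep polling
                                          # until the action timeout
                                          # cuts us off
            *)                return $rc ;;
        esac
    done

The post-stop loop would presumably be the mirror image, polling until 
OCF_NOT_RUNNING comes back. 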

>Maybe it should not be named systemd, but systemd-wrapper. 
>
>Other comments? 

I'm not sure if/when I'll get round to testing this approach.  The 
conclusion from the summit seemed to be that in the OpenStack context 
at least, it should be sufficient to just have stateless active/active 
REST API services managed by systemd (not Pacemaker) with auto-restart 
enabled, and HAProxy doing load-balancing and health monitoring via 
its httpchk option (roughly as sketched below).  But if for some 
reason that doesn't cover all the 
cases we need, I'll definitely bear this approach in mind as an 
alternative option.  So again, thanks a lot for sharing! 
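
P.S. The kind of setup I mean above is roughly this (addresses, port 
and health endpoint all invented): 

    # systemd side: each API service restarts itself on failure
    [Service]
    Restart=on-failure
    RestartSec=2

    # haproxy side: load-balancing plus HTTP health checks
    backend openstack_api
        option httpchk GET /healthcheck
        server api1 192.0.2.11:8080 check
        server api2 192.0.2.12:8080 check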



