[ClusterLabs Developers] Resource Agent language discussion

Fabio M. Di Nitto fabbione at fabbione.net
Tue Aug 11 04:42:37 UTC 2015



On 8/7/2015 5:14 PM, Jehan-Guillaume de Rorthais wrote:
> Hi Jan,
> 
> On Fri, 7 Aug 2015 15:36:57 +0200
> Jan Pokorný <jpokorny at redhat.com> wrote:
> 
>> On 07/08/15 12:09 +0200, Jehan-Guillaume de Rorthais wrote:
>>> Now, I would like to discuss the language used to write an RA in
>>> Pacemaker. I have never seen a discussion or page about this so far.
>>
>> it wasn't in such a "heretic :)" tone, but a few months back I tried
>> to show that it is extremely hard (if not impossible in some
>> instances) to write bullet-proof code in bash (or POSIX shell, for
>> that matter), because it is so cumbersome to move back and forth
>> between "whitespace-delimited words as a single argument" and "words
>> as standalone arguments", combined with the madness of quoting being
>> sometimes desired and sometimes counterproductive (what if one wants
>> to pass quotation marks as legitimate characters within the passed
>> value, etc.):
>>
>> http://clusterlabs.org/pipermail/users/2015-May/000403.html
>> (also on the developers list, but with fewer replies and broken threading:
>> http://clusterlabs.org/pipermail/developers/2015-May/000023.html).
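
As a small illustration of the kind of trap being referred to here (a
made-up bash sketch, not taken from the linked threads):

    #!/bin/bash
    # A value that legitimately contains whitespace and quote characters.
    value='some dir/with "spaces"'

    # Unquoted expansion: the value is split on whitespace, so the command
    # below receives three arguments instead of one.
    printf 'arg: %s\n' $value

    # Quoted expansion: the value survives as a single argument.
    printf 'arg: %s\n' "$value"

    # When a whole command line has to be built up, an array keeps each
    # argument intact, whereas a flat string would be re-split on expansion.
    cmd=(printf 'arg: %s\n' "$value")
    "${cmd[@]}"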
> 
> Thanks for the links and history. You add some more arguments to my points :)
> 
>>> HINT: I don't want to discuss (nor troll about) which language is
>>> best. I would like to know why **ALL** the RAs are written in
>>> bash
>>
>> I would expect the original influence was the init scripts (as RAs
>> are mostly just enriched variants of them, supporting more flexible
>> configuration and better diagnostics back to the cluster stack),
>> which in turn were born with simplicity and ease of debugging
>> (maintainability) in mind.
> 
> That sounds legitimate. And bash is still appropriate for some simple RAs.
> 
> But for the same ease-of-debugging and maintainability arguments (and many
> others), complex RAs shouldn't use shell as their language.

So, beside the language you can/want to use: from a development
perspective you guys are probably right that, in some cases, more
complex languages could be a better fit.

But you forgot to position yourselves as end users and to consider the
reason why we currently use bash/shell.

First of all, our end users are not necessarily developers. Most of them
are in fact sysadmins, and one thing sysadmins have in common is that
they know bash/shell.

If the need arises to debug an RA, shell is pretty much the only common
denominator with our user base.

The other problem I see in using other languages is how they operate
under extreme conditions (memory pressure, disk I/O, etc.).

Just for the fun of it, I did some basic profiling of "hello world" in
bash and perl. Please don't take this as an attempt to start a
"language" flame war here; I just want to illustrate the differences
between shell and other languages.

Perl is at least 3 times slower than bash.
Perl uses at least 4-5 times more memory to execute the same command.
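
Roughly how numbers of that kind can be reproduced (a sketch assuming
GNU time is installed as /usr/bin/time; the exact figures will of
course vary per system):

    # Compare wall-clock time and peak RSS of a "hello world" in each
    # interpreter; only the interesting lines of the verbose report are kept.
    /usr/bin/time -v bash -c 'echo hello world' 2>&1 \
        | grep -E 'Elapsed|Maximum resident'
    /usr/bin/time -v perl -e 'print "hello world\n"' 2>&1 \
        | grep -E 'Elapsed|Maximum resident'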

Granted, it's an incredibly small test and all, but all I am trying to
say is that the cluster stack is supposed to be as reliable as possible
under extreme conditions.

On most systems, all the commands required to execute an RA written in
shell are already cached in RAM, so the cost of re-running them is
minimal (and that could save a system).

With Perl, there was no caching that I could see (even after executing
the command several times), and there was lots of I/O to load modules
from disk.

So, given that, is it worth rewriting the RAs in another language (and
what defines a "simple" vs. a "complex" RA from the above)? Or wouldn't
it be better to just fix the current ones for things like escaping and
the handling of spaces in the options?

Just 2c
Fabio

> 
>>> and whether there are traps (hidden deep in ocf-shellfuncs, for
>>> instance) to avoid when using a different language. And is it
>>> acceptable to include new libs for other languages?
>>
>> https://github.com/ClusterLabs/resource-agents/blob/v3.9.6/doc/dev-guides/ra-dev-guide.txt#L33
>> doesn't make any assumptions about the target language beyond stating
>> which one is common.
> 
> Yes, I know that page. But this dev guide focuses on shell and makes
> some assumptions about ocf-shellfuncs.
> 
> I'll take the same example as in my previous message: there's nothing
> about best practice for logging. In the "Script variables" section, some
> variables come from the environment, others from ocf-shellfuncs.
> 
>>> We rewrote the RA in perl, mostly because of me. I was fed up with
>>> bash/sh limitations AND syntax AND useless code complexity for some
>>> easy tasks AND traps (return codes, etc.). In my opinion, bash/sh are
>>> fine if your RA code is short and simple, which was mostly the case
>>> back in the time of heartbeat, which was stateless only. But it became
>>> a nightmare with multi-state agents struggling with complex code to
>>> fit Pacemaker's behavior. Have a look at the mysql or pgsql agents.
>>>
>>> Moreover, with bash, I had some weird behaviors (timeouts) from the RA
>>> between runuser/su/sudo and systemd/pamd some months ago. All three of
>>> them have implications or side effects deep in the system that you
>>> need to take care of. Using a language able to seteuid/setuid after
>>> forking is much more natural and cleaner for dropping root privileges
>>> and starting the daemon (PostgreSQL refuses to start as root and is
>>> not able to drop its privileges to another system user itself).
>>
>> Another disadvantage of shell scripts is that many processes are
>> frequently spawned for simple changes within the filesystem and for
>> string parsing/reformatting, which in turn creates a dependency on
>> plenty of external executables.
> 
> True. Either you need to pipe multiple small programs, forking all of
> them (cat|grep|cut|...), sometimes with different behavior depending on
> the system, or use a more complex one that most people don't want to
> hear about anymore (sed, awk, perl, ...). In the latter case, you not
> only have to master bash, but other languages as well.
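
A tiny illustration of that difference (the input line below is made up,
not taken from any real agent):

    line='pgsql:0 role=master score=1001'

    # Pipeline style: a subshell plus two external processes, just to pull
    # out a single token.
    score=$(echo "$line" | grep -o 'score=[0-9]*' | cut -d= -f2)
    echo "$score"

    # Built-in style: no extra process at all, but arguably harder to read.
    score=${line##*score=}
    echo "$score"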
> 
>>> Now, we are far from having enterprise-class, certified code; our RA
>>> passed its very first tests successfully yesterday, but here is some
>>> quick feedback. The downside of picking a language other than bash/sh
>>> is that there is no OCF module/library available for it. This is quite
>>> inconvenient when you need to get system-specific variables or logging
>>> shortcuts defined only in ocf-shellfuncs (and, I would guess, patched
>>> by packagers?).
>>>
>>> For instance, I had to "capture" the values of $HA_SBIN_DIR and
>>> $HA_RSCTMP from my perl code.
>>
>> There could be a shell wrapper that puts these values into the
>> environment and then executes the target itself, for its own use
>> (a generic solution for an arbitrary executable).  That's not
>> applicable to "procedural knowledge" (logging, etc.), though, as you
>> mention below.
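
Just as a rough, untested sketch of that wrapper idea (the path of the
wrapped agent below is purely hypothetical):

    #!/bin/sh
    # Pull in the standard OCF shell definitions so that HA_SBIN_DIR,
    # HA_RSCTMP and friends receive their usual values.
    : ${OCF_FUNCTIONS_DIR=${OCF_ROOT}/lib/heartbeat}
    . ${OCF_FUNCTIONS_DIR}/ocf-shellfuncs

    # Re-export whatever the non-shell agent needs, then hand control over
    # to it, keeping the action argument (start/stop/monitor/...) intact.
    export HA_SBIN_DIR HA_RSCTMP
    exec /usr/lib/ocf/resource.d/custom/my-agent.pl "$@"

That only covers environment values, of course; the logging helpers
would still have to be reimplemented, as noted above.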
> 
> Yes.
> 
> What should we do next? Should we spin off an "ocf-perl-common" module from our
> agent and feed it with such pieces ported from ocf-shellfuncs?
> 
> 
> 
> _______________________________________________
> Developers mailing list
> Developers at clusterlabs.org
> http://clusterlabs.org/mailman/listinfo/developers
> 



