[ClusterLabs] [Q] Check on application layer (kamailio, openhab)

Ken Gaillot kgaillot at redhat.com
Mon Dec 21 16:07:25 UTC 2015


On 12/19/2015 10:21 AM, Sebish wrote:
> Dear all ha-list members,
> 
> I am trying to set up two availability checks on the application layer
> using heartbeat and pacemaker.
> To be more concrete, I need one resource agent (ra) for openHAB and one
> for the Kamailio SIP Proxy.
> 
> *My setup:*
> 
>    + Debian 7.9 + Heartbeat + Pacemaker + more

This should work for your purposes, but FYI, corosync 2 is the preferred
communications layer these days. Debian 7 provides corosync 1, which
might be worth using here instead of heartbeat, to make an eventual
switch to corosync 2 easier.

Also FYI, Pacemaker was dropped from Debian 8, but there is a group
working on backporting the latest pacemaker/corosync/etc. to it.
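
If you do go the corosync 1 route on Debian 7, a very rough sketch of the
steps (package and file names from memory -- please verify against the
actual wheezy packages) would be:

    # install the stack from the Debian 7 repositories
    apt-get install pacemaker corosync

    # Debian ships corosync disabled; set START=yes in /etc/default/corosync

    # point bindnetaddr in /etc/corosync/corosync.conf at your cluster
    # network, then start the stack
    service corosync start

Whether pacemaker is then launched by corosync's plugin or as a separate
service depends on the packaging, so check the Debian docs for that part.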

>    + 2 Node Cluster with Hot-Standby Failover
>    + Active Cluster with clusterip, ip-monitoring, working failover and
>    services
>    + Copied kamailio ra into /usr/lib/ocf/resource.d/heartbeat, chmod
>    755 and 'crm ra list ocf heartbeat' finds it
> 
> *The plan:*
> 
> _openHAB_
> 
>    My idea was to let heartbeat check for the availability of openHAB's
>    website (jetty-based) or check if the process is up and running.
> 
>    I did not find a fitting resource agent. Is there a general ra in
>    which you would just have to insert the process name 'openhab'?
> 
> _Kamailio_
> 
>    My idea was to let an ra send a SIP request to kamailio and check
>    if it gets an answer AND if it is the correct one.
> 
>    It seems like the ra
>   
> https://github.com/ClusterLabs/resource-agents/blob/master/heartbeat/kamailio
> 
>    does exactly what I want,
>    but I do not really understand it. Is it plug and play? Do I have to
>    change values inside the code, such as users or the complete
>    meta-data?
> 
>    When I try to add this agent (unchanged) to pacemaker using
>    'crm configure primitive kamailio ocf:heartbeat:kamailio', it says:
> 
>        lrmadmin[4629]: 2015/12/19_16:11:40 ERROR:
>        lrm_get_rsc_type_metadata(578): got a return code HA_FAIL from a
>        reply message of rmetadata with function get_ret_from_msg.
>        ERROR: ocf:heartbeat:kamailio: could not parse meta-data:
>        ERROR: ocf:heartbeat:kamailio: could not parse meta-data:
>        ERROR: ocf:heartbeat:kamailio: no such resource agent

lrmadmin is no longer used, and I'm not familiar with it, but the first
thing I'd check is that the RA is executable. If it supports running
directly from the command line, make sure you can run it that way too.
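
For example (untested here, and the paths/parameters are only
assumptions -- adjust to your install), something along these lines will
show whether the agent is executable and emits sane metadata:

    ls -l /usr/lib/ocf/resource.d/heartbeat/kamailio

    # OCF agents can usually be invoked by hand if OCF_ROOT is set
    OCF_ROOT=/usr/lib/ocf \
        /usr/lib/ocf/resource.d/heartbeat/kamailio meta-data

    # if your resource-agents package ships ocf-tester, it exercises the
    # full start/monitor/stop cycle (the parameter below is just an example)
    ocf-tester -n kamailio_test \
        -o conffile=/etc/kamailio/kamailio.cfg \
        /usr/lib/ocf/resource.d/heartbeat/kamailio

If the meta-data call prints malformed XML (or nothing), that would
explain the "could not parse meta-data" errors you're seeing.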

Most RAs support configuration options, which you can set in the cluster
configuration (you don't have to edit the RA). Each RA specifies the
options it accepts in the <parameters> section of its metadata.
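
As a sketch only (the parameter names below are guesses -- take the real
ones from the agent's meta-data output), the crm shell syntax for that
looks roughly like:

    crm configure primitive p_kamailio ocf:heartbeat:kamailio \
        params listen_address=192.168.0.10 port=5060 proto=udp \
        op monitor interval=30s timeout=60s

You would then normally add colocation/order constraints so it runs on
the same node as your cluster IP.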

> *The question:*
> 
> Maybe you could give me some hints on what to do next. Perhaps one of
> you is even already using the kamailio ra successfully or checking a
> non-apache website?
> If I simply have to insert all my cluster data into the kamailio ra, it
> should not throw this error, should it? A readme for this ra would have
> been helpful, though...
> If you need any data, I will provide it asap!
> 
> *Thanks a lot to all who read this mail!*
> 
> Sebish
> ha-newbie, but not noobie ;)




