[Pacemaker] [Linux-HA] Apache-Pacemaker Configuration Error?

Andrew Beekhof andrew at beekhof.net
Thu Nov 8 21:20:51 EST 2012


On Tue, Nov 6, 2012 at 3:33 AM, Viviana Cuellar Rivera
<marnaglaya at gmail.com> wrote:
> Hi all,
> I'm trying to set up the following configuration:
>
> vip----Balancer1----|------|   Backend Nodes
>                     |      |   Backend Nodes
> vip2---balancer2----|------|   Backend Nodes
>
> For that I have installed Apache on the balancer nodes, so my Apache
> acts as the balancer; for this it has the mod_proxy, mod_proxy_http
> and mod_proxy_balancer modules enabled.
> The idea is to have two floating IPs and publish each of them
> (balancer1 = vip and balancer2 = vip2, resulting in an active-active
> configuration). What I need is for Pacemaker to bring up the IPs,
> check the status of Apache on both balancer servers, and, in case of
> a failure, migrate the IP to the node that is still running. So I did
> the following configuration:
> node balancer1
> node balancer2
> primitive apache-ref ocf:heartbeat:apache \
> params configfile="/etc/apache2/httpd.conf" \
> op monitor interval="20s"
> primitive vip ocf:heartbeat:IPaddr2 \
> params ip="192.168.52.50" cidr_netmask="255.255.255.0" \
> op monitor interval="20s" \
> meta target-role="Started"
> primitive vip2 ocf:heartbeat:IPaddr2 \
> params ip="192.168.52.40" cidr_netmask="255.255.255.0" \
> op monitor interval="20s" \
> meta target-role="Started"
> clone cl-apache apache-ref \
> meta clone-max="2" clone-node-max="1" target-role="Started"
> location vip2_pref_1 vip2 100: balancer2
> location vip2_pref_2 vip2 50: balancer1
> location vip_pref_1 vip 100: balancer1
> location vip_pref_2 vip 50: balancer2
> colocation apache-with-failover inf: vip vip2 cl-apache
> order apache-after-failover-ip inf: ( vip vip2 ) cl-apache
> property $id="cib-bootstrap-options" \
> dc-version="1.1.7-ee0730e13d124c3d58f00016c3376a1de5323cff" \
> cluster-infrastructure="openais" \
> expected-quorum-votes="2" \
> stonith-enabled="false" \
> no-quorum-policy="ignore"
> rsc_defaults $id="rsc-options" \
> resource-stickiness="0"
>
> But crm status shows:
>
> root@balancer1:~# crm status
> ============
> Last updated: Sat Nov  3 12:13:29 2012
> Last change: Sat Nov  3 12:03:19 2012 via cibadmin on balancer2
> Stack: openais
> Current DC: balancer1 - partition with quorum
> Version: 1.1.7-ee0730e13d124c3d58f00016c3376a1de5323cff
> 2 Nodes configured, 2 expected votes
> 4 Resources configured.
> ============
>
> Online: [ balancer1 balancer2 ]
>
>  vip    (ocf::heartbeat:IPaddr2):       Started balancer1
>  vip2   (ocf::heartbeat:IPaddr2):       Started balancer1
>  Clone Set: cl-apache [apache-ref]
>      Started: [ balancer1 ]
>      Stopped: [ apache-ref:1 ]

Both of the VIPs are on balancer1. Essentially this is your problem:

colocation apache-with-failover inf: vip vip2 cl-apache

It is causing both VIPs to be placed on balancer1, and preventing
apache from running on any node where there is no VIP (i.e. balancer2).
You probably want this instead:

colocation vip-with-apache inf: vip cl-apache
colocation vip2-with-apache inf: vip2 cl-apache
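The order constraint can be split the same way. A common pattern (a sketch only; the constraint names are illustrative, and note this reverses your original ordering) is to start the Apache clone first and then bring up each VIP on top of it, matching the direction of the colocations:

```
order vip-after-apache inf: cl-apache vip
order vip2-after-apache inf: cl-apache vip2
```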

Hope that helps.
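Regarding the follow-up question below (running a script when a node takes over vip): one common approach, sketched here under the assumption of a long-running helper at the hypothetical path /usr/local/bin/vip-notify.sh, is to wrap the script in a resource and group it with the VIP so it starts and stops wherever the VIP does:

```
primitive vip-script ocf:heartbeat:anything \
        params binfile="/usr/local/bin/vip-notify.sh"
group grp-vip vip vip-script
```

Note that ocf:heartbeat:anything expects binfile to keep running so it can be monitored; for a one-shot action you would normally write a small custom OCF agent instead.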

>
> First I tried to start Apache on balancer2:
>
> /etc/init.d/apache2 restart
>
> And when I checked its status:
> /etc/init.d/apache2 status
> Apache2 is running (pid 14143).
>
> But for some reason, when I run crm status, the problem persists, so
> I decided to restart corosync; after doing that on both nodes I got:
>
> ============
> Last updated: Sat Nov  3 12:43:35 2012
> Last change: Sat Nov  3 12:43:29 2012 via cibadmin on balancer1
> Stack: openais
> Current DC: balancer1 - partition with quorum
> Version: 1.1.7-ee0730e13d124c3d58f00016c3376a1de5323cff
> 2 Nodes configured, 2 expected votes
> 4 Resources configured.
> ============
>
> Online: [ balancer1 balancer2 ]
>
>  vip    (ocf::heartbeat:IPaddr2):       Started balancer2
>  vip2   (ocf::heartbeat:IPaddr2):       Started balancer2
>
> Failed actions:
>     apache-ref:0_start_0 (node=balancer1, call=10, rc=-2, status=Timed
> Out): unknown exec error
>     apache-ref:0_start_0 (node=balancer2, call=10, rc=-2, status=Timed
> Out): unknown exec error
>
> As I thought I was doing something wrong and was using the wrong
> class (sorry, but I do not know what you call ocf:heartbeat), I
> changed the Apache resource, and my configuration became:
>
> node balancer1
> node balancer2
> primitive apache-ref lsb:apache2
> primitive vip ocf:heartbeat:IPaddr2 \
>         params ip="192.168.52.50" cidr_netmask="255.255.255.0" \
>         op monitor interval="20s"
> primitive vip2 ocf:heartbeat:IPaddr2 \
>         params ip="192.168.52.40" cidr_netmask="255.255.255.0" \
>         op monitor interval="20s"
> clone cl-apache apache-ref \
>         meta clone-max="2" clone-node-max="1" target-role="Started"
> location vip2_pref_1 vip2 100: balancer2
> location vip2_pref_2 vip2 50: balancer1
> location vip_pref_1 vip 100: balancer1
> location vip_pref_2 vip 50: balancer2
> colocation apache-with-failover inf: vip vip2 cl-apache
> order apache-after-failover-ip inf: ( vip vip2 ) cl-apache
> property $id="cib-bootstrap-options" \
>         dc-version="1.1.7-ee0730e13d124c3d58f00016c3376a1de5323cff" \
>         cluster-infrastructure="openais" \
>         expected-quorum-votes="2" \
>         stonith-enabled="false" \
>         no-quorum-policy="ignore"
> rsc_defaults $id="rsc-options" \
>         resource-stickiness="0"
>
> I restarted corosync on both nodes; the above error persisted, and
> the new crm status shows:
>
> root@balancer1:~# crm status
> ============
> Last updated: Sat Nov  3 12:13:29 2012
> Last change: Sat Nov  3 12:03:19 2012 via cibadmin on balancer2
> Stack: openais
> Current DC: balancer1 - partition with quorum
> Version: 1.1.7-ee0730e13d124c3d58f00016c3376a1de5323cff
> 2 Nodes configured, 2 expected votes
> 4 Resources configured.
> ============
>
> Online: [ balancer1 balancer2 ]
>
>  vip    (ocf::heartbeat:IPaddr2):       Started balancer1
>  vip2   (ocf::heartbeat:IPaddr2):       Started balancer1
>  Clone Set: cl-apache [apache-ref]
>      Started: [ balancer1 ]
>      Stopped: [ apache-ref:1 ]
>
> What am I doing wrong?
>
> Note: I also need to know how to make it so that when balancer2
> assumes virtual IP 1 (vip), a script is run.
>
> I apologize for my English ;)
>
> Thanks!
> _______________________________________________
> Linux-HA mailing list
> Linux-HA at lists.linux-ha.org
> http://lists.linux-ha.org/mailman/listinfo/linux-ha
> See also: http://linux-ha.org/ReportingProblems
