[ClusterLabs] "waiting for apache /usr/local/httpd/conf/httpd.conf to come up"

Strahil Nikolov hunter86_bg at yahoo.com
Wed Feb 19 16:19:14 EST 2020

Here is the configuration of a test cluster I'm playing with. I hope it will help you with ideas:

[root at node1 ~]# pcs config show
Cluster Name: HACLUSTER3
Corosync Nodes:
 node1.localdomain node2.localdomain node3.localdomain
Pacemaker Nodes:
 node1.localdomain node2.localdomain node3.localdomain

Resources:
 Group: HALVM
  Resource: IP (class=ocf provider=heartbeat type=IPaddr2)
   Attributes: cidr_netmask=24 ip= nic=eth0
   Operations: monitor interval=10s timeout=20s (IP-monitor-interval-10s)
               start interval=0s timeout=20s (IP-start-interval-0s)
               stop interval=0s timeout=20s (IP-stop-interval-0s)
  Resource: lvm (class=ocf provider=heartbeat type=LVM)
   Attributes: exclusive=true volgrpname=HALVM
   Operations: methods interval=0s timeout=5s (lvm-methods-interval-0s)
               monitor interval=10s timeout=30s (lvm-monitor-interval-10s)
               start interval=0s timeout=30s (lvm-start-interval-0s)
               stop interval=0s timeout=30s (lvm-stop-interval-0s)
  Resource: fs (class=ocf provider=heartbeat type=Filesystem)
   Attributes: device=/dev/HALVM/TEST directory=/HALVM fstype=xfs options=noatime run_fsck=no
   Operations: monitor interval=30s OCF_CHECK_LEVEL=20 (fs-monitor-interval-30s)
               notify interval=0s timeout=60s (fs-notify-interval-0s)
               start interval=0s timeout=60s (fs-start-interval-0s)
               stop interval=0s timeout=60s (fs-stop-interval-0s)
  Resource: apache (class=ocf provider=heartbeat type=apache)
   Attributes: configfile=/HALVM/httpd/conf/httpd.conf statusurl=
   Operations: monitor interval=10s timeout=20s (apache-monitor-interval-10s)
               start interval=0s timeout=40s (apache-start-interval-0s)
               stop interval=0s timeout=60s (apache-stop-interval-0s)

Stonith Devices:
 Resource: mpath (class=stonith type=fence_mpath)
  Attributes: devices=/dev/mapper/36001405cb123d0000000000000000000 pcmk_host_argument=key pcmk_host_check=static-list pcmk_host_list=node1.localdomain,node2.localdomain,node3.localdomain pcmk_host_map=node1.localdomain:1;node2.localdomain:2;node3.localdomain:3 pcmk_monitor_action=metadata pcmk_reboot_action=off
  Meta Attrs: provides=unfencing
  Operations: monitor interval=60s (mpath-monitor-interval-60s)
 Resource: rhevm (class=stonith type=fence_rhevm)
  Attributes: ipaddr=engine.localdomain login=user at internal passwd=pass pcmk_host_map=node1.localdomain:node1;node2.localdomain:node2;node3.localdomain:node3 power_wait=5 ssl=1 ssl_secure=1
  Operations: monitor interval=60s (rhevm-monitor-interval-60s)
Fencing Levels:
  Target: node1.localdomain
    Level 1 - mpath
    Level 2 - rhevm
  Target: node2.localdomain
    Level 1 - mpath
    Level 2 - rhevm
  Target: node3.localdomain
    Level 1 - mpath
    Level 2 - rhevm

Location Constraints:
Ordering Constraints:
Colocation Constraints:
Ticket Constraints:

Alerts:
 No alerts defined

Resources Defaults:
 No defaults set
Operations Defaults:
 No defaults set

Cluster Properties:
 cluster-infrastructure: corosync
 cluster-name: HACLUSTER3
 dc-version: 1.1.20-5.el7_7.2-3c4c782f70
 default-resource-stickiness: 1
 have-watchdog: false
 last-lrm-refresh: 1582144148

Quorum:
  Options:
    wait_for_all: 0
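For reference, a resource group like HALVM above could be assembled with pcs commands along these lines (a sketch only - the IP address is a placeholder, not a value from my config, and timeouts are left at their agent defaults):

```shell
# Sketch: build the HALVM group step by step (placeholder IP, adjust paths).
pcs resource create IP ocf:heartbeat:IPaddr2 \
    ip=192.168.1.100 cidr_netmask=24 nic=eth0 --group HALVM
pcs resource create lvm ocf:heartbeat:LVM \
    volgrpname=HALVM exclusive=true --group HALVM
pcs resource create fs ocf:heartbeat:Filesystem \
    device=/dev/HALVM/TEST directory=/HALVM fstype=xfs \
    options=noatime run_fsck=no --group HALVM
pcs resource create apache ocf:heartbeat:apache \
    configfile=/HALVM/httpd/conf/httpd.conf --group HALVM
```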

Best Regards,
Strahil Nikolov

В сряда, 19 февруари 2020 г., 23:03:02 ч. Гринуич+2, Strahil Nikolov <hunter86_bg at yahoo.com> написа: 

On February 19, 2020 9:48:07 PM GMT+02:00, Paul Alberts <Paul.Alberts at ibm.com> wrote:
>you are correct - that was my bad typing.
>The status url is using the virtual_ip and is accessible from either
>node of the cluster but from nowhere else.
>After this:
>pcs resource create ContentServer ocf:heartbeat:apache
>configfile=/usr/local/httpd/conf/httpd.conf port=1090
>statusurl="http://$VIRTUAL_IP:1090/server-status" op monitor
>interval=1min --group content_svc
>I get this:
>pcs resource
> Resource Group: content_svc
>    prodplmvg  (ocf::heartbeat:LVM):  Started ha1.company.com
>    fs_plmlv  (ocf::heartbeat:Filesystem):    Started ha1.company.com
>    fs_sapcslv (ocf::heartbeat:Filesystem):    Started ha1.company.com
>content_vip        (ocf::heartbeat:IPaddr2):      Started
>ContentServer      (ocf::heartbeat:apache):        Starting
>until the ContentServer resource shows stopped.
>pcs resource debug-start ContentServer --full
>  apache not running
>  waiting for apache /usr/local/httpd/conf/httpd.conf to come up
>It seems something is getting miscommunicated to
>/usr/lib/ocf/resource.d/heartbeat/apache but I haven't yet been able to
>determine what exactly.
>thank you for your time and response, 
>Paul.Alberts at ibm.com
>-----Strahil Nikolov <hunter86_bg at yahoo.com> wrote: -----
>To: Cluster Labs - All topics related to open-source clustering
>welcomed <users at clusterlabs.org>, Paul Alberts <Paul.Alberts at ibm.com>
>From: Strahil Nikolov <hunter86_bg at yahoo.com>
>Date: 02/19/2020 12:17
>Subject: [EXTERNAL] Re: [ClusterLabs] "apache httpd program not found"
>"environment is invalid, resource considered stopped"
>On February 19, 2020 6:27:54 PM GMT+02:00, Paul Alberts
><Paul.Alberts at ibm.com> wrote:
>>Manage your subscription:
>>ClusterLabs home: https://www.clusterlabs.org/ 
>I hope that the URL is wrong due to copy/paste.
>Otherwise, check the protocol. As the status URL should be available
>only from localhost, you can use 'http' instead.
>Best Regards,
>Strahil Nikolov

In the RA it is clearly stated:

If you set this, make sure that it succeeds *only* from localhost. Otherwise, it may happen that the cluster complains about the resource being active on multiple nodes.
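One way to satisfy that requirement (a sketch assuming httpd 2.4 - the Location path must match whatever you pass as statusurl) is to restrict the status handler to localhost in httpd.conf:

```apache
# Sketch: allow /server-status only from the local machine (httpd 2.4 syntax).
<Location /server-status>
    SetHandler server-status
    Require local
</Location>
```

With this in place, statusurl can point at http://localhost:1090/server-status and the monitor will only ever succeed on the node actually running apache.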

It seems that you use a local apache conf - this can cause trouble when applying changes in the future. I would recommend creating 2 directories on shared storage:
A) for DocumentRoot
B) for the configuration
Set the proper SELinux context and update the resource.
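Setting the SELinux context on those shared directories could look like this (a sketch - the /HALVM paths are assumptions based on my test cluster above, adjust to your layout):

```shell
# Label the shared DocumentRoot and config directories for httpd,
# then apply the labels recursively (paths are assumptions).
semanage fcontext -a -t httpd_sys_content_t "/HALVM/httpd/www(/.*)?"
semanage fcontext -a -t httpd_config_t "/HALVM/httpd/conf(/.*)?"
restorecon -Rv /HALVM/httpd
```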

Also verify that the binary is available on all nodes. I think you mentioned compiling the binary from source.
Any reason behind that?

Best Regards,
Strahil Nikolov
