[ClusterLabs] The proxy server received an invalid response from an upstream server.

Reid Wahl nwahl at redhat.com
Tue Mar 16 16:26:46 EDT 2021


On Tue, Mar 16, 2021 at 1:03 PM Jason Long <hack3rcon at yahoo.com> wrote:

> Thanks.
> I changed it to the IP address of node2 and I can see my Apache Web Server.
>

Like I said, you don't want to do that. You should change it to an IP
address that you want the cluster to manage. If you set it to node2's IP
address, Pacemaker will try to remove node2's IP address and assign it to
node1 if the resource fails over to node1. If node2 is using that address
for anything else (e.g., corosync communication), then that would be a big
problem.

The managed floating IP address should be an address dedicated to the
web server, one that can move between cluster nodes as needed.
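For example, a minimal sketch of what that could look like (the address
192.168.1.5 is a hypothetical unused IP in your subnet, and apache_group is
an assumed group name):

```shell
# Point the resource at a dedicated, otherwise-unused address.
# 192.168.1.5 is hypothetical; pick any free IP in 192.168.1.0/24.
sudo pcs resource update floating_ip ip=192.168.1.5

# Keep the IP and the web server on the same node, so the address
# always reaches the node that is actually running Apache.
sudo pcs resource group add apache_group floating_ip http_server
```

Clients would then connect to 192.168.1.5 rather than to either node's
static address.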


> $ sudo pcs resource update floating_ip ocf:heartbeat:IPaddr2
> ip=192.168.1.10 cidr_netmask=24 op monitor interval=5s
>
> Now, I want to test my cluster and stop node1. On node1 I did:
>
> # pcs cluster stop http_server
> Error: nodes 'http_server' do not appear to exist in configuration
>
> Why?
>

The `pcs cluster stop` command stops pacemaker and corosync services on a
particular node (the local node if you don't specify one). You've specified
`http_server`, so the command is trying to connect to a node called
"http_server" and stop services there.

If you want to stop node1, then run `pcs cluster stop node1`.

If you want to prevent the http_server resource from running anywhere, then
run `pcs resource disable http_server`.

If you want to prevent the http_server resource from running on node2, then
run `pcs resource ban http_server node2`. If you want to remove that
constraint later and allow it to run on node2 again, run `pcs resource
clear http_server`.
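A sketch of that workflow, using the node and resource names from this
thread:

```shell
# Keep http_server off node2 (this adds a -INFINITY location constraint).
sudo pcs resource ban http_server node2

# Show the location constraint that the ban created.
sudo pcs constraint location

# Later, remove the ban so http_server may run on node2 again.
sudo pcs resource clear http_server
```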


>
> On Tuesday, March 16, 2021, 11:05:48 PM GMT+3:30, Reid Wahl <
> nwahl at redhat.com> wrote:
>
> On Tue, Mar 16, 2021 at 12:11 PM Jason Long <hack3rcon at yahoo.com> wrote:
> > Thank you so much.
> > I forgot to ask a question. In below command, what should be the ip="IP"
> value? Is it the IP address of my Apache or node2?
> >
> > $ sudo pcs resource create floating_ip ocf:heartbeat:IPaddr2 ip="IP"
> cidr_netmask=24 op monitor interval=5s
>
> It's the IP address that you want the cluster to manage. That sounds like
> it would be your web server IP address. You definitely don't want to set
> the ip option to some IP address that resides statically on one of the
> nodes. An IP managed by an IPaddr2 resource can be moved around the cluster.
>
> If that's your web server IP address, you'll also want to put it in a
> resource group with your apache resource. Otherwise, the floating IP may
> end up on a different node from your web server, which renders the IP
> address useless.
>
> For resources that already exist, you can use the `pcs resource group add`
> command. For example: `pcs resource group add apache_group floating_ip
> http_server`.
>
> For resources that you're newly creating, you can use the `--group` option
> of `pcs resource create`. For example, `pcs resource create new_IP IPaddr2
> <options> --group apache_group`. That adds the new resource to the end of
> the group.
>
> The pcs help outputs have more details on these options.
>
> If you're new to resource groups, you can check them out here:
> https://clusterlabs.org/pacemaker/doc/en-US/Pacemaker/2.0/html-single/Pacemaker_Explained/index.html#group-resources
>
> >
> > Logs are:
> > https://paste.ubuntu.com/p/86YHRX6rdC/
> > https://paste.ubuntu.com/p/HHVzNvhRM2/
> > https://paste.ubuntu.com/p/kNxynhfyc2/
> >
> > I don't have a "status.conf" file:
> >
> > # cat /etc/httpd/conf.d/status.conf
> > cat: /etc/httpd/conf.d/status.conf: No such file or directory
> >
>
> If you're using Ubuntu, I believe it's in a different location --
> somewhere in /etc/apache2 if memory serves.
>
> >
> > On Tuesday, March 16, 2021, 07:20:32 PM GMT+3:30, Klaus Wenninger <
> kwenning at redhat.com> wrote:
> >
> > On 3/16/21 3:18 PM, Ken Gaillot wrote:
> >> On Tue, 2021-03-16 at 09:42 +0000, Jason Long wrote:
> >>> Hello,
> >>> I want to launch a Clustering for my Apache Web Server. I have three
> >>> servers:
> >>>
> >>> 1- Main server that acts as a Reverse Proxy
> >>> 2- The secondary server that works as a Reverse Proxy when my main
> >>> server is stopped
> >>> 3- Apache Web Server
> >>>
> >>> The client ---> Reverse Proxy Server ---> Apache Web Server
> >>>
> >>> The IP addresses are:
> >>> Main Server (node1)       : 192.168.1.3
> >>> Secondary Server (node2)  : 192.168.1.10
> >>> Apache Web Server (node3) : 192.168.1.4
> >>>
> >>> On the main and secondary servers, I installed and configured Apache
> >>> as a Reverse Proxy Server. I created a Virtual Host and my Reverse
> >>> Configuration is:
> >>>
> >>> <VirtualHost *:80>
> >>>      ProxyPreserveHost On
> >>>      ProxyPass / http://192.168.1.4/
> >>>      ProxyPassReverse / http://192.168.1.4/
> >>> </VirtualHost>
> >>>
> >>> As you see, it forwards all requests to the Apache Web Server.
> >>>
> >>> I installed and configured Corosync and Pacemaker as below:
> >>>
> >>> On the main and secondary servers, I opened "/etc/hosts" files and
> >>> added my servers IP addresses and host names:
> >>>
> >>> 192.168.1.3    node1
> >>> 192.168.1.10  node2
> >>>
> >>> Then installed the Pacemaker, Corosync, and Pcs packages on both
> >>> servers and started the pcsd service:
> >>>
> >>> $ sudo yum install corosync pacemaker pcs
> >>> $ sudo systemctl enable pcsd
> >>> $ sudo systemctl start pcsd
> >>> $ sudo systemctl status pcsd
> >>>
> >>> Then Configured the firewall on both servers as below:
> >>>
> >>> $ sudo firewall-cmd --permanent --add-service=http
> >>> $ sudo firewall-cmd --permanent --add-service=high-
> >>> availability
> >>> $ sudo firewall-cmd --reload
> >>>
> >>> After it, on both servers, I created a password for the "hacluster"
> >>> user, then on the main server:
> >>>
> >>> $ sudo pcs host auth node1 node2 -u hacluster -p password
> >>> node1: Authorized
> >>> node2: Authorized
> >>>
> >>> Then:
> >>> $ sudo pcs cluster setup mycluster node1 node2 --start --enable
> >>> $ sudo pcs cluster enable --all
> >>> node1: Cluster Enabled
> >>> node2: Cluster Enabled
> >>>
> >>> After it:
> >>> $ sudo pcs cluster start --all
> >>> node1: Starting Cluster...
> >>> node2: Starting Cluster...
> >>>
> >>> I checked my clusters with below command and they are up and running:
> >>> $ sudo pcs status
> >>> ...
> >>> Node List:
> >>>    * Online: [ node1 node2 ]
> >>> ....
> >>>
> >>> And finally, I tried to add a resource:
> >>> $ sudo pcs resource create floating_ip ocf:heartbeat:IPaddr2
> >>> ip=192.168.1.4 cidr_netmask=24 op monitor interval=5s
> > Shouldn't the virtual IP moved between node1 & node2 be
> > different from the IP used for the web server on node3?
> > And with just one instance of the reverse proxy, it should
> > probably be colocated with the virtual IP - right?
> >
> > Klaus
> >
> >>> $ sudo pcs resource create http_server ocf:heartbeat:apache
> >>> configfile="/etc/httpd/conf.d/VirtualHost.conf" op monitor
> >>> timeout="5s" interval="5s"
> >>>
> >>> On both servers (Main and Secondary), pcsd service is enabled, but
> >>> when I want to see my Apache Web Server then it show me below error:
> >>>
> >>> Proxy Error
> >>> The proxy server received an invalid response from an upstream
> >>> server.
> >>> The proxy server could not handle the request
> >>> Reason: Error reading from remote server
> >>>
> >>> Why? Which parts of my configuration is wrong?
> >>> The output of "sudo pcs status" command is:
> >>> https://paste.ubuntu.com/p/V9KvHKwKtC/
> >>>
> >>> Thank you.
> >> The thing to investigate is:
> >>
> >> Failed Resource Actions:
> >>    * http_server_start_0 on node2 'error' (1): call=12, status='Timed Out', exitreason='', last-rc-change='2021-03-16 12:28:14 +03:30', queued=0ms, exec=40004ms
> >>    * http_server_start_0 on node1 'error' (1): call=14, status='Timed Out', exitreason='', last-rc-change='2021-03-16 12:28:52 +03:30', queued=0ms, exec=40002ms
> >>
> >> The web server start timed out. Check the system, pacemaker and apache
> >> logs around those times for any hints.
> >>
> >> Did you enable and test the status URL? The ocf:heartbeat:apache agent
> >> checks the status as part of its monitor (which is also done for
> >> start). It would be something like:
> >>
> >> cat <<-END >/etc/httpd/conf.d/status.conf
> >>  <Location /server-status>
> >>      SetHandler server-status
> >>      Require local
> >>  </Location>
> >> END
> >>
> >
> > _______________________________________________
> > Manage your subscription:
> > https://lists.clusterlabs.org/mailman/listinfo/users
> >
> > ClusterLabs home: https://www.clusterlabs.org/
> >
> >
> >
>
>
> --
> Regards,
>
> Reid Wahl, RHCA
> Senior Software Maintenance Engineer, Red Hat
> CEE - Platform Support Delivery - ClusterHA
>
>


-- 
Regards,

Reid Wahl, RHCA
Senior Software Maintenance Engineer, Red Hat
CEE - Platform Support Delivery - ClusterHA

