<div dir="ltr"><div dir="ltr"><br></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Wed, Mar 17, 2021 at 11:51 AM Jason Long <<a href="mailto:hack3rcon@yahoo.com">hack3rcon@yahoo.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Hello,<br>
I changed "IP" to my Apache web server:<br>
<br>
$ sudo pcs resource update floating_ip ocf:heartbeat:IPaddr2 ip=192.168.1.4 cidr_netmask=24 op monitor interval=5s<br>
<br>
And did:<br>
<br>
$ sudo pcs status<br>
Cluster name: mycluster<br>
Cluster Summary:<br>
* Stack: corosync<br>
* Current DC: node1 (version 2.0.5-10.fc33-ba59be7122) - partition with quorum<br>
* Last updated: Wed Mar 17 21:55:58 2021<br>
* Last change: Wed Mar 17 21:55:02 2021 by root via cibadmin on node1<br>
* 2 nodes configured<br>
* 2 resource instances configured<br>
<br>
Node List:<br>
* Online: [ node1 node2 ]<br>
Full List of Resources:<br>
* floating_ip (ocf::heartbeat:IPaddr2): Started node1<br>
* http_server (ocf::heartbeat:apache): Stopped<br>
<br>
Failed Resource Actions:<br>
* http_server_start_0 on node1 'error' (1): call=10, status='Timed Out', exitreason='', last-rc-change='2021-03-17 21:50:31 +03:30', queued=0ms, exec=40002ms<br>
* http_server_start_0 on node2 'error' (1): call=11, status='Timed Out', exitreason='', last-rc-change='2021-03-17 21:51:11 +03:30', queued=0ms, exec=40002ms<br>
<br>
Daemon Status:<br>
corosync: active/enabled<br>
pacemaker: active/enabled<br>
pcsd: active/enabled<br>
<br>
<br>
Why "http_server (ocf::heartbeat:apache): Stopped" ?<br></blockquote><div><br></div><div>Like Ken said, the apache agent works by using wget to request the status URL. If this fails, then it will request the HTTP header of the index page.</div><div><br></div><div>You can run `pcs resource disable http_server` and then `pcs resource debug-start http_server --full` to get more detail about where the start operation is hanging. This likely needs to be done on the node where the floating IP resource is running, but you're welcome to try it on both nodes.<br></div><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<br>
I think you misunderstand my goal; please examine "<a href="https://paste.ubuntu.com/p/Nx2ptqZjFg/" rel="noreferrer" target="_blank">https://paste.ubuntu.com/p/Nx2ptqZjFg/</a>". I have just one Apache server and two reverse proxy servers, and when one reverse proxy server stops, the other one should take over.<br>
In this scenario, are resource groups mandatory?<br>
<br></blockquote><div> </div><div>Does the 192.168.1.4 IP address need to be on the same machine as the apache resource, or on a different machine? <br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<br>
On Wednesday, March 17, 2021, 01:50:35 AM GMT+3:30, Reid Wahl <<a href="mailto:nwahl@redhat.com" target="_blank">nwahl@redhat.com</a>> wrote: <br>
<br>
On Tue, Mar 16, 2021 at 3:13 PM Jason Long <<a href="mailto:hack3rcon@yahoo.com" target="_blank">hack3rcon@yahoo.com</a>> wrote:<br>
> I'm using CentOS.<br>
<br>
Ah, okay. I think I had made an assumption based on the pastebin URLs.<br>
<br>
> So, I must use my Apache web server's IP instead of node2's?<br>
<br>
Yes, it's never a good idea to configure a node's constant IP address within an IPaddr2 resource. That will almost inevitably result in Pacemaker taking down the IP address at some point.<br>
<br>
For an IPaddr2 resource, you configure the IP address that's free to move around the cluster. In this case, that's the Apache web server IP. Node2's IP address isn't free to move to node1.<br>
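<br>
As a sketch only (assuming 192.168.1.4 is the free-floating web server address and the resource names used earlier in this thread), that would look something like:<br>
<br>
$ sudo pcs resource update floating_ip ocf:heartbeat:IPaddr2 ip=192.168.1.4 cidr_netmask=24 op monitor interval=5s<br>
$ sudo pcs resource group add apache_group floating_ip http_server<br>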
<br>
> About the resource group, do you mean "<a href="https://clusterlabs.org/pacemaker/doc/en-US/Pacemaker/2.0/html-single/Pacemaker_Explained/index.html#group-resources" rel="noreferrer" target="_blank">https://clusterlabs.org/pacemaker/doc/en-US/Pacemaker/2.0/html-single/Pacemaker_Explained/index.html#group-resources</a>"?<br>
<br>
Yes, that's correct. And if you have access to the Red Hat docs, you can also refer to the following:<br>
- Chapter 5. Configuring an active/passive Apache HTTP server in a Red Hat High Availability cluster (<a href="https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/configuring_and_managing_high_availability_clusters/assembly_configuring-active-passive-http-server-in-a-cluster-configuring-and-managing-high-availability-clusters" rel="noreferrer" target="_blank">https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/configuring_and_managing_high_availability_clusters/assembly_configuring-active-passive-http-server-in-a-cluster-configuring-and-managing-high-availability-clusters</a>)<br>
- Configuring resource groups (<a href="https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/configuring_and_managing_high_availability_clusters/assembly_configuring-cluster-resources-configuring-and-managing-high-availability-clusters#assembly_resource-groups-configuring-cluster-resources" rel="noreferrer" target="_blank">https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/configuring_and_managing_high_availability_clusters/assembly_configuring-cluster-resources-configuring-and-managing-high-availability-clusters#assembly_resource-groups-configuring-cluster-resources</a>)<br>
<br>
> <br>
> On Wednesday, March 17, 2021, 01:10:33 AM GMT+3:30, Reid Wahl <<a href="mailto:nwahl@redhat.com" target="_blank">nwahl@redhat.com</a>> wrote: <br>
> <br>
> On Tue, Mar 16, 2021 at 1:47 PM Jason Long <<a href="mailto:hack3rcon@yahoo.com" target="_blank">hack3rcon@yahoo.com</a>> wrote:<br>
>> Thanks.<br>
>> Excuse me, did you read how I set my cluster up? Please look at: <a href="https://paste.ubuntu.com/p/Nx2ptqZjFg/" rel="noreferrer" target="_blank">https://paste.ubuntu.com/p/Nx2ptqZjFg/</a><br>
>> Which part of my configuration is wrong?<br>
> <br>
> 1. You configured the IPaddr2 resource to use node2's IP address (192.168.1.10) instead of the floating IP address (192.168.1.4).<br>
> 2. You didn't configure the resources into a resource group. As a result, the floating IP may end up on a different node compared to the web server.<br>
> <br>
> Both of these are explained in more detail in previous emails :)<br>
> <br>
> I also thought that Ubuntu used /etc/apache2 instead of /etc/httpd, but maybe not.<br>
> <br>
>> Both the main and secondary servers are Apache reverse proxy servers. I want the secondary server to handle the requests when the main server fails.<br>
>> How can I achieve this goal?<br>
> <br>
> I don't know anything about reverse proxies, sorry. I can only really comment on general principles here, like "an IPaddr2 resource shouldn't manage an IP address that's expected to stay on one particular node" and "if two resources need to run on the same node and start in a particular order, they need to be grouped."<br>
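> <br>
> (As a rough illustration of that last principle: a group implies both colocation and ordering, so the constraint-based equivalent would be approximately the following -- a sketch, not a recommendation over a simple group:)<br>
> <br>
> $ sudo pcs constraint colocation add http_server with floating_ip<br>
> $ sudo pcs constraint order floating_ip then http_server<br>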
> <br>
>> <br>
>> On Tuesday, March 16, 2021, 11:57:13 PM GMT+3:30, Reid Wahl <<a href="mailto:nwahl@redhat.com" target="_blank">nwahl@redhat.com</a>> wrote: <br>
>> <br>
>> On Tue, Mar 16, 2021 at 1:03 PM Jason Long <<a href="mailto:hack3rcon@yahoo.com" target="_blank">hack3rcon@yahoo.com</a>> wrote:<br>
>>> Thanks.<br>
>>> I changed it to the IP address of node2 and I can see my Apache Web Server.<br>
>> <br>
>> Like I said, you don't want to do that. You should change it to an IP address that you want the cluster to manage. If you set it to node2's IP address, Pacemaker will try to remove node2's IP address and assign it to node1 if the resource fails over to node1. If node2 is using that address for anything else (e.g., corosync communication), then that would be a big problem.<br>
>> <br>
>> The managed floating IP address should be an IP address dedicated to the web server, that can move between cluster nodes as needed.<br>
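>> <br>
>> For instance (just a sanity-check sketch), you can see which node currently holds the floating address with something like:<br>
>> <br>
>> $ ip -4 addr show | grep 192.168.1.4<br>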
>> <br>
>>> <br>
>>> $ sudo pcs resource update floating_ip ocf:heartbeat:IPaddr2 ip=192.168.1.10 cidr_netmask=24 op monitor interval=5s<br>
>>> <br>
>>> Now I want to test my cluster by stopping node1. On node1 I ran:<br>
>>> <br>
>>> # pcs cluster stop http_server<br>
>>> Error: nodes 'http_server' do not appear to exist in configuration<br>
>>> <br>
>>> Why?<br>
>> <br>
>> The `pcs cluster stop` command stops pacemaker and corosync services on a particular node (the local node if you don't specify one). You've specified `http_server`, so the command is trying to connect to a node called "http_server" and stop services there.<br>
>> <br>
>> If you want to stop node1, then run `pcs cluster stop node1`.<br>
>> <br>
>> If you want to prevent the http_server resource from running anywhere, then run `pcs resource disable http_server`.<br>
>> <br>
>> If you want to prevent the http_server resource from running on node2, then run `pcs resource ban http_server node2`. If you want to remove that constraint later and allow it to run on node2 again, run `pcs resource clear http_server`.<br>
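>> <br>
>> Putting that together, a rough failover-test sketch (using the node and resource names from this thread; adjust as needed) might be:<br>
>> <br>
>> # stop cluster services on node1 only, then watch the resources move<br>
>> $ sudo pcs cluster stop node1<br>
>> $ sudo pcs status<br>
>> $ sudo pcs cluster start node1<br>
>> <br>
>> # or temporarily keep the resource off node2, then allow it back<br>
>> $ sudo pcs resource ban http_server node2<br>
>> $ sudo pcs resource clear http_server<br>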
>> <br>
>>> <br>
>>> On Tuesday, March 16, 2021, 11:05:48 PM GMT+3:30, Reid Wahl <<a href="mailto:nwahl@redhat.com" target="_blank">nwahl@redhat.com</a>> wrote: <br>
>>> <br>
>>> On Tue, Mar 16, 2021 at 12:11 PM Jason Long <<a href="mailto:hack3rcon@yahoo.com" target="_blank">hack3rcon@yahoo.com</a>> wrote:<br>
>>>> Thank you so much.<br>
>>>> I forgot to ask a question. In the command below, what should the ip="IP" value be? Is it the IP address of my Apache server or of node2?<br>
>>>> <br>
>>>> $ sudo pcs resource create floating_ip ocf:heartbeat:IPaddr2 ip="IP" cidr_netmask=24 op monitor interval=5s<br>
>>> <br>
>>> It's the IP address that you want the cluster to manage. That sounds like it would be your web server IP address. You definitely don't want to set the ip option to some IP address that resides statically on one of the nodes. An IP managed by an IPaddr2 resource can be moved around the cluster.<br>
>>> <br>
>>> If that's your web server IP address, you'll also want to put it in a resource group with your apache resource. Otherwise, the floating IP may end up on a different node from your web server, which renders the IP address useless.<br>
>>> <br>
>>> For resources that already exist, you can use the `pcs resource group add` command. For example: `pcs resource group add apache_group floating_ip http_server`.<br>
>>> <br>
>>> For resources that you're newly creating, you can use the `--group` option of `pcs resource create`. For example, `pcs resource create new_IP IPaddr2 <options> --group apache_group`. That adds the new resource to the end of the group.<br>
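>>> <br>
>>> As a purely illustrative sketch of that second form (the new_IP name and the 192.168.1.5 address are hypothetical):<br>
>>> <br>
>>> $ sudo pcs resource create new_IP ocf:heartbeat:IPaddr2 ip=192.168.1.5 cidr_netmask=24 op monitor interval=5s --group apache_group<br>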
>>> <br>
>>> The pcs help outputs have more details on these options.<br>
>>> <br>
>>> If you're new to resource groups, you can check them out here: <a href="https://clusterlabs.org/pacemaker/doc/en-US/Pacemaker/2.0/html-single/Pacemaker_Explained/index.html#group-resources" rel="noreferrer" target="_blank">https://clusterlabs.org/pacemaker/doc/en-US/Pacemaker/2.0/html-single/Pacemaker_Explained/index.html#group-resources</a><br>
>>> <br>
>>>> <br>
>>>> Logs are:<br>
>>>> <a href="https://paste.ubuntu.com/p/86YHRX6rdC/" rel="noreferrer" target="_blank">https://paste.ubuntu.com/p/86YHRX6rdC/</a><br>
>>>> <a href="https://paste.ubuntu.com/p/HHVzNvhRM2/" rel="noreferrer" target="_blank">https://paste.ubuntu.com/p/HHVzNvhRM2/</a><br>
>>>> <a href="https://paste.ubuntu.com/p/kNxynhfyc2/" rel="noreferrer" target="_blank">https://paste.ubuntu.com/p/kNxynhfyc2/</a><br>
>>>> <br>
>>>> I don't have a "status.conf" file:<br>
>>>> <br>
>>>> # cat /etc/httpd/conf.d/status.conf<br>
>>>> cat: /etc/httpd/conf.d/status.conf: No such file or directory<br>
>>>> <br>
>>> <br>
>>> If you're using Ubuntu, I believe it's in a different location -- somewhere in /etc/apache2 if memory serves.<br>
>>> <br>
>>>> <br>
>>>> On Tuesday, March 16, 2021, 07:20:32 PM GMT+3:30, Klaus Wenninger <<a href="mailto:kwenning@redhat.com" target="_blank">kwenning@redhat.com</a>> wrote: <br>
>>>> <br>
>>>> On 3/16/21 3:18 PM, Ken Gaillot wrote:<br>
>>>>> On Tue, 2021-03-16 at 09:42 +0000, Jason Long wrote:<br>
>>>>>> Hello,<br>
>>>>>> I want to set up clustering for my Apache Web Server. I have three<br>
>>>>>> servers:<br>
>>>>>><br>
>>>>>> 1- Main server that acts as a Reverse Proxy<br>
>>>>>> 2- A secondary server that works as a Reverse Proxy when my main<br>
>>>>>> server stops<br>
>>>>>> 3- Apache Web Server<br>
>>>>>><br>
>>>>>> The client ---> Reverse Proxy Server ---> Apache Web Server<br>
>>>>>><br>
>>>>>> The IP addresses are:<br>
>>>>>> Main Server (node1) : 192.168.1.3<br>
>>>>>> Secondary Server (node2) : 192.168.1.10<br>
>>>>>> Apache Web Server (node3) : 192.168.1.4<br>
>>>>>><br>
>>>>>> On the main and secondary servers, I installed and configured Apache<br>
>>>>>> as a Reverse Proxy Server. I created a Virtual Host, and my reverse<br>
>>>>>> proxy configuration is:<br>
>>>>>><br>
>>>>>> <VirtualHost *:80><br>
>>>>>> ProxyPreserveHost On<br>
>>>>>> ProxyPass / <a href="http://192.168.1.4/" rel="noreferrer" target="_blank">http://192.168.1.4/</a><br>
>>>>>> ProxyPassReverse / <a href="http://192.168.1.4/" rel="noreferrer" target="_blank">http://192.168.1.4/</a><br>
>>>>>> </VirtualHost><br>
>>>>>><br>
>>>>>> As you can see, it forwards all requests to the Apache Web Server.<br>
>>>>>><br>
>>>>>> I installed and configured Corosync and Pacemaker as below:<br>
>>>>>><br>
>>>>>> On the main and secondary servers, I opened the "/etc/hosts" file and<br>
>>>>>> added my servers' IP addresses and host names:<br>
>>>>>><br>
>>>>>> 192.168.1.3 node1<br>
>>>>>> 192.168.1.10 node2<br>
>>>>>><br>
>>>>>> Then I installed the Pacemaker, Corosync, and pcs packages on both<br>
>>>>>> servers and started the pcsd service:<br>
>>>>>><br>
>>>>>> $ sudo yum install corosync pacemaker pcs<br>
>>>>>> $ sudo systemctl enable pcsd<br>
>>>>>> $ sudo systemctl start pcsd<br>
>>>>>> $ sudo systemctl status pcsd<br>
>>>>>><br>
>>>>>> Then I configured the firewall on both servers as below:<br>
>>>>>><br>
>>>>>> $ sudo firewall-cmd --permanent --add-service=http<br>
>>>>>> $ sudo firewall-cmd --permanent --add-service=high-<br>
>>>>>> availability<br>
>>>>>> $ sudo firewall-cmd --reload<br>
>>>>>><br>
>>>>>> After that, on both servers, I created a password for the "hacluster"<br>
>>>>>> user; then, on the main server:<br>
>>>>>><br>
>>>>>> $ sudo pcs host auth node1 node2 -u hacluster -p password<br>
>>>>>> node1: Authorized<br>
>>>>>> node2: Authorized<br>
>>>>>><br>
>>>>>> Then:<br>
>>>>>> $ sudo pcs cluster setup mycluster node1 node2 --start --enable<br>
>>>>>> $ sudo pcs cluster enable --all<br>
>>>>>> node1: Cluster Enabled<br>
>>>>>> node2: Cluster Enabled<br>
>>>>>><br>
>>>>>> After that:<br>
>>>>>> $ sudo pcs cluster start --all<br>
>>>>>> node1: Starting Cluster...<br>
>>>>>> node2: Starting Cluster...<br>
>>>>>><br>
>>>>>> I checked my cluster with the command below, and the nodes are up and running:<br>
>>>>>> $ sudo pcs status<br>
>>>>>> ...<br>
>>>>>> Node List:<br>
>>>>>> * Online: [ node1 node2 ]<br>
>>>>>> ....<br>
>>>>>><br>
>>>>>> And finally, I tried to add a resource:<br>
>>>>>> $ sudo pcs resource create floating_ip ocf:heartbeat:IPaddr2<br>
>>>>>> ip=192.168.1.4 cidr_netmask=24 op monitor interval=5s<br>
>>>> Shouldn't the virtual-IP moved between node1 & node2 be<br>
>>>> different from the IP used for the web-server on node3?<br>
>>>> And having just one instance of the reverse-proxy, it<br>
>>>> should probably be colocated with the virtual-IP - right?<br>
>>>> <br>
>>>> Klaus<br>
>>>> <br>
>>>>>> $ sudo pcs resource create http_server ocf:heartbeat:apache<br>
>>>>>> configfile="/etc/httpd/conf.d/VirtualHost.conf" op monitor<br>
>>>>>> timeout="5s" interval="5s"<br>
>>>>>><br>
>>>>>> On both servers (main and secondary), the pcsd service is enabled, but<br>
>>>>>> when I try to view my Apache Web Server, it shows me the error below:<br>
>>>>>><br>
>>>>>> Proxy Error<br>
>>>>>> The proxy server received an invalid response from an upstream<br>
>>>>>> server.<br>
>>>>>> The proxy server could not handle the request<br>
>>>>>> Reason: Error reading from remote server<br>
>>>>>><br>
>>>>>> Why? Which part of my configuration is wrong?<br>
>>>>>> The output of "sudo pcs status" command is:<br>
>>>>>> <a href="https://paste.ubuntu.com/p/V9KvHKwKtC/" rel="noreferrer" target="_blank">https://paste.ubuntu.com/p/V9KvHKwKtC/</a><br>
>>>>>><br>
>>>>>> Thank you.<br>
>>>>> The thing to investigate is:<br>
>>>>><br>
>>>>> Failed Resource Actions:<br>
>>>>> * http_server_start_0 on node2 'error' (1): call=12, status='Timed Out', exitreason='', last-rc-change='2021-03-16 12:28:14 +03:30', queued=0ms, exec=40004ms<br>
>>>>> * http_server_start_0 on node1 'error' (1): call=14, status='Timed Out', exitreason='', last-rc-change='2021-03-16 12:28:52 +03:30', queued=0ms, exec=40002ms<br>
>>>>><br>
>>>>> The web server start timed out. Check the system, pacemaker and apache<br>
>>>>> logs around those times for any hints.<br>
>>>>><br>
>>>>> Did you enable and test the status URL? The ocf:heartbeat:apache agent<br>
>>>>> checks the status as part of its monitor (which is also done for<br>
>>>>> start). It would be something like:<br>
>>>>><br>
>>>>> cat <<-END >/etc/httpd/conf.d/status.conf<br>
>>>>> <Location /server-status><br>
>>>>> SetHandler server-status<br>
>>>>> Require local<br>
>>>>> </Location><br>
>>>>> END<br>
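>>>>> <br>
>>>>> (As a quick, purely illustrative sanity check, you could then verify the<br>
>>>>> status URL locally on each node with something like<br>
>>>>> `wget -O - http://localhost/server-status`, which is roughly what the<br>
>>>>> agent's monitor does.)<br>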
>>>>><br>
>>>> <br>
>>> <br>
>> <br>
> <br>
> <br>
> <br>
<br>
<br>
-- <br>
Regards,<br>
<br>
Reid Wahl, RHCA<br>
Senior Software Maintenance Engineer, Red Hat<br>
CEE - Platform Support Delivery - ClusterHA<br>
<br>
_______________________________________________<br>
Manage your subscription:<br>
<a href="https://lists.clusterlabs.org/mailman/listinfo/users" rel="noreferrer" target="_blank">https://lists.clusterlabs.org/mailman/listinfo/users</a><br>
<br>
ClusterLabs home: <a href="https://www.clusterlabs.org/" rel="noreferrer" target="_blank">https://www.clusterlabs.org/</a><br>
</blockquote></div><br clear="all"><br>-- <br><div dir="ltr" class="gmail_signature"><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div><div>Regards,<br><br></div>Reid Wahl, RHCA<br></div><div>Senior Software Maintenance Engineer, Red Hat<br></div>CEE - Platform Support Delivery - ClusterHA</div></div></div></div></div></div></div></div></div></div></div></div></div></div></div>