[ClusterLabs] pcs status shows all nodes online but pcs cluster status shows all nodes offline
Reid Wahl
nwahl at redhat.com
Wed Dec 2 02:27:47 EST 2020
You've added the high-availability service to the public zone. Can you
verify that the interface you're using for pcsd is bound to the public
zone?
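For example, something like the following should show whether the interface is actually in the public zone and whether the service is open there (the interface name eth0 is just a placeholder; substitute yours):

```shell
# List zones that currently have interfaces or sources assigned
firewall-cmd --get-active-zones

# Show which zone a given interface is bound to (eth0 is a placeholder)
firewall-cmd --get-zone-of-interface=eth0

# Confirm high-availability (which includes pcsd's TCP port 2224)
# is listed under "services" for that zone
firewall-cmd --zone=public --list-all
```

If the interface comes back in a different zone (or no zone), the high-availability service you added to public isn't covering it.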
Check whether the https_proxy environment variable is set. You may
need to unset it or to add it to /etc/sysconfig/pcsd.
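A quick sketch of that check; note the /etc/sysconfig/pcsd path is the Red Hat location, and on Debian the pcsd environment file is typically /etc/default/pcsd instead (an assumption; adjust for your distro):

```shell
# See whether a proxy is set in the current environment
env | grep -i proxy

# Temporarily clear it for this shell before retrying pcs cluster auth
unset https_proxy HTTPS_PROXY

# Or override it persistently in pcsd's environment file
# (/etc/default/pcsd on Debian is an assumption), then restart pcsd
echo 'HTTPS_PROXY=' >> /etc/default/pcsd
systemctl restart pcsd
```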
Just an FYI, the `--name` option isn't used with `pcs cluster auth`.
It looks like it gets ignored though.
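So the equivalent command without it would be:

```shell
# pcs 0.9 syntax (as on Debian 9); --name dropped since it is ignored
pcs cluster auth server1 server2 server3 -u hacluster -p 12345678
```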
On Mon, Nov 30, 2020 at 5:47 AM John Karippery <john.karippery at yahoo.com> wrote:
>
> I have a problem while setting up Pacemaker on Debian 9 servers. I have 3 servers, and I installed:
>
> apt install pacemaker corosync pcsd firewalld fence-agents
>
>
> pcs status
> Cluster name: vipcluster
> Stack: corosync
> Current DC: server1 (version 1.1.16-94ff4df) - partition with quorum
> Last updated: Mon Nov 30 14:43:36 2020
> Last change: Mon Nov 30 13:03:09 2020 by root via cibadmin on server1
>
> 3 nodes configured
> 2 resources configured
>
> Online: [ server1 server2 server3 ]
>
> Full list of resources:
>
> MasterVip (ocf::heartbeat:IPaddr2): Started server1
> Apache (ocf::heartbeat:apache): Started server1
>
> Daemon Status:
> corosync: active/enabled
> pacemaker: active/enabled
> pcsd: active/enabled
>
> pcs cluster status
> Cluster Status:
> Stack: corosync
> Current DC: server1 (version 1.1.16-94ff4df) - partition with quorum
> Last updated: Mon Nov 30 14:44:55 2020
> Last change: Mon Nov 30 13:03:09 2020 by root via cibadmin on server1
> 3 nodes configured
> 2 resources configured
>
> PCSD Status:
> server3: Offline
> server1: Offline
> server2: Offline
>
>
>
> An error shows while running pcs cluster auth:
>
> pcs cluster auth server1 server2 server3 --name vipcluster -u hacluster -p 12345678 --debug
> Running: /usr/bin/ruby -I/usr/share/pcsd/ /usr/share/pcsd/pcsd-cli.rb auth
> --Debug Input Start--
> {"username": "hacluster", "local": false, "nodes": ["server1", "server2", "server3"], "password": "12345678", "force": false}
> --Debug Input End--
> Return Value: 0
> --Debug Output Start--
> {
> "status": "ok",
> "data": {
> "auth_responses": {
> "server1": {
> "status": "noresponse"
> },
> "server2": {
> "status": "noresponse"
> },
> "server3": {
> "status": "noresponse"
> }
> },
> "sync_successful": true,
> "sync_nodes_err": [
>
> ],
> "sync_responses": {
> }
> },
> "log": [
> "I, [2020-11-30T14:46:24.758862 #9677] INFO -- : PCSD Debugging enabled\n",
> "D, [2020-11-30T14:46:24.758900 #9677] DEBUG -- : Did not detect RHEL 6\n",
> "I, [2020-11-30T14:46:24.758919 #9677] INFO -- : Running: /usr/sbin/corosync-cmapctl totem.cluster_name\n",
> "I, [2020-11-30T14:46:24.758931 #9677] INFO -- : CIB USER: hacluster, groups: \n",
> "D, [2020-11-30T14:46:24.770175 #9677] DEBUG -- : [\"totem.cluster_name (str) = vipcluster\\n\"]\n",
> "D, [2020-11-30T14:46:24.770373 #9677] DEBUG -- : []\n",
> "D, [2020-11-30T14:46:24.770444 #9677] DEBUG -- : Duration: 0.011215661s\n",
> "I, [2020-11-30T14:46:24.770585 #9677] INFO -- : Return Value: 0\n",
> "I, [2020-11-30T14:46:24.772514 #9677] INFO -- : SRWT Node: server1 Request: check_auth\n",
> "E, [2020-11-30T14:46:24.772628 #9677] ERROR -- : Unable to connect to node server1, no token available\n",
> "I, [2020-11-30T14:46:24.772943 #9677] INFO -- : SRWT Node: server2 Request: check_auth\n",
> "E, [2020-11-30T14:46:24.773032 #9677] ERROR -- : Unable to connect to node server2, no token available\n",
> "I, [2020-11-30T14:46:24.773202 #9677] INFO -- : SRWT Node: server3 Request: check_auth\n",
> "E, [2020-11-30T14:46:24.773278 #9677] ERROR -- : Unable to connect to node server3, no token available\n"
> ]
> }
> --Debug Output End--
>
> Error: Unable to communicate with server1
> Error: Unable to communicate with server2
> Error: Unable to communicate with server3
>
>
> firewall settings
>
>
> # firewall-cmd --permanent --add-service=high-availability
> Warning: ALREADY_ENABLED: high-availability
> success
> ~# firewall-cmd --add-service=high-availability
> Warning: ALREADY_ENABLED: 'high-availability' already in 'public'
> success
>
>
> _______________________________________________
> Manage your subscription:
> https://lists.clusterlabs.org/mailman/listinfo/users
>
> ClusterLabs home: https://www.clusterlabs.org/
--
Regards,
Reid Wahl, RHCA
Senior Software Maintenance Engineer, Red Hat
CEE - Platform Support Delivery - ClusterHA