[ClusterLabs] Antw: The node and resource status is different when the node is powered off
Ulrich Windl
Ulrich.Windl at rz.uni-regensburg.de
Thu Mar 15 03:58:30 EDT 2018
>>> ??? <fanguoteng at highgo.com> wrote on 15.03.2018 at 08:42 in message
<64284d08929a4409b6aa29e9a7dd3ee6 at EX01.highgo.com>:
> Hello,
>
> There are three nodes in our cluster (RHEL 7). When we run "reboot" on one
> node, "pcs status" shows the node status as offline and the resource
> status as Stopped. That is fine. But when we power off the node directly,
> the node status is "UNCLEAN (offline)" and the resource status is
> "Started (UNCLEAN)".
I'm not a Red Hat user, but my guess is that "reboot" shuts down the cluster
stack cleanly first, so it is "expected" that the node is down, while on
power-off the node is "unexpectedly" down.
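The difference can be reproduced deliberately. A sketch, assuming the pcs CLI on RHEL 7 and the node names from the example below:

```shell
# Clean leave: stop pacemaker/corosync first, so the peers learn the
# node left on purpose -> "pcs status" shows OFFLINE / Stopped.
pcs cluster stop node1     # then reboot or power off the OS safely

# Hard power-off: the node vanishes mid-membership; the peers only know
# it stopped answering -> "pcs status" shows UNCLEAN (offline).
# (e.g. pulling the power cord, or: echo o > /proc/sysrq-trigger)
```

"reboot" on a systemd host normally stops the cluster services as part of the shutdown sequence, which is why it behaves like the clean case.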
A resource shown as "Started (UNCLEAN)" means that, according to the last
monitor operation, the resource is started, but since the node is down the
resource's actual state may differ from what is stored in the CIB. The next
monitor operation would normally fix the status, but as a monitor cannot run
on a failed node, fencing the node instead implicitly declares all resources
on that node stopped.
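For that automatic UNCLEAN-to-Stopped transition to happen, fencing (STONITH) must be configured and enabled. A hedged sketch of how one might verify this with pcs, assuming the fence_scsi device visible in the status output below:

```shell
# Check that fencing is globally enabled (the default is true)
pcs property show stonith-enabled

# List the configured fence devices and their settings
pcs stonith show --full

# Watch the reaction: after the power-off, the surviving nodes should
# fence node1 and then report its resources as Stopped
crm_mon -1
```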
>
> Why is the status different when a node is shut down in different ways?
> Is there a way to make the resource status change from "Started node1
> (UNCLEAN)" to "Stopped" when we power off the node?
Yes, that should happen automatically after some delay, once the node has
been fenced. See above.
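If the UNCLEAN state persists (for example, because the fence device cannot act on the powered-off node), the node can be fenced or confirmed manually. These are standard pcs subcommands; the node name is taken from the example:

```shell
# Ask the cluster to fence node1 now
pcs stonith fence node1

# If the node is known to really be powered off, confirm that manually
# so the cluster treats its resources as stopped.
# DANGEROUS if the node is actually still running!
pcs stonith confirm node1
```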
Regards,
Ulrich
>
>
> 1. The normal status:
> scsi-shooter (stonith:fence_scsi): Started node1
> Clone Set: dlm-clone [dlm]
> Started: [ node1 node2 node3 ]
> Clone Set: clvmd-clone [clvmd]
> Started: [ node1 node2 node3 ]
> Clone Set: clusterfs-clone [clusterfs]
> Started: [ node1 node2 node3 ]
> Master/Slave Set: pgsql-ha [pgsqld]
> Masters: [ node3 ]
> Slaves: [ node1 node2 ]
> pgsql-master-ip (ocf::heartbeat:IPaddr2): Started node3
>
> 2. When executing "reboot" on one node:
> Online: [ node2 node3 ]
> OFFLINE: [ node1 ]
>
> Full list of resources:
>
> scsi-shooter (stonith:fence_scsi): Started node2
> Clone Set: dlm-clone [dlm]
> Started: [ node2 node3 ]
> Stopped: [ node1 ]
> Clone Set: clvmd-clone [clvmd]
> Started: [ node2 node3 ]
> Stopped: [ node1 ]
> Clone Set: clusterfs-clone [clusterfs]
> Started: [ node2 node3 ]
> Stopped: [ node1 ]
> Master/Slave Set: pgsql-ha [pgsqld]
> Masters: [ node3 ]
> Slaves: [ node2 ]
> Stopped: [ node1 ]
> pgsql-master-ip (ocf::heartbeat:IPaddr2): Started node3
>
> 3. When powering off the node:
>
> Node node1: UNCLEAN (offline)
> Online: [ node2 node3 ]
>
> Full list of resources:
>
> scsi-shooter (stonith:fence_scsi): Started[ node1 node2 ]
> Clone Set: dlm-clone [dlm]
> dlm (ocf::pacemaker:controld): Started node1 (UNCLEAN)
> Started: [ node2 node3 ]
> Clone Set: clvmd-clone [clvmd]
> clvmd (ocf::heartbeat:clvm): Started node1 (UNCLEAN)
> Started: [ node2 node3 ]
> Clone Set: clusterfs-clone [clusterfs]
> clusterfs (ocf::heartbeat:Filesystem): Started node1 (UNCLEAN)
> Started: [ node2 node3 ]
> Master/Slave Set: pgsql-ha [pgsqld]
> pgsqld (ocf::heartbeat:pgsqlms): Slave node1 (UNCLEAN)
> Masters: [ node3 ]
> Slaves: [ node2 ]
> pgsql-master-ip (ocf::heartbeat:IPaddr2): Started node3
>
>
>
> _______________________________________________
> Users mailing list: Users at clusterlabs.org
> https://lists.clusterlabs.org/mailman/listinfo/users
>
> Project Home: http://www.clusterlabs.org
> Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
> Bugs: http://bugs.clusterlabs.org