[ClusterLabs] Q: nodes staying "UNCLEAN (offline)" -- why?
Ulrich Windl
Ulrich.Windl at rz.uni-regensburg.de
Tue Apr 23 08:58:51 EDT 2019
Hi!
After some tweaking following the update from SLES11 to SLES12, I built a new config file for corosync.
Corosync is happy, and pacemaker says the nodes are online, but the cluster status still reports both nodes as "UNCLEAN (offline)". Why?
Messages I see are:
crmd: info: peer_update_callback: Client h06/peer now has status [online] (DC=<null>, changed=4000000)
crmd: info: do_started: Delaying start, no membership data (0000000000100000)
crmd: info: peer_update_callback: Client h06/peer now has status [online] (DC=<null>, changed=4000000)
crmd: info: init_cs_connection_once: Connection to 'corosync': established
...
crmd: info: peer_update_callback: Client rksaph02/peer now has status [online] (DC=<null>, changed=4000000)
...
attrd: notice: crm_update_peer_state_iter: Node h02 state is now member | nodeid=202 previous=unknown source=crm_update_peer_proc
...
attrd: info: election_count_vote: election-attrd round 2 (owner node ID 202) lost: vote from h02 (Uptime)
# crm_mon -1Arfj
Stack: classic openais (with plugin)
Current DC: NONE
Last updated: Tue Apr 23 14:29:36 2019
Last change: Tue Apr 23 13:44:52 2019 by hacluster via crmd on h06
2 nodes configured (2 expected votes)
13 resources configured (1 DISABLED)
Node h02: UNCLEAN (offline)
Node h06: UNCLEAN (offline)
Full list of resources:
...
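To double-check that corosync itself sees both members, I would look at the runtime membership (assuming the usual corosync 2 tools are available on SLES12; this is a sketch, not output from my cluster):

```shell
# Show quorum state and member list as corosync reports them
corosync-quorumtool -s

# List the nodes as pacemaker's membership layer sees them
crm_node -l
```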
The only clue I get from "crm configure verify" is:
WARNING: cib-bootstrap-options: unknown attribute 'expected-quorum-votes'
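I suppose the stale attribute could simply be deleted (assuming crm_attribute behaves as usual here), though that warning alone should not leave the nodes unclean:

```shell
# Delete the obsolete cluster property from the crm_config section;
# newer pacemaker derives expected votes from corosync instead.
crm_attribute --type crm_config --name expected-quorum-votes --delete
```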
The config section has these values set:
<crm_config>
  <cluster_property_set id="cib-bootstrap-options">
    <nvpair id="cib-bootstrap-options-dc-version" name="dc-version" value="1.1.12-f47ea56"/>
    <nvpair id="cib-bootstrap-options-cluster-infrastructure" name="cluster-infrastructure" value="classic openais (with plugin)"/>
    <nvpair id="cib-bootstrap-options-expected-quorum-votes" name="expected-quorum-votes" value="2"/>
    <nvpair id="cib-bootstrap-options-no-quorum-policy" name="no-quorum-policy" value="ignore"/>
    <nvpair id="cib-bootstrap-options-placement-strategy" name="placement-strategy" value="utilization"/>
    <nvpair id="cib-bootstrap-options-stonith-timeout" name="stonith-timeout" value="90s"/>
    <nvpair id="cib-bootstrap-options-last-lrm-refresh" name="last-lrm-refresh" value="1513368008"/>
    <nvpair id="cib-bootstrap-options-stonith-enabled" name="stonith-enabled" value="true"/>
    <nvpair id="cib-bootstrap-options-maintenance-mode" name="maintenance-mode" value="false"/>
    <nvpair id="cib-bootstrap-options-pe-error-series-max" name="pe-error-series-max" value="100"/>
    <nvpair id="cib-bootstrap-options-pe-warn-series-max" name="pe-warn-series-max" value="100"/>
    <nvpair id="cib-bootstrap-options-pe-input-series-max" name="pe-input-series-max" value="100"/>
    <nvpair id="cib-bootstrap-options-cluster-recheck-interval" name="cluster-recheck-interval" value="15m"/>
    <nvpair id="cib-bootstrap-options-enable-acl" name="enable-acl" value="true"/>
  </cluster_property_set>
</crm_config>
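For comparison, the quorum section of a corosync 2.x two-node cluster would typically look like the following -- a sketch of the expected form, not a copy of my actual file:

```
# Typical corosync.conf quorum section for a two-node corosync 2.x
# (SLES12) cluster -- shown for reference only, an assumption about
# what the new stack expects:
quorum {
    provider: corosync_votequorum
    expected_votes: 2
    two_node: 1
}
```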
Any ideas what might cause this?
I also tried forcing a CIB schema upgrade, but that fails, too:
# cibadmin --upgrade --force
Call cib_upgrade failed (-62): Timer expired
The only log message I got about it is:
cib: info: cib_process_request: Forwarding cib_upgrade operation for section 'all' to all (origin=local/cibadmin/2)
Regards,
Ulrich