[ClusterLabs] why is node fenced?

Lentes, Bernd bernd.lentes at helmholtz-muenchen.de
Mon Aug 12 12:09:24 EDT 2019


Last Friday (9th of August) I had to install patches on my two-node cluster.
I put one of the nodes (ha-idg-2) into standby (crm node standby ha-idg-2), patched it, rebooted,
started the cluster again (systemctl start pacemaker), put the node back online, and everything was fine.

Then I wanted to follow the same procedure on the other node (ha-idg-1).
I put it into standby, patched it, rebooted, and started pacemaker again.
But then ha-idg-1 fenced ha-idg-2, saying the node was unclean.
I know that unclean nodes need to be shut down; that's logical.

But I don't know where the conclusion that the node is unclean comes from, or why it is unclean;
I searched the logs and didn't find any hint.
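In case it helps anyone retrace the search: the decision that a node is unclean is made by the scheduler (pengine in pacemaker 1.1), and its reasoning usually lands in the detail log. On SLES 12 that log is often /var/log/pacemaker.log, but the path is an assumption here (check PCMK_logfile in /etc/sysconfig/pacemaker). A minimal sketch of how one might narrow the search; the file created below is only a stand-in so the grep can be demonstrated, NOT real pacemaker output:

```shell
# Stand-in excerpt (placeholder text, not actual pacemaker messages);
# on a real system point the grep at the pacemaker detail log instead.
cat > /tmp/log-excerpt.txt <<'EOF'
placeholder line mentioning an unclean node
placeholder line mentioning a stonith decision
EOF

# The scheduler's fencing reasoning is usually tagged with keywords
# like these, so they are a good starting point for the search:
grep -Ei 'unclean|stonith|fence' /tmp/log-excerpt.txt
```

On the real log, the lines around the first "unclean" hit should name the node and the reason the scheduler gives for fencing it.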

I put the syslog and the pacemaker log on a Seafile share; I'd be very grateful if you could have a look.

Here is the CLI history of the commands:

17:03:04  crm node standby ha-idg-2
17:07:15  zypper up (install Updates on ha-idg-2)
17:17:30  systemctl reboot
17:25:21  systemctl start pacemaker.service
17:25:47  crm node online ha-idg-2
17:26:35  crm node standby ha-idg-1
17:30:21  zypper up (install Updates on ha-idg-1)
17:37:32  systemctl reboot
17:43:04  systemctl start pacemaker.service
17:44:00  ha-idg-1 is fenced
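One thing worth checking here (an assumption on my part, not something visible in the history above) is corosync's votequorum configuration. In a two-node cluster with two_node enabled but wait_for_all turned off, a node that starts corosync/pacemaker without yet seeing its peer can gain quorum on its own, declare the unseen peer unclean, and fence it. An illustrative fragment, not the actual corosync.conf from these hosts:

```
# /etc/corosync/corosync.conf (fragment) - illustrative only
quorum {
    provider: corosync_votequorum
    two_node: 1
    # With two_node: 1, wait_for_all defaults to 1. If it has been
    # set to 0, a freshly started node can win quorum alone and
    # fence the peer it cannot see yet.
    wait_for_all: 1
}
```

See the votequorum(5) man page for the exact semantics of two_node and wait_for_all.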



OS is SLES 12 SP4, pacemaker 1.1.19, corosync 2.3.6-9.13.1


Bernd Lentes 
Institut für Entwicklungsgenetik 
Gebäude 35.34 - Raum 208 
Helmholtz Zentrum München 
bernd.lentes at helmholtz-muenchen.de 
phone: +49 89 3187 1241 
phone: +49 89 3187 3827 
fax: +49 89 3187 2294 

Perfect is the one who makes no mistakes.
So the dead are perfect.

