[ClusterLabs] Cluster Stopped, No Messages?
Strahil Nikolov
hunter86_bg at yahoo.com
Fri May 28 18:21:14 EDT 2021
I agree -> fencing is mandatory.
You can enable debug logging by editing corosync.conf or /etc/sysconfig/pacemaker.
If a simple reload doesn't pick up the change, you can put the cluster into global maintenance mode, then stop and restart the stack.
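As a sketch (option names and the reload command may vary by corosync/pacemaker version, so check your distribution's docs):

```shell
# In corosync.conf, turn on debug output in the logging section:
#   logging {
#       to_syslog: yes
#       debug: on
#   }

# Or for Pacemaker, in /etc/sysconfig/pacemaker:
#   PCMK_debug=yes

# Ask corosync to re-read its config without restarting:
corosync-cfgtool -R

# If a reload isn't enough, use maintenance mode so resources are
# left running while the stack restarts underneath them:
pcs property set maintenance-mode=true
systemctl stop pacemaker corosync
systemctl start corosync pacemaker
pcs property set maintenance-mode=false
```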
Best Regards,
Strahil Nikolov
On Fri, May 28, 2021 at 22:13, Digimer <lists at alteeve.ca> wrote:
On 2021-05-28 3:08 p.m., Eric Robinson wrote:
>
>> -----Original Message-----
>> From: Digimer <lists at alteeve.ca>
>> Sent: Friday, May 28, 2021 12:43 PM
>> To: Cluster Labs - All topics related to open-source clustering welcomed
>> <users at clusterlabs.org>; Eric Robinson <eric.robinson at psmnv.com>; Strahil
>> Nikolov <hunter86_bg at yahoo.com>
>> Subject: Re: [ClusterLabs] Cluster Stopped, No Messages?
>>
>> Shared storage is not what triggers the need for fencing. Coordinating actions
>> is what triggers the need. Specifically; If you can run resource on both/all
>> nodes at the same time, you don't need HA. If you can't, you need fencing.
>>
>> Digimer
>
> Thanks. That said, there is no fencing, so any thoughts on why the node behaved the way it did?
Without fencing, when a communication or membership issue arises, it's
hard to predict what will happen.
I don't see anything in the short log snippet to indicate what happened.
What's in the logs on the other node during the event? When did the node
disappear, and when did it rejoin? That will help locate the relevant log entries.
Going forward, if you want predictable and reliable operation, implement
fencing asap. Fencing is required.
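As a sketch of what that might look like (node names, addresses, and credentials here are placeholders, and the right fence agent depends on your hardware; fence_ipmilan is shown only as a common example):

```shell
# Create one IPMI fence device per node (all values are examples):
pcs stonith create fence_node1 fence_ipmilan \
    ip=10.0.0.1 username=admin password=secret \
    pcmk_host_list=node1 lanplus=1

pcs stonith create fence_node2 fence_ipmilan \
    ip=10.0.0.2 username=admin password=secret \
    pcmk_host_list=node2 lanplus=1

# Keep each fence device off the node it is meant to power down:
pcs constraint location fence_node1 avoids node1
pcs constraint location fence_node2 avoids node2

# Test fencing before trusting it:
pcs stonith fence node2
```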
--
Digimer
Papers and Projects: https://alteeve.com/w/
"I am, somehow, less interested in the weight and convolutions of
Einstein’s brain than in the near certainty that people of equal talent
have lived and died in cotton fields and sweatshops." - Stephen Jay Gould