[ClusterLabs] Re: [EXT] Re: how to setup single node cluster

Ulrich Windl Ulrich.Windl at rz.uni-regensburg.de
Thu Apr 8 03:24:08 EDT 2021


>>> Klaus Wenninger <kwenning at redhat.com> wrote on 08.04.2021 at 08:26 in
message <01fe6b6e-690a-2ea7-6218-8545f0b7a5e5 at redhat.com>:
> On 4/8/21 8:16 AM, Reid Wahl wrote:
>>
>>
>> On Wed, Apr 7, 2021 at 9:46 PM Strahil Nikolov <hunter86_bg at yahoo.com>
>> wrote:
>>
>>     I always thought that the setup is the same; just the node count is
>>     one.
>>
>>     I guess you need pcs, corosync + pacemaker.
>>     If RH is going to support it, they will require fencing. Most
>>     probably sbd or ipmi are the best candidates.
>>
>>
>> I don't think we do require fencing for single-node clusters. (Anyone 
>> at Red Hat, feel free to comment.) I vaguely recall an internal 
>> mailing list or IRC conversation where we discussed this months ago, 
>> but I can't find it now. I've also checked our support policies 
>> documentation, and it's not mentioned in the "cluster size" doc or the 
>> "fencing" doc.
>>
>> The closest thing I can find is the following, from the cluster size 
>> doc[1]:
>> ~~~
>> RHEL 8.2 and later: Support for 1 or more nodes
>>
>>   * Single node clusters do not support DLM and GFS2 filesystems (as
>>     they require fencing).
>>
>> ~~~

Actually I think using DLM and a cluster filesystem for just a single node would be overkill, BUT it should work (if you plan to extend your 1-node cluster to more nodes at a later time).
Fencing for a single-node cluster just means reboot, so that shouldn't really be a problem.
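
For reference, a minimal single-node setup with the usual stack could look
roughly like this (just a sketch, assuming RHEL 8.2+ with pcs 0.10 and a
node named "node1"; the package selection and the stonith handling are
examples, not a recommendation):

    # install the HA stack and start the pcs daemon
    dnf install -y pcs pacemaker corosync fence-agents-all
    systemctl enable --now pcsd

    # authenticate the node and create a one-node cluster
    pcs host auth node1
    pcs cluster setup mycluster node1
    pcs cluster start --all
    pcs cluster enable --all

    # without any fence device, stonith has to be disabled,
    # which may conflict with your vendor's support policy
    pcs property set stonith-enabled=false

    pcs status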

>>
>> To me that suggests that fencing isn't required in a single-node 
>> cluster. Maybe sbd could work (I haven't thought it through), but 
>> conventional power fencing (e.g., fence_ipmilan) wouldn't. That's 
>> because most conventional power fencing agents require sending a 
>> "power on" signal after the "power off" is complete.
> And moreover you have to be alive enough to kick off
> conventional power fencing to self-fence ;-)
> With sbd the hardware-watchdog should kick in.
> 
> Klaus
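
If sbd is the route taken on a single node, the diskless (watchdog-only)
variant seems the natural fit. A rough sketch, assuming RHEL 8 packaging
and a hardware watchdog at /dev/watchdog (device path and timeouts are
illustrative only):

    dnf install -y sbd

    # /etc/sysconfig/sbd -- diskless mode: no SBD_DEVICE, watchdog only
    #   SBD_WATCHDOG_DEV=/dev/watchdog
    #   SBD_WATCHDOG_TIMEOUT=5

    # enable sbd via pcs (it starts and stops together with the cluster)
    pcs stonith sbd enable watchdog=/dev/watchdog
    pcs property set stonith-watchdog-timeout=10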
>>
>> [1] https://access.redhat.com/articles/3069031
>>
>>
>>     Best Regards,
>>     Strahil Nikolov
>>
>>         On Thu, Apr 8, 2021 at 6:52, d tbsky
>>         <tbskyd at gmail.com> wrote:
>>         Hi:
>>             I found that RHEL 8.2 supports single-node clusters now, but I
>>         didn't find further documentation explaining the concept. RHEL 8.2
>>         also supports "disaster recovery clusters", so I think maybe a
>>         single-node disaster recovery cluster is not a bad idea.
>>
>>             I think corosync is still necessary for a single-node cluster,
>>         or is there some other new style of configuration?
>>
>>             Thanks for the help!
>>
>>
>>
>> -- 
>> Regards,
>>
>> Reid Wahl, RHCA
>> Senior Software Maintenance Engineer, Red Hat
>> CEE - Platform Support Delivery - ClusterHA
>>
> 
> _______________________________________________
> Manage your subscription:
> https://lists.clusterlabs.org/mailman/listinfo/users 
> 
> ClusterLabs home: https://www.clusterlabs.org/ 
