[ClusterLabs] Two nodes cluster issue

Klaus Wenninger kwenning at redhat.com
Mon Jul 24 12:27:56 UTC 2017


On 07/24/2017 02:05 PM, Kristián Feldsam wrote:
> Hello, you have to use second fencing device, for ex. APC Switched PDU.
>
> https://wiki.clusterlabs.org/wiki/Configure_Multiple_Fencing_Devices_Using_pcs

The problem here seems to be that the available fencing devices run from
the same power supply as the node itself. So they are kind of useless for
determining whether the partner node has lost power or is simply not
reachable via the network.
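For completeness, the multiple-fencing-devices approach from the wiki link
above is configured via fence levels. A rough sketch for one node, with
hypothetical device names, addresses, and credentials (fence agent
parameter names vary between fence-agents versions):

```shell
# Sketch only: two fence devices for node1 -- IPMI plus a switched PDU.
# All names, IPs, and credentials below are placeholders.
pcs stonith create fence_node1_ipmi fence_ipmilan \
    ip=10.0.0.11 username=admin password=secret \
    pcmk_host_list=node1
pcs stonith create fence_node1_pdu fence_apc_snmp \
    ip=10.0.0.21 port=1 pcmk_host_list=node1

# Try IPMI first (level 1); fall back to the PDU (level 2)
# if level 1 fails.
pcs stonith level add 1 node1 fence_node1_ipmi
pcs stonith level add 2 node1 fence_node1_pdu
```

As noted above, though, this only helps if the second device does not
share the failed node's power supply.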
 
>
> Best regards, Kristián Feldsam
> Tel.: +420 773 303 353, +421 944 137 535
> E-mail.: support at feldhost.cz
>
> www.feldhost.cz - FeldHost™ – professional hosting and server
> services at fair prices.
>
> FELDSAM s.r.o.
> V rohu 434/3
> Praha 4 – Libuš, postal code 142 00
> Company ID: 290 60 958, VAT ID: CZ290 60 958
> File C 200350, registered at the Municipal Court in Prague
>
> Bank: Fio banka a.s.
> Account number: 2400330446/2010
> BIC: FIOBCZPPXX
> IBAN: CZ82 2010 0000 0024 0033 0446
>
>> On 24 Jul 2017, at 13:51, Tomer Azran <tomer.azran at edp.co.il> wrote:
>>
>> Hello,
>>  
>> We built a pacemaker cluster with 2 physical servers.
>> We configured DRBD in a Master/Slave setup, with a floating IP and a
>> file system mount in Active/Passive mode.
>> We configured two STONITH devices (fence_ipmilan), one for each server.
>>  
>> We are trying to simulate a situation where the Master server crashes
>> with no power.
>> We pulled both of the PSU cables and the server became offline
>> (UNCLEAN).
>> The resources that the Master used to hold are now in Started
>> (UNCLEAN) state.
>> The state is unclean because STONITH failed (the STONITH device is
>> located on the server itself (Intel RMM4 - IPMI) – which uses the
>> same power supply).
>>  
>> The problem is that the cluster does not release the resources
>> that the Master holds, and the service goes down.
>>  
>> Is there any way to overcome this situation?
>> We tried to add a qdevice but got the same results.

If you have already set up qdevice (using an additional node or so), you
could use quorum-based watchdog fencing via SBD.
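A minimal sketch of what that could look like in diskless (watchdog-only)
mode, assuming a hardware watchdog at /dev/watchdog and pcs-based
management; exact package names, config paths, and timeout values may
differ by distribution:

```shell
# On each cluster node: install sbd and check for a hardware watchdog
yum install -y sbd
ls -l /dev/watchdog

# Diskless SBD: no shared block device, only the watchdog.
# Typically set in /etc/sysconfig/sbd, e.g.:
#   SBD_WATCHDOG_DEV=/dev/watchdog
systemctl enable sbd

# Tell Pacemaker it may assume a node that lost quorum has
# self-fenced once the watchdog timeout has expired
pcs property set stonith-watchdog-timeout=10s
```

With qdevice providing the quorum tiebreaker, a node that loses quorum
is reset by its own watchdog, so the surviving node can safely take over
the resources without reaching the dead node's IPMI.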

>>  
>> We are using pacemaker 1.1.15 on CentOS 7.3
>>  
>> Thanks,
>> Tomer.
>> _______________________________________________
>> Users mailing list: Users at clusterlabs.org
>> http://lists.clusterlabs.org/mailman/listinfo/users
>>
>> Project Home: http://www.clusterlabs.org
>> Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
>> Bugs: http://bugs.clusterlabs.org


-- 
Klaus Wenninger

Senior Software Engineer, EMEA ENG Openstack Infrastructure

Red Hat

kwenning at redhat.com   
