[ClusterLabs] Two node cluster VMware drbd
Klaus Wenninger
kwenning at redhat.com
Thu Mar 14 05:30:36 EDT 2019
On 03/12/2019 06:30 PM, Andrei Borzenkov wrote:
> 12.03.2019 18:10, Adam Budziński wrote:
>> Hello,
>>
>> I’m planning to set up a two node (active-passive) HA cluster consisting of
>> pacemaker, corosync and DRBD. The two nodes will run on VMware VMs and
>> connect to a single DB server (unfortunately, for various reasons, not
>> included in the cluster).
>>
>> Resources:
>>
>> Resource Group: clusterd_services
>>     otrs_articel_fs    (ocf::heartbeat:Filesystem):    Started srv2
>>     vip                (ocf::heartbeat:IPaddr2):       Started srv2
>>     Apache             (systemd:httpd):                Started srv2
>>     OTRS               (systemd:otrs):                 Started srv2
>> Master/Slave Set: articel_ms [articel_drbd]
>>     Masters: [ srv2 ]
>>     Slaves:  [ srv1 ]
>> my_vcentre-fence       (stonith:fence_vmware_soap):    Started srv1
>>
>> Ultimately I would do the following:
>>
>> - Each VM will run on a separate ESXi host to provide at least some
>> protection against hardware failure;
>>
>> - Redundant communication paths between the two nodes for DRBD
>> replication and cluster communication to prevent split-brain scenarios;
>>
>> - fence_vmware_soap for VM fencing;
>>
>> - pacemaker, corosync and pcsd not configured to start automatically on
>> either node, so that after a fence event the fenced node will not rejoin
>> the cluster but leave room to investigate why it got fenced in the first
>> place;
>>
>> - /usr/lib/drbd/crm-fence-peer.9.sh and
>> /usr/lib/drbd/crm-unfence-peer.9.sh for DRBD resource level fencing
>> (if the DRBD replication link becomes disconnected, the
>> crm-fence-peer.9.sh script contacts the cluster manager, determines
>> the Pacemaker Master/Slave resource associated with this DRBD
>> resource, and ensures that the Master/Slave resource no longer gets
>> promoted on any node other than the currently active one);
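>>
>> For reference, the resource-level fencing piece would be wired into the
>> DRBD resource configuration roughly like this (a minimal sketch; the
>> resource name r0 is only a placeholder, not my actual resource):
>>
>>     resource r0 {
>>       net {
>>         # on loss of the replication link, fence at the Pacemaker level
>>         # instead of blocking I/O
>>         fencing resource-only;
>>       }
>>       handlers {
>>         # invoked when DRBD loses contact with its peer
>>         fence-peer "/usr/lib/drbd/crm-fence-peer.9.sh";
>>         # invoked once the peer is reachable and resynced again
>>         unfence-peer "/usr/lib/drbd/crm-unfence-peer.9.sh";
>>       }
>>     }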
>>
>> What I’m wondering is: if for whatever reason the communication paths
>> between the two nodes are interrupted, so that each thinks the other node
>> is gone, each of them will try to fence the other, resulting in a fence
>> race.
> Does it really matter? In the end the node that wins this race will
> restart the applications. Exactly what HA is for.
>
> A more serious problem is that in case of a physical ESXi host failure,
> fencing will fail and no failover happens. At least if you are using
> direct access to ESXi. I am not sure what happens in the case of vCenter -
> will it realize that the node is completely down? What happens if the
> network is cut off but the datastore heartbeat still works?
>
>> I was reading that you could possibly introduce a delay into the
>> secondary’s fencing, for example 30-60 seconds, so that during that
>> delay, ASSUMING the primary is functioning well, the primary will fence
>> the secondary. But that doesn’t sound like a reliable solution to me -
>> how can I know in advance which node will be the primary and which one
>> will run into problems?
>>
>
> That is a question that is asked quite often, and so far there is no easy
> way to control the fencing delay based on application weights (or
> priorities).
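A plain static delay is simple enough to configure as a property of the
fence device; roughly, in pcs terms (device names and values below are
purely illustrative and assume one fence device per node):

    # prefer srv1 in a fence race: actions that fence srv1 wait 30s,
    # giving srv1 time to fence srv2 first
    pcs stonith update fence_srv1 pcmk_delay_base=30
    pcs stonith update fence_srv2 pcmk_delay_base=0

The hard part is making such a delay depend on where the application is
currently running.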
Because pacemaker-fenced does not evaluate complex rules the way they
would be evaluated for resource agents (the evaluation is not re-triggered
when an attribute changes, for example, or on the recheck interval ...),
such rules unfortunately can't be used for that purpose.
That is why I tried to introduce what I called 'heuristics' fence-agents:
they live on the same fencing-level as a 'real' fence-agent, but all they
do is decide whether the 'real' fence-agent should be used or not.
Of course the same approach can be used to simply introduce an additional
delay under certain conditions (here: the application not running on the
node).
See the one and only example that made it upstream so far:
https://github.com/ClusterLabs/fence-agents/blob/master/agents/heuristics_ping/fence_heuristics_ping.py
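As a rough idea of how such an agent slots in (pcs syntax; the device
names, ping target and parameter values are only illustrative, not a
tested configuration):

    # a heuristics device that only succeeds if the gateway answers pings
    pcs stonith create ping-check fence_heuristics_ping \
        ping_targets=192.168.122.1

    # put it first on the same fencing level as the real agent, so the
    # real agent is only used when the heuristic succeeds
    pcs stonith level add 1 srv1 ping-check,my_vcentre-fence
    pcs stonith level add 1 srv2 ping-check,my_vcentre-fence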
Klaus