[ClusterLabs] Beginner with STONITH Problem

Strahil Nikolov hunter86_bg at yahoo.com
Thu Jun 25 12:01:07 EDT 2020


Hi Stefan,

this sounds like a firewall issue.

Check that port udp/1229 is open for the hypervisors and tcp/1229 for the VMs.

P.S.: The protocols are based on my fading memory, so double-check them.
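If the hosts use firewalld and the VMs use ufw (adjust for whatever you actually run), a rough sketch for opening and then verifying the ports could look like this; the port/protocol split is again from memory, so check it against your fence_virt.conf:

```shell
# On each CentOS hypervisor: fence_virtd listens for the multicast request
firewall-cmd --permanent --add-port=1229/udp
firewall-cmd --reload

# On each Ubuntu VM: fence_xvm receives the answer back over TCP (ufw example)
ufw allow 1229/tcp

# While running "fence_xvm -o list" on a VM, watch the multicast group on the
# host to see whether the request arrives there at all
tcpdump -i bond0 -n host 225.0.0.12
```

If the request from the remote VM never shows up in the tcpdump output, the problem is in the network path rather than in fence_virtd itself.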

Best Regards,
Strahil Nikolov

On 25 June 2020 18:18:46 GMT+03:00, "stefan.schmitz at farmpartner-tec.com" <stefan.schmitz at farmpartner-tec.com> wrote:
>Hello,
>
>I have now tried to use that how-to to make things work. Sadly, I have
>run into a couple of problems.
>
>I have installed and configured fence_xvm as described in the
>walk-through, but as expected fence_virtd does not find all VMs, only
>the one installed on the local host.
>In the configuration I have chosen "bond0" as the listener interface,
>since the hosts have bonding configured. I have appended the complete
>fence_virt.conf at the end of this mail.
>All 4 servers, the CentOS hosts and the Ubuntu VMs, are in the same
>network. Also, the generated key is present on all 4 servers.
>
>Still, the "fence_xvm -o list" command only shows the local VM:
>
># fence_xvm -o list
>kvm101                           beee402d-c6ac-4df4-9b97-bd84e637f2e7 on
>
>I have tried the "Alternative configuration for guests running on
>multiple hosts", but this fails right from the start, because the
>libvirt-qpid packages are not available:
>
># yum install -y libvirt-qpid qpidd
>[...]
>No package libvirt-qpid available.
>No package qpidd available.
>
>Could anyone please advise on how to proceed to get both nodes
>recognized by the CentOS hosts? As a side note, all 4 servers can ping
>each other, so they are present and available in the same network.
>
>In addition, I can't seem to find the correct packages for Ubuntu
>18.04 to install on the VMs. Trying to install fence_virt and/or
>fence_xvm just results in "E: Unable to locate package
>fence_xvm/fence_virt". Are those packages available at all for Ubuntu
>18.04? I could only find them for 20.04. Or are they called something
>completely different, so that I am not able to find them?
>
>Thank you in advance for your help!
>
>Kind regards
>Stefan Schmitz
>
>
>The current /etc/fence_virt.conf:
>
>fence_virtd {
>         listener = "multicast";
>         backend = "libvirt";
>         module_path = "/usr/lib64/fence-virt";
>}
>
>listeners {
>         multicast {
>                 key_file = "/etc/cluster/fence_xvm.key";
>                 address = "225.0.0.12";
>                 interface = "bond0";
>                 family = "ipv4";
>                 port = "1229";
>         }
>
>}
>
>backends {
>         libvirt {
>                 uri = "qemu:///system";
>         }
>
>}
>
>
>On 25.06.2020 at 10:28, stefan.schmitz at farmpartner-tec.com wrote:
>> Hello and thank you both for the help,
>> 
>>  >> Are the VMs in the same VLAN as the hosts?
>> Yes, the VMs and hosts are all in the same VLAN. So I will try the
>> fence_xvm solution.
>> 
>>  > https://wiki.clusterlabs.org/wiki/Guest_Fencing
>> Thank you for the pointer to that walk-through. Sadly, every VM is
>> on its own host, which is marked as "Not yet supported", but this
>> how-to is still a good starting point and I will try to work and
>> tweak my way through it for our setup.
>> 
>> Thanks again!
>> 
>> Kind regards
>> Stefan Schmitz
>> 
>> On 24.06.2020 at 15:51, Ken Gaillot wrote:
>>> On Wed, 2020-06-24 at 15:47 +0300, Strahil Nikolov wrote:
>>>> Hello Stefan,
>>>>
>>>> There are multiple options for stonith, but it depends on the
>>>> environment.
>>>> Are the VMs in the same VLAN as the hosts? I am asking this, as
>>>> the most popular candidate is 'fence_xvm', but it requires the VM
>>>> to send a fencing request (multicast) to the KVM host where the
>>>> partner VM is hosted.
>>>
>>> FYI a fence_xvm walk-through for the simple case is available on the
>>> ClusterLabs wiki:
>>>
>>> https://wiki.clusterlabs.org/wiki/Guest_Fencing
>>>
>>>> Another approach is to use a shared disk (either over iSCSI or
>>>> SAN) and use sbd for power-based fencing, or use SCSI-3 Persistent
>>>> Reservations (which can also be converted into power-based
>>>> fencing).
>>>>
>>>>
>>>> Best Regards,
>>>> Strahil Nikolov
>>>>
>>>>
>>>> On 24 June 2020 13:44:27 GMT+03:00, "
>>>> stefan.schmitz at farmpartner-tec.com" <
>>>> stefan.schmitz at farmpartner-tec.com> wrote:
>>>>> Hello,
>>>>>
>>>>> I am an absolute beginner trying to setup our first HA Cluster.
>>>>> So far I have been working with the "Pacemaker 1.1 Clusters from
>>>>> Scratch" guide, which worked perfectly for me up to the point
>>>>> where I need to install and configure STONITH.
>>>>>
>>>>> The current situation is: 2 Ubuntu servers as the cluster. Both
>>>>> of those servers are virtual machines running on 2 CentOS KVM
>>>>> hosts.
>>>>> Those are the devices or resources we can use for a STONITH
>>>>> implementation. In this and other guides I read a lot about
>>>>> external devices, and in the "pcs stonith list" output there are
>>>>> some Xen agents, but sadly I cannot find anything about KVM. At
>>>>> this point I am stumped and have no clue how to proceed. I am not
>>>>> even sure what further information I should provide that would be
>>>>> useful for giving advice.
>>>>>
>>>>> The current pcs status is:
>>>>>
>>>>> # pcs status
>>>>> Cluster name: pacemaker_cluster
>>>>> WARNING: corosync and pacemaker node names do not match (IPs
>>>>> used in setup?)
>>>>> Stack: corosync
>>>>> Current DC: server2ubuntu1 (version 1.1.18-2b07d5c5a9) -
>>>>> partition with quorum
>>>>> Last updated: Wed Jun 24 12:43:24 2020
>>>>> Last change: Wed Jun 24 12:35:17 2020 by root via cibadmin on
>>>>> server4ubuntu1
>>>>>
>>>>> 2 nodes configured
>>>>> 12 resources configured
>>>>>
>>>>> Online: [ server2ubuntu1 server4ubuntu1 ]
>>>>>
>>>>> Full list of resources:
>>>>>
>>>>>   Master/Slave Set: r0_pacemaker_Clone [r0_pacemaker]
>>>>>       Masters: [ server4ubuntu1 ]
>>>>>       Slaves: [ server2ubuntu1 ]
>>>>>   Clone Set: dlm-clone [dlm]
>>>>>       Stopped: [ server2ubuntu1 server4ubuntu1 ]
>>>>>   Clone Set: ClusterIP-clone [ClusterIP] (unique)
>>>>>       ClusterIP:0        (ocf::heartbeat:IPaddr2):       Started
>>>>> server4ubuntu1
>>>>>       ClusterIP:1        (ocf::heartbeat:IPaddr2):       Started
>>>>> server4ubuntu1
>>>>>   Master/Slave Set: WebDataClone [WebData]
>>>>>       Masters: [ server2ubuntu1 server4ubuntu1 ]
>>>>>   Clone Set: WebFS-clone [WebFS]
>>>>>       Stopped: [ server2ubuntu1 server4ubuntu1 ]
>>>>>   Clone Set: WebSite-clone [WebSite]
>>>>>       Stopped: [ server2ubuntu1 server4ubuntu1 ]
>>>>>
>>>>> Failed Actions:
>>>>> * dlm_start_0 on server2ubuntu1 'not configured' (6): call=437,
>>>>> status=complete, exitreason='',
>>>>>      last-rc-change='Wed Jun 24 12:35:30 2020', queued=0ms, exec=86ms
>>>>> * r0_pacemaker_monitor_60000 on server2ubuntu1 'master' (8): call=438,
>>>>> status=complete, exitreason='',
>>>>>      last-rc-change='Wed Jun 24 12:36:30 2020', queued=0ms, exec=0ms
>>>>> * dlm_start_0 on server4ubuntu1 'not configured' (6): call=441,
>>>>> status=complete, exitreason='',
>>>>>      last-rc-change='Wed Jun 24 12:35:30 2020', queued=0ms, exec=74ms
>>>>>
>>>>>
>>>>> Daemon Status:
>>>>>    corosync: active/disabled
>>>>>    pacemaker: active/disabled
>>>>>    pcsd: active/enabled
>>>>>
>>>>>
>>>>>
>>>>> I have researched the dlm problem shown above, but everything I
>>>>> have found says that configuring STONITH would solve that issue.
>>>>> Could someone please advise on how to proceed?
>>>>>
>>>>> Thank you in advance!
>>>>>
>>>>> Kind regards
>>>>> Stefan Schmitz
>>>>>
>>>>>
>>>>> _______________________________________________
>>>>> Manage your subscription:
>>>>> https://lists.clusterlabs.org/mailman/listinfo/users
>>>>>
>>>>> ClusterLabs home: https://www.clusterlabs.org/
>>>>


More information about the Users mailing list