[ClusterLabs] Still Beginner STONITH Problem

Strahil Nikolov hunter86_bg at yahoo.com
Wed Jul 15 00:32:35 EDT 2020


How did you configure the network on your Ubuntu 20.04 hosts? I tried to set up a bridged connection for the test setup, but obviously I'm missing something.
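
What I tried was, roughly, a netplan bridge along these lines (just a
sketch of my test setup; the interface name and the address are
placeholders):

network:
  version: 2
  ethernets:
    enp1s0:
      dhcp4: no
  bridges:
    br0:
      interfaces: [enp1s0]
      addresses: [192.168.1.20/24]
      dhcp4: no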

Best Regards,
Strahil Nikolov

On 14 July 2020 at 11:06:42 GMT+03:00, "stefan.schmitz at farmpartner-tec.com" <stefan.schmitz at farmpartner-tec.com> wrote:
>Hello,
>
>
>On 09.07.2020 at 19:10, Strahil Nikolov wrote:
> >Have you run 'fence_virtd -c'?
>Yes, I had run that on both hosts. The current config looks like this
>and is identical on both.
>
>cat fence_virt.conf
>fence_virtd {
>         listener = "multicast";
>         backend = "libvirt";
>         module_path = "/usr/lib64/fence-virt";
>}
>
>listeners {
>         multicast {
>                 key_file = "/etc/cluster/fence_xvm.key";
>                 address = "225.0.0.12";
>                 interface = "bond0";
>                 family = "ipv4";
>                 port = "1229";
>         }
>
>}
>
>backends {
>         libvirt {
>                 uri = "qemu:///system";
>         }
>
>}
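>
>After editing the config I restart the daemon and check that it is
>listening and has joined the multicast group, roughly like this (a
>sketch of the checks I do, not exact output):
>
># systemctl restart fence_virtd
># ss -ulpn | grep 1229                     # daemon bound to UDP port 1229
># ip maddr show bond0 | grep 225.0.0.12    # interface joined the group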
>
>
>The situation is still that no matter on which host I issue the
>"fence_xvm -a 225.0.0.12 -o list" command, both guest systems receive
>the traffic: the local guest, but also the guest on the other host. I
>reckon that means the traffic is not filtered by any network device,
>like switches or firewalls. Since the guest on the other host receives
>the packets, the traffic must reach the physical server and its network
>device and is then routed to the VM on that host.
>But still, the traffic is not shown on the host itself.
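>
>(The way I look for the traffic on the hosts is a capture roughly along
>the lines of "tcpdump -i bond0 -n udp port 1229"; on the guests the
>fence_xvm requests show up, on the hosts they do not.)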
>
>Furthermore, the local firewalls on both hosts are set to let all
>traffic pass and accept anything and everything, at least as far as I
>can see.
>
>
>On 09.07.2020 at 22:34, Klaus Wenninger wrote:
> > makes me believe that
> > the whole setup doesn't look as I would have
> > expected (bridges on each host where the guest
> > has a connection to and where ethernet interfaces
> > that connect the 2 hosts are part of as well
>
>On each physical server the network cards are bonded for failover
>safety (bond0). The guests are connected over a bridge (br0), but
>apparently our virtualization software creates its own device named
>after the guest (kvm101.0).
>There is no direct connection between the servers, but as I said
>earlier, the multicast traffic does reach the VMs, so I assume there is
>no problem with that.
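>
>For reference, the layout can be verified on the hosts with something
>like the following (device names as above, just a sketch):
>
># cat /proc/net/bonding/bond0      # bonded slaves and mode
># ip -br link show master br0      # interfaces enslaved to the bridge, e.g. kvm101.0
># bridge link show                 # bridge port states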
>
>
>On 09.07.2020 at 20:18, Vladislav Bogdanov wrote:
> > First, you need to ensure that your switch (or all switches in the
> > path) has IGMP snooping enabled on host ports (and probably on the
> > interconnects along the path between your hosts).
> >
> > Second, you need an IGMP querier to be enabled somewhere near (better
> > to have it enabled on a switch itself). Please verify that you see
> > its queries on the hosts.
> >
> > Next, you probably need to make your hosts use IGMPv2 (not v3), as
> > many switches still cannot understand v3. This is doable by sysctl;
> > there are many articles on the internet.
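>
>To see what the bridges themselves do with IGMP, I understand the
>snooping/querier state can be checked on each host roughly like this
>(br0 as above; the last line would only be a possible workaround to
>let the bridge act as querier, nothing I have applied yet):
>
># cat /sys/class/net/br0/bridge/multicast_snooping
># cat /sys/class/net/br0/bridge/multicast_querier
># echo 1 > /sys/class/net/br0/bridge/multicast_querier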
>
>
>I have sent a query to our data center techs, who are analyzing whether
>multicast traffic is blocked or hindered somewhere. So far the answer
>is, "multicast is explicitly allowed in the local network and no
>packets are filtered or dropped". I am still waiting for a final
>report, though.
>
>In the meantime I have switched from IGMPv3 to IGMPv2 on every involved
>server, hosts and guests, via the mentioned sysctl. The switch itself
>was successful according to "cat /proc/net/igmp", but sadly it did not
>improve the behavior. It actually led to no VM receiving the multicast
>traffic anymore either.
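>
>The sysctl in question was the force_igmp_version knob; what I set on
>all machines was along these lines, and I then verified the version
>with "cat /proc/net/igmp":
>
># sysctl -w net.ipv4.conf.all.force_igmp_version=2
># sysctl -w net.ipv4.conf.default.force_igmp_version=2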
>
>kind regards
>Stefan Schmitz
>
>
>On 09.07.2020 at 22:34, Klaus Wenninger wrote:
>> On 7/9/20 5:17 PM, stefan.schmitz at farmpartner-tec.com wrote:
>>> Hello,
>>>
>>>> Well, theory still holds I would say.
>>>>
>>>> I guess that the multicast-traffic from the other host
>>>> or the guests doesn't get to the daemon on the host.
>>>> Can't you just simply check if there are any firewall
>>>> rules configured on the host kernel?
>>>
>>> I hope I did understand you correctly and you are referring to
>>> iptables?
>> I didn't say iptables because it might have been
>> nftables - but yes, that is what I was referring to.
>> Guess to understand the config the output is
>> lacking verbosity, but it makes me believe that
>> the whole setup doesn't look as I would have
>> expected (bridges on each host where the guest
>> has a connection to and where ethernet interfaces
>> that connect the 2 hosts are part of as well -
>> everything connected via layer 2 basically).
>>> Here is the output of the current rules. Besides the IP of the guest
>>> the output is identical on both hosts:
>>>
>>> # iptables -S
>>> -P INPUT ACCEPT
>>> -P FORWARD ACCEPT
>>> -P OUTPUT ACCEPT
>>>
>>> # iptables -L
>>> Chain INPUT (policy ACCEPT)
>>> target     prot opt source               destination
>>>
>>> Chain FORWARD (policy ACCEPT)
>>> target     prot opt source               destination
>>> SOLUSVM_TRAFFIC_IN  all  --  anywhere             anywhere
>>> SOLUSVM_TRAFFIC_OUT  all  --  anywhere             anywhere
>>>
>>> Chain OUTPUT (policy ACCEPT)
>>> target     prot opt source               destination
>>>
>>> Chain SOLUSVM_TRAFFIC_IN (1 references)
>>> target     prot opt source               destination
>>>             all  --  anywhere             192.168.1.14
>>>
>>> Chain SOLUSVM_TRAFFIC_OUT (1 references)
>>> target     prot opt source               destination
>>>             all  --  192.168.1.14         anywhere
>>>
>>> kind regards
>>> Stefan Schmitz
>>>
>>>
>> 

