<div dir="ltr"><div dir="ltr"><div>I'm not sure that the libvirt backend is intended to be used in this way, with multiple hosts using the same multicast address. From the fence_virt.conf man page:</div><div><br></div><div>~~~</div><div>BACKENDS</div><div> libvirt<br> The libvirt plugin is the simplest plugin. It is used in environments where routing fencing requests between multiple hosts is not required, for example by a user running a cluster of virtual<br> machines on a single desktop computer.</div><div> libvirt-qmf<br> The libvirt-qmf plugin acts as a QMFv2 Console to the libvirt-qmf daemon in order to route fencing requests over AMQP to the appropriate computer.</div><div> cpg<br> The cpg plugin uses corosync CPG and libvirt to track virtual machines and route fencing requests to the appropriate computer.</div><div>~~~</div><div><br></div><div>I'm not an expert on fence_xvm or libvirt. It's possible that this is a viable configuration with the libvirt backend.</div><div><br></div><div>However, when users want to configure fence_xvm for multiple hosts with the libvirt backend, I have typically seen them configure multiple fence_xvm devices (one per host) and configure a different multicast address on each host.</div><div><br></div><div>If you have a Red Hat account, see also:</div><div> - <a href="https://access.redhat.com/solutions/2386421#comment-1209661">https://access.redhat.com/solutions/2386421#comment-1209661</a></div><div> - <a href="https://access.redhat.com/solutions/2386421#comment-1209801">https://access.redhat.com/solutions/2386421#comment-1209801</a></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Fri, Jul 17, 2020 at 7:49 AM Strahil Nikolov <<a href="mailto:hunter86_bg@yahoo.com">hunter86_bg@yahoo.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">The simplest way to check if the libvirt's network is NAT (or not) is to try to ssh from the first VM to the second one.<br>
<br>
I should admit that I was lost when I tried to create a routed network in KVM, so I can't help with that.<br>
<br>
Best Regards,<br>
Strahil Nikolov<br>
<br>
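For reference, a rough sketch of the per-host approach mentioned at the top of this reply, using the libvirt backend. The node names, domain names, and the second multicast address are hypothetical placeholders, not values taken from this thread:<br>
<br>
~~~<br>
# /etc/fence_virt.conf on host 1 keeps, e.g.:   address = "225.0.0.12";<br>
# /etc/fence_virt.conf on host 2 uses, e.g.:    address = "225.0.0.13";<br>
<br>
# Then one fence_xvm stonith device per host, each pointing at that host's multicast address:<br>
pcs stonith create fence_kvm101 fence_xvm multicast_address=225.0.0.12 port=kvm101 key_file=/etc/cluster/fence_xvm.key pcmk_host_list=node1<br>
pcs stonith create fence_kvm102 fence_xvm multicast_address=225.0.0.13 port=kvm102 key_file=/etc/cluster/fence_xvm.key pcmk_host_list=node2<br>
~~~<br>
<br>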
On 17 July 2020 at 16:56:44 GMT+03:00, "<a href="mailto:stefan.schmitz@farmpartner-tec.com" target="_blank">stefan.schmitz@farmpartner-tec.com</a>" <<a href="mailto:stefan.schmitz@farmpartner-tec.com" target="_blank">stefan.schmitz@farmpartner-tec.com</a>> wrote:<br>
>Hello,<br>
><br>
>I have now managed to get # fence_xvm -a 225.0.0.12 -o list to list at<br>
>least its local guest again. It seems fence_virtd was no longer working<br>
>properly.<br>
><br>
>Regarding the Network XML config<br>
><br>
># cat default.xml<br>
> <network><br>
> <name>default</name><br>
> <bridge name="virbr0"/><br>
> <forward/><br>
> <ip address="192.168.122.1" netmask="255.255.255.0"><br>
> <dhcp><br>
> <range start="192.168.122.2" end="192.168.122.254"/><br>
> </dhcp><br>
> </ip><br>
> </network><br>
><br>
>I have used "virsh net-edit default" to test other network devices on<br>
>the hosts, but this did not change anything.<br>
><br>
>Regarding the statement<br>
><br>
> > If it is created by libvirt - this is NAT and you will never<br>
> > receive output from the other host.<br>
><br>
>I am at a loss and do not know why this is NAT. I am aware of what NAT<br>
>means, but what am I supposed to reconfigure here to solve the problem?<br>
>Any help would be greatly appreciated.<br>
>Thank you in advance.<br>
><br>
>Kind regards<br>
>Stefan Schmitz<br>
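A note on the reconfiguration question above: in libvirt, a <forward/> element with no mode attribute defaults to NAT. Since the guests in this thread are already attached to the hosts' br0 bridge, one possible direction - sketched here only, and assuming the SolusVM tooling does not insist on managing this itself - is a bridged libvirt network that reuses br0 rather than the NATed default network (the name "host-bridge" is arbitrary):<br>
<br>
~~~<br>
<network><br>
  <name>host-bridge</name><br>
  <forward mode="bridge"/><br>
  <bridge name="br0"/><br>
</network><br>
~~~<br>
<br>
Defined with virsh net-define and started with virsh net-start, such a network puts guests and hosts on the same layer-2 segment, which is what fence_xvm's multicast traffic needs.<br>
<br>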
><br>
><br>
>On 15.07.2020 at 16:48, <a href="mailto:stefan.schmitz@farmpartner-tec.com" target="_blank">stefan.schmitz@farmpartner-tec.com</a> wrote:<br>
>> <br>
>> On 15.07.2020 at 16:29, Klaus Wenninger wrote:<br>
>>> On 7/15/20 4:21 PM, <a href="mailto:stefan.schmitz@farmpartner-tec.com" target="_blank">stefan.schmitz@farmpartner-tec.com</a> wrote:<br>
>>>> Hello,<br>
>>>><br>
>>>><br>
>>>> On 15.07.2020 at 15:30, Klaus Wenninger wrote:<br>
>>>>> On 7/15/20 3:15 PM, Strahil Nikolov wrote:<br>
>>>>>> If it is created by libvirt - this is NAT and you will never<br>
>>>>>> receive output from the other host.<br>
>>>>> And twice the same subnet behind NAT is probably giving<br>
>>>>> issues at other places as well.<br>
>>>>> And if using DHCP you have to at least enforce that both sides<br>
>>>>> don't go for the same IP.<br>
>>>>> But that is still no explanation for why it doesn't work on the same host.<br>
>>>>> Which is why I was asking for running the service on the<br>
>>>>> bridge to check if that would work at least. So that we<br>
>>>>> can go forward step by step.<br>
>>>><br>
>>>> I just now finished trying and testing it on both hosts.<br>
>>>> I ran # fence_virtd -c on both hosts and entered different network<br>
>>>> devices. On both I tried br0 and the kvm10x.0.<br>
>>> According to your libvirt-config I would have expected<br>
>>> the bridge to be virbr0.<br>
>> <br>
>> I understand that, but a "virbr0" device does not seem to exist on<br>
>> either of the two hosts.<br>
>> <br>
>> # ip link show<br>
>> 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000<br>
>>     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00<br>
>> 2: eno1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP mode DEFAULT group default qlen 1000<br>
>>     link/ether 0c:c4:7a:fb:30:1a brd ff:ff:ff:ff:ff:ff<br>
>> 3: enp216s0f0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000<br>
>>     link/ether ac:1f:6b:26:69:dc brd ff:ff:ff:ff:ff:ff<br>
>> 4: eno2: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP mode DEFAULT group default qlen 1000<br>
>>     link/ether 0c:c4:7a:fb:30:1a brd ff:ff:ff:ff:ff:ff<br>
>> 5: enp216s0f1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000<br>
>>     link/ether ac:1f:6b:26:69:dd brd ff:ff:ff:ff:ff:ff<br>
>> 6: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue master br0 state UP mode DEFAULT group default qlen 1000<br>
>>     link/ether 0c:c4:7a:fb:30:1a brd ff:ff:ff:ff:ff:ff<br>
>> 7: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000<br>
>>     link/ether 0c:c4:7a:fb:30:1a brd ff:ff:ff:ff:ff:ff<br>
>> 8: kvm101.0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master br0 state UNKNOWN mode DEFAULT group default qlen 1000<br>
>>     link/ether fe:16:3c:ba:10:6c brd ff:ff:ff:ff:ff:ff<br>
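A side note on the missing virbr0: that bridge only exists while libvirt's "default" network is active, so its absence most likely just means the default network is not started and the guests are attached to br0 directly. A quick way to confirm, assuming virsh is available on the hosts:<br>
<br>
~~~<br>
virsh net-list --all          # shows whether the "default" network is active/autostarted<br>
ip -o link show master br0    # lists the interfaces enslaved to br0 (bond0, kvm101.0, ...)<br>
~~~<br>
<br>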
>> <br>
>> <br>
>> <br>
>>>><br>
>>>> After each reconfiguration I ran # fence_xvm -a 225.0.0.12 -o list.<br>
>>>> On the second server it worked with each device. After that I<br>
>>>> reconfigured back to the normal device, bond0, on which it previously<br>
>>>> did not work - and now it worked again!<br>
>>>> # fence_xvm -a 225.0.0.12 -o list<br>
>>>> kvm102                               bab3749c-15fc-40b7-8b6c-d4267b9f0eb9 on<br>
>>>><br>
>>>> But not on the first server - there it did not work with any device.<br>
>>>> # fence_xvm -a 225.0.0.12 -o list always resulted in<br>
>>>> Timed out waiting for response<br>
>>>> Operation failed<br>
>>>><br>
>>>><br>
>>>><br>
>>>> On 15.07.2020 at 15:15, Strahil Nikolov wrote:<br>
>>>>> If it is created by libvirt - this is NAT and you will never<br>
>receive<br>
>>>> output from the other host.<br>
>>>>><br>
>>>> To my knowledge this is configured by libvirt. At least I am not aware<br>
>>>> of having changed or configured it in any way. Up until today I did not<br>
>>>> even know that file existed. Could you please advise on what I need to<br>
>>>> do to fix this issue?<br>
>>>><br>
>>>> Kind regards<br>
>>>><br>
>>>><br>
>>>><br>
>>>><br>
>>>>> Is pacemaker/corosync/knet btw. using the same interfaces/IPs?<br>
>>>>><br>
>>>>> Klaus<br>
>>>>>><br>
>>>>>> Best Regards,<br>
>>>>>> Strahil Nikolov<br>
>>>>>><br>
>>>>>> On 15 July 2020 at 15:05:48 GMT+03:00,<br>
>>>>>> "<a href="mailto:stefan.schmitz@farmpartner-tec.com" target="_blank">stefan.schmitz@farmpartner-tec.com</a>"<br>
>>>>>> <<a href="mailto:stefan.schmitz@farmpartner-tec.com" target="_blank">stefan.schmitz@farmpartner-tec.com</a>> wrote:<br>
>>>>>>> Hello,<br>
>>>>>>><br>
>>>>>>> On 15.07.2020 at 13:42, Strahil Nikolov wrote:<br>
>>>>>>>> By default libvirt is using NAT and not a routed network - in such a<br>
>>>>>>>> case, vm1 won't receive data from host2.<br>
>>>>>>>> Can you provide the Networks' xml ?<br>
>>>>>>>><br>
>>>>>>>> Best Regards,<br>
>>>>>>>> Strahil Nikolov<br>
>>>>>>>><br>
>>>>>>> # cat default.xml<br>
>>>>>>> <network><br>
>>>>>>> <name>default</name><br>
>>>>>>> <bridge name="virbr0"/><br>
>>>>>>> <forward/><br>
>>>>>>> <ip address="192.168.122.1" netmask="255.255.255.0"><br>
>>>>>>> <dhcp><br>
>>>>>>> <range start="192.168.122.2" end="192.168.122.254"/><br>
>>>>>>> </dhcp><br>
>>>>>>> </ip><br>
>>>>>>> </network><br>
>>>>>>><br>
>>>>>>> I just checked this and the file is identical on both hosts.<br>
>>>>>>><br>
>>>>>>> kind regards<br>
>>>>>>> Stefan Schmitz<br>
>>>>>>><br>
>>>>>>><br>
>>>>>>>> On 15 July 2020 at 13:19:59 GMT+03:00, Klaus Wenninger<br>
>>>>>>>> <<a href="mailto:kwenning@redhat.com" target="_blank">kwenning@redhat.com</a>> wrote:<br>
>>>>>>>>> On 7/15/20 11:42 AM, <a href="mailto:stefan.schmitz@farmpartner-tec.com" target="_blank">stefan.schmitz@farmpartner-tec.com</a> wrote:<br>
>>>>>>>>>> Hello,<br>
>>>>>>>>>><br>
>>>>>>>>>><br>
>>>>>>>>>> On 15.07.2020 at 06:32, Strahil Nikolov wrote:<br>
>>>>>>>>>>> How did you configure the network on your Ubuntu 20.04 hosts? I<br>
>>>>>>>>>>> tried to set up a bridged connection for the test setup, but<br>
>>>>>>>>>>> obviously I'm missing something.<br>
>>>>>>>>>>><br>
>>>>>>>>>>> Best Regards,<br>
>>>>>>>>>>> Strahil Nikolov<br>
>>>>>>>>>>><br>
>>>>>>>>>> On the hosts (CentOS) the bridge config looks like this. The bridging<br>
>>>>>>>>>> and configuration is handled by the virtualization software:<br>
>>>>>>>>>><br>
>>>>>>>>>> # cat ifcfg-br0<br>
>>>>>>>>>> DEVICE=br0<br>
>>>>>>>>>> TYPE=Bridge<br>
>>>>>>>>>> BOOTPROTO=static<br>
>>>>>>>>>> ONBOOT=yes<br>
>>>>>>>>>> IPADDR=192.168.1.21<br>
>>>>>>>>>> NETMASK=255.255.0.0<br>
>>>>>>>>>> GATEWAY=192.168.1.1<br>
>>>>>>>>>> NM_CONTROLLED=no<br>
>>>>>>>>>> IPV6_AUTOCONF=yes<br>
>>>>>>>>>> IPV6_DEFROUTE=yes<br>
>>>>>>>>>> IPV6_PEERDNS=yes<br>
>>>>>>>>>> IPV6_PEERROUTES=yes<br>
>>>>>>>>>> IPV6_FAILURE_FATAL=no<br>
>>>>>>>>>><br>
>>>>>>>>>><br>
>>>>>>>>>><br>
>>>>>>>>>> On 15.07.2020 at 09:50, Klaus Wenninger wrote:<br>
>>>>>>>>>>> Guess it is not easy to have your servers connected<br>
>physically <br>
>>>>>>>>>>> for<br>
>>>>>>>>> a<br>
>>>>>>>>>> try.<br>
>>>>>>>>>>> But maybe you can at least try on one host to have<br>
>virt_fenced &<br>
>>>>>>> VM<br>
>>>>>>>>>>> on the same bridge - just to see if that basic pattern is <br>
>>>>>>>>>>> working.<br>
>>>>>>>>>> I am not sure if I understand you correctly. What do you mean by<br>
>>>>>>>>>> having them on the same bridge? The bridge device is configured on<br>
>>>>>>>>>> the host by the virtualization software.<br>
>>>>>>>>> I meant to check out which bridge the interface of the VM is<br>
>>>>>>> enslaved<br>
>>>>>>>>> to and to use that bridge as interface in<br>
>/etc/fence_virt.conf.<br>
>>>>>>>>> Get me right - just for now - just to see if it is working for<br>
>this<br>
>>>>>>> one<br>
>>>>>>>>> host and the corresponding guest.<br>
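To see which bridge a guest's interface is enslaved to, as suggested above (the domain name kvm101 is taken from the ip link output quoted earlier and may differ):<br>
<br>
~~~<br>
virsh domiflist kvm101    # shows interface type, source bridge, and MAC for the domain<br>
bridge link show          # lists bridge ports (brctl show works too, if bridge-utils is installed)<br>
~~~<br>
<br>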
>>>>>>>>>><br>
>>>>>>>>>>> Well, maybe there is still somebody in the middle playing IGMPv3, or<br>
>>>>>>>>>>> the request for a certain source is needed to shoot open some firewall<br>
>>>>>>>>>>> or switch tables.<br>
>>>>>>>>>> I am still waiting for the final report from our data center techs.<br>
>>>>>>>>>> I hope that will clear up some things.<br>
>>>>>>>>>><br>
>>>>>>>>>><br>
>>>>>>>>>> Additionally, I have just noticed that apparently since switching<br>
>>>>>>>>>> from IGMPv3 to IGMPv2 and back, the command "fence_xvm -a 225.0.0.12 -o<br>
>>>>>>>>>> list" is now completely broken.<br>
>>>>>>>>>> Before that switch this command at least returned the local VM. Now it<br>
>>>>>>>>>> returns:<br>
>>>>>>>>>> Timed out waiting for response<br>
>>>>>>>>>> Operation failed<br>
>>>>>>>>>><br>
>>>>>>>>>> I am a bit confused by that, because all we did was run commands<br>
>>>>>>>>>> like "sysctl -w net.ipv4.conf.all.force_igmp_version =" with the<br>
>>>>>>>>>> different version numbers, and # cat /proc/net/igmp shows that V3 is<br>
>>>>>>>>>> used again on every device just like before...?!<br>
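For completeness, a sketch of how forcing IGMPv2 via sysctl can be applied and verified - this only restates the approach already discussed in the thread, using the interface names that appear here:<br>
<br>
~~~<br>
sysctl -w net.ipv4.conf.all.force_igmp_version=2<br>
sysctl -w net.ipv4.conf.default.force_igmp_version=2<br>
sysctl -w net.ipv4.conf.br0.force_igmp_version=2    # per-interface override; repeat for bond0 etc. if needed<br>
cat /proc/net/igmp                                  # the per-device lines should show V2 instead of V3<br>
~~~<br>
<br>
It may also help to restart fence_virtd afterwards so it re-joins the multicast group with the new IGMP version.<br>
<br>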
>>>>>>>>>><br>
>>>>>>>>>> kind regards<br>
>>>>>>>>>> Stefan Schmitz<br>
>>>>>>>>>><br>
>>>>>>>>>><br>
>>>>>>>>>>> On 14 July 2020 at 11:06:42 GMT+03:00,<br>
>>>>>>>>>>> "<a href="mailto:stefan.schmitz@farmpartner-tec.com" target="_blank">stefan.schmitz@farmpartner-tec.com</a>"<br>
>>>>>>>>>>> <<a href="mailto:stefan.schmitz@farmpartner-tec.com" target="_blank">stefan.schmitz@farmpartner-tec.com</a>> wrote:<br>
>>>>>>>>>>>> Hello,<br>
>>>>>>>>>>>><br>
>>>>>>>>>>>><br>
>>>>>>>>>>>> On 09.07.2020 at 19:10, Strahil Nikolov wrote:<br>
>>>>>>>>>>>>> Have you run 'fence_virtd -c' ?<br>
>>>>>>>>>>>> Yes, I had run that on both hosts. The current config looks like<br>
>>>>>>>>>>>> this and is identical on both.<br>
>>>>>>>>>>>><br>
>>>>>>>>>>>> cat fence_virt.conf<br>
>>>>>>>>>>>> fence_virtd {<br>
>>>>>>>>>>>> listener = "multicast";<br>
>>>>>>>>>>>> backend = "libvirt";<br>
>>>>>>>>>>>> module_path = "/usr/lib64/fence-virt";<br>
>>>>>>>>>>>> }<br>
>>>>>>>>>>>><br>
>>>>>>>>>>>> listeners {<br>
>>>>>>>>>>>> multicast {<br>
>>>>>>>>>>>>         key_file = "/etc/cluster/fence_xvm.key";<br>
>>>>>>>>>>>> address = "225.0.0.12";<br>
>>>>>>>>>>>> interface = "bond0";<br>
>>>>>>>>>>>> family = "ipv4";<br>
>>>>>>>>>>>> port = "1229";<br>
>>>>>>>>>>>> }<br>
>>>>>>>>>>>><br>
>>>>>>>>>>>> }<br>
>>>>>>>>>>>><br>
>>>>>>>>>>>> backends {<br>
>>>>>>>>>>>> libvirt {<br>
>>>>>>>>>>>> uri = "qemu:///system";<br>
>>>>>>>>>>>> }<br>
>>>>>>>>>>>><br>
>>>>>>>>>>>> }<br>
>>>>>>>>>>>><br>
>>>>>>>>>>>><br>
>>>>>>>>>>>> The situation is still that no matter on which host I issue the<br>
>>>>>>>>>>>> "fence_xvm -a 225.0.0.12 -o list" command, both guest systems receive<br>
>>>>>>>>>>>> the traffic. The local guest, but also the guest on the other host. I<br>
>>>>>>>>>>>> reckon that means the traffic is not filtered by any network device,<br>
>>>>>>>>>>>> like switches or firewalls. Since the guest on the other host receives<br>
>>>>>>>>>>>> the packets, the traffic must reach the physical server and its network<br>
>>>>>>>>>>>> device and is then routed to the VM on that host.<br>
>>>>>>>>>>>> But still, the traffic is not shown on the host itself.<br>
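One way to see what the host itself receives, instead of inferring it from the guests - a debugging sketch, with the interfaces, group, and port taken from the fence_virt.conf quoted above:<br>
<br>
~~~<br>
tcpdump -i bond0 -n 'udp and dst host 225.0.0.12 and dst port 1229'<br>
tcpdump -i br0   -n 'ip proto igmp'      # watch IGMP joins/queries on the bridge as well<br>
~~~<br>
<br>
If the packets show up here while fence_xvm -o list is run from the other side but fence_virtd still does not answer, the problem is local to the host (interface binding, key file, firewall) rather than in the network path.<br>
<br>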
>>>>>>>>>>>><br>
>>>>>>>>>>>> Further, the local firewalls on both hosts are set to let any and all<br>
>>>>>>>>>>>> traffic pass - ACCEPT for anything and everything. Well, at least as<br>
>>>>>>>>>>>> far as I can see.<br>
>>>>>>>>>>>><br>
>>>>>>>>>>>><br>
>>>>>>>>>>>> On 09.07.2020 at 22:34, Klaus Wenninger wrote:<br>
>>>>>>>>>>>>> makes me believe that<br>
>>>>>>>>>>>>> the whole setup doesn't look as I would have<br>
>>>>>>>>>>>>> expected (bridges on each host where the guest<br>
>>>>>>>>>>>>> has a connection to and where ethernet interfaces<br>
>>>>>>>>>>>>> that connect the 2 hosts are part of as well<br>
>>>>>>>>>>>> On each physical server the network cards are bonded to achieve<br>
>>>>>>>>>>>> redundancy (bond0). The guests are connected over a bridge (br0), but<br>
>>>>>>>>>>>> apparently our virtualization software creates its own device named<br>
>>>>>>>>>>>> after the guest (kvm101.0).<br>
>>>>>>>>>>>> There is no direct connection between the servers, but as I<br>
>said<br>
>>>>>>>>>>>> earlier, the multicast traffic does reach the VMs so I<br>
>assume<br>
>>>>>>> there<br>
>>>>>>>>> is<br>
>>>>>>>>>>>> no problem with that.<br>
>>>>>>>>>>>><br>
>>>>>>>>>>>><br>
>>>>>>>>>>>> On 09.07.2020 at 20:18, Vladislav Bogdanov wrote:<br>
>>>>>>>>>>>>> First, you need to ensure that your switch (or all<br>
>switches in<br>
>>>>>>> the<br>
>>>>>>>>>>>>> path) have igmp snooping enabled on host ports (and<br>
>probably<br>
>>>>>>>>>>>>> interconnects along the path between your hosts).<br>
>>>>>>>>>>>>><br>
>>>>>>>>>>>>> Second, you need an igmp querier to be enabled somewhere<br>
>near<br>
>>>>>>>>> (better<br>
>>>>>>>>>>>>> to have it enabled on a switch itself). Please verify that<br>
>you<br>
>>>>>>> see<br>
>>>>>>>>>>>> its<br>
>>>>>>>>>>>>> queries on hosts.<br>
>>>>>>>>>>>>><br>
>>>>>>>>>>>>> Next, you probably need to make your hosts use IGMPv2 (not 3), as<br>
>>>>>>>>>>>>> many switches still cannot understand v3. This is doable via sysctl;<br>
>>>>>>>>>>>>> there are many articles on the internet.<br>
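Complementary to the switch-side checks, the Linux bridge on each host has its own IGMP snooping and querier settings exposed in sysfs (br0 is the bridge name from this thread):<br>
<br>
~~~<br>
cat /sys/class/net/br0/bridge/multicast_snooping       # 1 = the bridge snoops IGMP itself<br>
cat /sys/class/net/br0/bridge/multicast_querier        # usually 0 = the bridge does not act as querier<br>
echo 1 > /sys/class/net/br0/bridge/multicast_querier   # optionally make it a querier if nothing else is<br>
~~~<br>
<br>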
>>>>>>>>>>>><br>
>>>>>>>>>>>> I have sent a query to our data center techs, who are already<br>
>>>>>>>>>>>> analyzing whether multicast traffic is blocked or hindered somewhere.<br>
>>>>>>>>>>>> So far the answer is, "multicast is explicitly allowed in the local<br>
>>>>>>>>>>>> network and no packets are filtered or dropped". I am still waiting<br>
>>>>>>>>>>>> for a final report though.<br>
>>>>>>>>>>>><br>
>>>>>>>>>>>> In the meantime I have switched from IGMPv3 to IGMPv2 on every<br>
>>>>>>>>>>>> involved server, hosts and guests, via the mentioned sysctl. The<br>
>>>>>>>>>>>> switching itself was successful according to "cat /proc/net/igmp",<br>
>>>>>>>>>>>> but sadly it did not improve the behavior. It actually led to no VM<br>
>>>>>>>>>>>> receiving the multicast traffic anymore at all.<br>
>>>>>>>>>>>><br>
>>>>>>>>>>>> kind regards<br>
>>>>>>>>>>>> Stefan Schmitz<br>
>>>>>>>>>>>><br>
>>>>>>>>>>>><br>
>>>>>>>>>>>> On 09.07.2020 at 22:34, Klaus Wenninger wrote:<br>
>>>>>>>>>>>>> On 7/9/20 5:17 PM, <a href="mailto:stefan.schmitz@farmpartner-tec.com" target="_blank">stefan.schmitz@farmpartner-tec.com</a><br>
>wrote:<br>
>>>>>>>>>>>>>> Hello,<br>
>>>>>>>>>>>>>><br>
>>>>>>>>>>>>>>> Well, theory still holds I would say.<br>
>>>>>>>>>>>>>>><br>
>>>>>>>>>>>>>>> I guess that the multicast-traffic from the other host<br>
>>>>>>>>>>>>>> or the guests doesn't get to the daemon on the host.<br>
>>>>>>>>>>>>>> Can't you just simply check if there are any firewall<br>
>>>>>>>>>>>>>> rules configured on the host kernel?<br>
>>>>>>>>>>>>> I hope I did understand you correctly and you are<br>
>referring to<br>
>>>>>>>>>>>> iptables?<br>
>>>>>>>>>>>>> I didn't say iptables because it might have been<br>
>>>>>>>>>>>>> nftables - but yes, that is what I was referring to.<br>
>>>>>>>>>>>>> Guess to understand the config the output is<br>
>>>>>>>>>>>>> lacking verbosity, but it makes me believe that<br>
>>>>>>>>>>>>> the whole setup doesn't look as I would have<br>
>>>>>>>>>>>>> expected (bridges on each host where the guest<br>
>>>>>>>>>>>>> has a connection to and where ethernet interfaces<br>
>>>>>>>>>>>>> that connect the 2 hosts are part of as well -<br>
>>>>>>>>>>>>> everything connected via layer 2 basically).<br>
>>>>>>>>>>>>>> Here is the output of the current rules. Besides the IP<br>
>of the<br>
>>>>>>>>> guest<br>
>>>>>>>>>>>>>> the output is identical on both hosts:<br>
>>>>>>>>>>>>>><br>
>>>>>>>>>>>>>> # iptables -S<br>
>>>>>>>>>>>>>> -P INPUT ACCEPT<br>
>>>>>>>>>>>>>> -P FORWARD ACCEPT<br>
>>>>>>>>>>>>>> -P OUTPUT ACCEPT<br>
>>>>>>>>>>>>>><br>
>>>>>>>>>>>>>> # iptables -L<br>
>>>>>>>>>>>>>> Chain INPUT (policy ACCEPT)<br>
>>>>>>>>>>>>>> target prot opt source destination<br>
>>>>>>>>>>>>>><br>
>>>>>>>>>>>>>> Chain FORWARD (policy ACCEPT)<br>
>>>>>>>>>>>>>> target prot opt source destination<br>
>>>>>>>>>>>>>> SOLUSVM_TRAFFIC_IN   all  --  anywhere             anywhere<br>
>>>>>>>>>>>>>> SOLUSVM_TRAFFIC_OUT  all  --  anywhere             anywhere<br>
>>>>>>>>>>>>>><br>
>>>>>>>>>>>>>> Chain OUTPUT (policy ACCEPT)<br>
>>>>>>>>>>>>>> target prot opt source destination<br>
>>>>>>>>>>>>>><br>
>>>>>>>>>>>>>> Chain SOLUSVM_TRAFFIC_IN (1 references)<br>
>>>>>>>>>>>>>> target prot opt source destination<br>
>>>>>>>>>>>>>>                      all  --  anywhere             192.168.1.14<br>
>>>>>>>>>>>>>><br>
>>>>>>>>>>>>>> Chain SOLUSVM_TRAFFIC_OUT (1 references)<br>
>>>>>>>>>>>>>> target prot opt source destination<br>
>>>>>>>>>>>>>> all -- 192.168.1.14 anywhere<br>
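Since all policies are ACCEPT, these rules are unlikely to drop anything, but to rule out nftables (which Klaus mentioned) and to see whether the multicast packets are counted at all, a generic check such as the following can help:<br>
<br>
~~~<br>
nft list ruleset     # shows nftables rules that iptables -L would not<br>
iptables -L -n -v    # the packet/byte counters show whether traffic actually hits these chains<br>
~~~<br>
<br>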
>>>>>>>>>>>>>><br>
>>>>>>>>>>>>>> kind regards<br>
>>>>>>>>>>>>>> Stefan Schmitz<br>
>>>>>>>>>>>>>><br>
>>>>>>>>>>>>>><br>
>>>>><br>
>>>><br>
>>><br>
_______________________________________________<br>
Manage your subscription:<br>
<a href="https://lists.clusterlabs.org/mailman/listinfo/users" rel="noreferrer" target="_blank">https://lists.clusterlabs.org/mailman/listinfo/users</a><br>
<br>
ClusterLabs home: <a href="https://www.clusterlabs.org/" rel="noreferrer" target="_blank">https://www.clusterlabs.org/</a><br>
</blockquote></div><br clear="all"><br>-- <br><div dir="ltr" class="gmail_signature"><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div><div>Regards,<br><br></div>Reid Wahl, RHCA<br></div><div>Software Maintenance Engineer, Red Hat<br></div>CEE - Platform Support Delivery - ClusterHA</div></div></div></div></div></div></div></div></div></div></div></div></div></div></div>