[Pacemaker] fence_xvm / fence_virtd problem

Digimer lists at alteeve.ca
Sat Jun 15 17:26:57 UTC 2013


Ah, I think it's a problem with the firewall rules on the host, though I'm 
not sure yet how to fix it. The current rules are below, followed by my 
guess at what might need to be opened.

lemass:/home/digimer# iptables-save
# Generated by iptables-save v1.4.16.2 on Sat Jun 15 13:26:33 2013
*nat
:PREROUTING ACCEPT [246583:89552160]
:INPUT ACCEPT [2335:362026]
:OUTPUT ACCEPT [11740:741351]
:POSTROUTING ACCEPT [11706:738225]
-A POSTROUTING -s 192.168.122.0/24 ! -d 192.168.122.0/24 -p tcp -j MASQUERADE --to-ports 1024-65535
-A POSTROUTING -s 192.168.122.0/24 ! -d 192.168.122.0/24 -p udp -j MASQUERADE --to-ports 1024-65535
-A POSTROUTING -s 192.168.122.0/24 ! -d 192.168.122.0/24 -j MASQUERADE
COMMIT
# Completed on Sat Jun 15 13:26:33 2013
# Generated by iptables-save v1.4.16.2 on Sat Jun 15 13:26:33 2013
*mangle
:PREROUTING ACCEPT [3250861:2486027770]
:INPUT ACCEPT [2557761:1301267981]
:FORWARD ACCEPT [444644:1094901100]
:OUTPUT ACCEPT [1919457:2636518995]
:POSTROUTING ACCEPT [2364615:3731498365]
-A POSTROUTING -o virbr0 -p udp -m udp --dport 68 -j CHECKSUM --checksum-fill
COMMIT
# Completed on Sat Jun 15 13:26:33 2013
# Generated by iptables-save v1.4.16.2 on Sat Jun 15 13:26:33 2013
*filter
:INPUT ACCEPT [2557761:1301267981]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [1919457:2636518995]
-A INPUT -i virbr0 -p udp -m udp --dport 53 -j ACCEPT
-A INPUT -i virbr0 -p tcp -m tcp --dport 53 -j ACCEPT
-A INPUT -i virbr0 -p udp -m udp --dport 67 -j ACCEPT
-A INPUT -i virbr0 -p tcp -m tcp --dport 67 -j ACCEPT
-A FORWARD -d 192.168.122.0/24 -o virbr0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -s 192.168.122.0/24 -i virbr0 -j ACCEPT
-A FORWARD -i virbr0 -o virbr0 -j ACCEPT
-A FORWARD -o virbr0 -j REJECT --reject-with icmp-port-unreachable
-A FORWARD -i virbr0 -j REJECT --reject-with icmp-port-unreachable
COMMIT
# Completed on Sat Jun 15 13:26:33 2013
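
If I'm reading the fence_virt docs right, the multicast listener works in 
two steps: the guest sends the request to the multicast address on UDP 
port 1229, and fence_virtd then connects back to the guest over TCP on 
the same port to authenticate and return the result. So my guess 
(untested) is that rules along these lines are what fence_virt needs:

# On the host; 1229/udp matches the port in fence_virt.conf
iptables -I INPUT -i virbr0 -p udp -m udp --dport 1229 -j ACCEPT

# On each guest (pcmk1/pcmk2); fence_virtd connects back over TCP
iptables -I INPUT -p tcp -m tcp --dport 1229 -j ACCEPT

It might also be worth confirming the bridge isn't filtering multicast:

lemass:/home/digimer# cat /sys/class/net/virbr0/bridge/multicast_snooping

A "0" there means snooping is off and multicast is flooded to all ports.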

digimer

On 06/15/2013 01:09 PM, Digimer wrote:
> Hi all,
>
> I'm trying to play with Pacemaker on Fedora 19 (pre-release) and I am
> having trouble getting the guests to talk to the host.
>
> From the host, I can run:
>
> lemass:/home/digimer# fence_xvm -o list
> pcmk1                83f6abdc-bb48-d794-4aca-13f091f32c8b on
> pcmk2                2d778455-de7d-a9fa-994c-69d7b079fda8 on
>
> I can fence the guests from the host as well. However, I cannot get the
> list (or fence) from the guests:
>
> [root at pcmk1 ~]# fence_xvm -o list
> Timed out waiting for response
> Operation failed
>
> I suspect a multicast issue, but so far as I can tell, multicast is
> enabled on the bridge:
>
> lemass:/home/digimer# ifconfig
> virbr0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
>          inet 192.168.122.1  netmask 255.255.255.0  broadcast 192.168.122.255
>          ether 52:54:00:da:90:a1  txqueuelen 0  (Ethernet)
>          RX packets 103858  bytes 8514464 (8.1 MiB)
>          RX errors 0  dropped 0  overruns 0  frame 0
>          TX packets 151988  bytes 177742562 (169.5 MiB)
>          TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
>
> vnet0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
>          inet6 fe80::fc54:ff:feed:3701  prefixlen 64  scopeid 0x20<link>
>          ether fe:54:00:ed:37:01  txqueuelen 500  (Ethernet)
>          RX packets 212828  bytes 880551892 (839.7 MiB)
>          RX errors 0  dropped 0  overruns 0  frame 0
>          TX packets 225430  bytes 182955760 (174.4 MiB)
>          TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
>
> vnet1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
>          inet6 fe80::fc54:ff:fe45:e9ae  prefixlen 64  scopeid 0x20<link>
>          ether fe:54:00:45:e9:ae  txqueuelen 500  (Ethernet)
>          RX packets 4840  bytes 587902 (574.1 KiB)
>          RX errors 0  dropped 0  overruns 0  frame 0
>          TX packets 7495  bytes 899578 (878.4 KiB)
>          TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
>
> I tried specifying the mcast address and port without success.
>
> The host's config is:
>
> lemass:/home/digimer# cat /etc/fence_virt.conf
> backends {
>      libvirt {
>          uri = "qemu:///system";
>      }
>
> }
>
> listeners {
>      multicast {
>          port = "1229";
>          family = "ipv4";
>          interface = "virbr0";
>          address = "239.192.214.190";
>          key_file = "/etc/cluster/fence_xvm.key";
>      }
>
> }
>
> fence_virtd {
>      module_path = "/usr/lib64/fence-virt";
>      backend = "libvirt";
>      listener = "multicast";
> }
>
> The cluster forms and corosync is using multicast, so I am not sure if
> mcast really is the problem.
>
> Any tips/help?
>
> Thanks!
>


-- 
Digimer
Papers and Projects: https://alteeve.ca/w/
What if the cure for cancer is trapped in the mind of a person without 
access to education?



