[ClusterLabs] Clustered LVM with iptables issue

Digimer <lists@alteeve.ca>
Thu Sep 10 19:43:34 EDT 2015


For the record:

  Noel helped me on IRC. The problem was that SCTP, which DLM is using
for communications here, was not allowed through the firewall. The
clue was:

====
[root@node1 ~]# /etc/init.d/clvmd start
Starting clvmd:
Activating VG(s):                                          [  OK  ]
====

====] syslog
Sep 10 23:30:47 node1 kernel: ip_tables: (C) 2000-2006 Netfilter Core Team
Sep 10 23:30:47 node1 kernel: nf_conntrack version 0.5.0 (16384 buckets, 65536 max)
*** Sep 10 23:31:02 node1 kernel: dlm: Using SCTP for communications
Sep 10 23:31:03 node1 clvmd: Cluster LVM daemon started - connected to CMAN
====

====
[root@node2 ~]# /etc/init.d/clvmd start
Starting clvmd: clvmd startup timed out
====

====] syslog
Sep 10 23:31:03 node2 kernel: dlm: Using SCTP for communications
Sep 10 23:31:05 node2 corosync[3001]:   [TOTEM ] Incrementing problem counter for seqid 5644 iface 10.20.10.2 to [1 of 3]
Sep 10 23:31:07 node2 corosync[3001]:   [TOTEM ] ring 0 active with no faults
====
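
A quick way to see a drop like this on the wire (a sketch; the cluster
interface name, eth1 here, is an assumption):

====
# SCTP INIT chunks arriving and being retransmitted with no answer
# are a strong hint that the firewall is eating them.
tcpdump -ni eth1 sctp

# The packet counters on the INPUT chain tell a similar story; watch
# for drops piling up on the final REJECT/DROP rule.
iptables -nvL INPUT
====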

Adding:

====
iptables -I INPUT -p sctp -j ACCEPT
====

got it working. Obviously, that rule accepts all SCTP from anywhere
and needs to be tightened up.
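
For example, DLM's default port is 21064 and the cluster traffic in
the logs above is on 10.20.10.0/24, so something along these lines
should be close (a sketch, untested):

====
# Accept SCTP only from the cluster subnet, and only to DLM's
# default port, instead of accepting all SCTP from anywhere.
iptables -I INPUT -p sctp -m sctp -s 10.20.10.0/24 --dport 21064 -j ACCEPT
====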

digimer

On 10/09/15 07:01 PM, Digimer wrote:
> On 10/09/15 06:54 PM, Noel Kuntze wrote:
>>
>> Hello Digimer,
>>
>> I initially assumed you were familiar with ss or netstat and simply
>> forgot about them.
>> Seems I was wrong.
>>
>> Check the output of this: `ss -tpn` and `ss -upn`.
>> Those commands give you the current open TCP and UDP connections,
>> as well as the program that opened the connection.
>> Check listening sockets with `ss -tpnl` and `ss -upnl`
> 
> I'm not so strong on the network side of things, so I am not very
> familiar with ss or netstat.
> 
> I have clvmd running:
> 
> ====
> [root@node1 ~]# /etc/init.d/clvmd status
> clvmd (pid  3495) is running...
> Clustered Volume Groups: (none)
> Active clustered Logical Volumes: (none)
> ====
> 
> Though I don't seem to see anything:
> 
> ====
> [root@node1 ~]# ss -tpnl
> State   Recv-Q Send-Q  Local Address:Port   Peer Address:Port
> LISTEN  0      5            :::11111             :::*       users:(("ricci",2482,3))
> LISTEN  0      128     127.0.0.1:199              *:*       users:(("snmpd",2020,8))
> LISTEN  0      128          :::111               :::*       users:(("rpcbind",1763,11))
> LISTEN  0      128           *:111                *:*       users:(("rpcbind",1763,8))
> LISTEN  0      128           *:48976              *:*       users:(("rpc.statd",1785,8))
> LISTEN  0      5            :::16851             :::*       users:(("modclusterd",2371,5))
> LISTEN  0      128          :::55476             :::*       users:(("rpc.statd",1785,10))
> LISTEN  0      128          :::22                :::*       users:(("sshd",2037,4))
> LISTEN  0      128           *:22                 *:*       users:(("sshd",2037,3))
> LISTEN  0      100         ::1:25                :::*       users:(("master",2142,13))
> LISTEN  0      100     127.0.0.1:25               *:*       users:(("master",2142,12))
> ====
> 
> ====
> [root@node1 ~]# ss -tpn
> State  Recv-Q Send-Q      Local Address:Port      Peer Address:Port
> ESTAB  0      0          192.168.122.10:22        192.168.122.1:53935  users:(("sshd",2636,3))
> ESTAB  0      0          192.168.122.10:22        192.168.122.1:53934  users:(("sshd",2613,3))
> ESTAB  0      0              10.10.10.1:48985        10.10.10.2:7788
> ESTAB  0      0              10.10.10.1:7788         10.10.10.2:51681
> ESTAB  0      0       ::ffff:10.20.10.1:16851 ::ffff:10.20.10.2:43553  users:(("modclusterd",2371,6))
> ====
> 
> ====
> [root@node1 ~]# ss -upn
> State  Recv-Q Send-Q  Local Address:Port  Peer Address:Port
> ====
> 
> I ran all three again and redirected the output to files, stopped
> clvmd, then re-ran the three commands into a second set of files.
> Diffing the two sets showed nothing of interest:
> 
> ====
> [root@node1 ~]# /etc/init.d/clvmd status
> clvmd (pid  3495) is running...
> Clustered Volume Groups: (none)
> Active clustered Logical Volumes: (none)
> ====
> 
> ====
> [root@node1 ~]# ss -tpnl > tpnl.on
> [root@node1 ~]# ss -tpn > tpn.on
> [root@node1 ~]# ss -upn > upn.on
> ====
> 
> ====
> [root@node1 ~]# /etc/init.d/clvmd stop
> Signaling clvmd to exit                                    [  OK  ]
> clvmd terminated                                           [  OK  ]
> ====
> 
> ====
> [root@node1 ~]# ss -tpnl > tpnl.off
> [root@node1 ~]# ss -tpn > tpn.off
> [root@node1 ~]# ss -upn > upn.off
> [root@node1 ~]# diff -U0 tpnl.on tpnl.off
> [root@node1 ~]# diff -U0 tpn.on tpn.off
> [root@node1 ~]# diff -U0 upn.on upn.off
> ====
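>
> One gap in that check: `ss -t` and `ss -u` only list TCP and UDP
> sockets, so a kernel-side SCTP socket (which, per the syslog above,
> is what DLM is using) can never appear in those diffs. If the sctp
> module is loaded, its sockets can be listed from /proc instead:
>
> ====
> cat /proc/net/sctp/eps     # SCTP endpoints (listeners)
> cat /proc/net/sctp/assocs  # SCTP associations (connections)
> ====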
> 
> I'm reading up on 'multiport' now and will adjust my iptables. It does
> look a lot cleaner.
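>
> For instance (a sketch; the TCP ports are taken from the listening
> sockets above, the UDP ports are corosync's defaults, and the SCTP
> rule above stays separate):
>
> ====
> # One rule per protocol instead of one rule per port.
> iptables -A INPUT -s 10.20.10.0/24 -p tcp -m multiport --dports 11111,16851 -j ACCEPT
> iptables -A INPUT -s 10.20.10.0/24 -p udp -m multiport --dports 5404,5405 -j ACCEPT
> ====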
> 


-- 
Digimer
Papers and Projects: https://alteeve.ca/w/
What if the cure for cancer is trapped in the mind of a person without
access to education?



