<div dir="ltr">Thank you, the group approach works fine.</div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">Fri, Mar 18, 2022 at 12:07, Reid Wahl <<a href="mailto:nwahl@redhat.com">nwahl@redhat.com</a>>:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">On Thu, Mar 17, 2022 at 9:20 AM ZZ Wave <<a href="mailto:zzwave@gmail.com" target="_blank">zzwave@gmail.com</a>> wrote:<br>
><br>
> Thank you for the idea about a bug in resource script.<br>
><br>
> ...<br>
> NETWORK=`$IP2UTIL route list dev $INTERFACE scope link $PROTO match $ipaddress|grep -m 1 -o '^[^ ]*'`<br>
> ...<br>
><br>
> $NETWORK was unexpectedly empty when the bug occurred; something was wrong with the $PROTO variable. The command above returns the correct route without it, so I removed it. Now it works like a charm. Maybe it's something Debian 10-specific.<br>
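For what it's worth, the extraction itself just takes the first whitespace-delimited field of the first matching route line; a minimal runnable sketch against a hypothetical sample of `ip route` output (the interface and addresses are assumptions based on this thread):

```shell
# Hypothetical output of: ip route list dev eth0 scope link match 192.168.80.23
route_line="192.168.80.0/24 proto kernel scope link src 192.168.80.21"

# Same extraction the agent performs: first field of the first matching line
NETWORK=$(printf '%s\n' "$route_line" | grep -m 1 -o '^[^ ]*')
echo "$NETWORK"   # prints 192.168.80.0/24
```

If the `proto` match filters out every line, the pipeline produces an empty $NETWORK, which matches the symptom described above.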
<br>
There have been some recent fixes upstream.<br>
- <a href="https://github.com/ClusterLabs/resource-agents/commit/50a596bf" rel="noreferrer" target="_blank">https://github.com/ClusterLabs/resource-agents/commit/50a596bf</a><br>
<br>
><br>
> Thu, Mar 17, 2022 at 17:46, Andrei Borzenkov <<a href="mailto:arvidjaar@gmail.com" target="_blank">arvidjaar@gmail.com</a>>:<br>
>><br>
>> On 17.03.2022 14:14, ZZ Wave wrote:<br>
>> >> Define "network connectivity to node2".<br>
>> ><br>
>> > pacemaker instances can reach each other, I think.<br>
>><br>
>> This is called split brain; the only way to resolve it is fencing.<br>
>><br>
>> > In case of connectivity<br>
>> > loss (turn off network interface manually, disconnect eth cable etc), it<br>
>> > should turn off virtsrc and then virtip on active node, turn virtip on and<br>
>> > then virtsrc on second node, and vice-versa. IPaddr2 alone works fine this<br>
>> > way "out of a box", but IPsrcaddr doesn't :(<br>
>> ><br>
>><br>
>> According to the scarce logs you provided, the stop request for the<br>
>> IPsrcaddr resource failed, which is fatal. You do not use fencing, so<br>
>> Pacemaker blocks any further change of resource state.<br>
>><br>
>> I cannot say whether this is a resource agent bug or whether the agent<br>
>> legitimately cannot perform the stop action. Personally, I would argue<br>
>> that if the corresponding routing entry is not present, the resource is<br>
>> already stopped, so failing the stop request because no route entry was<br>
>> found sounds like a bug.<br>
>><br>
>> > Is my setup correct for this anyway?<br>
>><br>
>> You need to define "this". Your definition of "network connectivity"<br>
>> ("pacemaker instances can reach each other") does not match what you<br>
>> describe later. Most likely you want failover if the current node loses<br>
>> some *external* connectivity.<br>
>><br>
>> > Howtos and google give me only "just<br>
>> > add both resources to group or to colocation+order and that's all", but it<br>
>> > definitely doesn't work the way I expect.<br>
>> ><br>
>><br>
>> So your expectations are wrong. You need to define more precisely what<br>
>> network connectivity means in your case and how you check for it.<br>
>><br>
>> >> What are static IPs?<br>
>> ><br>
>> > node1 <a href="http://192.168.80.21/24" rel="noreferrer" target="_blank">192.168.80.21/24</a><br>
>> > node2 <a href="http://192.168.80.22/24" rel="noreferrer" target="_blank">192.168.80.22/24</a><br>
>> > floating <a href="http://192.168.80.23/24" rel="noreferrer" target="_blank">192.168.80.23/24</a><br>
>> > gw 192.168.80.1<br>
>> ><br>
>><br>
>> I did not ask for IP addresses. I asked you to explain what<br>
>> "static IP" means to you and how it is different from "floating IP".<br>
>><br>
>> >> I do not see anything wrong here.<br>
>> ><br>
>> > Let me explain. After the initial setup, virtip and virtsrc successfully apply<br>
>> > on node1: both the .23 alias and the default-route src are present. After a network<br>
>> > failure, there is NO default route at all on either node, and IPsrcaddr<br>
>> > fails, as it requires a default route.<br>
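(For context: IPsrcaddr works by rewriting the src hint on the default route, roughly like the sketch below. The gateway, interface name, and addresses are assumptions taken from this thread, and this is a simplification, not the agent's actual code. With no default route present, there is nothing for it to modify, hence the failure.)

```shell
# Simplified sketch of what IPsrcaddr does (requires root; eth0 is assumed):
# start: use the floating IP as the source address for outgoing traffic
ip route replace default via 192.168.80.1 dev eth0 src 192.168.80.23
# stop: restore the node's static address as the source
ip route replace default via 192.168.80.1 dev eth0 src 192.168.80.21
```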
>> ><br>
>><br>
>> I already explained above why IPsrcaddr was not migrated.<br>
>><br>
>> ><br>
>> > Wed, Mar 16, 2022 at 19:23, Andrei Borzenkov <<a href="mailto:arvidjaar@gmail.com" target="_blank">arvidjaar@gmail.com</a>>:<br>
>> ><br>
>> >> On 16.03.2022 12:24, ZZ Wave wrote:<br>
>> >>> Hello. I'm trying to implement a floating IP with Pacemaker, but I can't<br>
>> >>> get IPsrcaddr to work correctly. I want the following: the floating<br>
>> >>> IP and its route src start on node1. If node1 loses network<br>
>> >>> connectivity to node2, node1 should instantly remove the floating IP and<br>
>> >>> restore the default route,<br>
>> >><br>
>> >> Define "network connectivity to node2".<br>
>> >><br>
>> >>> and node2 then brings these up, and vice versa when node1 returns.<br>
>> >>> The static IPs should remain intact in any case.<br>
>> >>><br>
>> >><br>
>> >> What are static IPs?<br>
>> >><br>
>> >>> What I've done:<br>
>> >>><br>
>> >>> pcs host auth node1 node2<br>
>> >>> pcs cluster setup my_cluster node1 node2 --force<br>
>> >>> pcs cluster enable node1 node2<br>
>> >>> pcs cluster start node1 node2<br>
>> >>> pcs property set stonith-enabled=false<br>
>> >>> pcs property set no-quorum-policy=ignore<br>
>> >>> pcs resource create virtip ocf:heartbeat:IPaddr2 ip=192.168.80.23<br>
>> >>> cidr_netmask=24 op monitor interval=30s<br>
>> >>> pcs resource create virtsrc ocf:heartbeat:IPsrcaddr<br>
>> >>> ipaddress=192.168.80.23 cidr_netmask=24 op monitor interval=30<br>
>> >>> pcs constraint colocation add virtip with virtsrc<br>
>> >>> pcs constraint order virtip then virtsrc<br>
>> >>><br>
>> >>> It sets the IP and src correctly on node1 once after this setup, but<br>
>> >>> in case of failover to node2, havoc ensues -<br>
<br>
Your colocation constraint should be "virtsrc with virtip", not<br>
"virtip with virtsrc". virtsrc depends on virtip, not vice-versa.<br>
<br>
It would be easier to put the resources in a group (with virtip first<br>
and virtsrc second) instead of using constraints.<br>
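For example (a sketch using the resource names from this thread; the group name virtgrp is arbitrary):

```shell
# Members of a group start in listed order (virtip first, then virtsrc),
# stop in reverse order, and are implicitly colocated on the same node.
pcs resource group add virtgrp virtip virtsrc
```

With the group in place, the separate ordering and colocation constraints become unnecessary.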
<br>
>> >><br>
>> >> "Havoc" is not a useful technical description. Explain what is wrong.<br>
>> >><br>
>> >>> <a href="https://pastebin.com/GZMtG480" rel="noreferrer" target="_blank">https://pastebin.com/GZMtG480</a><br>
>> >>><br>
>> >>> What's wrong?<br>
>> >><br>
>> >> You tell us. I do not see anything wrong here.<br>
>> >><br>
>> >>> Help me please :)<br>
>> >>><br>
>> >>><br>
>> >>> _______________________________________________<br>
>> >>> Manage your subscription:<br>
>> >>> <a href="https://lists.clusterlabs.org/mailman/listinfo/users" rel="noreferrer" target="_blank">https://lists.clusterlabs.org/mailman/listinfo/users</a><br>
>> >>><br>
>> >>> ClusterLabs home: <a href="https://www.clusterlabs.org/" rel="noreferrer" target="_blank">https://www.clusterlabs.org/</a><br>
>> >><br>
>> ><br>
>> ><br>
>><br>
><br>
<br>
<br>
<br>
-- <br>
Regards,<br>
<br>
Reid Wahl (He/Him), RHCA<br>
Senior Software Maintenance Engineer, Red Hat<br>
CEE - Platform Support Delivery - ClusterHA<br>
<br>
</blockquote></div>