<div dir="ltr">If you have two copy of the clone in the same, it cannot work, because is like to have a dupplicate ip in the same node, because you are using clone-node-max="2" </div><div class="gmail_extra"><br><div class="gmail_quote">2017-09-05 16:15 GMT+02:00 Octavian Ciobanu <span dir="ltr"><<a href="mailto:coctavian1979@gmail.com" target="_blank">coctavian1979@gmail.com</a>></span>:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div><div>Based on ocf:heartbeat:IPaddr2 man page it can be used without an static IP address if the kernel has net.ipv4.conf.all.promote_<wbr>secondaries=1. <br><br>"There must be at least one static IP address, which is not managed by
the cluster, assigned to the network interface.
If you can not assign any static IP address on the interface,
modify this kernel parameter:
sysctl -w net.ipv4.conf.all.promote_<wbr>secondaries=1
(or per device)"<br><br>This kernel parameter is set by default in CentOS 7.3.<br><br></div>With clone-node-max="1" it works as it should be but with clone-node-max="2" both instances of VIP are started on the same node even if the other node is online. <br><br>Pacemaker 1.1 Cluster from Scratch say that <br>"<code class="m_-5422537750430360717gmail-literal">clone-node-max=2</code> says that one node can run
With clone-node-max="1" it works as it should, but with clone-node-max="2" both instances of the VIP are started on the same node even though the other node is online.

"Pacemaker 1.1: Clusters from Scratch" says:

"clone-node-max=2 says that one node can run
up to 2 instances of the clone. This should also equal the number of
nodes that can host the IP, so that if any node goes down, another node
can take over the failed node’s "request bucket". Otherwise, requests
intended for the failed node would be discarded."

To get that behaviour, must I have a static IP set on the interfaces?

On Tue, Sep 5, 2017 at 4:54 PM, emmanuel segura <emi2fast@gmail.com> wrote:

I have never tried to set a virtual IP on an interface that has no IP, because the VIP is a secondary IP that switches between nodes, not a primary IP.

2017-09-05 15:41 GMT+02:00 Octavian Ciobanu <coctavian1979@gmail.com>:

Hello all,

I've encountered an issue with IP cloning.

Following "Pacemaker 1.1: Clusters from Scratch" I've set up a test configuration with two nodes running CentOS 7.3. Each node has two Ethernet cards: one for cluster communication on a private IP network, and a second for public access to the services. The public interface has no IP assigned at boot.

I've created a cloned IP resource with the following command:

    pcs resource create ClusterIP ocf:heartbeat:IPaddr2 \
        params nic="ens192" ip="xxx.yyy.zzz.www" cidr_netmask="24" clusterip_hash="sourceip" \
        op start interval="0" timeout="20" \
        op stop interval="0" timeout="20" \
        op monitor interval="10" timeout="20" \
        meta resource-stickiness=0 \
        clone meta clone-max="2" clone-node-max="2" interleave="true" globally-unique="true"

xxx.yyy.zzz.www is a public IP, not a private one.

With the above command the IP clone is created, but it is started only on one node. This is the output of pcs status:

    Clone Set: ClusterIP-clone [ClusterIP] (unique)
        ClusterIP:0  (ocf::heartbeat:IPaddr2):  Started node02
        ClusterIP:1  (ocf::heartbeat:IPaddr2):  Started node02

If I change clone-node-max to 1, the resource is started on both nodes, as seen in this pcs status output:

    Clone Set: ClusterIP-clone [ClusterIP] (unique)
        ClusterIP:0  (ocf::heartbeat:IPaddr2):  Started node02
        ClusterIP:1  (ocf::heartbeat:IPaddr2):  Started node01

But if one node fails, the IP resource is not migrated to the remaining node as the documentation says it should be:

    Clone Set: ClusterIP-clone [ClusterIP] (unique)
        ClusterIP:0  (ocf::heartbeat:IPaddr2):  Started node02
        ClusterIP:1  (ocf::heartbeat:IPaddr2):  Stopped

When the IP is active on both nodes the services are accessible, so the fact that the interface has no IP allocated at boot is not the problem here.
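For reference, the clone meta attribute can be changed in place rather than recreating the resource, and the placement can be cross-checked on each node. A sketch only, assuming the pcs syntax shipped with CentOS 7.3 and the ClusterIP-clone id shown in the status output above:

    # flip the clone between the two settings and watch where the instances land
    pcs resource meta ClusterIP-clone clone-node-max=2
    pcs status resources

    pcs resource meta ClusterIP-clone clone-node-max=1
    pcs status resources

    # on each node, confirm which addresses are actually configured on the public NIC
    ip -4 addr show dev ens192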
The gateway is set with another pcs command and it is working.
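The exact gateway command is not shown here; for illustration only, one common way to manage a default route as a cluster resource is ocf:heartbeat:Route. A sketch, with the gateway address as a placeholder:

    # illustrative only - not necessarily the command used in this setup
    pcs resource create DefaultGateway ocf:heartbeat:Route \
        destination="default" device="ens192" gateway="xxx.yyy.zzz.1" \
        op monitor interval="10" timeout="20"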
<br></div></div>______________________________<wbr>_________________<br>
_______________________________________________
Users mailing list: Users@clusterlabs.org
http://lists.clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org