<div class="socmaildefaultfont" dir="ltr" style="font-family:Arial, Helvetica, sans-serif;font-size:10pt" ><div dir="ltr" >Hello,</div>
<div dir="ltr" > </div>
<div dir="ltr" >Yes, the promoted role of one of the dbs will failover to a node that is not using the virtual IP address. However, neither the db resources nor the virtual IP resources will follow the db that failed over, hence we end up in a state where the dbs are located on separate nodes and the virtual IP only points to the set of dbs on the original node.</div>
<div dir="ltr" > </div>
<div dir="ltr" >Thank you,</div>
<div dir="ltr" >Abithan </div>
<div dir="ltr" > </div>
----- Original message -----
From: kgaillot@redhat.com
Sent by: "Users" <users-bounces@clusterlabs.org>
To: Cluster Labs - All topics related to open-source clustering welcomed <users@clusterlabs.org>
Cc:
Subject: [EXTERNAL] Re: [ClusterLabs] Colocating a Virtual IP address with multiple resources
Date: Mon, Jun 7, 2021 5:52 PM
<div><font face="Default Monospace,Courier New,Courier,monospace" size="2" >On Mon, 2021-06-07 at 20:37 +0000, Abithan Kumarasamy wrote:<br>> Hello Team,<br>>  <br>> We have been recently experimenting with some resource model options<br>> to fulfil the following scenario. We would like to collocate a<br>> virtual IP resource with multiple db resources. When the virtual IP<br>> fails over to another node, all the dbs associated should also fail<br>> over to the new node. We were able to accomplish this with resource<br>> sets as defined in Example 6.17 in this documentation page:<br>> <a href="https://urldefense.proofpoint.com/v2/url?u=https-3A__clusterlabs.org_pacemaker_doc_en-2DUS_Pacemaker_1.1_html-2Dsingle_Pacemaker-5FExplained_index.html-23s-2Dresource-2Dsets-2Dcolocation&d=DwIGaQ&c=jf_iaSHvJObTbx-siA1ZOg&r=D8QLExyK-VADmlLj41ei6cxKVFfIqyMaP2nnLugMWCQ&m=QXK1iKZp4LebD4fdErRosGB8EqKglxg4hw0JpBD9OYw&s=BlGCb3d9kcAjwJ3vuyTJmlFw48GeDBYNEK03iCB4wQ0&e=" target="_blank" >https://urldefense.proofpoint.com/v2/url?u=https-3A__clusterlabs.org_pacemaker_doc_en-2DUS_Pacemaker_1.1_html-2Dsingle_Pacemaker-5FExplained_index.html-23s-2Dresource-2Dsets-2Dcolocation&d=DwIGaQ&c=jf_iaSHvJObTbx-siA1ZOg&r=D8QLExyK-VADmlLj41ei6cxKVFfIqyMaP2nnLugMWCQ&m=QXK1iKZp4LebD4fdErRosGB8EqKglxg4hw0JpBD9OYw&s=BlGCb3d9kcAjwJ3vuyTJmlFw48GeDBYNEK03iCB4wQ0&e=</a> <br>> . However, whenever a single db fails over to the other node, the<br>> virtual IP address and the other dbs are not following and failing<br>> over to the other node. Are there any configurations that may be<br>> causing this undesired behaviour? We have already tried resource<br>> sets, colocation constraints, and ordering constraints. Are there any<br>> other models that we should consider to achieve this solution? Our<br>> current constraint model looks like this in a simplified manner.<br>>  <br>> <rsc_colocation id="vip-with-multiple-dbs" score="INFINITY" ><br>> <resource_set id="db-set" role=”Master”><br>> <resource_ref id="db1"/><br>> <resource_ref id="db2"/><br>> <resource_ref id="db3"/><br>> <resource_ref id="db4"/><br>> </resource_set><br>> <resource_set id="vip-set"><br>> <resource_ref id="primary-VIP"/><br>> </resource_set><br>> </rsc_colocation><br><br>With the above configuration, the resources should fail over all<br>together. 
However the database colocations are limited to the promoted<br>role; any unpromoted instances can fail over without restrictions.<br><br>If you want the dbs to depend only on the IP, and not each other, add<br>sequential="false" to db-set.<br><br>With the exact above configuration, is the promoted role of one of the<br>databases failing over to a node that's not running the IP?<br><br>>  <br>> Thanks,<br>> Abithan<br>--<br>Ken Gaillot <kgaillot@redhat.com><br><br>_______________________________________________<br>Manage your subscription:<br><a href="https://urldefense.proofpoint.com/v2/url?u=https-3A__lists.clusterlabs.org_mailman_listinfo_users&d=DwIGaQ&c=jf_iaSHvJObTbx-siA1ZOg&r=D8QLExyK-VADmlLj41ei6cxKVFfIqyMaP2nnLugMWCQ&m=QXK1iKZp4LebD4fdErRosGB8EqKglxg4hw0JpBD9OYw&s=xttYLyWroAG3DASCICTLC7J8nX-NkFhxxy9Y-V1D3O8&e=" target="_blank" >https://urldefense.proofpoint.com/v2/url?u=https-3A__lists.clusterlabs.org_mailman_listinfo_users&d=DwIGaQ&c=jf_iaSHvJObTbx-siA1ZOg&r=D8QLExyK-VADmlLj41ei6cxKVFfIqyMaP2nnLugMWCQ&m=QXK1iKZp4LebD4fdErRosGB8EqKglxg4hw0JpBD9OYw&s=xttYLyWroAG3DASCICTLC7J8nX-NkFhxxy9Y-V1D3O8&e=</a> <br><br>ClusterLabs home: <a href="https://urldefense.proofpoint.com/v2/url?u=https-3A__www.clusterlabs.org_&d=DwIGaQ&c=jf_iaSHvJObTbx-siA1ZOg&r=D8QLExyK-VADmlLj41ei6cxKVFfIqyMaP2nnLugMWCQ&m=QXK1iKZp4LebD4fdErRosGB8EqKglxg4hw0JpBD9OYw&s=_CMY2jDXy7UHjt-7hAb84qRKGQeVxK5jO5y7Er6k1ME&e=" target="_blank" >https://urldefense.proofpoint.com/v2/url?u=https-3A__www.clusterlabs.org_&d=DwIGaQ&c=jf_iaSHvJObTbx-siA1ZOg&r=D8QLExyK-VADmlLj41ei6cxKVFfIqyMaP2nnLugMWCQ&m=QXK1iKZp4LebD4fdErRosGB8EqKglxg4hw0JpBD9OYw&s=_CMY2jDXy7UHjt-7hAb84qRKGQeVxK5jO5y7Er6k1ME&e=</a> </font><br> </div></blockquote>
<div dir="ltr" > </div></div><BR>