Ulrich,

Can you share some details about how your Pacemaker, corosync, and LGM
(Live Guest Migration) are configured? I am considering implementing a
similar approach, and this would be very helpful to me. Thanks...

Scott

Scott Greenlese ... KVM on System Z - Solutions Test, IBM Poughkeepsie, N.Y.
  INTERNET: swgreenl@us.ibm.com
  PHONE: 8/293-7301 (845-433-7301)

----------------------------------------------------------------------

From: "Ulrich Windl" <Ulrich.Windl@rz.uni-regensburg.de>
To: <users@clusterlabs.org>
Date: 03/08/2017 03:20 AM
Subject: [ClusterLabs] Antw: Re: Antw: Expected recovery behavior of remote-node guest when corosync ring0 is lost in a passive mode RRP config?

>>> "Scott Greenlese" <swgreenl@us.ibm.com> wrote on 07.03.2017 at 17:28 in message
<OF940FA4A7.627D922B-ON002580DC.0050C1DB-852580DC.005A8ACD@notes.na.collabserv.com>:
[...]
> Maybe my question is... is there any way to facilitate an alternate Live
> Guest Migration path in the event of a ring0_addr failure?
> This might also apply to a single ring protocol as well.
[...]
I don't know the answer, but in our setup every network is connected through
two NICs in a bonding interface, and we have two such networks for cluster
communication, one of them used exclusively for cluster communication. The
network used for migration is separate from the two networks used for cluster
communication. As mentioned, we are using SLES11 here, but we have never had
the problems you describe. One problem we did have was that massively parallel
VM migrations would overload the network, so we limited the number of parallel
migrations. Maybe this is still helpful...

Regards,
Ulrich
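For reference, here is a minimal sketch of the kind of layout Ulrich describes: two redundant corosync rings in passive RRP mode, each ring on its own bonded network. The network addresses, NIC names, ports, and bonding mode below are placeholders for illustration, not Ulrich's actual values:

    # /etc/corosync/corosync.conf -- two rings, passive redundant ring protocol
    totem {
        version: 2
        rrp_mode: passive
        interface {
            ringnumber: 0
            bindnetaddr: 192.168.1.0    # cluster-only network (placeholder)
            mcastport: 5405
        }
        interface {
            ringnumber: 1
            bindnetaddr: 192.168.2.0    # second cluster network (placeholder)
            mcastport: 5407
        }
    }

Each of those networks would sit on a bonded pair of NICs; a SLES-style ifcfg sketch (again with placeholder device names and address):

    # /etc/sysconfig/network/ifcfg-bond0 -- two NICs bonded for one network
    STARTMODE='auto'
    BOOTPROTO='static'
    IPADDR='192.168.1.10/24'            # placeholder address
    BONDING_MASTER='yes'
    BONDING_SLAVE_0='eth0'              # placeholder NIC names
    BONDING_SLAVE_1='eth1'
    BONDING_MODULE_OPTS='mode=active-backup miimon=100'

The migration traffic would then run over a third, separate (possibly also bonded) interface so that a flood of migrations cannot starve the corosync rings.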
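Ulrich's point about limiting parallel migrations maps to Pacemaker's migration-limit cluster property, which caps how many live-migration operations a node performs in parallel (the default is unlimited). A one-line example using the crm shell; the value 2 is arbitrary and should be tuned to the migration network's capacity:

    # Cap parallel live migrations per node (value is illustrative)
    crm configure property migration-limit=2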