<div dir="ltr">Hi Noel,<div><br></div><div>Thanks for the quick reply, I really appreciate it. I found out that after I kill the nginx at the node1. I run the command <b>pcs status</b> and I got below info.</div><div><br></div><div><div>[root@node2 ~]# pcs status</div><div>Cluster name: cluster_web</div><div>Last updated: Sun Aug 9 12:49:20 2015</div><div>Last change: Sun Aug 9 09:24:37 2015 via cibadmin on node1</div><div>Stack: corosync</div><div>Current DC: node2 (2) - partition with quorum</div><div>Version: 1.1.10-29.el7-368c726</div><div>2 Nodes configured</div><div>2 Resources configured</div><div><br></div><div><br></div><div>Online: [ node1 node2 ]</div><div><br></div><div>Full list of resources:</div><div><br></div><div> Resource Group: test</div><div> nginx (ocf::heartbeat:nginx): Started node2 </div><div> virtual_ip (ocf::heartbeat:IPaddr2): Started node2 </div><div><br></div>Failed actions:<br><b> nginx_monitor_60000 on node1 'not running' (7): call=11, status=complete, last-rc-change='Sun Aug 9 12:34:47 2015', queued=0ms, exec=0ms</b><br></div><div><br></div><div>Looks like the nginx monitor is failing on the node1 and causing the issue. After I restart the cluster node 1, it take back the VIP and Nginx resource again, because it got a higher score than the node2. But is it possible to make the node1 recovery it's nginx monitor on it own? Thanks again for your time!</div><div><br></div><div>Thanks,</div><div>Jacob</div><div><br><div class="gmail_quote"><div dir="ltr">Noel Kuntze <<a href="mailto:noel@familie-kuntze.de">noel@familie-kuntze.de</a>>于2015年8月9日周日 上午11:07写道:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div>Hello Jacob,<br>
Thanks,
Jacob

On Sun, Aug 9, 2015 at 11:07 AM, Noel Kuntze <noel@familie-kuntze.de> wrote:

Hello Jacob,

Look at the journal. It will tell you. Also, it's hard to debug without any information from the daemons.
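For example, something along these lines on node1 should show what the cluster daemons logged around the time of the failure (assuming the stock pacemaker and corosync units on CentOS 7):

[root@node1 ~]# journalctl -u pacemaker -u corosync --since today

or just journalctl -b to see everything from the current boot.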
Regards,
Noel Kuntze

On 9 August 2015 at 05:02:15 CEST, jun huang <huangjun.job@gmail.com> wrote:
<div dir="ltr">Hello Everyone,<div><br></div><div><p style="margin:0px 0px 1em;padding:0px;border:0px;font-size:15px;clear:both;color:rgb(34,34,34);line-height:19.5px">I setup a cluster with two nodes with pacemaker 1.1.10 on CentOS 7. Then I downloaded a<a href="https://github.com/ClusterLabs/resource-agents/blob/master/heartbeat/nginx" rel="nofollow" style="margin:0px;padding:0px;border-width:0px 0px 1px;border-bottom-style:dotted;border-bottom-color:rgb(69,69,69);text-decoration:none;color:rgb(12,101,165)" target="_blank">resource agent for nginx from github</a></p><p style="margin:0px 0px 1em;padding:0px;border:0px;font-size:15px;clear:both;color:rgb(34,34,34);line-height:19.5px">I tested my setup like this:</p><ol style="margin:0px 0px 1em 30px;padding:0px;border:0px;font-size:15px;color:rgb(34,34,34);line-height:19.5px"><li style="margin:0px 0px 0.5em;padding:0px;border:0px;word-wrap:break-word">Node 1 is started with the nginx and vip, everyting is ok</li><li style="margin:0px 0px 0.5em;padding:0px;border:0px;word-wrap:break-word">Kill Node1 nginx, wait for a few seconds</li><li style="margin:0px 0px 0.5em;padding:0px;border:0px;word-wrap:break-word">See the ngnix and vip are moved to node2, failover succeeded, and Node1 doesn't have any resources active</li><li style="margin:0px 0px 0.5em;padding:0px;border:0px;word-wrap:break-word">I kill nginx on node2, but nginx and vip don't come back to Node1</li></ol><p style="margin:0px 0px 1em;padding:0px;border:0px;font-size:15px;clear:both;color:rgb(34,34,34);line-height:19.5px">I set <code style="margin:0px;padding:1px 5px;border:0px;font-size:13px;font-family:Consolas,Menlo,Monaco,'Lucida Console','Liberation Mono','DejaVu Sans Mono','Bitstream Vera Sans Mono','Courier New',monospace,sans-serif;white-space:pre-wrap;background-color:rgb(238,238,238)">no-quorum-policy="ignore"</code> and <code style="margin:0px;padding:1px 5px;border:0px;font-size:13px;font-family:Consolas,Menlo,Monaco,'Lucida Console','Liberation Mono','DejaVu Sans Mono','Bitstream Vera Sans Mono','Courier New',monospace,sans-serif;white-space:pre-wrap;background-color:rgb(238,238,238)">stonith-enabled="false"</code>.</p><p style="margin:0px 0px 1em;padding:0px;border:0px;font-size:15px;clear:both;color:rgb(34,34,34);line-height:19.5px">Why won't pacemaker let the resource come back to Node1? What did I miss here?</p><p style="margin:0px 0px 1em;padding:0px;border:0px;font-size:15px;clear:both;color:rgb(34,34,34);line-height:19.5px"><br>I guess the node1 is still in some failure status, how can I recovery the node? Does anyone can shed some light on my questions? Thank you in advance.</p><p style="margin:0px 0px 1em;padding:0px;border:0px;font-size:15px;clear:both;color:rgb(34,34,34);line-height:19.5px">Thanks,</p><p style="margin:0px 0px 1em;padding:0px;border:0px;font-size:15px;clear:both;color:rgb(34,34,34);line-height:19.5px">Jacob</p></div></div>
<p style="margin-top:2.5em;margin-bottom:1em;border-bottom:1px solid #000"></p></blockquote></div></div><div><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0pt 0pt 0pt 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><pre><hr><br>Users mailing list: <a href="mailto:Users@clusterlabs.org" target="_blank">Users@clusterlabs.org</a><br><a href="http://clusterlabs.org/mailman/listinfo/users" target="_blank">http://clusterlabs.org/mailman/listinfo/users</a><br><br>Project Home: <a href="http://www.clusterlabs.org" target="_blank">http://www.clusterlabs.org</a><br>Getting started: <a href="http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf" target="_blank">http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf</a><br>Bugs: <a href="http://bugs.clusterlabs.org" target="_blank">http://bugs.clusterlabs.org</a><br></pre></blockquote></div></div><div><br>
-- 
This message was sent from my Android mobile phone with K-9 Mail.