<div dir="ltr"><div>Hi<span name="Andrei Borzenkov" class="gmail-gD"> Andrei</span>,</div><div><br></div><div>Thanks for your quick reply. I still need some help, as described below:<br></div><div class="gmail_extra"><br><div class="gmail_quote">On Wed, Jun 6, 2018 at 11:58 AM, Andrei Borzenkov <span dir="ltr"><<a href="mailto:arvidjaar@gmail.com" target="_blank">arvidjaar@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">06.06.2018 04:27, Albert Weng wrote:<br>
<span class="">> Hi All,<br>
> <br>
> I have created active/passive pacemaker cluster on RHEL 7.<br>
> <br>
> Here are my environment:<br>
> clustera : 192.168.11.1 (passive)<br>
> clusterb : 192.168.11.2 (master)<br>
> clustera-ilo4 : 192.168.11.10<br>
> clusterb-ilo4 : 192.168.11.11<br>
> <br>
> cluster resource status :<br>
> cluster_fs started on clusterb<br>
> cluster_vip started on clusterb<br>
> cluster_sid started on clusterb<br>
> cluster_listnr started on clusterb<br>
> <br>
> Both cluster node are online status.<br>
> <br>
> i found my corosync.log contain many records like below:<br>
> <br>
> clustera pengine: info: determine_online_status_<wbr>fencing:<br>
> Node clusterb is active<br>
> clustera pengine: info: determine_online_status: Node<br>
> clusterb is online<br>
> clustera pengine: info: determine_online_status_<wbr>fencing:<br>
> Node clustera is active<br>
> clustera pengine: info: determine_online_status: Node<br>
> clustera is online<br>
> <br>
</span>> *clustera pengine: warning: unpack_rsc_op_failure: Processing<br>
> failed op start for cluster_sid on clustera: unknown error (1)*<br>
> *=> Question: Why is pengine always trying to start cluster_sid on the<br>
> passive node? How do I fix it?*<br>
> <br>
<br>
Pacemaker does not have a concept of a "passive" or "master" node - it is<br>
up to you to decide placement when you configure resources. By default<br>
Pacemaker will attempt to spread resources across all eligible nodes.<br>
You can influence node selection by using constraints. See<br>
<a href="https://clusterlabs.org/pacemaker/doc/en-US/Pacemaker/1.1/html/Pacemaker_Explained/_deciding_which_nodes_a_resource_can_run_on.html" rel="noreferrer" target="_blank">https://clusterlabs.org/pacemaker/doc/en-US/Pacemaker/1.1/html/Pacemaker_Explained/_deciding_which_nodes_a_resource_can_run_on.html</a><br>
for details.<br>
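For example, a minimal sketch using the pcs shell (standard on RHEL 7); the resource and node names are taken from this thread, and the score of 50 is illustrative:<br>

```shell
# Prefer clusterb for the "cluster" resource group, while still
# allowing clustera as a failover target (a finite score such as 50
# is a preference; it does not forbid the other node):
pcs constraint location cluster prefers clusterb=50

# Review the resulting constraints:
pcs constraint show
```
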
<br>
But in any case, all your resources MUST be capable of running on both<br>
nodes; otherwise the cluster makes no sense. If one resource A depends on<br>
something that another resource B provides and can only be started<br>
together with resource B (and after B is ready), you must tell Pacemaker<br>
this by using colocation and ordering constraints. See the same document<br>
for details.<br>
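A sketch of such constraints with pcs, using the listener/database pair from this thread as the example:<br>

```shell
# Keep the listener on the same node as the database instance:
pcs constraint colocation add cluster_listnr with cluster_sid INFINITY

# Start the listener only after the database has started:
pcs constraint order cluster_sid then cluster_listnr
```

Note that since these resources are already members of the group "cluster", the group itself already implies this colocation and start ordering among its members.<br>
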
<span class=""><br>
> clustera pengine: info: native_print: ipmi-fence-clustera<br>
> (stonith:fence_ipmilan): Started clustera<br>
> clustera pengine: info: native_print: ipmi-fence-clusterb<br>
> (stonith:fence_ipmilan): Started clustera<br>
> clustera pengine: info: group_print: Resource Group: cluster<br>
> clustera pengine: info: native_print: cluster_fs<br>
> (ocf::heartbeat:Filesystem): Started clusterb<br>
> clustera pengine: info: native_print: cluster_vip<br>
> (ocf::heartbeat:IPaddr2): Started clusterb<br>
> clustera pengine: info: native_print: cluster_sid<br>
> (ocf::heartbeat:oracle): Started clusterb<br>
> clustera pengine: info: native_print:<br>
> cluster_listnr (ocf::heartbeat:oralsnr): Started clusterb<br>
> clustera pengine: info: get_failcount_full: cluster_sid has<br>
> failed INFINITY times on clustera<br>
> <br>
> <br>
</span>> *clustera pengine: warning: common_apply_stickiness: Forcing<br>
> cluster_sid away from clustera after 1000000 failures (max=1000000)*<br>
> *=> Question: Did too many failed attempts result in the resource being<br>
> forbidden from starting on clustera?*<br>
> <br>
<br>
Yes.<br></blockquote><div><br></div><div>How can I find the root cause of
the 1000000 failures? Which log will contain the error message?<br></div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
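One way to start digging into this (a sketch, assuming the pcs tooling on RHEL 7 and the default log locations):<br>

```shell
# Show the accumulated failcount for the resource on each node:
pcs resource failcount show cluster_sid

# The actual error from the failed start is reported by the resource
# agent; search the cluster log on the node where the start failed:
grep -i "cluster_sid" /var/log/cluster/corosync.log | grep -iE "error|fail"

# The systemd journal for pacemaker also captures agent output:
journalctl -u pacemaker | grep -i cluster_sid
```

For an ocf::heartbeat:oracle failure, the Oracle alert log for the instance is also worth checking, since the agent usually fails because the database itself could not start.<br>
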
<span class=""><br>
> A couple of days ago, clusterb was fenced (STONITH) for an unknown reason, but only<br>
> "cluster_fs" and "cluster_vip" moved to clustera successfully;<br>
> "cluster_sid" and "cluster_listnr" went to "STOP" status.<br>
> As in the messages below, is this related to "op start for cluster_sid on<br>
> clustera..."?<br>
> <br>
<br>
</span>Yes. Node clustera is now marked as incapable of running the resource,<br>
so if node clusterb fails, the resource cannot be started anywhere.<br>
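For reference: once the underlying start failure is fixed, the recorded failures can be cleared so clustera becomes eligible again (a sketch with pcs; the 600-second timeout is illustrative):<br>

```shell
# Clear the failure history and re-probe the resource
# (do this only after fixing whatever made the start fail):
pcs resource cleanup cluster_sid

# Or reset the failcount for a specific node explicitly:
pcs resource failcount reset cluster_sid clustera

# Optionally, let recorded failures expire on their own:
pcs resource update cluster_sid meta failure-timeout=600
```
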
<div><div class="h5"><br></div></div></blockquote><div>How can I fix this? I need some hints for troubleshooting.<br></div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div><div class="h5">
> clustera pengine: warning: unpack_rsc_op_failure: Processing failed op<br>
> start for cluster_sid on clustera: unknown error (1)<br>
> clustera pengine: info: native_print: ipmi-fence-clustera<br>
> (stonith:fence_ipmilan): Started clustera<br>
> clustera pengine: info: native_print: ipmi-fence-clusterb<br>
> (stonith:fence_ipmilan): Started clustera<br>
> clustera pengine: info: group_print: Resource Group: cluster<br>
> clustera pengine: info: native_print: cluster_fs<br>
> (ocf::heartbeat:Filesystem): Started clusterb (UNCLEAN)<br>
> clustera pengine: info: native_print: cluster_vip<br>
> (ocf::heartbeat:IPaddr2): Started clusterb (UNCLEAN)<br>
> clustera pengine: info: native_print: cluster_sid<br>
> (ocf::heartbeat:oracle): Started clusterb (UNCLEAN)<br>
> clustera pengine: info: native_print: cluster_listnr<br>
> (ocf::heartbeat:oralsnr): Started clusterb (UNCLEAN)<br>
> clustera pengine: info: get_failcount_full: cluster_sid has<br>
> failed INFINITY times on clustera<br>
> clustera pengine: warning: common_apply_stickiness: Forcing<br>
> cluster_sid away from clustera after 1000000 failures (max=1000000)<br>
> clustera pengine: info: rsc_merge_weights: cluster_fs: Rolling<br>
> back scores from cluster_sid<br>
> clustera pengine: info: rsc_merge_weights: cluster_vip: Rolling<br>
> back scores from cluster_sid<br>
> clustera pengine: info: rsc_merge_weights: cluster_sid: Rolling<br>
> back scores from cluster_listnr<br>
> clustera pengine: info: native_color: Resource cluster_sid cannot<br>
> run anywhere<br>
> clustera pengine: info: native_color: Resource cluster_listnr<br>
> cannot run anywhere<br>
> clustera pengine: warning: custom_action: Action cluster_fs_stop_0 on<br>
> clusterb is unrunnable (offline)<br>
> clustera pengine: info: RecurringOp: Start recurring monitor<br>
> (20s) for cluster_fs on clustera<br>
> clustera pengine: warning: custom_action: Action cluster_vip_stop_0 on<br>
> clusterb is unrunnable (offline)<br>
> clustera pengine: info: RecurringOp: Start recurring monitor<br>
> (10s) for cluster_vip on clustera<br>
> clustera pengine: warning: custom_action: Action cluster_sid_stop_0 on<br>
> clusterb is unrunnable (offline)<br>
> clustera pengine: warning: custom_action: Action cluster_sid_stop_0 on<br>
> clusterb is unrunnable (offline)<br>
> clustera pengine: warning: custom_action: Action cluster_listnr_stop_0<br>
> on clusterb is unrunnable (offline)<br>
> clustera pengine: warning: custom_action: Action cluster_listnr_stop_0<br>
> on clusterb is unrunnable (offline)<br>
> clustera pengine: warning: stage6: Scheduling Node clusterb for STONITH<br>
> clustera pengine: info: native_stop_constraints:<br>
> cluster_fs_stop_0 is implicit after clusterb is fenced<br>
> clustera pengine: info: native_stop_constraints:<br>
> cluster_vip_stop_0 is implicit after clusterb is fenced<br>
> clustera pengine: info: native_stop_constraints:<br>
> cluster_sid_stop_0 is implicit after clusterb is fenced<br>
> clustera pengine: info: native_stop_constraints:<br>
> cluster_listnr_stop_0 is implicit after clusterb is fenced<br>
> clustera pengine: info: LogActions: Leave ipmi-fence-db01<br>
> (Started clustera)<br>
> clustera pengine: info: LogActions: Leave ipmi-fence-db02<br>
> (Started clustera)<br>
> clustera pengine: notice: LogActions: Move cluster_fs<br>
> (Started clusterb -> clustera)<br>
> clustera pengine: notice: LogActions: Move cluster_vip<br>
> (Started clusterb -> clustera)<br>
> clustera pengine: notice: LogActions: Stop cluster_sid<br>
> (clusterb)<br>
> clustera pengine: notice: LogActions: Stop cluster_listnr<br>
> (clusterb)<br>
> clustera pengine: warning: process_pe_message: Calculated<br>
> Transition 26821: /var/lib/pacemaker/pengine/pe-<wbr>warn-7.bz2<br>
> clustera crmd: info: do_state_transition: State transition<br>
> S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS<br>
> cause=C_IPC_MESSAGE origin=handle_response ]<br>
> clustera crmd: info: do_te_invoke: Processing graph 26821<br>
> (ref=pe_calc-dc-1526868653-<wbr>26882) derived from<br>
> /var/lib/pacemaker/pengine/pe-<wbr>warn-7.bz2<br>
> clustera crmd: notice: te_fence_node: Executing reboot fencing<br>
> operation (23) on clusterb (timeout=60000)<br>
> <br>
> <br>
> Thanks ~~~~<br>
> <br>
> <br>
> <br>
> <br>
</div></div>> ______________________________<wbr>_________________<br>
> Users mailing list: <a href="mailto:Users@clusterlabs.org">Users@clusterlabs.org</a><br>
> <a href="https://lists.clusterlabs.org/mailman/listinfo/users" rel="noreferrer" target="_blank">https://lists.clusterlabs.org/<wbr>mailman/listinfo/users</a><br>
> <br>
> Project Home: <a href="http://www.clusterlabs.org" rel="noreferrer" target="_blank">http://www.clusterlabs.org</a><br>
> Getting started: <a href="http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf" rel="noreferrer" target="_blank">http://www.clusterlabs.org/<wbr>doc/Cluster_from_Scratch.pdf</a><br>
> Bugs: <a href="http://bugs.clusterlabs.org" rel="noreferrer" target="_blank">http://bugs.clusterlabs.org</a><br>
> <br>
<br>
______________________________<wbr>_________________<br>
Users mailing list: <a href="mailto:Users@clusterlabs.org">Users@clusterlabs.org</a><br>
<a href="https://lists.clusterlabs.org/mailman/listinfo/users" rel="noreferrer" target="_blank">https://lists.clusterlabs.org/<wbr>mailman/listinfo/users</a><br>
<br>
Project Home: <a href="http://www.clusterlabs.org" rel="noreferrer" target="_blank">http://www.clusterlabs.org</a><br>
Getting started: <a href="http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf" rel="noreferrer" target="_blank">http://www.clusterlabs.org/<wbr>doc/Cluster_from_Scratch.pdf</a><br>
Bugs: <a href="http://bugs.clusterlabs.org" rel="noreferrer" target="_blank">http://bugs.clusterlabs.org</a><br>
</blockquote></div><br><br clear="all"><br>-- <br><div class="gmail_signature" data-smartmail="gmail_signature">Kind regards,<br>Albert Weng</div>
</div></div>