<html xmlns:v="urn:schemas-microsoft-com:vml" xmlns:o="urn:schemas-microsoft-com:office:office" xmlns:w="urn:schemas-microsoft-com:office:word" xmlns:m="http://schemas.microsoft.com/office/2004/12/omml" xmlns="http://www.w3.org/TR/REC-html40"><head><meta http-equiv=Content-Type content="text/html; charset=us-ascii"><meta name=Generator content="Microsoft Word 14 (filtered medium)"><style><!--
/* Font Definitions */
@font-face
{font-family:"Cambria Math";
panose-1:2 4 5 3 5 4 6 3 2 4;}
@font-face
{font-family:Calibri;
panose-1:2 15 5 2 2 2 4 3 2 4;}
@font-face
{font-family:Tahoma;
panose-1:2 11 6 4 3 5 4 4 2 4;}
/* Style Definitions */
p.MsoNormal, li.MsoNormal, div.MsoNormal
{margin:0cm;
margin-bottom:.0001pt;
font-size:11.0pt;
font-family:"Calibri","sans-serif";
mso-fareast-language:EN-US;}
a:link, span.MsoHyperlink
{mso-style-priority:99;
color:blue;
text-decoration:underline;}
a:visited, span.MsoHyperlinkFollowed
{mso-style-priority:99;
color:purple;
text-decoration:underline;}
p.MsoAcetate, li.MsoAcetate, div.MsoAcetate
{mso-style-priority:99;
mso-style-link:"Balloon Text Char";
margin:0cm;
margin-bottom:.0001pt;
font-size:8.0pt;
font-family:"Tahoma","sans-serif";
mso-fareast-language:EN-US;}
span.EmailStyle17
{mso-style-type:personal-compose;
font-family:"Calibri","sans-serif";
color:windowtext;}
span.BalloonTextChar
{mso-style-name:"Balloon Text Char";
mso-style-priority:99;
mso-style-link:"Balloon Text";
font-family:"Tahoma","sans-serif";}
.MsoChpDefault
{mso-style-type:export-only;
font-family:"Calibri","sans-serif";
mso-fareast-language:EN-US;}
@page WordSection1
{size:612.0pt 792.0pt;
margin:70.85pt 2.0cm 2.0cm 2.0cm;}
div.WordSection1
{page:WordSection1;}
--></style><!--[if gte mso 9]><xml>
<o:shapedefaults v:ext="edit" spidmax="1026" />
</xml><![endif]--><!--[if gte mso 9]><xml>
<o:shapelayout v:ext="edit">
<o:idmap v:ext="edit" data="1" />
Hello,

I have a problem with a Pacemaker cluster (3 nodes, SAP production environment).

Node 1

Feb 11 12:00:39 s-xxx-05 lrmd: [12995]: info: operation monitor[85] on ip_wd_WIC_pri for client 12998: pid 27282 exited with return code 0
Feb 11 12:01:16 s-xxx-05 lrmd: [12995]: info: RA output: (ipbck_wd_WIC_pri:monitor:stderr) /usr/lib/ocf/resource.d//heartbeat/IPaddr2: fork: Cannot allocate memory
Feb 11 12:01:16 s-xxx-05 lrmd: [12995]: info: RA output: (ipbck_wd_WIC_pri:monitor:stderr) /usr/lib/ocf/resource.d//heartbeat/IPaddr2: fork: Cannot allocate memory
Feb 11 12:01:16 s-xxx-05 lrmd: [12995]: info: RA output: (ipbck_wd_WIC_pri:monitor:stderr) /usr/lib/ocf/resource.d//heartbeat/IPaddr2: fork: Cannot allocate memory
Feb 11 12:01:16 s-xxx-05 lrmd: [12995]: info: RA output: (ipbck_wd_WIC_pri:monitor:stderr) /usr/lib/ocf/resource.d//heartbeat/IPaddr2: fork: Cannot allocate memory
Feb 11 12:01:16 s-xxx-05 crmd: [12998]: info: process_lrm_event: LRM operation ipbck_wd_WIC_pri_monitor_10000 (call=87, rc=7, cib-update=105, confirmed=false) not running
Feb 11 12:01:16 s-xxx-05 attrd: [12996]: notice: attrd_ais_dispatch: Update relayed from s-xxx-06
Feb 11 12:01:16 s-xxx-05 attrd: [12996]: notice: attrd_trigger_update: Sending flush op to all hosts for: fail-count-ipbck_wd_WIC_pri (1)
Feb 11 12:01:16 s-xxx-05 attrd: [12996]: notice: attrd_perform_update: Sent update 28: fail-count-ipbck_wd_WIC_pri=1
Feb 11 12:01:16 s-xxx-05 attrd: [12996]: notice: attrd_ais_dispatch: Update relayed from s-xxx-06
Feb 11 12:01:16 s-xxx-05 attrd: [12996]: notice: attrd_trigger_update: Sending flush op to all hosts for: last-failure-ipbck_wd_WIC_pri (1392116476)
Feb 11 12:01:16 s-xxx-05 attrd: [12996]: notice: attrd_perform_update: Sent update 31: last-failure-ipbck_wd_WIC_pri=1392116476
Feb 11 12:01:17 s-xxx-05 lrmd: [12995]: ERROR: perform_ra_op::3123: fork: Cannot allocate memory
Feb 11 12:01:17 s-xxx-05 lrmd: [12995]: ERROR: unable to perform_ra_op on operation monitor[14] on usrsap_WBW_pri:2 for client 12998, its parameters: CRM_meta_record_pending=[false] CRM_meta_clone=[2] fstype=[ocfs2] device=[/dev/sapBWPvg/sapWBW] CRM_meta_clone_node_max=[1] CRM_meta_notify=[false] CRM_meta_clone_max=[3] CRM_meta_globally_unique=[false] crm_feature_set=[3.0.6] directory=[/usr/sap/WBW] CRM_meta_name=[monitor] CRM_meta_interval=[60000] CRM_meta_timeout=[60000]
Feb 11 12:01:17 s-xxx-05 lrmd: [12995]: ERROR: perform_ra_op::3123: fork: Cannot allocate memory
Feb 11 12:01:17 s-xxx-05 lrmd: [12995]: ERROR: unable to perform_ra_op on operation stop[95] on webdisp_WIC_pri for client 12998, its parameters: CRM_meta_name=[stop] crm_feature_set=[3.0.6] CRM_meta_record_pending=[false] CRM_meta_timeout=[300000] InstanceName=[WIC_W39_vsicpwd] START_PROFILE=[/sapmnt/WIC/profile/WIC_W39_vsicpwd]

Node 2

Feb 11 12:00:17 s-xxx-06 pengine: [10338]: notice: process_pe_message: Transition 3196: PEngine Input stored in: /var/lib/pengine/pe-input-476.bz2
Feb 11 12:01:16 s-xxx-06 crmd: [10339]: info: process_graph_event: Detected action ipbck_wd_WIC_pri_monitor_10000 from a different transition: 2546 vs. 3196
Feb 11 12:01:16 s-xxx-06 crmd: [10339]: info: abort_transition_graph: process_graph_event:476 - Triggered transition abort (complete=1, tag=lrm_rsc_op, id=ipbck_wd_WIC_pri_last_failure_0, magic=0:7;321:2546:0:8544b0c8-b0fd-4249-a6ad-0ca818ba5f67, cib=0.1910.325) : Old event
Feb 11 12:01:16 s-xxx-06 crmd: [10339]: WARN: update_failcount: Updating failcount for ipbck_wd_WIC_pri on s-xxx-05 after failed monitor: rc=7 (update=value++, time=1392116476)
Feb 11 12:01:16 s-xxx-06 crmd: [10339]: notice: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Feb 11 12:01:16 s-xxx-06 crmd: [10339]: info: abort_transition_graph: te_update_diff:176 - Triggered transition abort (complete=1, tag=nvpair, id=status-s-xxx-05-fail-count-ipbck_wd_WIC_pri, name=fail-count-ipbck_wd_WIC_pri, value=1, magic=NA, cib=0.1910.326) : Transient attribute: update
Feb 11 12:01:16 s-xxx-06 crmd: [10339]: info: abort_transition_graph: te_update_diff:176 - Triggered transition abort (complete=1, tag=nvpair, id=status-s-xxx-05-last-failure-ipbck_wd_WIC_pri, name=last-failure-ipbck_wd_WIC_pri, value=1392116476, magic=NA, cib=0.1910.327) : Transient attribute: update
Feb 11 12:01:17 s-xxx-06 pengine: [10338]: notice: unpack_config: On loss of CCM Quorum: Ignore
Feb 11 12:01:17 s-xxx-06 pengine: [10338]: WARN: unpack_nodes: Blind faith: not fencing unseen nodes
Feb 11 12:01:17 s-xxx-06 pengine: [10338]: WARN: unpack_rsc_op: Processing failed op sapmnt_ICP_pri:1_last_failure_0 on s-xxx-04: unknown exec error (-2)
Feb 11 12:01:17 s-xxx-06 pengine: [10338]: WARN: unpack_rsc_op: Processing failed op sapmnt_ICP_pri:2_last_failure_0 on s-xxx-05: unknown exec error (-2)
Feb 11 12:01:17 s-xxx-06 pengine: [10338]: WARN: unpack_rsc_op: Processing failed op ipbck_wd_WIC_pri_last_failure_0 on s-xxx-05: not running (7)
Feb 11 12:01:17 s-xxx-06 pengine: [10338]: notice: common_apply_stickiness: ocfs_global_clone can fail 4 more times on s-xxx-04 before being forced off
Feb 11 12:01:17 s-xxx-06 pengine: [10338]: notice: common_apply_stickiness: ocfs_global_clone can fail 4 more times on s-xxx-04 before being forced off
Feb 11 12:01:17 s-xxx-06 pengine: [10338]: notice: common_apply_stickiness: ocfs_global_clone can fail 4 more times on s-xxx-04 before being forced off
Feb 11 12:01:17 s-xxx-06 pengine: [10338]: notice: common_apply_stickiness: ocfs_global_clone can fail 4 more times on s-xxx-05 before being forced off
Feb 11 12:01:17 s-xxx-06 pengine: [10338]: notice: common_apply_stickiness: ocfs_global_clone can fail 4 more times on s-xxx-05 before being forced off
Feb 11 12:01:17 s-xxx-06 pengine: [10338]: notice: common_apply_stickiness: ocfs_global_clone can fail 4 more times on s-xxx-05 before being forced off
Feb 11 12:01:17 s-xxx-06 pengine: [10338]: notice: common_apply_stickiness: ipbck_wd_WIC_pri can fail 4 more times on s-xxx-05 before being forced off
Feb 11 12:01:17 s-xxx-06 pengine: [10338]: notice: LogActions: Recover ipbck_wd_WIC_pri (Started s-xxx-05)
Feb 11 12:01:17 s-xxx-06 pengine: [10338]: notice: LogActions: Restart ascs_ICP_pri (Started s-xxx-05)
Feb 11 12:01:17 s-xxx-06 pengine: [10338]: notice: LogActions: Restart webdisp_WIC_pri (Started s-xxx-05)
Feb 11 12:01:17 s-xxx-06 crmd: [10339]: notice: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Feb 11 12:01:17 s-xxx-06 crmd: [10339]: info: do_te_invoke: Processing graph 3197 (ref=pe_calc-dc-1392116477-4106) derived from /var/lib/pengine/pe-input-477.bz2
Feb 11 12:01:17 s-xxx-06 crmd: [10339]: info: te_rsc_command: Initiating action 414: stop webdisp_WIC_pri_stop_0 on s-xxx-05
Feb 11 12:01:17 s-xxx-06 crmd: [10339]: WARN: status_from_rc: Action 414 (webdisp_WIC_pri_stop_0) on s-xxx-05 failed (target: 0 vs. rc: -2): Error
Feb 11 12:01:17 s-xxx-06 crmd: [10339]: WARN: update_failcount: Updating failcount for webdisp_WIC_pri on s-xxx-05 after failed stop: rc=-2 (update=INFINITY, time=1392116477)
Feb 11 12:01:17 s-xxx-06 crmd: [10339]: info: abort_transition_graph: match_graph_event:277 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=webdisp_WIC_pri_last_failure_0, magic=4:-2;414:3197:0:8544b0c8-b0fd-4249-a6ad-0ca818ba5f67, cib=0.1910.328) : Event failed
Feb 11 12:01:17 s-xxx-06 crmd: [10339]: notice: run_graph: ==== Transition 3197 (Complete=2, Pending=0, Fired=0, Skipped=11, Incomplete=0, Source=/var/lib/pengine/pe-input-477.bz2): Stopped
Feb 11 12:01:17 s-xxx-06 crmd: [10339]: notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=notify_crmd ]
Feb 11 12:01:17 s-xxx-06 crmd: [10339]: info: abort_transition_graph: te_update_diff:176 - Triggered transition abort (complete=1, tag=nvpair, id=status-s-xxx-05-fail-count-webdisp_WIC_pri, name=fail-count-webdisp_WIC_pri, value=INFINITY, magic=NA, cib=0.1910.329) : Transient attribute: update
Feb 11 12:01:17 s-xxx-06 crmd: [10339]: info: abort_transition_graph: te_update_diff:176 - Triggered transition abort (complete=1, tag=nvpair, id=status-s-xxx-05-last-failure-webdisp_WIC_pri, name=last-failure-webdisp_WIC_pri, value=1392116477, magic=NA, cib=0.1910.330) : Transient attribute: update
Feb 11 12:01:17 s-xxx-06 pengine: [10338]: notice: process_pe_message: Transition 3197: PEngine Input stored in: /var/lib/pengine/pe-input-477.bz2
Feb 11 12:01:17 s-xxx-06 pengine: [10338]: notice: unpack_config: On loss of CCM Quorum: Ignore
Feb 11 12:01:17 s-xxx-06 pengine: [10338]: WARN: unpack_nodes: Blind faith: not fencing unseen nodes
Feb 11 12:01:17 s-xxx-06 pengine: [10338]: WARN: unpack_rsc_op: Processing failed op sapmnt_ICP_pri:1_last_failure_0 on s-xxx-04: unknown exec error (-2)
Feb 11 12:01:17 s-xxx-06 pengine: [10338]: WARN: unpack_rsc_op: Processing failed op sapmnt_ICP_pri:2_last_failure_0 on s-xxx-05: unknown exec error (-2)
Feb 11 12:01:17 s-xxx-06 pengine: [10338]: WARN: unpack_rsc_op: Processing failed op webdisp_WIC_pri_last_failure_0 on s-xxx-05: unknown exec error (-2)
Feb 11 12:01:17 s-xxx-06 pengine: [10338]: WARN: pe_fence_node: Node s-xxx-05 will be fenced to recover from resource failure(s)
Feb 11 12:01:17 s-xxx-06 pengine: [10338]: WARN: unpack_rsc_op: Processing failed op ipbck_wd_WIC_pri_last_failure_0 on s-xxx-05: not running (7)
Feb 11 12:01:17 s-xxx-06 pengine: [10338]: notice: common_apply_stickiness: ocfs_global_clone can fail 4 more times on s-xxx-04 before being forced off
.
.
Feb 11 12:01:17 s-xxx-06 pengine: [10338]: notice: LogActions: Move ipbck_wd_WIC_pri (Started s-xxx-05 -> s-xxx-04)
Feb 11 12:01:17 s-xxx-06 pengine: [10338]: notice: LogActions: Move ascs_ICP_pri (Started s-xxx-05 -> s-xxx-04)
Feb 11 12:01:17 s-xxx-06 pengine: [10338]: notice: LogActions: Move webdisp_WIC_pri (Started s-xxx-05 -> s-xxx-04)
Feb 11 12:01:17 s-xxx-06 crmd: [10339]: notice: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Feb 11 12:01:17 s-xxx-06 crmd: [10339]: info: do_te_invoke: Processing graph 3198 (ref=pe_calc-dc-1392116477-4108) derived from /var/lib/pengine/pe-warn-26.bz2
Feb 11 12:01:17 s-xxx-06 crmd: [10339]: notice: te_fence_node: Executing reboot fencing operation (464) on s-xxx-05 (timeout=12000)
Feb 11 12:01:17 s-xxx-06 stonith-ng: [10335]: info: initiate_remote_stonith_op: Initiating remote operation reboot for s-xxx-05: fff269bd-70f1-490b-a46f-92f2eaaa04f1
Feb 11 12:01:18 s-xxx-06 pengine: [10338]: WARN: process_pe_message: Transition 3198: WARNINGs found during PE processing. PEngine Input stored in: /var/lib/pengine/pe-warn-26.bz2
Feb 11 12:01:18 s-xxx-06 pengine: [10338]: notice: process_pe_message: Configuration WARNINGs found during PE processing. Please run "crm_verify -L" to identify issues.
Feb 11 12:01:18 s-xxx-06 stonith-ng: [10335]: info: can_fence_host_with_device: Refreshing port list for stonith-sbd_pri
Feb 11 12:01:18 s-xxx-06 stonith-ng: [10335]: info: can_fence_host_with_device: stonith-sbd_pri can fence s-xxx-05: dynamic-list
Feb 11 12:01:18 s-xxx-06 stonith-ng: [10335]: info: call_remote_stonith: Requesting that s-xxx-06 perform op reboot s-xxx-05
Feb 11 12:01:18 s-xxx-06 stonith-ng: [10335]: info: can_fence_host_with_device: stonith-sbd_pri can fence s-xxx-05: dynamic-list
Feb 11 12:01:18 s-xxx-06 stonith-ng: [10335]: info: stonith_fence: Found 1 matching devices for 's-xxx-05'
Feb 11 12:01:18 s-xxx-06 stonith-ng: [10335]: info: stonith_command: Processed st_fence from s-xxx-06: rc=-1
Feb 11 12:01:18 s-xxx-06 sbd: [25130]: info: Delivery process handling /dev/mapper/SBD_LUN_QUORUM
Feb 11 12:01:18 s-xxx-06 sbd: [25130]: info: Writing reset to node slot s-xxx-05

Node 3

Feb 11 12:00:01 s-xxx-04 /usr/sbin/cron[22525]: (root) CMD ([ -x /usr/lib64/sa/sa1 ] && exec /usr/lib64/sa/sa1 -S ALL 1 1)
Feb 11 12:00:01 s-xxx-04 syslog-ng[4795]: Log statistics; dropped='pipe(/dev/xconsole)=0', dropped='pipe(/dev/tty10)=0', processed='center(queued)=11361', processed='center(received)=6355', processed='destination(messages)=1462', processed='destination(mailinfo)=4893', processed='destination(mailwarn)=0', processed='destination(localmessages)=0', processed='destination(newserr)=0', processed='destination(mailerr)=0', processed='destination(netmgm)=0', processed='destination(warn)=103', processed='destination(console)=5', processed='destination(null)=0', processed='destination(mail)=4893', processed='destination(xconsole)=5', processed='destination(firewall)=0', processed='destination(acpid)=0', processed='destination(newscrit)=0', processed='destination(newsnotice)=0', processed='source(src)=6355'
Feb 11 12:01:17 s-xxx-04 stonith-ng: [12951]: info: crm_new_peer: Node s-xxx-06 now has id: 101344266
Feb 11 12:01:17 s-xxx-04 stonith-ng: [12951]: info: crm_new_peer: Node 101344266 is now known as s-xxx-06
Feb 11 12:01:17 s-xxx-04 stonith-ng: [12951]: info: stonith_command: Processed st_query from s-xxx-06: rc=0
Feb 11 12:01:23 s-xxx-04 corosync[12944]: [TOTEM ] A processor failed, forming new configuration.
Feb 11 12:01:29 s-xxx-04 corosync[12944]: [CLM ] CLM CONFIGURATION CHANGE
Can this "Cannot allocate memory" error indicate that no memory could be allocated for a new Resource Agent instance?

I have 128 GB of RAM.

THP is set to "never".

Version:

openais-1.1.4-5.8.7.1
libopenais3-1.1.4-5.8.7.1
pacemaker-mgmt-2.1.1-0.6.2.17
pacemaker-1.1.7-0.13.9
drbd-pacemaker-8.4.2-0.6.6.7
pacemaker-mgmt-client-2.1.1-0.6.2.17
libpacemaker3-1.1.7-0.13.9

OS: SLES 11 SP2, kernel 3.0.80-0.7-default

Let me know if you need more information.

Thanks

Bye
Walter
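P.S. In case it helps with further diagnosis, below is a rough, untested sketch of a small Python script I could run periodically on s-xxx-05 to capture the memory and overcommit state around the time the fork failures happen. The output path /tmp/mem_snapshot.log and the selected fields are only examples, not anything taken from the cluster configuration.

#!/usr/bin/env python
# Rough sketch (example only): append a snapshot of memory and
# overcommit state to /tmp/mem_snapshot.log, so a later
# "fork: Cannot allocate memory" can be matched against it.
import datetime

def read_file(path):
    try:
        with open(path) as f:
            return f.read().strip()
    except IOError:
        return "unavailable"

def snapshot():
    lines = ["==== %s ====" % datetime.datetime.now().isoformat()]
    # Overcommit settings are worth recording because fork() can fail
    # with ENOMEM even when plenty of RAM appears to be free.
    lines.append("vm.overcommit_memory = %s" % read_file("/proc/sys/vm/overcommit_memory"))
    lines.append("vm.overcommit_ratio  = %s" % read_file("/proc/sys/vm/overcommit_ratio"))
    for entry in read_file("/proc/meminfo").splitlines():
        if entry.split(":")[0] in ("MemTotal", "MemFree", "SwapFree",
                                   "CommitLimit", "Committed_AS"):
            lines.append(entry)
    lines.append("THP: %s" % read_file("/sys/kernel/mm/transparent_hugepage/enabled"))
    return "\n".join(lines) + "\n"

if __name__ == "__main__":
    with open("/tmp/mem_snapshot.log", "a") as out:  # example path
        out.write(snapshot())

I could run it from cron every minute and post the snapshots from the window where the errors appear, if that would be useful.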