Hi,

We have built a cluster on top of the SLES 11 SP1 stack, which manages various Xen VMs.

In the development phase we used some test VM resources, which have since been removed from the resource list. However, I still see some remnants of these old resources in the log files, and would like to clean this up.

e.g. I see

Dec 22 12:27:18 node2 pengine: [6262]: info: get_failcount: hvm1 has failed 1 times on node2
Dec 22 12:27:18 node2 pengine: [6262]: notice: common_apply_stickiness: hvm1 can fail 999999 more times on node2 before being forced off
Dec 22 12:27:18 node2 attrd: [6261]: info: attrd_trigger_update: Sending flush op to all hosts for: fail-count-hvm1 (1)
Dec 22 12:27:18 node2 attrd: [6261]: info: attrd_trigger_update: Sending flush op to all hosts for: last-failure-hvm1 (1322579680)

hvm1 was a VM in that test phase.
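
As far as I understand, the fail-count and last-failure values above are transient status attributes kept by attrd, so I assume something along these lines would show or clear them (untested here, and the resource/node names are just taken from the log above):

node2 # crm resource failcount hvm1 show node2
node2 # crm resource failcount hvm1 delete node2

I was not sure whether this is still valid for a resource that no longer exists in the configuration, though.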

If I do a dump of the CIB, I find this section

  <status>
    <node_state uname="node2" ha="active" in_ccm="true" crmd="online" join="member" expected="member" shutdown="0" id="node2" crm-debug-origin="do_state_transition">
      <lrm id="node2">
        <lrm_resources>
...
          <lrm_resource id="hvm1" type="Xen" class="ocf" provider="heartbeat">
            <lrm_rsc_op id="hvm1_monitor_0" operation="monitor" crm-debug-origin="build_active_RAs" crm_feature_set="3.0.2" transition-key="20:11:7:1fd9e9b1-610e-4768-abd5-35ea3ce45c4d" transition-magic="0:7;20:11:7:1fd9e9b1-610e-4768-abd5-35ea3ce45c4d" call-id="27" rc-code="7" op-status="0" interval="0" last-run="1322130825" last-rc-change="1322130825" exec-time="550" queue-time="0" op-digest="71594dc818f53dfe034bb5e84c6d80fb"/>
            <lrm_rsc_op id="hvm1_stop_0" operation="stop" crm-debug-origin="build_active_RAs" crm_feature_set="3.0.2" transition-key="61:511:0:abda911e-05ed-4e11-8e25-ab03a1bfd7b7" transition-magic="0:0;61:511:0:abda911e-05ed-4e11-8e25-ab03a1bfd7b7" call-id="56" rc-code="0" op-status="0" interval="0" last-run="1322580820" last-rc-change="1322580820" exec-time="164320" queue-time="0" op-digest="71594dc818f53dfe034bb5e84c6d80fb"/>
            <lrm_rsc_op id="hvm1_start_0" operation="start" crm-debug-origin="build_active_RAs" crm_feature_set="3.0.2" transition-key="59:16:0:1fd9e9b1-610e-4768-abd5-35ea3ce45c4d" transition-magic="0:0;59:16:0:1fd9e9b1-610e-4768-abd5-35ea3ce45c4d" call-id="30" rc-code="0" op-status="0" interval="0" last-run="1322131559" last-rc-change="1322131559" exec-time="470" queue-time="0" op-digest="71594dc818f53dfe034bb5e84c6d80fb"/>
          </lrm_resource>
...
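
One targeted approach I considered, but have not dared to run yet, is deleting just that entry from the status section with cibadmin, roughly like this (a sketch only, using the hvm1 id from the dump above):

node2 # cibadmin --delete -o status --xml-text '<lrm_resource id="hvm1"/>'

I am not sure whether that is safe, or whether the entry would simply be re-created on the next probe.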

I tried

cibadmin -Q > tmp.xml
vi tmp.xml
cibadmin --replace --xml-file tmp.xml

but this does not do the job, I guess because the problematic bits are in the status section.
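
The other obvious candidate seems to be a resource cleanup, e.g.

node2 # crm resource cleanup hvm1 node2

but I was unsure whether a cleanup still applies to a resource that has already been removed from the configuration (an assumption on my part, not something I have tried yet).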

Any clue how to clean this up properly, preferably without any cluster downtime?

Thanks,
Kevin

version info

node2 # rpm -qa | egrep "heartbeat|pacemaker|cluster|openais"
libopenais3-1.1.2-0.5.19
pacemaker-mgmt-2.0.0-0.2.19
openais-1.1.2-0.5.19
cluster-network-kmp-xen-1.4_2.6.32.12_0.6-2.1.73
libpacemaker3-1.1.2-0.2.1
drbd-heartbeat-8.3.7-0.4.15
cluster-glue-1.0.5-0.5.1
drbd-pacemaker-8.3.7-0.4.15
cluster-network-kmp-default-1.4_2.6.32.12_0.6-2.1.73
pacemaker-1.1.2-0.2.1
yast2-cluster-2.15.0-8.6.19
pacemaker-mgmt-client-2.0.0-0.2.19