[Pacemaker] orphaned resource

Shravan Mishra shravan.mishra at gmail.com
Tue Dec 1 14:30:33 UTC 2009


Thanks Andrew, that in fact was the problem.
Basically, the crm shell gives transactional behaviour when injecting
config while the nodes are running.

I was able to solve it by putting all my nodes on standby, importing
the config in XML, and then putting them back online; the problem
never occurred again.

Steps taken (a rough script version follows below):

1. crm node standby     -- for all the nodes
2. <import the config on the master>
3. crm node online -- for all the nodes
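
Roughly, as a script (node1/node2 are placeholders for the real node
names; the import step is the same set of cibadmin calls as in the
template quoted below):

  # put every node in standby so nothing is running while the
  # configuration is loaded
  for n in node1 node2; do
      crm node standby $n
  done

  # import the configuration on the master
  # (e.g. the cibadmin heredocs from the template below)

  # bring the nodes back online
  for n in node1 node2; do
      crm node online $n
  done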


-Shravan

On Tue, Dec 1, 2009 at 4:33 AM, Andrew Beekhof <andrew at beekhof.net> wrote:
> Try using the crm shell; its syntax is much nicer than raw XML.
>
> Orphan resources are those for which there is an entry in the status
> section but no definition in the resources section.
> The cluster basically thinks you haven't defined the vip resource for
> some reason.
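>
> For example (just a sketch, reusing the $v_ip variable from your
> template), the vip primitive could be defined in the crm shell as:
>
>   crm configure primitive vip ocf:heartbeat:IPaddr2 \
>     params ip=$v_ip \
>     op monitor interval=20s timeout=60s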
>
> On Sun, Nov 29, 2009 at 5:07 AM, Shravan Mishra
> <shravan.mishra at gmail.com> wrote:
>> Hi guys,
>>
>> My IPaddr2 resource is stopping every time, and when I look at the logs
>> I see the following:
>> ======
>> Nov 28 02:25:41 node2 pengine: [4109]: info: log_data_element:
>> create_fake_resource: Orphan resource <primitive id="vip"
>> type="IPaddr2" class="ocf" provider="heartbeat" />
>> Nov 28 02:25:41 node2 pengine: [4109]: info: process_orphan_resource:
>> Making sure orphan vip is stopped
>> Nov 28 02:25:41 node2 pengine: [4109]: info: get_failcount:
>> node2.itactics.com-stonith has failed 1000000 times on
>> node2.itactics.com
>> ========
>>
>> My config template is:
>> ========
>> crm_attribute -t crm_config -n no-quorum-policy -v ignore
>> crm_attribute -t crm_config -n symmetric-cluster -v true
>> crm_attribute -t crm_config -n stonith-action -v reboot
>> crm_attribute -t crm_config -n stonith-enabled -v true
>> crm configure property dc-deadtime=3min
>>
>> cibadmin -o resources -C -p<<END
>>  <master id="ms-drbd">
>>        <meta_attributes id="ma-ms-drbd">
>>                         <nvpair id="ma-ms-drbd-1" name="clone_max" value="2"/>
>>          <nvpair id="ma-ms-drbd-2" name="clone-node-max" value="1"/>
>>          <nvpair id="ma-ms-drbd-3" name="notify" value="yes"/>
>>          <nvpair id="ma-ms-drbd-4" name="globally-unique" value="false"/>
>>          <nvpair id="ma-ms-drbd-5" name="master-max" value="1"/>
>>          <nvpair id="ma-ms-drbd-6" name="master-node-max" value="1"/>
>>          <nvpair id="ma-ms-drbd-7" name="target-role" value="started"/>
>>        </meta_attributes>
>>        <primitive id="drbd0" class="ocf" provider="linbit" type="drbd">
>>          <instance_attributes id="ia-drbd">
>>            <nvpair id="ia-drbd-1" name="drbd_resource" value="var_nsm"/>
>>          </instance_attributes>
>>          <operations>
>>            <op id="op-drbd-1" name="monitor" interval="59s"
>> timeout="10s" role="Master"/>
>>            <op id="op-drbd-2" name="monitor" interval="60s"
>> timeout="10s" role="Slave"/>
>>          </operations>
>>        </primitive>
>>      </master>
>> END
>>
>> cibadmin -o resources -C -p<<END
>>  <primitive class="stonith" type="external/safe/ipmi"
>> id="$master_node-stonith">
>>        <operations>
>>          <op id="op-$master_node-stonith-1" name="monitor"
>> timeout="3min" interval="20s"/>
>>        </operations>
>>        <instance_attributes id="$master_node-attributes">
>>          <nvpair id="ia-$master_node-stonith-0" name="target_role"
>> value="started"/>
>>          <nvpair id="ia-$master_node-stonith-1" name="hostname"
>> value="$master_node"/>
>>          <nvpair name="ipaddr" id="ia-$master_node-stonith-2"
>> value="$ipmi_master"/>
>>        </instance_attributes>
>>      </primitive>
>> END
>>
>> cibadmin -o resources -C -p<<END
>>  <primitive class="stonith" type="external/safe/ipmi" id="$slave_node-stonith">
>>        <operations>
>>          <op id="op-$slave_node-stonith-1" name="monitor"
>> timeout="2min" interval="20s"/>
>>        </operations>
>>        <instance_attributes id="$slave_node-attributes">
>>          <nvpair id="ia-$slave_node-stonith-0" name="target_role"
>> value="started"/>
>>          <nvpair id="ia-$slave_node-stonith-1" name="hostname"
>> value="$slave_node"/>
>>          <nvpair name="ipaddr" id="ia-$slave_node-stonith-2"
>> value="$ipmi_slave"/>
>>        </instance_attributes>
>>      </primitive>
>> END
>>
>> cibadmin -o resources -C -p<<END
>>  <group id="svcs_grp">
>>        <meta_attributes id="ma-svcs">
>>          <nvpair id="ma-svcs-1" name="target_role" value="started"/>
>>        </meta_attributes>
>>        <primitive class="ocf" provider="heartbeat" type="Filesystem" id="fs0">
>>          <meta_attributes id="ma-fs0">
>>            <nvpair name="target_role" id="ma-fs0-1" value="stopped"/>
>>          </meta_attributes>
>>          <instance_attributes id="ia-fs0">
>>            <nvpair id="ia-fs0-1" name="fstype" value="xfs"/>
>>            <nvpair id="ia-fs0-2" name="directory" value="/var/nsm"/>
>>            <nvpair id="ia-fs0-3" name="device" value="/dev/drbd1"/>
>>          </instance_attributes>
>>        </primitive>
>>                  <primitive class="ocf" type="safe" provider="itactics" id="safe_svcs">
>>        <operations>
>>          <op name="start" interval="0" id="op-safe-1" timeout="3min"/>
>>          <op interval="0" id="op-safe-2" name="stop" timeout="3min"/>
>>          <op id="op-safe-3" name="monitor" timeout="30min" interval="20s"/>
>>        </operations>
>>        <instance_attributes id="ia-safe">
>>          <nvpair id="ia-safe-1" name="target-role" value="Started"/>
>>          <nvpair id="ia-safe-2" name="is-managed" value="true"/>
>>        </instance_attributes>
>>      </primitive>
>>   </group>
>> END
>>
>> cibadmin -o resources -C -p<<END
>>         <primitive id="vip" class="ocf" type="IPaddr2" provider="heartbeat">
>>          <operations>
>>                 <op id="op-vip-1" name="monitor" timeout="1min" interval="20s"/>
>>          </operations>
>>          <instance_attributes id="ia-vip">
>>                 <nvpair id="vip-addr" name="ip" value="$v_ip"/>
>>          </instance_attributes>
>>        </primitive>
>> END
>>
>> cibadmin -o constraints -C -p<<END
>>  <rsc_colocation id="vip-on-safe_svcs" rsc="vip" score="INFINITY"
>> with-rsc="safe_svcs"/>
>> END
>>
>> cibadmin -o constraints -C -p<<END
>>        <rsc_order id="vip-after-safe_svcs" first="safe_svcs" then="vip"/>
>> END
>>
>>
>> cibadmin -o constraints -C -p<<END
>>   <rsc_location id="$master_node-stonith-placement" rsc="$master_node-stonith">
>>        <rule id="ri-$master_node-stonith-placement-1" score="INFINITY">
>>          <expression id="ex-$master_node-stonith-placement-1"
>> value="$slave_node" attribute="#uname" operation="eq"/>
>>        </rule>
>>      </rsc_location>
>> END
>>
>> cibadmin -o constraints -C -p<<END
>>  <rsc_location id="$slave_node-stonith-placement" rsc="$slave_node-stonith">
>>        <rule id="ri-$slave_node-stonith-placement-1" score="INFINITY">
>>          <expression id="ex-$slave_node-stonith-placement-1"
>> value="$master_node" attribute="#uname" operation="eq"/>
>>        </rule>
>>      </rsc_location>
>> END
>>
>> cibadmin -o constraints -C -p<<END
>>    <rsc_location id="drbd-master" rsc="ms-drbd">
>>        <rule id="ri-drbd-master-1" role="master" score="100">
>>          <expression id="ex-drbd-master-1" attribute="#uname"
>> operation="eq" value="$master_node"/>
>>        </rule>
>>      </rsc_location>
>> END
>>
>> cibadmin -o constraints -C -p<<END
>>  <rsc_order first="ms-drbd" first-action="promote"
>> id="ms-drbd-before-svcs-group" score="INFINITY" then="svcs_grp"
>> then-action="start"/>
>> END
>>
>> cibadmin -o constraints -C -p<<END
>>  <rsc_colocation id="svcs-grp-on-ms-drbd" rsc="svcs_grp"
>> score="INFINITY" with-rsc="ms-drbd" with-rsc-role="Master"/>
>> END
>>
>>
>>
>> ========
>>
>> My hunch is that I'm making a mistake in the constraints section related
>> to "vip", but I cannot figure out what.
>> My question is: what is an orphaned resource, and why is this resource
>> being considered orphaned?
>>
>> Appreciate the help.
>>
>> Thanks
>> Shravan
>>
>> _______________________________________________
>> Pacemaker mailing list
>> Pacemaker at oss.clusterlabs.org
>> http://oss.clusterlabs.org/mailman/listinfo/pacemaker
>>
>
> _______________________________________________
> Pacemaker mailing list
> Pacemaker at oss.clusterlabs.org
> http://oss.clusterlabs.org/mailman/listinfo/pacemaker
>



