[ClusterLabs] starting primitive resources of a group without starting the complete group - unclear behaviour
Ken Gaillot
kgaillot at redhat.com
Fri Apr 21 01:24:50 CEST 2017
On 04/20/2017 02:53 PM, Lentes, Bernd wrote:
> Hi,
>
> just for the sake of completeness I'd like to figure out what happens if I start one resource that is a member of a group, but only this resource.
> I'd like to see what the other resources of that group do, even if it maybe does not make much sense. Just for learning and understanding.
>
> But I'm getting confused by my test results:
>
> first test:
>
> crm(live)# status
> Last updated: Thu Apr 20 20:56:08 2017
> Last change: Thu Apr 20 20:46:35 2017 by root via cibadmin on ha-idg-2
> Stack: classic openais (with plugin)
> Current DC: ha-idg-2 - partition with quorum
> Version: 1.1.12-f47ea56
> 2 Nodes configured, 2 expected votes
> 14 Resources configured
>
>
> Online: [ ha-idg-1 ha-idg-2 ]
>
> Clone Set: clone_group_prim_dlm_clvmd_vg_cluster_01_ocfs2_fs_lv_xml [group_prim_dlm_clvmd_vg_cluster_01_ocfs2_fs_lv_xml]
> Started: [ ha-idg-1 ha-idg-2 ]
> prim_stonith_ipmi_ha-idg-1 (stonith:external/ipmi): Started ha-idg-2
> prim_stonith_ipmi_ha-idg-2 (stonith:external/ipmi): Started ha-idg-1
>
> crm(live)# resource start prim_vnc_ip_mausdb
>
> crm(live)# status
> Last updated: Thu Apr 20 20:56:44 2017
> Last change: Thu Apr 20 20:56:44 2017 by root via crm_resource on ha-idg-1
> Stack: classic openais (with plugin)
> Current DC: ha-idg-2 - partition with quorum
> Version: 1.1.12-f47ea56
> 2 Nodes configured, 2 expected votes
> 14 Resources configured
>
>
> Online: [ ha-idg-1 ha-idg-2 ]
>
> Clone Set: clone_group_prim_dlm_clvmd_vg_cluster_01_ocfs2_fs_lv_xml [group_prim_dlm_clvmd_vg_cluster_01_ocfs2_fs_lv_xml]
> Started: [ ha-idg-1 ha-idg-2 ]
> prim_stonith_ipmi_ha-idg-1 (stonith:external/ipmi): Started ha-idg-2
> prim_stonith_ipmi_ha-idg-2 (stonith:external/ipmi): Started ha-idg-1
> Resource Group: group_vnc_mausdb
> prim_vnc_ip_mausdb (ocf::heartbeat:IPaddr): Started ha-idg-1 <=======
> prim_vm_mausdb (ocf::heartbeat:VirtualDomain): Started ha-idg-1 <=======
>
>
>
> second test:
>
> crm(live)# status
> Last updated: Thu Apr 20 21:24:19 2017
> Last change: Thu Apr 20 21:20:04 2017 by root via cibadmin on ha-idg-2
> Stack: classic openais (with plugin)
> Current DC: ha-idg-2 - partition with quorum
> Version: 1.1.12-f47ea56
> 2 Nodes configured, 2 expected votes
> 14 Resources configured
>
>
> Online: [ ha-idg-1 ha-idg-2 ]
>
> Clone Set: clone_group_prim_dlm_clvmd_vg_cluster_01_ocfs2_fs_lv_xml [group_prim_dlm_clvmd_vg_cluster_01_ocfs2_fs_lv_xml]
> Started: [ ha-idg-1 ha-idg-2 ]
> prim_stonith_ipmi_ha-idg-1 (stonith:external/ipmi): Started ha-idg-2
> prim_stonith_ipmi_ha-idg-2 (stonith:external/ipmi): Started ha-idg-1
>
>
> crm(live)# resource start prim_vnc_ip_mausdb
>
>
> crm(live)# status
> Last updated: Thu Apr 20 21:26:05 2017
> Last change: Thu Apr 20 21:25:55 2017 by root via cibadmin on ha-idg-2
> Stack: classic openais (with plugin)
> Current DC: ha-idg-2 - partition with quorum
> Version: 1.1.12-f47ea56
> 2 Nodes configured, 2 expected votes
> 14 Resources configured
>
>
> Online: [ ha-idg-1 ha-idg-2 ]
>
> Clone Set: clone_group_prim_dlm_clvmd_vg_cluster_01_ocfs2_fs_lv_xml [group_prim_dlm_clvmd_vg_cluster_01_ocfs2_fs_lv_xml]
> Started: [ ha-idg-1 ha-idg-2 ]
> prim_stonith_ipmi_ha-idg-1 (stonith:external/ipmi): Started ha-idg-2
> prim_stonith_ipmi_ha-idg-2 (stonith:external/ipmi): Started ha-idg-1
> Resource Group: group_vnc_mausdb
> prim_vnc_ip_mausdb (ocf::heartbeat:IPaddr): Started ha-idg-1 <=======
> prim_vm_mausdb (ocf::heartbeat:VirtualDomain): (target-role:Stopped) Stopped <=======
target-role=Stopped prevents a resource from being started.
In a group, each member depends on the members listed before it, just as
if ordering and colocation constraints had been created between each
consecutive pair. So, starting a resource in the "middle" of a group
will also start everything before it.
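As a rough sketch (the constraint IDs below are made up, not from your
configuration), your group behaves much like the two primitives plus
constraints along these lines:

  # illustrative only -- ord_ip_before_vm / col_vm_with_ip are invented IDs
  order ord_ip_before_vm inf: prim_vnc_ip_mausdb prim_vm_mausdb
  colocation col_vm_with_ip inf: prim_vm_mausdb prim_vnc_ip_mausdb

That implied ordering is why starting prim_vm_mausdb also brings up
prim_vnc_ip_mausdb first.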
>
> One time the second resource of the group is started together with the first one, the other time it is not!?
> Why this inconsistent behaviour?
>
> This is my configuration:
>
> primitive prim_vm_mausdb VirtualDomain \
> params config="/var/lib/libvirt/images/xml/mausdb_vm.xml" \
> params hypervisor="qemu:///system" \
> params migration_transport=ssh \
> op start interval=0 timeout=120 \
> op stop interval=0 timeout=130 \
> op monitor interval=30 timeout=30 \
> op migrate_from interval=0 timeout=180 \
> op migrate_to interval=0 timeout=190 \
> meta allow-migrate=true is-managed=true \
> utilization cpu=4 hv_memory=8006
>
>
> primitive prim_vnc_ip_mausdb IPaddr \
> params ip=146.107.235.161 nic=br0 cidr_netmask=24 \
> meta target-role=Started
>
>
> group group_vnc_mausdb prim_vnc_ip_mausdb prim_vm_mausdb \
> meta target-role=Stopped is-managed=true
Everything in the group inherits this target-role=Stopped. However,
prim_vnc_ip_mausdb has its own target-role=Started, which overrides that.
I'm not sure what target-role was on each resource at each step in your
tests, but the behavior should match that.
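Keep in mind that crmsh's "resource start"/"resource stop" work by
setting target-role on whatever resource you name, so running them
against the primitive vs. the group changes which level carries the
attribute. As a sketch, you can check (or clear) the per-resource value
directly, e.g.:

  # show target-role set on the resource itself (the group's value is separate)
  crm_resource --resource prim_vnc_ip_mausdb --meta --get-parameter target-role
  crm_resource --resource group_vnc_mausdb --meta --get-parameter target-role

  # clear the per-resource value so only the group's setting applies
  crm_resource --resource prim_vnc_ip_mausdb --meta --delete-parameter target-role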
>
> Failcounts for the group and the vm are zero on both nodes. The scores for the vm are -INFINITY on both nodes.
>
>
> Starting the vm in the second case (resource start prim_vm_mausdb) succeeds; then I have both resources running.
>
> Any ideas ?
>
>
> Bernd
>
>