[ClusterLabs] Re: starting primitive resources of a group without starting the complete group - unclear behaviour

Ulrich Windl Ulrich.Windl at rz.uni-regensburg.de
Fri Apr 21 06:06:52 UTC 2017


>>> "Lentes, Bernd" <bernd.lentes at helmholtz-muenchen.de> schrieb am 20.04.2017 um
21:53 in Nachricht
<1649590422.18260279.1492718032265.JavaMail.zimbra at helmholtz-muenchen.de>:
> Hi,
> 
> just for the sake of completeness I'd like to figure out what happens if I
> start one resource that is a member of a group, but only this resource.
> I'd like to see what the other resources of that group do, even if it
> may not make much sense - just for learning and understanding.

A resource inside a group is still subject to what the group enforces; a group
is essentially shorthand for colocation and ordering constraints between its
members.
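Roughly, in crm configure syntax that is the same as explicit constraints like
these (only a sketch; the constraint IDs are made up):

    colocation col_vm_with_ip inf: prim_vm_mausdb prim_vnc_ip_mausdb
    order ord_ip_before_vm inf: prim_vnc_ip_mausdb prim_vm_mausdb

So starting or stopping one member is never evaluated in isolation.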

> 
> But I'm going mad over my test results:

You should have explained what your group looks like and what resource you are testing.

> 
> first test:
> 
> crm(live)# status
> Last updated: Thu Apr 20 20:56:08 2017
> Last change: Thu Apr 20 20:46:35 2017 by root via cibadmin on ha-idg-2
> Stack: classic openais (with plugin)
> Current DC: ha-idg-2 - partition with quorum
> Version: 1.1.12-f47ea56
> 2 Nodes configured, 2 expected votes
> 14 Resources configured
> 
> 
> Online: [ ha-idg-1 ha-idg-2 ]
> 
>  Clone Set: clone_group_prim_dlm_clvmd_vg_cluster_01_ocfs2_fs_lv_xml 
> [group_prim_dlm_clvmd_vg_cluster_01_ocfs2_fs_lv_xml]
>      Started: [ ha-idg-1 ha-idg-2 ]
>  prim_stonith_ipmi_ha-idg-1     (stonith:external/ipmi):        Started 
> ha-idg-2
>  prim_stonith_ipmi_ha-idg-2     (stonith:external/ipmi):        Started 
> ha-idg-1
> 
> crm(live)# resource start prim_vnc_ip_mausdb
> 
> crm(live)# status
> Last updated: Thu Apr 20 20:56:44 2017
> Last change: Thu Apr 20 20:56:44 2017 by root via crm_resource on ha-idg-1
> Stack: classic openais (with plugin)
> Current DC: ha-idg-2 - partition with quorum
> Version: 1.1.12-f47ea56
> 2 Nodes configured, 2 expected votes
> 14 Resources configured
> 
> 
> Online: [ ha-idg-1 ha-idg-2 ]
> 
>  Clone Set: clone_group_prim_dlm_clvmd_vg_cluster_01_ocfs2_fs_lv_xml 
> [group_prim_dlm_clvmd_vg_cluster_01_ocfs2_fs_lv_xml]
>      Started: [ ha-idg-1 ha-idg-2 ]
>  prim_stonith_ipmi_ha-idg-1     (stonith:external/ipmi):        Started 
> ha-idg-2
>  prim_stonith_ipmi_ha-idg-2     (stonith:external/ipmi):        Started 
> ha-idg-1
>  Resource Group: group_vnc_mausdb
>      prim_vnc_ip_mausdb (ocf::heartbeat:IPaddr):        Started ha-idg-1   
> <=======
>      prim_vm_mausdb     (ocf::heartbeat:VirtualDomain): Started ha-idg-1   
> <=======
> 
> 
> 
> second test:

What's the status before the test?

> 
> crm(live)# status
> Last updated: Thu Apr 20 21:24:19 2017
> Last change: Thu Apr 20 21:20:04 2017 by root via cibadmin on ha-idg-2
> Stack: classic openais (with plugin)
> Current DC: ha-idg-2 - partition with quorum
> Version: 1.1.12-f47ea56
> 2 Nodes configured, 2 expected votes
> 14 Resources configured
> 
> 
> Online: [ ha-idg-1 ha-idg-2 ]
> 
>  Clone Set: clone_group_prim_dlm_clvmd_vg_cluster_01_ocfs2_fs_lv_xml 
> [group_prim_dlm_clvmd_vg_cluster_01_ocfs2_fs_lv_xml]
>      Started: [ ha-idg-1 ha-idg-2 ]
>  prim_stonith_ipmi_ha-idg-1     (stonith:external/ipmi):        Started 
> ha-idg-2
>  prim_stonith_ipmi_ha-idg-2     (stonith:external/ipmi):        Started 
> ha-idg-1
> 
> 
> crm(live)# resource start prim_vnc_ip_mausdb
> 
> 
> crm(live)# status
> Last updated: Thu Apr 20 21:26:05 2017
> Last change: Thu Apr 20 21:25:55 2017 by root via cibadmin on ha-idg-2
> Stack: classic openais (with plugin)
> Current DC: ha-idg-2 - partition with quorum
> Version: 1.1.12-f47ea56
> 2 Nodes configured, 2 expected votes
> 14 Resources configured
> 
> 
> Online: [ ha-idg-1 ha-idg-2 ]
> 
>  Clone Set: clone_group_prim_dlm_clvmd_vg_cluster_01_ocfs2_fs_lv_xml 
> [group_prim_dlm_clvmd_vg_cluster_01_ocfs2_fs_lv_xml]
>      Started: [ ha-idg-1 ha-idg-2 ]
>  prim_stonith_ipmi_ha-idg-1     (stonith:external/ipmi):        Started 
> ha-idg-2
>  prim_stonith_ipmi_ha-idg-2     (stonith:external/ipmi):        Started 
> ha-idg-1
>  Resource Group: group_vnc_mausdb
>      prim_vnc_ip_mausdb (ocf::heartbeat:IPaddr):        Started ha-idg-1   
> <=======
>      prim_vm_mausdb     (ocf::heartbeat:VirtualDomain): 
> (target-role:Stopped) Stopped   <=======
> 
> 
> One time the second resource of the group is started along with the first
> resource, the other time it is not !?!
> Why this inconsistent behaviour?

With your incomplete status and test description it's hard to say.
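
That said, one visible difference between the two runs is that the second
status shows an explicit target-role:Stopped on prim_vm_mausdb itself. To see
the scores the policy engine actually used, something like this against the
live CIB should help (the grep only narrows the output):

    crm_simulate -sL | grep -i mausdb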

> 
> This is my configuration:
> 
> primitive prim_vm_mausdb VirtualDomain \
>         params config="/var/lib/libvirt/images/xml/mausdb_vm.xml" \
>         params hypervisor="qemu:///system" \
>         params migration_transport=ssh \
>         op start interval=0 timeout=120 \
>         op stop interval=0 timeout=130 \
>         op monitor interval=30 timeout=30 \
>         op migrate_from interval=0 timeout=180 \
>         op migrate_to interval=0 timeout=190 \
>         meta allow-migrate=true is-managed=true \
>         utilization cpu=4 hv_memory=8006
> 
> 
> primitive prim_vnc_ip_mausdb IPaddr \
>         params ip=146.107.235.161 nic=br0 cidr_netmask=24 \
>         meta target-role=Started
> 
> 
> group group_vnc_mausdb prim_vnc_ip_mausdb prim_vm_mausdb \
>         meta target-role=Stopped is-managed=true
> 
> 
> Failcounts for the group and the vm are zero on both nodes. Scores for the
> vm on both nodes are -INFINITY.
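
A -INFINITY score would fit a target-role=Stopped set on the vm primitive
itself. With crmsh, something along these lines should show (and, if wanted,
remove) that meta attribute - a sketch, check the syntax of your crmsh version:

    crm resource meta prim_vm_mausdb show target-role
    crm resource meta prim_vm_mausdb delete target-role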
> 
> 
> Starting the vm in the second case (resource start prim_vm_mausdb) succeeds,
> then I have both resources running.
> 
> Any ideas?

What about the logs?
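
With the classic openais/plugin stack the pengine and crmd messages usually end
up in syslog (or corosync.log, depending on your logging setup). Something
along these lines around the time of the start should show what the policy
engine decided (illustrative; adjust the log path to your setup):

    grep -E 'pengine|crmd' /var/log/messages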

Regards,
Ulrich





