[ClusterLabs] starting primitive resources of a group without starting the complete group - unclear behaviour

Lentes, Bernd bernd.lentes at helmholtz-muenchen.de
Thu Apr 20 21:53:52 CEST 2017


Hi,

Just for the sake of completeness, I'd like to figure out what happens when I start a single resource that is a member of a group, and only that resource.
I'd like to see what the other resources of that group do. It may not make much practical sense, but it's for learning and understanding.
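
To my understanding, crmsh's "resource start" does nothing more than set the meta attribute target-role=Started on the named resource. One way to check which target-role a resource currently carries:

inside the crm shell:
crm(live)# resource meta prim_vnc_ip_mausdb show target-role

or with the low-level pacemaker tool from a root shell:
crm_resource --resource prim_vnc_ip_mausdb --meta --get-parameter target-role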

But my test results are driving me mad:

First test:

crm(live)# status
Last updated: Thu Apr 20 20:56:08 2017
Last change: Thu Apr 20 20:46:35 2017 by root via cibadmin on ha-idg-2
Stack: classic openais (with plugin)
Current DC: ha-idg-2 - partition with quorum
Version: 1.1.12-f47ea56
2 Nodes configured, 2 expected votes
14 Resources configured


Online: [ ha-idg-1 ha-idg-2 ]

 Clone Set: clone_group_prim_dlm_clvmd_vg_cluster_01_ocfs2_fs_lv_xml [group_prim_dlm_clvmd_vg_cluster_01_ocfs2_fs_lv_xml]
     Started: [ ha-idg-1 ha-idg-2 ]
 prim_stonith_ipmi_ha-idg-1     (stonith:external/ipmi):        Started ha-idg-2
 prim_stonith_ipmi_ha-idg-2     (stonith:external/ipmi):        Started ha-idg-1

crm(live)# resource start prim_vnc_ip_mausdb

crm(live)# status
Last updated: Thu Apr 20 20:56:44 2017
Last change: Thu Apr 20 20:56:44 2017 by root via crm_resource on ha-idg-1
Stack: classic openais (with plugin)
Current DC: ha-idg-2 - partition with quorum
Version: 1.1.12-f47ea56
2 Nodes configured, 2 expected votes
14 Resources configured


Online: [ ha-idg-1 ha-idg-2 ]

 Clone Set: clone_group_prim_dlm_clvmd_vg_cluster_01_ocfs2_fs_lv_xml [group_prim_dlm_clvmd_vg_cluster_01_ocfs2_fs_lv_xml]
     Started: [ ha-idg-1 ha-idg-2 ]
 prim_stonith_ipmi_ha-idg-1     (stonith:external/ipmi):        Started ha-idg-2
 prim_stonith_ipmi_ha-idg-2     (stonith:external/ipmi):        Started ha-idg-1
 Resource Group: group_vnc_mausdb
     prim_vnc_ip_mausdb (ocf::heartbeat:IPaddr):        Started ha-idg-1   <=======
     prim_vm_mausdb     (ocf::heartbeat:VirtualDomain): Started ha-idg-1   <=======



Second test:

crm(live)# status
Last updated: Thu Apr 20 21:24:19 2017
Last change: Thu Apr 20 21:20:04 2017 by root via cibadmin on ha-idg-2
Stack: classic openais (with plugin)
Current DC: ha-idg-2 - partition with quorum
Version: 1.1.12-f47ea56
2 Nodes configured, 2 expected votes
14 Resources configured


Online: [ ha-idg-1 ha-idg-2 ]

 Clone Set: clone_group_prim_dlm_clvmd_vg_cluster_01_ocfs2_fs_lv_xml [group_prim_dlm_clvmd_vg_cluster_01_ocfs2_fs_lv_xml]
     Started: [ ha-idg-1 ha-idg-2 ]
 prim_stonith_ipmi_ha-idg-1     (stonith:external/ipmi):        Started ha-idg-2
 prim_stonith_ipmi_ha-idg-2     (stonith:external/ipmi):        Started ha-idg-1


crm(live)# resource start prim_vnc_ip_mausdb


crm(live)# status
Last updated: Thu Apr 20 21:26:05 2017
Last change: Thu Apr 20 21:25:55 2017 by root via cibadmin on ha-idg-2
Stack: classic openais (with plugin)
Current DC: ha-idg-2 - partition with quorum
Version: 1.1.12-f47ea56
2 Nodes configured, 2 expected votes
14 Resources configured


Online: [ ha-idg-1 ha-idg-2 ]

 Clone Set: clone_group_prim_dlm_clvmd_vg_cluster_01_ocfs2_fs_lv_xml [group_prim_dlm_clvmd_vg_cluster_01_ocfs2_fs_lv_xml]
     Started: [ ha-idg-1 ha-idg-2 ]
 prim_stonith_ipmi_ha-idg-1     (stonith:external/ipmi):        Started ha-idg-2
 prim_stonith_ipmi_ha-idg-2     (stonith:external/ipmi):        Started ha-idg-1
 Resource Group: group_vnc_mausdb
     prim_vnc_ip_mausdb (ocf::heartbeat:IPaddr):        Started ha-idg-1   <=======
     prim_vm_mausdb     (ocf::heartbeat:VirtualDomain): (target-role:Stopped) Stopped   <=======


One time the second resource of the group is started along with the first, the other time it is not!?
Why this inconsistent behaviour?
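
The only difference I can spot between the two outputs is that in the second test prim_vm_mausdb is shown with (target-role:Stopped), i.e. the primitive itself carries that meta attribute, while in the first test it does not. To see on which object (the primitive or the group) target-role actually sits, the definitions can be dumped:

crm(live)# configure show prim_vm_mausdb group_vnc_mausdb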

This is my configuration:

primitive prim_vm_mausdb VirtualDomain \
        params config="/var/lib/libvirt/images/xml/mausdb_vm.xml" \
        params hypervisor="qemu:///system" \
        params migration_transport=ssh \
        op start interval=0 timeout=120 \
        op stop interval=0 timeout=130 \
        op monitor interval=30 timeout=30 \
        op migrate_from interval=0 timeout=180 \
        op migrate_to interval=0 timeout=190 \
        meta allow-migrate=true is-managed=true \
        utilization cpu=4 hv_memory=8006


primitive prim_vnc_ip_mausdb IPaddr \
        params ip=146.107.235.161 nic=br0 cidr_netmask=24 \
        meta target-role=Started


group group_vnc_mausdb prim_vnc_ip_mausdb prim_vm_mausdb \
        meta target-role=Stopped is-managed=true
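
To my understanding, meta attributes on a group (here target-role=Stopped) act as defaults that its members inherit, while an attribute set directly on a member (here target-role=Started on prim_vnc_ip_mausdb) takes precedence over the inherited value. So if prim_vm_mausdb ever ends up with its own target-role=Stopped, it stays down no matter what is started on the group. A stale per-primitive value can be inspected and cleared like this:

crm(live)# resource meta prim_vm_mausdb show target-role
crm(live)# resource meta prim_vm_mausdb delete target-role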


Fail counts for the group and the VM are zero on both nodes. The score for the VM is -INFINITY on both nodes.
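
For the record, allocation scores can be read from the live cluster with crm_simulate, e.g.:

crm_simulate -sL | grep prim_vm_mausdb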


Starting the VM explicitly in the second case (resource start prim_vm_mausdb) succeeds; then I have both resources running.
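
For comparison, starting the whole group rather than a single member should bring up both members in their configured order:

crm(live)# resource start group_vnc_mausdb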

Any ideas?


Bernd


-- 
Bernd Lentes 

System administration 
Institute of Developmental Genetics 
Building 35.34 - Room 208 
HelmholtzZentrum München 
bernd.lentes at helmholtz-muenchen.de 
phone: +49 (0)89 3187 1241 
fax: +49 (0)89 3187 2294 

Only once you commit to something can you be wrong. 
Scott Adams
 




