[ClusterLabs] Re: About globally unique resource instances distribution per node

Daniel Hernández danyboy1104 at gmail.com
Wed Dec 30 18:43:15 UTC 2015


On 12/30/15, Ulrich Windl <Ulrich.Windl at rz.uni-regensburg.de> wrote:
> Hi!
>
> I would expect that if you set the cpu utilization per primitive (used in
> the clone) to one and set the cpu capacity per node to the correct number,
> no node would have more primitives than its cpu count allows and the
> primitives would be distributed among all available nodes. Isn't that true
> in your case?
>
> What exactly does not work in your opinion?
>
> Regards,
> Ulrich
>
>>>> Daniel Hernández <danyboy1104 at gmail.com> wrote on 29.12.2015 at 16:21
> in message
> <CAMhkz_BUEjidSkeJ7uJrTJ1v-vkA+s2YWR69PF6=gOwrOSFwUg at mail.gmail.com>:
>> Good day. I work at Datys Soluciones Tecnológicas; we have been using
>> Corosync and Pacemaker in production to run a service infrastructure for
>> 3 years. The versions used are CentOS 6.3, corosync 1.4.1 and pacemaker
>> 1.1.7. We have a web server, a gearman job manager, and globally unique
>> resource clones as gearman workers to balance the load distributed by
>> gearman. My question is whether there is a way, or a workaround, to
>> configure how many instances of a globally unique resource clone start
>> on each node. As an example: say we have 3 nodes, node1, node2 and
>> node3, and a globally unique resource clone of 6 instances named
>> clone_example, and we want to start 1 instance on node1, 2 instances on
>> node2 and 3 instances on node3, as the following example shows.
>>
>> Clone Set: clone_example [example] (unique)
>>          example:0 (ocf:heartbeat:example): Started node3
>>          example:1 (ocf:heartbeat:example): Started node2
>>          example:2 (ocf:heartbeat:example): Started node2
>>          example:3 (ocf:heartbeat:example): Started node1
>>          example:4 (ocf:heartbeat:example): Started node3
>>          example:5 (ocf:heartbeat:example): Started node3
>>
>> The reason we want to configure the resource this way is that each
>> resource clone instance consumes one cpu on its node, and the nodes have
>> different numbers of cpus:
>> node1 = 1 cpu, node2 = 2 cpus, node3 = 3 cpus in the example.
>>
>> I read Clusters from Scratch and Pacemaker Explained to find a way, and
>> saw Chapter 11, Utilization and Placement Strategy. I made a test with
>> clones and resources, but the clones were not distributed as I expected
>> and some instances were not started; I tried the 3 placement strategies
>> with similar behaviour. I know the cluster uses a best-effort algorithm
>> to distribute the resources when this option is used, and maybe that's
>> the reason, so I am searching for a way to do it. I browsed the mailing
>> list archives to see if there was a similar post on this topic and
>> couldn't find one; maybe I missed it. Any response will be appreciated.
>> Thanks for your time
>>
>

Hi Ulrich, thanks for your response. I took your suggestion and tested my
example. I created a 3-node cluster with one dummy resource, using the
following commands.

crm configure primitive example1 ocf:heartbeat:Dummy \
op monitor interval=30s

crm configure clone clone_example1 example1 \
	meta globally-unique="true" clone-max="6" clone-node-max="6"


crm node utilization node1 set cpu 1
crm node utilization node2 set cpu 2
crm node utilization node3 set cpu 3
crm resource utilization example1 set cpu 1
crm configure property placement-strategy="balanced"
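
For reference, the resulting configuration and placement can be inspected
with something like the commands below (I believe crm_simulate's -L and -s
options replay the live CIB and print the allocation scores, but treat this
as a sketch rather than exact syntax for every version):

crm configure show   # shows the primitives, clone meta attributes and node utilization
crm_simulate -sL     # replays the live CIB and prints allocation scores per node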

Online: [ node1 node2 node3 ]

 Clone Set: clone_example1 [example1] (unique)
     example1:0	(ocf::heartbeat:Dummy):	Started node1
     example1:1	(ocf::heartbeat:Dummy):	Started node2
     example1:2	(ocf::heartbeat:Dummy):	Started node3
     example1:3	(ocf::heartbeat:Dummy):	Started node3
     example1:4	(ocf::heartbeat:Dummy):	Started node2
     example1:5	(ocf::heartbeat:Dummy):	Started node3

That worked, which was not what I expected; it behaves differently in
another scenario, which is really why my question arose.
I tested a scenario in which I want to run 4 instances of resource
example1 on node1, 3 instances on node2, and 5 instances on node3. The
cpu capacity per node is 6 on node1, 9 on node2 and 8 on node3,
because I will have other resources besides example1.
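
(For reference: the desired counts add up to 4 + 3 + 5 = 12, which matches
the clone-max of 12 below, while the capacities 6 + 9 + 8 = 23 leave 2, 6
and 3 cpus free on node1, node2 and node3 respectively for the other
resources.)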

With the following cluster configuration:

crm node utilization node1 set cpu 6
crm node utilization node2 set cpu 9
crm node utilization node3 set cpu 8

crm resource meta clone_example1 set clone-max 12
crm resource meta clone_example1 set clone-node-max 12

The result in the cluster is:
Online: [ node1 node2 node3 ]

 Clone Set: clone_example1 [example1] (unique)
     example1:0	(ocf::heartbeat:Dummy):	Started node3
     example1:1	(ocf::heartbeat:Dummy):	Started node2
     example1:2	(ocf::heartbeat:Dummy):	Started node3
     example1:3	(ocf::heartbeat:Dummy):	Started node1
     example1:4	(ocf::heartbeat:Dummy):	Started node2
     example1:5	(ocf::heartbeat:Dummy):	Started node3
     example1:6	(ocf::heartbeat:Dummy):	Started node2
     example1:7	(ocf::heartbeat:Dummy):	Started node2
     example1:8	(ocf::heartbeat:Dummy):	Started node1
     example1:9	(ocf::heartbeat:Dummy):	Started node3
     example1:10	(ocf::heartbeat:Dummy):	Started node2
     example1:11	(ocf::heartbeat:Dummy):	Started node1

The cluster starts 3 instances of example1 on node1, not 4 as I want. That
happens when I have more than 1 resource to allocate. I also notice that
nothing in my configuration explicitly tells the cluster how many instances
of example1 to start on each node. Is there any way to do that?
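
One workaround I can think of, although it gives up the single clone and its
automatic balancing, would be to split the workers into one clone per node
and pin each clone with location constraints. A rough, untested sketch (the
resource and constraint names are only placeholders):

crm configure primitive example1_n1 ocf:heartbeat:Dummy \
	op monitor interval=30s
crm configure clone clone_example1_n1 example1_n1 \
	meta globally-unique="true" clone-max="4" clone-node-max="4"
# pin this clone to node1 and keep it off the other nodes
crm configure location loc_example1_n1 clone_example1_n1 inf: node1
crm configure location loc_example1_n1_not2 clone_example1_n1 -inf: node2
crm configure location loc_example1_n1_not3 clone_example1_n1 -inf: node3
# repeat the pattern for node2 with clone-max="3" and node3 with clone-max="5"

The -inf constraints would mean those instances stop instead of moving when
their node goes down, so this sketch loses failover, which is why I would
prefer a utilization-based solution.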



