[Pacemaker] Question about node-action-limit and migration-limit

Kazunori INOUE kazunori.inoue3 at gmail.com
Wed Dec 18 10:51:22 UTC 2013


Hi,

In pacemaker-1.1, when I set only migration-limit without setting
node-action-limit, the number of operations other than
migrate_to/migrate_from was also limited to the value of migration-limit.
(The node that I used has 8 cores.)
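
If I read the option table correctly, the cause is the deprecated-name
alias: a value set under the old name migration-limit is silently picked
up as the new option node-action-limit. A minimal sketch of that kind of
lookup (not Pacemaker's actual code; the names here are illustrative):

```c
#include <stdio.h>
#include <string.h>

/* Illustrative stand-in for an entry in Pacemaker's pe_cluster_option
 * table: a current option name, an optional deprecated alias that is
 * still honored, and a default value. */
struct cluster_option {
    const char *name;        /* current option name */
    const char *deprecated;  /* old name still honored, or NULL */
    const char *deflt;       /* default value */
};

static struct cluster_option opts[] = {
    { "node-action-limit", "migration-limit", "0" },
};

/* Resolve one configured name=value pair against an option entry,
 * falling back to the deprecated alias and finally the default. */
static const char *get_option(const struct cluster_option *opt,
                              const char *cfg_name, const char *cfg_value)
{
    if (strcmp(cfg_name, opt->name) == 0) {
        return cfg_value;
    }
    if (opt->deprecated != NULL && strcmp(cfg_name, opt->deprecated) == 0) {
        fprintf(stderr, "warning: using deprecated name '%s' for "
                        "cluster option '%s'\n", cfg_name, opt->name);
        return cfg_value;
    }
    return opt->deflt;
}
```

With a table like this, setting migration-limit="3" makes
node-action-limit resolve to "3" as well, which would explain both the
cluster_option warnings and the per-node job limit of 3 in the logs
below.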

[cib]
property \
  no-quorum-policy="freeze" \
  stonith-enabled="true" \
  startup-fencing="false" \
  migration-limit="3"

...snip...

[log]
$ egrep "warning: cluster_option|debug: throttle_update:" /var/log/ha-debug
Dec 12 16:35:23 [7416] bl460g1n7       crmd:    debug: throttle_update:         Host bl460g1n6 supports a maximum of 16 jobs and throttle mode 0000.  New job limit is 16
Dec 12 16:35:25 [7416] bl460g1n7       crmd:  warning: cluster_option:  Using deprecated name 'migration-limit' for cluster option 'node-action-limit'
Dec 12 16:35:25 [7416] bl460g1n7       crmd:  warning: cluster_option:  Using deprecated name 'migration-limit' for cluster option 'node-action-limit'
Dec 12 16:35:25 [7416] bl460g1n7       crmd:  warning: cluster_option:  Using deprecated name 'migration-limit' for cluster option 'node-action-limit'
Dec 12 16:35:26 [7416] bl460g1n7       crmd:    debug: throttle_update:         Host bl460g1n7 supports a maximum of 3 jobs and throttle mode 0000.  New job limit is 3
Dec 12 16:35:28 [7416] bl460g1n7       crmd:    debug: throttle_update:         Host bl460g1n8 supports a maximum of 3 jobs and throttle mode 0000.  New job limit is 3
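
For reference, my reading of throttle_update is that the per-node job
limit defaults to 2x the CPU count, and node-action-limit (here supplied
via the deprecated migration-limit) overrides it. A rough sketch of that
calculation, with illustrative names:

```c
/* Illustrative: derive a node's job limit the way the log above
 * suggests. node_action_limit <= 0 means "not configured". */
static int job_limit(int cores, int node_action_limit)
{
    if (node_action_limit > 0) {
        return node_action_limit;   /* explicit limit wins */
    }
    return 2 * cores;               /* default: 2x cores */
}
```

On these 8-core nodes, job_limit(8, 0) gives the 16 jobs reported for
bl460g1n6, while job_limit(8, 3) gives the limit of 3 reported for
bl460g1n7 and bl460g1n8 once migration-limit=3 was picked up.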

$ egrep "do_lrm_rsc_op: Performing .* op=prmVM|process_lrm_event: LRM operation prmVM" /var/log/ha-log | grep -v monitor
Dec 12 16:35:28 bl460g1n7 crmd[7416]:     info: do_lrm_rsc_op: Performing key=24:1:0:12d6cc8e-9dd9-40b4-8a07-5e9639050d12 op=prmVM1_start_0
Dec 12 16:35:28 bl460g1n7 crmd[7416]:     info: do_lrm_rsc_op: Performing key=26:1:0:12d6cc8e-9dd9-40b4-8a07-5e9639050d12 op=prmVM2_start_0
Dec 12 16:35:28 bl460g1n7 crmd[7416]:     info: do_lrm_rsc_op: Performing key=28:1:0:12d6cc8e-9dd9-40b4-8a07-5e9639050d12 op=prmVM3_start_0
Dec 12 16:35:30 bl460g1n7 crmd[7416]:   notice: process_lrm_event: LRM operation prmVM2_start_0 (call=27, rc=0, cib-update=23, confirmed=true) ok
Dec 12 16:35:30 bl460g1n7 crmd[7416]:   notice: process_lrm_event: LRM operation prmVM1_start_0 (call=26, rc=0, cib-update=24, confirmed=true) ok
Dec 12 16:35:30 bl460g1n7 crmd[7416]:   notice: process_lrm_event: LRM operation prmVM3_start_0 (call=28, rc=0, cib-update=25, confirmed=true) ok
Dec 12 16:35:32 bl460g1n7 crmd[7416]:     info: do_lrm_rsc_op: Performing key=15:2:0:12d6cc8e-9dd9-40b4-8a07-5e9639050d12 op=prmVM4_start_0
Dec 12 16:35:32 bl460g1n7 crmd[7416]:     info: do_lrm_rsc_op: Performing key=17:2:0:12d6cc8e-9dd9-40b4-8a07-5e9639050d12 op=prmVM5_start_0
Dec 12 16:35:32 bl460g1n7 crmd[7416]:     info: do_lrm_rsc_op: Performing key=19:2:0:12d6cc8e-9dd9-40b4-8a07-5e9639050d12 op=prmVM6_start_0
Dec 12 16:35:34 bl460g1n7 crmd[7416]:   notice: process_lrm_event: LRM operation prmVM4_start_0 (call=32, rc=0, cib-update=29, confirmed=true) ok
Dec 12 16:35:34 bl460g1n7 crmd[7416]:   notice: process_lrm_event: LRM operation prmVM5_start_0 (call=33, rc=0, cib-update=30, confirmed=true) ok
Dec 12 16:35:34 bl460g1n7 crmd[7416]:   notice: process_lrm_event: LRM operation prmVM6_start_0 (call=34, rc=0, cib-update=31, confirmed=true) ok

Dec 12 16:37:26 bl460g1n7 crmd[7416]:     info: do_lrm_rsc_op: Performing key=12:4:0:12d6cc8e-9dd9-40b4-8a07-5e9639050d12 op=prmVM1_stop_0
Dec 12 16:37:26 bl460g1n7 crmd[7416]:     info: do_lrm_rsc_op: Performing key=13:4:0:12d6cc8e-9dd9-40b4-8a07-5e9639050d12 op=prmVM2_stop_0
Dec 12 16:37:26 bl460g1n7 crmd[7416]:     info: do_lrm_rsc_op: Performing key=14:4:0:12d6cc8e-9dd9-40b4-8a07-5e9639050d12 op=prmVM3_stop_0
Dec 12 16:37:39 bl460g1n7 crmd[7416]:   notice: process_lrm_event: LRM operation prmVM1_stop_0 (call=39, rc=0, cib-update=35, confirmed=true) ok
Dec 12 16:37:39 bl460g1n7 crmd[7416]:     info: do_lrm_rsc_op: Performing key=15:4:0:12d6cc8e-9dd9-40b4-8a07-5e9639050d12 op=prmVM4_stop_0
Dec 12 16:37:39 bl460g1n7 crmd[7416]:   notice: process_lrm_event: LRM operation prmVM2_stop_0 (call=41, rc=0, cib-update=36, confirmed=true) ok
Dec 12 16:37:39 bl460g1n7 crmd[7416]:     info: do_lrm_rsc_op: Performing key=16:4:0:12d6cc8e-9dd9-40b4-8a07-5e9639050d12 op=prmVM5_stop_0
Dec 12 16:37:40 bl460g1n7 crmd[7416]:   notice: process_lrm_event: LRM operation prmVM3_stop_0 (call=43, rc=0, cib-update=37, confirmed=true) ok
Dec 12 16:37:40 bl460g1n7 crmd[7416]:     info: do_lrm_rsc_op: Performing key=17:4:0:12d6cc8e-9dd9-40b4-8a07-5e9639050d12 op=prmVM6_stop_0
Dec 12 16:37:51 bl460g1n7 crmd[7416]:   notice: process_lrm_event: LRM operation prmVM4_stop_0 (call=45, rc=0, cib-update=38, confirmed=true) ok
Dec 12 16:37:52 bl460g1n7 crmd[7416]:   notice: process_lrm_event: LRM operation prmVM5_stop_0 (call=47, rc=0, cib-update=39, confirmed=true) ok
Dec 12 16:37:52 bl460g1n7 crmd[7416]:   notice: process_lrm_event: LRM operation prmVM6_stop_0 (call=49, rc=0, cib-update=40, confirmed=true) ok

Is this the intended behavior?
If not, what do you think of the following change, which drops the
deprecated alias so that migration-limit no longer feeds into
node-action-limit?

diff --git a/crmd/control.c b/crmd/control.c
index ed0689d..066cce6 100644
--- a/crmd/control.c
+++ b/crmd/control.c
@@ -856,7 +856,7 @@ pe_cluster_option crmd_opts[] = {
   "The maximum amount of system resources that should be used by nodes in the cluster",
   "The cluster will slow down its recovery process when the amount of system resources used"
           " (currently CPU) approaches this limit", },
- { "node-action-limit", "migration-limit", "integer", NULL, "0", &check_number,
+ { "node-action-limit", NULL, "integer", NULL, "0", &check_number,
           "The maximum number of jobs that can be scheduled per node. Defaults to 2x cores"},
  { XML_CONFIG_ATTR_ELECTION_FAIL, "election_timeout", "time", NULL, "2min", &check_timer, "*** Advanced Use Only ***.", "If need to adjust this value, it probably indicates the presence of a bug." },
  { XML_CONFIG_ATTR_FORCE_QUIT, "shutdown_escalation", "time", NULL, "20min", &check_timer, "*** Advanced Use Only ***.", "If need to adjust this value, it probably indicates the presence of a bug." },

Best Regards,
Kazunori INOUE
