[Pacemaker] dependent resource reaches max fail count

Michael Fung mike at 3open.org
Sat Jul 3 03:10:21 UTC 2010


On 2010/7/2 09:16 PM, Michael Fung wrote:
> 
> It seems to indicate that Pacemaker does not shut down dependent
> resources in an orderly manner when the multi-state resource they
> depend on changes state. The dependent resources just "crashed". Am I right?
> 

I was wrong.

From the extracted log file, lrmd stopped the dependent resources in an orderly manner:

lrmd: [806]: info: rsc:ve1011:119: stop
lrmd: [806]: info: rsc:vz_svc:121: stop
lrmd: [806]: info: rsc:vz_fs:122: stop
lrmd: [806]: info: rsc:drbd_r0:0:123: demote
lrmd: [806]: info: RA output: (drbd_r0:0:demote:stdout)
lrmd: [806]: info: rsc:drbd_r0:0:124: notify
lrmd: [806]: info: RA output: (drbd_r0:0:notify:stdout)
lrmd: [806]: info: rsc:drbd_r0:0:125: notify
lrmd: [806]: info: rsc:drbd_r0:0:126: notify
lrmd: [806]: info: RA output: (drbd_r0:0:notify:stdout)
lrmd: [806]: info: rsc:vz_fs:127: start
lrmd: [806]: info: RA output: (vz_fs:start:stderr) /dev/drbd0: Wrong medium type

But the problem is, in the second-to-last log line, we see lrmd trying to
start vz_fs when it had already demoted drbd_r0. This results in an error
and probably leads to the vz_fs fail-count being set to INFINITY. Doesn't
this behavior violate:
  order ms_drbd_r0-b4-vz_fs inf: ms_drbd_r0:promote group_vz:start
  group group_vz vz_fs vz_svc ve1011
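
(For reference, my understanding is that an order constraint like this is
usually paired with a colocation rule that keeps group_vz on the node where
ms_drbd_r0 is Master. Just a sketch of what I mean; the constraint name
"group_vz-on-drbd-master" is a placeholder and not from my actual config:

  colocation group_vz-on-drbd-master inf: group_vz ms_drbd_r0:Master
  order ms_drbd_r0-b4-vz_fs inf: ms_drbd_r0:promote group_vz:start

The order constraint only sequences promote before start; placement on the
Master node is handled by the colocation part.)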

Any ideas?
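
(In the meantime, to check and reset the fail-count I would use the crm
shell, something like the sketch below, with "node1" standing in for the
actual node name:

  crm resource failcount vz_fs show node1
  crm resource cleanup vz_fs

The cleanup should clear the resource's failure history so the cluster
tries it again.)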

Rgds,
Michael



