[ClusterLabs] Fwd: dlm not starting

G Spot gspot.afdb at gmail.com
Sun Feb 7 06:21:21 UTC 2016


Hi Ken,

Thanks for your response. I am using the ocf:pacemaker:controld resource agent
with stonith-enabled=false. Do I need to configure a stonith device to make
this work?
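
For reference, the cluster is set up roughly along these lines (the commands
below are reconstructed from memory, so exact options may differ slightly):

    # fencing was disabled when the cluster was created
    pcs property set stonith-enabled=false

    # dlm created as a clone resource using the controld agent
    pcs resource create dlm ocf:pacemaker:controld \
        op monitor interval=30s \
        clone interleave=true ordered=true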

Regards,
afdb

On Fri, Feb 5, 2016 at 5:46 PM, Ken Gaillot <kgaillot at redhat.com> wrote:

> > I am configuring shared storage for 2 nodes (CentOS 7) and have installed
> > pcs/gfs2-utils/lvm2-cluster. When creating the resource, I am unable to
> > start dlm.
> >
> >  crm_verify -LV
> >    error: unpack_rsc_op:        Preventing dlm from re-starting anywhere:
> > operation start failed 'not configured' (6)
>
> Are you using the ocf:pacemaker:controld resource agent for dlm?
> Normally it logs what the problem is when returning 'not configured',
> but I don't see it below. As far as I know, it will return 'not
> configured' if stonith-enabled=false or globally-unique=true, as those
> are incompatible with DLM.
>
> There is also a rare cluster error condition that will be reported as
> 'not configured', but it will always be accompanied by "Invalid resource
> definition" in the logs.
>
> >
> > Feb 05 13:34:26 [24262] libcompute1    pengine:     info:
> > determine_online_status:      Node libcompute1 is online
> > Feb 05 13:34:26 [24262] libcompute1    pengine:     info:
> > determine_online_status:      Node libcompute2 is online
> > Feb 05 13:34:26 [24262] libcompute1    pengine:  warning:
> > unpack_rsc_op_failure:        Processing failed op start for dlm on
> > libcompute1: not configured (6)
> > Feb 05 13:34:26 [24262] libcompute1    pengine:    error: unpack_rsc_op:
> >      Preventing dlm from re-starting anywhere: operation start failed 'not
> > configured' (6)
> > Feb 05 13:34:26 [24262] libcompute1    pengine:  warning:
> > unpack_rsc_op_failure:        Processing failed op start for dlm on
> > libcompute1: not configured (6)
> > Feb 05 13:34:26 [24262] libcompute1    pengine:    error: unpack_rsc_op:
> >      Preventing dlm from re-starting anywhere: operation start failed 'not
> > configured' (6)
> > Feb 05 13:34:26 [24262] libcompute1    pengine:     info: native_print: dlm
> >     (ocf::pacemaker:controld):      FAILED libcompute1
> > Feb 05 13:34:26 [24262] libcompute1    pengine:     info:
> > get_failcount_full:   dlm has failed INFINITY times on libcompute1
> > Feb 05 13:34:26 [24262] libcompute1    pengine:  warning:
> > common_apply_stickiness:      Forcing dlm away from libcompute1 after
> > 1000000 failures (max=1000000)
> > Feb 05 13:34:26 [24262] libcompute1    pengine:     info: native_color:
> > Resource dlm cannot run anywhere
> > Feb 05 13:34:26 [24262] libcompute1    pengine:   notice: LogActions:
> > Stop    dlm     (libcompute1)
> > Feb 05 13:34:26 [24262] libcompute1    pengine:   notice:
> > process_pe_message:   Calculated Transition 59:
> > /var/lib/pacemaker/pengine/pe-input-176.bz2
> > Feb 05 13:34:26 [24263] libcompute1       crmd:     info:
> > do_state_transition:  State transition S_POLICY_ENGINE ->
> > S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE
> > origin=handle_response ]
> > Feb 05 13:34:26 [24263] libcompute1       crmd:     info: do_te_invoke:
> > Processing graph 59 (ref=pe_calc-dc-1454697266-177) derived from
> > /var/lib/pacemaker/pengine/pe-input-176.bz2
>
>
> _______________________________________________
> Users mailing list: Users at clusterlabs.org
> http://clusterlabs.org/mailman/listinfo/users
>
> Project Home: http://www.clusterlabs.org
> Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
> Bugs: http://bugs.clusterlabs.org
>
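
If fencing is indeed required, I will try enabling it along these lines. The
fence_ipmilan agent and the addresses/credentials below are only placeholders
for whatever fence agent fits our hardware:

    # one stonith device per node, using IPMI as an example
    pcs stonith create fence-lib1 fence_ipmilan \
        pcmk_host_list="libcompute1" ipaddr="10.0.0.1" login="admin" passwd="secret"
    pcs stonith create fence-lib2 fence_ipmilan \
        pcmk_host_list="libcompute2" ipaddr="10.0.0.2" login="admin" passwd="secret"

    # re-enable fencing so controld no longer refuses to start
    pcs property set stonith-enabled=true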

