[ClusterLabs] No match for shutdown action on <nodeid>

Ken Gaillot kgaillot at redhat.com
Tue Jan 10 18:47:15 UTC 2017


On 01/10/2017 11:38 AM, Denis Gribkov wrote:
> Hi Everyone,
> 
> When I run:
> 
> # pcs resource cleanup resource_name
> 
> I'm getting a block of messages in log on current DC node:
> 
> Jan 10 18:12:13 node1 crmd[21635]:  warning: No match for shutdown action on node2
> Jan 10 18:12:13 node1 crmd[21635]:  warning: No match for shutdown action on node3
> Jan 10 18:12:14 node1 crmd[21635]:  warning: No match for shutdown action on node4
> Jan 10 18:12:14 node1 crmd[21635]:  warning: No match for shutdown action on node5
> Jan 10 18:12:14 node1 crmd[21635]:  warning: No match for shutdown action on node6
> Jan 10 18:12:14 node1 crmd[21635]:  warning: No match for shutdown action on node7
> Jan 10 18:12:14 node1 crmd[21635]:  warning: No match for shutdown action on node8
> Jan 10 18:12:18 node1 crmd[21635]:  warning: No match for shutdown action on node5
> Jan 10 18:12:18 node1 crmd[21635]:  warning: No match for shutdown action on node3
> Jan 10 18:12:18 node1 crmd[21635]:  warning: No match for shutdown action on node4
> Jan 10 18:12:18 node1 crmd[21635]:  warning: No match for shutdown action on node8
> Jan 10 18:12:18 node1 crmd[21635]:  warning: No match for shutdown action on node9
> Jan 10 18:12:18 node1 crmd[21635]:  warning: No match for shutdown action on node10
> Jan 10 18:12:18 node1 crmd[21635]:  warning: No match for shutdown action on node11
> Jan 10 18:12:18 node1 crmd[21635]:  warning: No match for shutdown action on node3
> Jan 10 18:12:18 node1 crmd[21635]:  warning: No match for shutdown action on node12
> Jan 10 18:12:18 node1 crmd[21635]:  warning: No match for shutdown action on node4
> Jan 10 18:12:18 node1 crmd[21635]:  warning: No match for shutdown action on node5
> Jan 10 18:12:18 node1 crmd[21635]:  warning: No match for shutdown action on node13
> Jan 10 18:12:18 node1 crmd[21635]:  warning: No match for shutdown action on node8
> Jan 10 18:12:18 node1 crmd[21635]:  warning: No match for shutdown action on node14
> Jan 10 18:12:18 node1 crmd[21635]:  warning: No match for shutdown action on node15
> Jan 10 18:12:18 node1 crmd[21635]:  warning: No match for shutdown action on node6
> Jan 10 18:12:18 node1 crmd[21635]:  warning: No match for shutdown action on node6
> Jan 10 18:12:18 node1 crmd[21635]:  warning: No match for shutdown action on node7
> Jan 10 18:12:18 node1 crmd[21635]:  warning: No match for shutdown action on node7
> Jan 10 18:12:18 node1 crmd[21635]:  warning: No match for shutdown action on node2
> Jan 10 18:12:18 node1 crmd[21635]:  warning: No match for shutdown action on node2
> Jan 10 18:12:18 node1 crmd[21635]:  warning: No match for shutdown action on node16
> Jan 10 18:12:18 node1 crmd[21635]:  warning: No match for shutdown action on node1
> Jan 10 18:12:18 node1 crmd[21635]:  warning: No match for shutdown action on node1
> Jan 10 18:12:23 node1 cib[21630]:  warning: A-Sync reply to crmd failed: No message of desired type
> 
> At the same time, the other nodes did not get these messages.
> 
> Does anybody know why this issue happens in such cases and how it can
> be fixed?
> 
> Cluster Properties:
>  cluster-infrastructure: cman
>  cluster-recheck-interval: 5min
>  dc-version: 1.1.14-8.el6-70404b0
>  expected-quorum-votes: 3
>  have-watchdog: false
>  last-lrm-refresh: 1484068350
>  maintenance-mode: false
>  no-quorum-policy: ignore
>  pe-error-series-max: 1000
>  pe-input-series-max: 1000
>  pe-warn-series-max: 1000
>  stonith-action: reboot
>  stonith-enabled: false
>  symmetric-cluster: false
> 
> Thank you.

The message is harmless and can be ignored. It only shows up on the node
that is currently the DC.

The "fix" is to upgrade. :-) In later versions, the message was changed
to the more accurate "No reason to expect node <N> to be down", and it
is now only printed if the node was actually lost.
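
If it helps, a quick way to confirm which Pacemaker build each node is
actually running (assuming the stock packages on RHEL/CentOS 6 with the
cman stack, as your properties suggest) is:

# rpm -q pacemaker
# crm_mon --version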

In the version you have, the message was printed whenever the cluster
checked to see if any known event would have brought down a node,
regardless of whether there was any actual problem. If there is an
actual problem, there will be other messages about that (e.g. node lost
or fenced).
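
If you want to double-check that nothing real happened around the
cleanup, something like the following over the system log (here
/var/log/messages, where your crmd messages are going) is usually enough
to turn up genuine membership or fencing events. The exact message
wording varies between Pacemaker versions, so treat the pattern as a
rough filter rather than an exact match:

# grep -Ei 'lost|fence|unclean|offline' /var/log/messages

If that comes back empty, or only shows old and already-resolved events,
the cluster never actually saw a node go down.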



