[Pacemaker] crm resource status and HAWK display differ after manually mounting filesystem resource
Sebastian Kaps
sebastian.kaps at imail.de
Sun Aug 28 11:43:24 UTC 2011
Hi,
On our two-node cluster (SLES11-SP1+HAE; corosync 1.3.1, pacemaker 1.1.5) we have defined the following Filesystem resource and its corresponding clone:
primitive p_fs_wwwdata ocf:heartbeat:Filesystem \
    params device="/dev/drbd1" \
        directory="/mnt/wwwdata" fstype="ocfs2" \
        options="rw,noatime,noacl,nouser_xattr,commit=30,data=writeback" \
    op start interval="0" timeout="90s" \
    op stop interval="0" timeout="300s"
clone c_fs_wwwdata p_fs_wwwdata \
    meta clone-max="2" target-role="Started" is-managed="true"
One of the nodes (node01) went down last night, and I brought it back up with the cluster in maintenance mode.
After checking everything else, I mounted the OCFS2 filesystem manually, ran "crm resource reprobe" and "crm resource cleanup" to make the cluster aware of it, and finally turned maintenance mode off again.
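For reference, the sequence was roughly this (from memory; device, mountpoint and mount options taken from the configuration above):

    mount -t ocfs2 -o rw,noatime,noacl,nouser_xattr,commit=30,data=writeback \
        /dev/drbd1 /mnt/wwwdata
    crm resource reprobe node01
    crm resource cleanup c_fs_wwwdata
    crm configure property maintenance-mode=false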
Looking at the output of crm_mon, everything looks good again:
Clone Set: c_fs_wwwdata [p_fs_wwwdata]
     Started: [ node01 node02 ]
Alternatively, looking at "crm_mon -n":
Node node02: online
    p_fs_wwwdata:1  (ocf::heartbeat:Filesystem) Started
Node node01: online
    p_fs_wwwdata:0  (ocf::heartbeat:Filesystem) Started
But the HAWK web interface (version 0.3.6, as shipped with SLES11-SP1-HAE) displays this:
Clone Set: c_fs_wwwdata
- p_fs_wwwdata:0: Started: node01, node02
- p_fs_wwwdata:1: Stopped
Does anybody know why there is a difference?
Did I make a mistake when manually mounting the FS while it was unmanaged?
Or is this only a cosmetic issue in HAWK?
When these resources are started by Pacemaker itself, HAWK shows exactly what's expected: two started resources, one per node.
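In case it helps, this is how I checked where the cluster itself thinks the instances are running (plain crm_resource/cibadmin queries, nothing HAWK-specific):

    crm_resource -W -r p_fs_wwwdata
    cibadmin -Q -o status | grep p_fs_wwwdata

As far as I understand it, the status section of the CIB is what both crm_mon and HAWK ultimately read, so these should show which of the two views is right.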
Thanks in advance!
--
Sebastian