[ClusterLabs] can't start pacemaker resources

solarflow99 solarflow99 at gmail.com
Thu Feb 21 21:15:40 EST 2019


I have tried to create NFS shares using the ceph backend, but I can't seem
to get the resources to start.  It doesn't show me much as to why, does
anyone have an idea?



Feb 21 15:11:40 cephmgr101.corp.mydomain.com crmd[10306]: warning: Input
I_DC_TIMEOUT received in state S_PENDING from crm_timer_popped

Feb 21 15:11:40 cephmgr101.corp.mydomain.com crmd[10306]: notice: State
transition S_ELECTION -> S_PENDING

Feb 21 15:11:40 cephmgr101.corp.mydomain.com crmd[10306]: notice: State
transition S_PENDING -> S_NOT_DC

Feb 21 15:11:42 cephmgr101.corp.mydomain.com lrmd[10303]: notice:
p_rbd_map_1_monitor_0:10835:stderr [ error: unknown option 'device list' ]

Feb 21 15:11:42 cephmgr101.corp.mydomain.com lrmd[10303]: notice:
p_rbd_map_1_monitor_0:10835:stderr [ ]

Feb 21 15:11:42 cephmgr101.corp.mydomain.com crmd[10306]: notice: Result of
probe operation for p_rbd_map_1 on cephmgr101.corp.mydomain.com: 7 (not
running)

Feb 21 15:11:42 cephmgr101.corp.mydomain.com crmd[10306]: notice:
cephmgr101.corp.mydomain.com-p_rbd_map_1_monitor_0:5 [ error: unknown
option 'device list'\n\n ]

Feb 21 15:11:42 cephmgr101.corp.mydomain.com
exportfs(p_nfs_export_root_1)[10911]: INFO: Directory /mnt/nfsroot is not
exported to * (stopped).

Feb 21 15:11:42 cephmgr101.corp.mydomain.com crmd[10306]: notice: Result of
probe operation for p_nfs_export_root_1 on cephmgr101.corp.mydomain.com: 7
(not running)

Feb 21 15:11:42 cephmgr101.corp.mydomain.com
nfsserver(p_nfs_server)[10951]: INFO: Status: rpcbind

Feb 21 15:11:42 cephmgr101.corp.mydomain.com
nfsserver(p_nfs_server)[10958]: INFO: Status: nfs-mountd

Feb 21 15:11:42 cephmgr101.corp.mydomain.com rbd.in(p_rbd_map_1)[10962]:
ERROR: error: unknown option 'device map rbd/nfs1' usage: rbd <command> ...
Command-line interface for managing Ceph RBD images. Positional arguments:
<command> bench Simple benchmark. children Display children of snapshot.
clone Clone a snapshot into a COW child image. copy (cp) Copy src image to
dest. create Create an empty image. diff Print extents that differ since a
previous snap, or image creation. disk-usage (du) Show disk usage stats for
pool, image or snapshot export Export image to file. export-diff Export
incremental diff to file. feature disable Disable the specified image
feature. feature enable Enable the specified image feature. flatten Fill
clone with parent data (make it independent). image-meta get Image metadata
get the value associated with the key. image-meta list Image metadata list
keys with values. image-meta remove Image metadata remove the key and value
associated. image-meta set Image metadata set key with value. import Import
image from file. import-diff Import an incremental diff. info Show
information about image size, striping, etc. journal client disconnect Flag
image journal client as disconnected. journal export Export image journal.
journal import Import image journal. journal info Show information about
image journal. journal inspect Inspect image journal for structural errors.
journal reset Reset image journal. journal status Show status of image
journal. list (ls) List rbd images. lock add Take a lock on an image. lock
list (lock ls) Show locks held on an image. lock remove (lock rm) Release a
lock on an image. map Map image to a block device using the kernel.
merge-diff Merge two diff exports together. mirror image demote Demote an
image to non-primary for RBD mirroring. mirror image disable Disable RBD
mirroring for an image. mirror image enable Enable RBD mirroring for an
image. mirror image promote Promote an image to primary for RBD mirroring.
mirror image resync Force resync to primary image for RBD mirroring. mirror
image status Show RDB mirroring status for an image. mirror pool demote
Demote all primary images in the pool. mirror pool disable Disable RBD
mirroring by default within a pool. mirror pool enable Enable RBD mirroring
by default within a pool. mirror pool info Show information about the pool
mirroring configuration. mirror pool peer add Add a mirroring peer to a
pool. mirror pool peer remove Remove a mirroring peer from a pool. mirror
pool peer set Update mirroring peer settings. mirror pool promote Promote
all non-primary images in the pool. mirror pool status Show status for all
mirrored images in the pool. nbd list (nbd ls) List the nbd devices already
used. nbd map Map image to a nbd device. nbd unmap Unmap a nbd device.
object-map check Verify the object map is correct. object-map rebuild
Rebuild an invalid object map. pool init Initialize pool for use by RBD.
remove (rm) Delete an image. rename (mv) Rename image within pool. resize
Resize (expand or shrink) image. showmapped Show the rbd images mapped by
the kernel. snap create (snap add) Create a snapshot. snap limit clear
Remove snapshot limit. snap limit set Limit the number of snapshots. snap
list (snap ls) Dump list of image snapshots. snap protect Prevent a
snapshot from being deleted. snap purge Delete all snapshots. snap remove
(snap rm) Delete a snapshot. snap rename Rename a snapshot. snap rollback
(snap revert) Rollback image to snapshot. snap unprotect Allow a snapshot
to be deleted. status Show the status of this image. trash list (trash ls)
List trash images. trash move (trash mv) Move an image to the trash. trash
remove (trash rm) Remove an image from trash. trash restore Restore an
image from trash. unmap Unmap a rbd device that was used by the kernel.
watch Watch events on image. Optional arguments: -c [ --conf ] arg path to
cluster configuration --cluster arg cluster name --id arg client id
(without 'client.' prefix) --user arg client id (without 'client.' prefix)
-n [ --name ] arg client name -m [ --mon_host ] arg monitor host --secret
arg path to secret key (deprecated) -K [ --keyfile ] arg path to secret key
-k [ --keyring ] arg path to keyring See 'rbd help <command>' for help on a
specific command.

Feb 21 15:11:42 cephmgr101.corp.mydomain.com lrmd[10303]: notice:
p_rbd_map_1_start_0:10922:stderr [ error: unknown option 'device list' ]

Feb 21 15:11:42 cephmgr101.corp.mydomain.com lrmd[10303]: notice:
p_rbd_map_1_start_0:10922:stderr [ ]

Feb 21 15:11:42 cephmgr101.corp.mydomain.com crmd[10306]: notice: Result of
start operation for p_rbd_map_1 on cephmgr101.corp.mydomain.com: 1 (unknown
error)

Feb 21 15:11:42 cephmgr101.corp.mydomain.com crmd[10306]: notice:
cephmgr101.corp.mydomain.com-p_rbd_map_1_start_0:14 [ error: unknown option
'device list'\n\n ]

Feb 21 15:11:42 cephmgr101.corp.mydomain.com
nfsserver(p_nfs_server)[10973]: ERROR: nfs-mountd is not running

Feb 21 15:11:42 cephmgr101.corp.mydomain.com lrmd[10303]: notice:
p_nfs_server_monitor_0:10852:stderr [ ocf-exit-reason:nfs-mountd is not
running ]

Feb 21 15:11:42 cephmgr101.corp.mydomain.com crmd[10306]: notice: Result of
probe operation for p_nfs_server on cephmgr101.corp.mydomain.com: 7 (not
running)

Feb 21 15:11:42 cephmgr101.corp.mydomain.com crmd[10306]: notice:
cephmgr101.corp.mydomain.com-p_nfs_server_monitor_0:9 [
ocf-exit-reason:nfs-mountd is not running\n ]

Feb 21 15:11:42 cephmgr101.corp.mydomain.com rbd.in(p_rbd_map_1)[11002]:
INFO: Resource is already stopped

Feb 21 15:11:42 cephmgr101.corp.mydomain.com lrmd[10303]: notice:
p_rbd_map_1_stop_0:10978:stderr [ error: unknown option 'device list' ]

Feb 21 15:11:42 cephmgr101.corp.mydomain.com lrmd[10303]: notice:
p_rbd_map_1_stop_0:10978:stderr [ ]

Feb 21 15:11:42 cephmgr101.corp.mydomain.com crmd[10306]: notice: Result of
stop operation for p_rbd_map_1 on cephmgr101.corp.mydomain.com: 0 (ok)

Feb 21 15:11:43 cephmgr101.corp.mydomain.com Filesystem(p_fs_rbd_1)[11051]:
WARNING: Couldn't find device [/dev/rbd0]. Expected /dev/??? to exist

Feb 21 15:11:43 cephmgr101.corp.mydomain.com crmd[10306]: notice: Result of
probe operation for p_fs_rbd_1 on cephmgr101.corp.mydomain.com: 7 (not
running)

Feb 21 15:11:43 cephmgr101.corp.mydomain.com
nfsserver(p_nfs_server)[11164]: INFO: Status: rpcbind

Feb 21 15:11:43 cephmgr101.corp.mydomain.com
nfsserver(p_nfs_server)[11186]: INFO: Status: nfs-mountd

Feb 21 15:11:43 cephmgr101.corp.mydomain.com crmd[10306]: notice: Result of
probe operation for p_ip_nfs_1 on cephmgr101.corp.mydomain.com: 7 (not
running)

Feb 21 15:11:43 cephmgr101.corp.mydomain.com
nfsserver(p_nfs_server)[11218]: ERROR: nfs-mountd is not running

Feb 21 15:11:43 cephmgr101.corp.mydomain.com
nfsserver(p_nfs_server)[11258]: INFO: Starting NFS server ...

Feb 21 15:11:43 cephmgr101.corp.mydomain.com
nfsserver(p_nfs_server)[11286]: INFO: Start: rpcbind i: 1

Feb 21 15:11:43 cephmgr101.corp.mydomain.com systemd[1]: Starting
Preprocess NFS configuration...

Feb 21 15:11:43 cephmgr101.corp.mydomain.com systemd[1]: Started Preprocess
NFS configuration.

Feb 21 15:11:43 cephmgr101.corp.mydomain.com systemd[1]: Starting NFS
status monitor for NFSv2/3 locking....

Feb 21 15:11:43 cephmgr101.corp.mydomain.com rpc.statd[11310]: Version
1.3.0 starting

Feb 21 15:11:43 cephmgr101.corp.mydomain.com rpc.statd[11310]: Flags: TI-RPC

Feb 21 15:11:43 cephmgr101.corp.mydomain.com systemd[1]: Started NFS status
monitor for NFSv2/3 locking..

Feb 21 15:11:43 cephmgr101.corp.mydomain.com
nfsserver(p_nfs_server)[11325]: INFO: Start: v3locking: 0

Feb 21 15:11:43 cephmgr101.corp.mydomain.com systemd[1]: Starting
Preprocess NFS configuration...

Feb 21 15:11:43 cephmgr101.corp.mydomain.com systemd[1]: Reached target
rpc_pipefs.target.

Feb 21 15:11:43 cephmgr101.corp.mydomain.com systemd[1]: Starting
rpc_pipefs.target.

Feb 21 15:11:43 cephmgr101.corp.mydomain.com systemd[1]: Started Preprocess
NFS configuration.

Feb 21 15:11:43 cephmgr101.corp.mydomain.com systemd[1]: Starting NFSv4
ID-name mapping service...

Feb 21 15:11:43 cephmgr101.corp.mydomain.com systemd[1]: Starting NFS Mount
Daemon...

Feb 21 15:11:43 cephmgr101.corp.mydomain.com systemd[1]: Started NFSv4
ID-name mapping service.

Feb 21 15:11:43 cephmgr101.corp.mydomain.com systemd[1]: Started NFS Mount
Daemon.

Feb 21 15:11:43 cephmgr101.corp.mydomain.com systemd[1]: Starting NFS
server and services...

Feb 21 15:11:43 cephmgr101.corp.mydomain.com rpc.mountd[11354]: Version
1.3.0 starting
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <https://lists.clusterlabs.org/pipermail/users/attachments/20190221/9986e852/attachment-0001.html>


More information about the Users mailing list