[ClusterLabs] nfs-daemon will not start

Jones, Keven  Keven.Jones@ncr.com
Wed Sep 18 16:49:37 EDT 2019


I have two CentOS 7.6 VMs set up. I was able to create the cluster and bring up the LVM and NFS-share resources successfully, but I cannot get the
nfs-daemon resource (ocf::heartbeat:nfsserver) to start.
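
For reference, the resource group was created roughly like this (the volume group, device, and mount point names below are approximations, not the exact values from my config):

[root@cs-nfs1 ~]# pcs resource create lvm_res ocf:heartbeat:LVM volgrpname=vg_nfs exclusive=true --group group_Nfs
[root@cs-nfs1 ~]# pcs resource create nfsshare ocf:heartbeat:Filesystem device=/dev/vg_nfs/lv_nfs directory=/nfsshare fstype=ext4 --group group_Nfs
[root@cs-nfs1 ~]# pcs resource create nfs-daemon ocf:heartbeat:nfsserver nfs_shared_infodir=/nfsshare/nfsinfo nfs_no_notify=true --group group_Nfs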


[root@cs-nfs1 ~]# pcs status
Cluster name: cluster_pr
Stack: corosync
Current DC: cs-nfs1.saas.local (version 1.1.19-8.el7-c3c624ea3d) - partition with quorum
Last updated: Wed Sep 18 16:43:37 2019
Last change: Wed Sep 18 16:18:32 2019 by root via cibadmin on cs-nfs1.saas.local

2 nodes configured
4 resources configured

Online: [ cs-nfs1.saas.local cs-nfs2.saas.local ]

Full list of resources:

vm_fence       (stonith:fence_vmware_soap):    Started cs-nfs1.saas.local
Resource Group: group_Nfs
     lvm_res    (ocf::heartbeat:LVM):   Started cs-nfs2.saas.ncr.local
     nfsshare   (ocf::heartbeat:Filesystem):    Started cs-nfs2.saas.local
     nfs-daemon (ocf::heartbeat:nfsserver):     Stopped

Failed Actions:
* nfs-daemon_start_0 on cs-nfs1.saas.local 'unknown error' (1): call=25, status=Timed Out, exitreason='',
    last-rc-change='Wed Sep 18 16:31:48 2019', queued=0ms, exec=40001ms
* nfs-daemon_start_0 on cs-nfs2.saas.local 'unknown error' (1): call=22, status=Timed Out, exitreason='',
    last-rc-change='Wed Sep 18 16:31:06 2019', queued=0ms, exec=40002ms


Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled
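
(Note: the failed actions above have to be cleared before Pacemaker will try the resource on those nodes again, so between attempts I run

[root@cs-nfs1 ~]# pcs resource cleanup nfs-daemon

which resets the failcount and triggers another start attempt.)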

[root@cs-nfs1 ~]# pcs resource debug-start nfs-daemon
Operation start for nfs-daemon (ocf:heartbeat:nfsserver) failed: 'Timed Out' (2)
>  stdout: STATDARG="--no-notify"
>  stdout: * rpc-statd.service - NFS status monitor for NFSv2/3 locking.
>  stdout:    Loaded: loaded (/usr/lib/systemd/system/rpc-statd.service; static; vendor preset: disabled)
>  stdout:    Active: inactive (dead) since Wed 2019-09-18 16:32:28 EDT; 13min ago
>  stdout:   Process: 7054 ExecStart=/usr/sbin/rpc.statd $STATDARGS (code=exited, status=0/SUCCESS)
>  stdout:  Main PID: 7055 (code=exited, status=0/SUCCESS)
>  stdout:
>  stdout: Sep 18 16:31:48 cs-nfs1 systemd[1]: Starting NFS status monitor for NFSv2/3 locking....
>  stdout: Sep 18 16:31:48 cs-nfs1 rpc.statd[7055]: Version 1.3.0 starting
>  stdout: Sep 18 16:31:48 cs-nfs1 rpc.statd[7055]: Flags: TI-RPC
>  stdout: Sep 18 16:31:48 cs-nfs1 systemd[1]: Started NFS status monitor for NFSv2/3 locking..
>  stdout: Sep 18 16:32:28 cs-nfs1 systemd[1]: Stopping NFS status monitor for NFSv2/3 locking....
>  stdout: Sep 18 16:32:28 cs-nfs1 systemd[1]: Stopped NFS status monitor for NFSv2/3 locking..
>  stderr: Sep 18 16:46:08 INFO: Starting NFS server ...
>  stderr: Sep 18 16:46:08 INFO: Start: rpcbind i: 1
>  stderr: Sep 18 16:46:08 INFO: Start: nfs-mountd i: 1
>  stderr: Job for nfs-idmapd.service failed because the control process exited with error code. See "systemctl status nfs-idmapd.service" and "journalctl -xe" for details.
>  stderr: Sep 18 16:46:08 INFO: Start: nfs-idmapd i: 1
>  stderr: Sep 18 16:46:09 INFO: Start: nfs-idmapd i: 2
>  stderr: Sep 18 16:46:10 INFO: Start: nfs-idmapd i: 3
>  stderr: [... the same line repeats about once a second, i: 4 through i: 38 ...]
>  stderr: Sep 18 16:46:47 INFO: Start: nfs-idmapd i: 39

[root@cs-nfs1 ~]# systemctl status nfs-idmapd.service
● nfs-idmapd.service - NFSv4 ID-name mapping service
   Loaded: loaded (/usr/lib/systemd/system/nfs-idmapd.service; static; vendor preset: disabled)
   Active: failed (Result: exit-code) since Wed 2019-09-18 16:46:08 EDT; 1min 25s ago
  Process: 8699 ExecStart=/usr/sbin/rpc.idmapd $RPCIDMAPDARGS (code=exited, status=1/FAILURE)
Main PID: 5334 (code=killed, signal=TERM)

Sep 18 16:46:08 cs-nfs1 systemd[1]: Starting NFSv4 ID-name mapping service...
Sep 18 16:46:08 cs-nfs1 systemd[1]: nfs-idmapd.service: control process exited, code=exited status=1
Sep 18 16:46:08 cs-nfs1 systemd[1]: Failed to start NFSv4 ID-name mapping service.
Sep 18 16:46:08 cs-nfs1 systemd[1]: Unit nfs-idmapd.service entered failed state.
Sep 18 16:46:08 cs-nfs1 systemd[1]: nfs-idmapd.service failed
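
One thing I have not tried yet is running the mapping daemon by hand in the foreground, which should print the underlying error directly instead of hiding it behind systemd:

[root@cs-nfs1 ~]# rpc.idmapd -f -vvv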

[root@cs-nfs1 ~]# journalctl -xe
-- The start-up result is done.
Sep 18 16:46:08 cs-nfs1 rpc.idmapd[8711]: main: open(/var/lib/nfs/rpc_pipefs//nfs): No such file or directory
Sep 18 16:46:08 cs-nfs1 systemd[1]: Started NFS server and services.
-- Subject: Unit nfs-server.service has finished start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit nfs-server.service has finished starting up.
--
-- The start-up result is done.
Sep 18 16:46:08 cs-nfs1 systemd[1]: nfs-idmapd.service: control process exited, code=exited status=1
Sep 18 16:46:08 cs-nfs1 systemd[1]: Failed to start NFSv4 ID-name mapping service.
-- Subject: Unit nfs-idmapd.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit nfs-idmapd.service has failed.
--
-- The result is failed.
Sep 18 16:46:08 cs-nfs1 systemd[1]: Unit nfs-idmapd.service entered failed state.
Sep 18 16:46:08 cs-nfs1 systemd[1]: nfs-idmapd.service failed.
Sep 18 16:47:29 cs-nfs1 crmd[6252]:   notice: State transition S_IDLE -> S_POLICY_ENGINE
Sep 18 16:47:29 cs-nfs1 pengine[6251]:  warning: Processing failed start of nfs-daemon on cs-nfs1.saas.local: unknown error
Sep 18 16:47:29 cs-nfs1 pengine[6251]:  warning: Processing failed start of nfs-daemon on cs-nfs2.saas.local: unknown error
Sep 18 16:47:29 cs-nfs1 pengine[6251]:  warning: Forcing nfs-daemon away from cs-nfs1.saas.ncr.local after 1000000 failures (max=10000
Sep 18 16:47:29 cs-nfs1 pengine[6251]:  warning: Forcing nfs-daemon away from cs-nfs2.saas.ncr.local after 1000000 failures (max=10000
Sep 18 16:47:29 cs-nfs1 pengine[6251]:   notice: Calculated transition 7, saving inputs in /var/lib/pacemaker/pengine/pe-input-91.bz2
Sep 18 16:47:29 cs-nfs1 crmd[6252]:   notice: Transition 7 (Complete=0, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/p
Sep 18 16:47:29 cs-nfs1 crmd[6252]:   notice: State transition S_TRANSITION_ENGINE -> S_IDLE
[root@cs-nfs1 ~]#
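
The only concrete error in all of this seems to be the rpc_pipefs one from rpc.idmapd above. My understanding is that the nfsserver agent bind-mounts its nfs_shared_infodir over /var/lib/nfs, so my next step is to check whether the rpc_pipefs mount survives that bind mount and the pipe directory actually exists:

[root@cs-nfs1 ~]# mount | grep rpc_pipefs
[root@cs-nfs1 ~]# ls -ld /var/lib/nfs/rpc_pipefs /var/lib/nfs/rpc_pipefs/nfs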

Beyond that, I'm not sure where to go next. Has anyone seen this? Thanks!

