<div dir="ltr">Solved this by using the nfsserver resource agent instead of exportfs, and using a ceph block device as the nfs shared info dir.</div><div class="gmail_extra"><br><div class="gmail_quote">On Thu, May 28, 2015 at 9:51 AM, Steve Dainard <span dir="ltr"><<a href="mailto:sdainard@spd1.com" target="_blank">sdainard@spd1.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">Hello,<div><br></div><div>I'm configuring a cluster which maps and mounts ceph rbd's, then exports each mount over nfs, mostly following Sebastien's post here: <a href="http://www.sebastien-han.fr/blog/2012/07/06/nfs-over-rbd/" target="_blank">http://www.sebastien-han.fr/blog/2012/07/06/nfs-over-rbd/</a> but there are some differences using pacemaker 1.2 with systemd.</div><div><br></div><div>The issue I'm running into is I can get ceph working without issue, but the nfs exports error with:</div><div><br></div><div><div># pcs status</div><div>Cluster name: nfs</div><div>Last updated: Thu May 28 09:21:13 2015</div><div>Last change: Wed May 27 17:47:06 2015</div><div>Stack: corosync</div><div>Current DC: node1 (1) - partition with quorum</div><div>Version: 1.1.12-a14efad</div><div>3 Nodes configured</div><div>24 Resources configured</div><div><br></div><div><br></div><div>Online: [ node1 node2 node3 ]</div><div><br></div><div>Full list of resources:</div><div><br></div><div> Resource Group: group_rbd_fs_nfs_vip</div><div> rbd_vol1 (ocf::ceph:<a href="http://rbd.in" target="_blank">rbd.in</a>): Started node1 </div><div> ...</div><div> rbd_vol8 (ocf::ceph:<a href="http://rbd.in" target="_blank">rbd.in</a>): Started node1 </div><div> fs_vol1 (ocf::heartbeat:Filesystem): Started node1 </div><div> ...</div><div> fs_vol8 (ocf::heartbeat:Filesystem): Started node1 </div><div> export_vol1 (ocf::heartbeat:exportfs): Stopped </div><div> ...</div><div> export_vol8 (ocf::heartbeat:exportfs): Stopped </div><div><br></div><div><b>Failed actions:</b></div><div><b> export_vol1_start_0 on node1 'unknown error' (1): call=262, status=complete, exit-reason='none', last-rc-change='Wed May 27 17:42:37 2015', queued=0ms, exec=56ms</b></div><div><b> export_vol1_start_0 on node2 'unknown error' (1): call=196, status=complete, exit-reason='none', last-rc-change='Wed May 27 17:43:04 2015', queued=0ms, exec=63ms</b></div><div><b> export_vol1_start_0 on node3 'unknown error' (1): call=196, status=complete, exit-reason='none', last-rc-change='Wed May 27 17:43:27 2015', queued=0ms, exec=69ms</b></div><div><br></div><div><br></div><div>PCSD Status:</div><div> node1: Online</div><div> node2: Online</div><div> node3: Online</div><div><br></div><div>Daemon Status:</div><div> corosync: active/disabled</div><div> pacemaker: active/disabled</div><div> pcsd: active/enabled</div></div><div><br></div><div>I thought this was an issue with the nfsd kernel module not loading, so I manually loaded it on each host, but no change.</div><div><br></div><div>I'm also wondering if there's an error with my export resources config:</div><div><div> Resource: export_vol1 (class=ocf provider=heartbeat type=exportfs)</div><div> Attributes: directory=/mnt/vol1 clientspec=<a href="http://10.0.231.0/255.255.255.0" target="_blank">10.0.231.0/255.255.255.0</a> options=rw,no_subtree_check,no_root_squash fsid=1 </div><div> Operations: stop interval=0s timeout=120 (export_vol1-stop-timeout-120)</div><div> monitor interval=10s timeout=20s (export_vol1-monitor-interval-10s)</div><div> start 
On Thu, May 28, 2015 at 9:51 AM, Steve Dainard <sdainard@spd1.com> wrote:

Hello,

I'm configuring a cluster which maps and mounts Ceph RBDs, then exports each mount over NFS, mostly following Sebastien's post here: http://www.sebastien-han.fr/blog/2012/07/06/nfs-over-rbd/ but there are some differences using pacemaker 1.1.12 with systemd.

The issue I'm running into is that the Ceph side works without issue, but the NFS export resources fail with:

# pcs status
Cluster name: nfs
Last updated: Thu May 28 09:21:13 2015
Last change: Wed May 27 17:47:06 2015
Stack: corosync
Current DC: node1 (1) - partition with quorum
Version: 1.1.12-a14efad
3 Nodes configured
24 Resources configured

Online: [ node1 node2 node3 ]

Full list of resources:

 Resource Group: group_rbd_fs_nfs_vip
     rbd_vol1    (ocf::ceph:rbd.in):    Started node1
     ...
     rbd_vol8    (ocf::ceph:rbd.in):    Started node1
     fs_vol1     (ocf::heartbeat:Filesystem):    Started node1
     ...
     fs_vol8     (ocf::heartbeat:Filesystem):    Started node1
     export_vol1 (ocf::heartbeat:exportfs):    Stopped
     ...
     export_vol8 (ocf::heartbeat:exportfs):    Stopped

Failed actions:
    export_vol1_start_0 on node1 'unknown error' (1): call=262, status=complete, exit-reason='none', last-rc-change='Wed May 27 17:42:37 2015', queued=0ms, exec=56ms
    export_vol1_start_0 on node2 'unknown error' (1): call=196, status=complete, exit-reason='none', last-rc-change='Wed May 27 17:43:04 2015', queued=0ms, exec=63ms
    export_vol1_start_0 on node3 'unknown error' (1): call=196, status=complete, exit-reason='none', last-rc-change='Wed May 27 17:43:27 2015', queued=0ms, exec=69ms

PCSD Status:
  node1: Online
  node2: Online
  node3: Online

Daemon Status:
  corosync: active/disabled
  pacemaker: active/disabled
  pcsd: active/enabled

I thought this was an issue with the nfsd kernel module not loading, so I manually loaded it on each host, but no change.

I'm also wondering if there's an error with my export resources config:

 Resource: export_vol1 (class=ocf provider=heartbeat type=exportfs)
  Attributes: directory=/mnt/vol1 clientspec=10.0.231.0/255.255.255.0 options=rw,no_subtree_check,no_root_squash fsid=1
  Operations: stop interval=0s timeout=120 (export_vol1-stop-timeout-120)
              monitor interval=10s timeout=20s (export_vol1-monitor-interval-10s)
              start interval=0 timeout=40s (export_vol1-start-interval-0)
 Resource: export_vol2 (class=ocf provider=heartbeat type=exportfs)
  Attributes: directory=/mnt/vol2 clientspec=10.0.231.0/255.255.255.0 options=rw,no_subtree_check,no_root_squash fsid=2
  Operations: stop interval=0s timeout=120 (export_vol2-stop-timeout-120)
              monitor interval=10s timeout=20s (export_vol2-monitor-interval-10s)
              start interval=0 timeout=40s (export_vol2-start-interval-0)
... (8 in total)

exportfs is throwing an error, and even more odd, it's breaking the NFS subnet auth into individual IPs (10.0.231.103, 10.0.231.100), and I have no idea where it's getting those addresses from.

Logs: http://pastebin.com/xEX2L7m1
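A quicker way to see what the agent is actually failing on than pacemaker's 'unknown error' is to run the exportfs RA by hand on one node with the same parameters as export_vol1. This is just a debugging sketch and assumes the stock OCF layout under /usr/lib/ocf:

# run the start action of the heartbeat exportfs agent directly, outside pacemaker
export OCF_ROOT=/usr/lib/ocf
export OCF_RESKEY_directory=/mnt/vol1
export OCF_RESKEY_clientspec=10.0.231.0/255.255.255.0
export OCF_RESKEY_options=rw,no_subtree_check,no_root_squash
export OCF_RESKEY_fsid=1
bash -x /usr/lib/ocf/resource.d/heartbeat/exportfs start; echo "rc=$?"

# then check what actually ended up exported
exportfs -v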
/etc/exports:

/mnt/vol1 10.0.231.0/24(rw,no_subtree_check,no_root_squash)
...
/mnt/vol8 10.0.231.0/24(rw,no_subtree_check,no_root_squash)

# getenforce
Permissive

# systemctl status nfs
nfs-server.service - NFS server and services
   Loaded: loaded (/usr/lib/systemd/system/nfs-server.service; disabled)
   Active: inactive (dead)

Thanks,
Steve