theonlydude1 Posted July 19 (edited)

Hello, maybe a beginner question here, since I've only been using Unraid for a few days. I'm running Unraid 6.12.11 with, until recently, a simple array with one cache disk and one 1 TB drive. All my data is accessed through NFS from an Ubuntu 22.04 server with no issues. My NFS configuration on Unraid is the standard one (I've just enabled support for hard links).

I bought two new disks: one to serve as a parity drive and one to extend the overall capacity. I followed the documentation to add the parity drive first (stop array, plug in drive, assign drive to parity, start array), and during the whole parity build my NFS share was not accessible from my Ubuntu server. Everything was green on the Unraid side (share visible, no change in NFS configuration, Docker and VMs running fine) but my data was inaccessible remotely. Once the parity drive finished building, a restart of my server gave me back access through NFS.

I'm now in the middle of adding the second drive (as a data drive) and... the issue is back. On the client side, my NFS mount configuration is really simple:

192.168.x.x:/mnt/user/data /media/UNRAID/data nfs defaults 0 0

(the NFS version used is 4.2), but when I try to umount or mount the share again, the operation never completes.

Did I miss something? Is this expected behavior from Unraid? Thanks!

tower-diagnostics-20240719-1320.zip

Edited July 19 by theonlydude1: add diag files
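For readers hitting the same hang: a variation on that fstab line that lets mount/umount give up instead of blocking forever while the server is busy. This is a sketch only, not a recommendation from the thread; soft mounts can surface I/O errors to applications mid-write, so weigh that trade-off (the timeo/retrans values mirror what Unassigned Devices uses later in this thread):

```
# /etc/fstab on the client (same export and mountpoint as above):
# soft + timeo/retrans make the client time out instead of hanging
# indefinitely while the NFS server is unresponsive; _netdev delays
# mounting until the network is up.
192.168.x.x:/mnt/user/data  /media/UNRAID/data  nfs  soft,timeo=300,retrans=4,_netdev  0  0
```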
JorgeB Posted July 19

The remote NFS shares you have mounted keep working, right? The problem is the Unraid share?
theonlydude1 Posted July 19 (edited)

On the Unraid side the share is visible, seems to run fine, and my array is started. NFS export is set to Yes and security is set to Public. But from my Ubuntu server (or from macOS) I'm not able to browse it, to unmount it, or to force-mount it.

Edited July 19 by theonlydude1
JorgeB Posted July 19

Nothing jumps out at me in the diags. @dlandon, any ideas?
xavierh Posted July 19

So, I'm not sure if I'm having the same issue. I upgraded from 6.12.10 to 6.12.11, and since I had to reboot anyway, I figured I'd add a few disks to the array. I did not clear them beforehand; I didn't mind the wait, and they were used previously, so the drives are good.*

* There is a drive with a SMART error, but it's not being used at all.

NFS was working perfectly between my two Unraid servers. After bringing up the array on server 1, Unraid started disk-clearing the drives, and I've lost access to server 1's NFS shares from my other server (server 2). I'm using Unassigned Devices to mount the shares. Everything looks good from both servers, but when I search for NFS shares from server 2, I don't see any shares available from server 1. Samba shares from server 1 are visible. I created an NFS share on server 2, and I can see it on server 1, and it is mountable.

I guess I'll wait 12 hours for the clearing to complete and see if it fixes itself.

frankie-diagnostics-20240719-1213.zip
JorgeB Posted July 19

Clearing a disk should not affect NFS shares, but I'll see if I can reproduce it.
xavierh Posted July 19

By the way, server 2 was/is on 6.12.11. I lost power a few days ago, and while the parity check was running, the NFS shares were working.
xavierh Posted July 19

I forgot this, from server 2:

Jul 19 12:16:09 server2 unassigned.devices: Warning: shell_exec(/sbin/mount -t 'nfs' -o rw,soft,relatime,retrans=4,timeo=300 'server1:/mnt/user/AudioBooks' '/mnt/remotes/server1_AudioBooks' 2>&1) took longer than 15s!
Jul 19 12:16:09 server2 unassigned.devices: NFS mount failed: 'command timed out'.
Jul 19 12:16:09 server2 unassigned.devices: Remote Share 'server1:/mnt/user/AudioBooks' failed to mount.
Jul 19 12:16:16 server2 unassigned.devices: Warning: shell_exec(/sbin/mount -t 'nfs' -o ro,soft,relatime,retrans=4,timeo=300 'server1:/mnt/user/Movies' '/mnt/remotes/server1_Movies' 2>&1) took longer than 15s!
Jul 19 12:16:16 server2 unassigned.devices: NFS mount failed: 'command timed out'.
Jul 19 12:16:16 server2 unassigned.devices: Remote Share 'server1:/mnt/user/Movies' failed to mount.
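For context on those mount options: per nfs(5), timeo is measured in tenths of a second and retrans is the retry count, so a soft mount with these values waits roughly 30 seconds per attempt before retrying. (The "15s" in the log is Unassigned Devices' own shell_exec warning threshold, separate from the NFS timeout itself.) A small sketch of the arithmetic:

```shell
# NFS mount options from the log above: timeo=300, retrans=4.
# timeo is in deciseconds, so each attempt waits timeo/10 seconds
# before the client retries; see nfs(5) for the exact semantics.
timeo=300
retrans=4
per_try=$((timeo / 10))
echo "soft mount: ${per_try}s per attempt, up to ${retrans} retries"
```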
dlandon Posted July 19

It looks like you have some disk errors:

Jul 19 12:07:17 Tower kernel: ldm_validate_partition_table(): Disk read failed.
Jul 19 12:07:17 Tower kernel: Buffer I/O error on dev md2p1, logical block 0, async page read
Jul 19 12:07:17 Tower kernel: md2p1: unable to read partition table
Jul 19 12:07:17 Tower kernel: md2p1: running, size: 5860522532 blocks
Jul 19 12:07:18 Tower emhttpd: shcmd (225): udevadm settle
Jul 19 12:07:18 Tower emhttpd: Opening encrypted volumes...
Jul 19 12:07:18 Tower kernel: Buffer I/O error on dev md2p1, logical block 0, async page read
### [PREVIOUS LINE REPEATED 1 TIMES] ###
Jul 19 12:07:18 Tower kernel: Buffer I/O error on dev md2p1, logical block 4, async page read
Jul 19 12:07:18 Tower kernel: Buffer I/O error on dev md2p1, logical block 8, async page read
Jul 19 12:07:18 Tower kernel: Buffer I/O error on dev md2p1, logical block 16, async page read
Jul 19 12:07:18 Tower kernel: Buffer I/O error on dev md2p1, logical block 32, async page read
Jul 19 12:07:18 Tower kernel: Buffer I/O error on dev md2p1, logical block 64, async page read
theonlydude1 Posted July 19 (Author, marked as Solution)

Hello, I've done more testing on my side, and my issue wasn't due to the array operation but to the fact that I had to stop and start the array to add my disks. I can reproduce the issue just by doing a stop/start on the array. To restore access, I need to restart the NFS server service on Unraid with the command:

/usr/local/etc/rc.d/rc.nfsd restart

Is this behavior expected?
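The workaround above can be wrapped so it fails loudly when pasted somewhere rc.nfsd doesn't exist. A minimal sketch, using the init script path quoted in this thread:

```shell
# Restart Unraid's NFS daemon after an array stop/start.
# NFSD_RC is the init script path from this thread; the guard keeps
# the snippet harmless on machines that are not Unraid hosts.
NFSD_RC=/usr/local/etc/rc.d/rc.nfsd

if [ -x "$NFSD_RC" ]; then
    "$NFSD_RC" restart
else
    echo "rc.nfsd not found at $NFSD_RC (not an Unraid host?)" >&2
fi
```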
JorgeB Posted July 20

10 hours ago, theonlydude1 said: Is this behavior expected?

Nope. Just to confirm: NFS works fine after the initial array start, but not after a subsequent restart, correct?
theonlydude1 Posted July 20 (Author)

Yes, you are correct. No issue after a full reboot of my Unraid server, just when I perform a stop/start of the array.
JorgeB Posted July 21

I tried to reproduce this yesterday and couldn't, but I'll do a few more tests on Monday.
JorgeB Posted July 22

OK, my apologies: somehow I was under the impression that this was happening with 7.0.0-beta. I can reproduce the issue with 6.12.11; 6.12.10 works fine. I will report it to LT.
xavierh Posted July 22

On 7/19/2024 at 4:07 PM, theonlydude1 said: To restore access, I need to restart the NFS server service on Unraid with the command: /usr/local/etc/rc.d/rc.nfsd restart. Is this behavior expected?

Running rc.nfsd restart fixed my issue. I had tried enabling/disabling NFS via the Web UI, but that didn't work.
theonlydude1 Posted July 23 (Author)

16 hours ago, JorgeB said: I can reproduce the issue with 6.12.11; 6.12.10 works fine. I will report it to LT.

Thanks! I'll wait for the next update, then.
poedenon Posted July 24

Thanks for this. I just started using Unraid, and this was a blocker for me as well; I am running 6.12.11 too. This is what worked for me:

/usr/local/etc/rc.d/rc.nfsd restart
shdwkeeper Posted July 30 (edited)

My NFS share mounts were working fine in 6.12.10. I upgraded to 6.12.11 and couldn't figure out what was wrong with them, then read something about .11 breaking the mounts. I downgraded back to 6.12.10 and everything is working again.

Edited July 30 by shdwkeeper
JorgeB Posted July 30

11 minutes ago, shdwkeeper said: read something about .11 breaking the mounts.

The mount should work at first array start, but if you restart the array after that, you must manually restart NFS.
shdwkeeper Posted August 27 (edited)

Does anyone know if 6.12.12 or 6.12.13 fixed this? The 6.12.12 release notes say:

Fix: After stopping and then restarting the array, nfsd is not running

Edited August 27 by shdwkeeper
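For anyone retesting after the upgrade, the fix can be checked from the Unraid console by confirming nfsd survived an array stop/start. A small sketch (pgrep is assumed to be available, as it normally is on Unraid):

```shell
# Check whether the NFS daemon is running, e.g. after stopping and
# restarting the array on 6.12.12+. pgrep -x matches the exact
# process name "nfsd".
if pgrep -x nfsd >/dev/null 2>&1; then
    NFSD_STATE="running"
else
    NFSD_STATE="not running"
fi
echo "nfsd is $NFSD_STATE"
```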
JorgeB Posted August 27

Yes, that specific issue should be resolved, though it looks like there may still be some other NFS-related issues.
shdwkeeper Posted August 28

On 8/27/2024 at 7:29 AM, JorgeB said: Yes, that specific issue should be resolved, though it looks like there may still be some other NFS-related issues.

Oh really, like what? Have you tested .12 with the NFS issue above, and can you confirm nfsd is running after a restart?
JorgeB Posted August 29

I don't use NFS; please retest and report back if it's fixed.
shdwkeeper Posted September 1

On 8/29/2024 at 12:20 AM, JorgeB said: I don't use NFS; please retest and report back if it's fixed.

OK, I'll try it when I have some downtime.
marveljam Posted September 11 (edited)

I have a similar issue while running Unraid 7.0.0-beta.2. The remote servers can log in and supposedly mount the NFS share, but the share shows as 0 B used and 0 B free. I am trying to self-diagnose the issue; I am leaving this here as an FYI, and if I find a fix I will post it here. Sadly, my server is clearing a newly added drive, so I can't move forward on resolving it. This means many services running on other Unraid servers cannot reach the data and therefore are not functioning.

I re-read the posts in this thread and tried /usr/local/etc/rc.d/rc.nfsd restart on all the related servers. That did not fix the access.

Edited September 11 by marveljam