
NFS Share not accessible during Array operation


Solved by theonlydude1


Hello,

Maybe a beginner question here, since I've only been using Unraid for a few days.

I'm running Unraid 6.12.11 with, until recently, a simple array with one cache disk and one 1 TB drive. All my data is accessed through NFS from an Ubuntu 22.04 server with no issue.

My NFS configuration on Unraid is the standard one (I've just enabled support for hard links).

 

I've bought 2 new disks, one to serve as a parity drive and one to extend the overall capacity.

I followed the documentation to add the parity drive first (stop the array, plug in the drive, assign it to parity, and start the array), and during the whole parity build my NFS share was not accessible from my Ubuntu server. Everything was green on the Unraid side (share visible, no change in the NFS configuration, Docker and VMs running fine), but my data was inaccessible remotely.

Once the parity drive was fully built, a restart of my server gave me back access through NFS.

 

I'm now in the middle of adding the second drive (as a data drive) and... the issue is back.

On the client side, my NFS mount configuration in /etc/fstab is really simple: 192.168.x.x:/mnt/user/data /media/UNRAID/data nfs defaults 0 0 (the NFS version in use is 4.2), but when I try to umount or re-mount the share, the operation never finishes.
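For reference, a rough set of checks from the Ubuntu side while the share is hanging (same placeholder IP and paths as the fstab line above, so adjust for your own setup):

showmount -e 192.168.x.x                   # ask the Unraid box which NFS exports it is currently advertising
sudo umount -l /media/UNRAID/data          # lazy unmount to release the hung mount point
sudo mount -v -t nfs -o vers=4.2 192.168.x.x:/mnt/user/data /media/UNRAID/data    # verbose remount to see where it stalls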

Did I miss something? Is this expected behavior from Unraid?

 

Thanks!

 

tower-diagnostics-20240719-1320.zip

Edited by theonlydude1
add diag files

On the Unraid side the share is visible and seems to run fine, and my array is started. NFS export is set to Yes and security is set to Public.

But from my Ubuntu server (or from macOS) I'm not able to access it, unmount it, or force-mount it.

Edited by theonlydude1

So, I'm not sure if I'm having the same issue. I upgraded from 6.12.10 to 6.12.11, and since I had to reboot anyway, I figured I'd add a few disks to the array. I did not clear them beforehand; I didn't mind the wait, and they were used previously, so the drives are good.

 

* There is a drive with a SMART error, but it's not being used at all.

 

NFS was working perfectly between the two Unraid servers. After bringing the array back up on server 1, Unraid started clearing the new drives, and I've lost access to the NFS shares from my other server (server 2). I'm using Unassigned Devices to mount the shares. Everything looks good from both servers, but when I search for NFS shares from server 2, I don't see any shares available from server 1.

 

Samba shares from server 1 are visible.

 

I created an NFS share on server 2, and I can see it on server 1, and it is mountable.

 

I guess I'll wait the 12 hours for the clear to complete and see if it fixes itself.

 

 

frankie-diagnostics-20240719-1213.zip


I forgot this, from server 2:

Jul 19 12:16:09 server2 unassigned.devices: Warning: shell_exec(/sbin/mount -t 'nfs' -o rw,soft,relatime,retrans=4,timeo=300 'server1:/mnt/user/AudioBooks' '/mnt/remotes/server1_AudioBooks' 2>&1) took longer than 15s!
Jul 19 12:16:09 server2 unassigned.devices: NFS mount failed: 'command timed out'.
Jul 19 12:16:09 server2 unassigned.devices: Remote Share 'server1:/mnt/user/AudioBooks' failed to mount.
Jul 19 12:16:16 server2 unassigned.devices: Warning: shell_exec(/sbin/mount -t 'nfs' -o ro,soft,relatime,retrans=4,timeo=300 'server1:/mnt/user/Movies' '/mnt/remotes/server1_Movies' 2>&1) took longer than 15s!
Jul 19 12:16:16 server2 unassigned.devices: NFS mount failed: 'command timed out'.
Jul 19 12:16:16 server2 unassigned.devices: Remote Share 'server1:/mnt/user/Movies' failed to mount.
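If it helps anyone reproduce this, the same mount can be attempted by hand from a server 2 terminal (options copied from the log above), together with a quick check of which RPC services server 1 is still advertising:

rpcinfo -p server1          # should list nfs and mountd; a timeout or an empty list means nfsd is not serving
/sbin/mount -t nfs -o rw,soft,relatime,retrans=4,timeo=300 server1:/mnt/user/AudioBooks /mnt/remotes/server1_AudioBooks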


It looks like you have some disk errors:

Jul 19 12:07:17 Tower kernel: ldm_validate_partition_table(): Disk read failed.
Jul 19 12:07:17 Tower kernel: Buffer I/O error on dev md2p1, logical block 0, async page read
Jul 19 12:07:17 Tower kernel: md2p1: unable to read partition table
Jul 19 12:07:17 Tower kernel: md2p1: running, size: 5860522532 blocks
Jul 19 12:07:18 Tower emhttpd: shcmd (225): udevadm settle
Jul 19 12:07:18 Tower emhttpd: Opening encrypted volumes...
Jul 19 12:07:18 Tower kernel: Buffer I/O error on dev md2p1, logical block 0, async page read
### [PREVIOUS LINE REPEATED 1 TIMES] ###
Jul 19 12:07:18 Tower kernel: Buffer I/O error on dev md2p1, logical block 4, async page read
Jul 19 12:07:18 Tower kernel: Buffer I/O error on dev md2p1, logical block 8, async page read
Jul 19 12:07:18 Tower kernel: Buffer I/O error on dev md2p1, logical block 16, async page read
Jul 19 12:07:18 Tower kernel: Buffer I/O error on dev md2p1, logical block 32, async page read
Jul 19 12:07:18 Tower kernel: Buffer I/O error on dev md2p1, logical block 64, async page read

 

  • Solution

Hello,

I've done more testing on my side, and my issue wasn't due to the array operation but to the fact that I had to stop and start the array to add my disks. I can reproduce the issue just by stopping and starting the array.

To restore access, I need to restart the NFS server service on Unraid with the command: /usr/local/etc/rc.d/rc.nfsd restart
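For anyone else hitting this, a rough way to confirm from the Unraid console whether nfsd actually came back after the array stop/start (standard commands; /mnt/user/data is just my share as an example):

pgrep -l nfsd          # should list the kernel nfsd threads; no output means the NFS server is down
exportfs -v            # should list every share with NFS export set to Yes, e.g. /mnt/user/data
/usr/local/etc/rc.d/rc.nfsd restart    # the restart command from above, if the checks come back empty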

 

Is this behavior expected?

On 7/19/2024 at 4:07 PM, theonlydude1 said:

Hello,

I've done more testing on my side, and my issue wasn't due to the array operation but to the fact that I had to stop and start the array to add my disks. I can reproduce the issue just by stopping and starting the array.

To restore access, I need to restart the NFS server service on Unraid with the command: /usr/local/etc/rc.d/rc.nfsd restart

 

Is this behavior expected?

Restarting rc.nfsd fixed my issue. I tried enabling/disabling NFS via the Web UI, but that didn't work.


My NFS share mounts were working fine in 6.12.10. I upgraded to 6.12.11 and couldn't figure out what was wrong with my NFS share mounts; I read something about .11 breaking the mounts. I downgraded back to 6.12.10 and everything is working again.

Edited by shdwkeeper
  • 4 weeks later...
On 8/27/2024 at 7:29 AM, JorgeB said:

Yes, that specific issue should be resolved, though it looks like there may still be some other NFS related issues.

Oh really, like what? Have you tested .12 with the NFS issue above, and can you confirm nfsd is running after a restart?

  • 2 weeks later...

I have a similar issue while running Unraid 7.0.0-beta.2. The remote servers can log in and supposedly mount the NFS share, but the share shows as 0 B used and 0 B free. I am trying to self-diagnose the issue and am leaving this here as an FYI; if I find a fix I will post it here. Sadly, my server is clearing a newly added drive, so I can't move forward to resolve it. This means many services running on other Unraid servers cannot reach the data and therefore are not functioning. I re-read the posts in this thread and tried /usr/local/etc/rc.d/rc.nfsd restart on all the related servers. That did not fix the access.
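In case it helps, a few standard checks that should narrow down whether the problem is on the client or the server side (the mount path below is just a placeholder, not my real one):

grep nfs /proc/mounts           # on the client, confirm the share is actually mounted and with which options
df -h /mnt/remotes/SHARE        # on the client, see whether sizes are still reported as 0 B
exportfs -v                     # on the serving box, confirm the share is still exported after the array came up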

Edited by marveljam
