• NFSv4 in 6.10.0-rc2d initial success


    gcolds

    With this version, which has NFSv4 enabled in the kernel, I was able to mount all of my mount points with version 4 instead of version 3.

    To enable nfs4 I made no changes on the Unraid side (other than just upgrading to 6.10.0-rc2d).

     

    I use systemd to mount my Unraid NFS shares.  For each of my .mount files, I simply changed

    Type=nfs

    to

    Type=nfs4
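
    For reference, a complete .mount unit of this shape looks roughly like the following. The share path and mount point are placeholders (they match the manual mount example further down, not my actual unit files), and note that systemd requires the unit file to be named after the mount point, e.g. mnt-ADrive.mount for /mnt/ADrive:

    [Unit]
    Description=Unraid BookLibrary share over NFSv4
    After=network-online.target
    Wants=network-online.target

    [Mount]
    # x.x.x.x is the Unraid server
    What=x.x.x.x:/mnt/user/BookLibrary
    Where=/mnt/ADrive
    Type=nfs4
    Options=noatime,proto=tcp,port=2049

    [Install]
    WantedBy=multi-user.target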

     

    I then rebooted to see everything automount as expected.

    I also manually mounted a share with the following command

     

    sudo mount -t nfs4 -o proto=tcp,port=2049 x.x.x.x:/mnt/user/BookLibrary /mnt/ADrive

    I checked the output of nfsstat -m to verify:

     

    ❯ nfsstat -m 
    
    /nfs_mnt/AtlasBackups from x.x.x.x:/mnt/user/AtlasBackups
     Flags: rw,noatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.0.1.3,local_lock=none,addr=x.x.x.x
    
    /nfs_mnt/AtlasMedia from x.x.x.x:/mnt/user/AtlasMedia
     Flags: rw,noatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.0.1.3,local_lock=none,addr=x.x.x.x
    
    /nfs_mnt/GameUtils from x.x.x.x:/mnt/user/GameUtils
     Flags: rw,noatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.0.1.3,local_lock=none,addr=x.x.x.x
    
    /nfs_mnt/syslog_share from x.x.x.x:/mnt/user/syslog_share
     Flags: rw,noatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.0.1.3,local_lock=none,addr=x.x.x.x
    
    /nfs_mnt/GamesStorage from x.x.x.x:/mnt/user/GamesStorage
     Flags: rw,noatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.0.1.3,local_lock=none,addr=x.x.x.x
    
    /nfs_mnt/unsorted from x.x.x.x:/mnt/user/unsorted
     Flags: rw,noatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.0.1.3,local_lock=none,addr=x.x.x.x
    
    /mnt/ADrive from x.x.x.x:/mnt/user/BookLibrary
     Flags: rw,relatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.0.1.3,local_lock=none,addr=x.x.x.x

     

    So it looks like it mounted OK.  Everything on my Manjaro Linux desktop that uses those mounts at least starts OK.  I'll do some more rigorous testing over the weekend, but right now I'm running a backup and having Steam install a game to see how things shake out.  So far so good, though; fingers crossed!

     

    -Greg




    User Feedback

    Recommended Comments

    OK, I just figured out that if you are using systemd to mount on your desktop, you should not change the Type from nfs to nfs4.  That change caused my mounts to stop mounting after I rebooted Unraid.
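
    In other words, I went back to units that look something like this (same placeholder paths as in the post above; my understanding is that with plain Type=nfs the client negotiates the highest NFS version both ends support, so it can still come up as v4.2 when the server allows it):

    [Mount]
    What=x.x.x.x:/mnt/user/BookLibrary
    Where=/mnt/ADrive
    # plain 'nfs' lets the client negotiate the protocol version instead of forcing v4
    Type=nfs
    Options=noatime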

     

    -Greg


    OK, over the weekend I've been messing around with the v4 support and wanted to report that, no matter what I did, I found my NFSv4 mounts to be as rock-solid reliable as my Synology and QNAP boxes were (Unraid replaced those boxes, but I still keep them around as examples of well-optimized SMB and NFS configs).

     

    My Steam game library is mostly stored on my Unraid box, and launching a given game from it used to be hit or miss.  That has changed: every game that launches locally on my system also launches when run from the NFSv4 share.  I make the distinction because most of my games are Windows games run with Valve's Proton compatibility layer, and Proton is flaky when run over NFSv3; I don't know why.  I have not had any of that flakiness at all.  Everything launches except games that are generally not compatible with Proton for some reason (DRM, anticheat, etc.).

     

    Stale file handle errors only occurred 1 time this whole weekend. With hard links on,  I did all the things I recall would reliably cause the issue and the only time it occurred was when I stopped and then restarted the Array without remounting my NFS shares on my desktop and then doing an ls in one of the share directories.  I'm not ready to declare stale file handles a non-issue after only one weekend of testing but it certainly seems to be an improvement.

     

    Since my Unraid box has an AMD Threadripper 2990WX and my primary desktop has an AMD Threadripper 3970X (both with 64 GB of RAM), and I'm running a 10G network at home (just because I can, not because I need it), performance was always very good (if not always reliable).  I did not notice an improvement, since I always had rather fast speeds.  I got more wrapped up this weekend in testing all the scenarios that would cause reliability issues (including data corruption or loss).  I hope to do some NFSv3 vs NFSv4 benchmarking next weekend.

     

     @limetech reported not being able to get another Unraid box to mount in v4 mode.  I can confirm this.  I used the following command to manually mount a share from my Synology box:

     

    mount -t nfs4 -o proto=tcp,port=2049 10.0.1.10:/volume1/stuff /mnt/remotes/10.0.1.10_stuff

     This gives me the following error:

    mount.nfs4: No such device

     

    After some investigation of my Synology, QNAP, and Manjaro Linux boxes, I think the problem is a missing kernel module (nfsv4.ko.xz). So if that gets added, I'll try mounting again.
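
    For anyone who wants to check the same thing on their own box, these generic Linux commands (not Unraid-specific, just how I'd go looking) should show whether the client-side v4 pieces are present:

    lsmod | grep nfs               # NFS-related modules currently loaded
    modinfo nfsv4                  # details if an nfsv4 module exists on disk at all
    grep nfs4 /proc/filesystems    # 'nodev nfs4' appears once the nfs4 filesystem type is registered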

     

    Misc thought:  It would be nice if the Unraid GUI exposed the ability to configure the NFS server to the extent you can configure Samba.  It would also be nice to be able to put custom parameters in nfs.conf, as you can with smb.conf, without going through the terminal.
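
    To make that concrete, this is the sort of thing I'd like to be able to drop into /etc/nfs.conf from the GUI (the values are purely illustrative, not a tuning recommendation):

    [nfsd]
    # number of server threads and which protocol versions to offer
    threads=16
    vers3=y
    vers4.2=y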

    Thanks again @limetech

     

    -Greg

    Edited by gcolds
    58 minutes ago, gcolds said:

     

    After some investigation of my Synology, QNAP, and Manjaro Linux boxes, I think the problem is a missing kernel module (nfsv4.ko.xz). So if that gets added, I'll try mounting again.

    I've run into the mounting problem with UD.  I've passed this along to LT.  I'm anxious to get UD working with NFSv4.

    Edited by dlandon
    19 hours ago, gcolds said:

    Stale file handle errors only occurred 1 time this whole weekend. With hard links on,  I did all the things I recall would reliably cause the issue and the only time it occurred was when I stopped and then restarted the Array without remounting my NFS shares on my desktop and then doing an ls in one of the share directories.  I'm not ready to declare stale file handles a non-issue after only one weekend of testing but it certainly seems to be an improvement.

    Thank you for all your testing!  In the case above, did the client 'recover' automatically or did you need to remount?

     

    19 hours ago, gcolds said:

    I think the problem is a missing kernel module (nfsv4.ko.xz)

    Correct, not sure how I missed that one, but indeed having that kernel module solves the local mount problem.

     

    19 hours ago, gcolds said:

    Misc thought:  It would be nice if the Unraid GUI exposed the ability to configure the NFS server to the extent you can configure Samba.

    Agreed, there are several areas worth customizing:

    /etc/nfs.conf

    /etc/nfsmount.conf

    plus the 'options' list and 'export options' on individual share lines in /etc/exports
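
    For example, a single share line in /etc/exports can already carry per-client export options along these lines (path and network here are only an illustration):

    /mnt/user/BookLibrary 10.0.1.0/24(sec=sys,rw,no_subtree_check)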

     

    Changes to support these customizations will have to wait until after 6.10 has been released (unfortunately, I cannot hold back the release for the time it will take to implement and test; hopefully everyone understands this).

    3 hours ago, limetech said:

    Changes to support these customizations will have to wait until after 6.10 has been released (unfortunately, I cannot hold back the release for the time it will take to implement and test; hopefully everyone understands this).


    A lot of people have been waiting some time for NFSv4; I think they'd understand, and still appreciate it, if you took an iterative approach. Supporting NFSv4 at all in 6.10 would be cool.

    4 hours ago, limetech said:

    Thank you for all your testing!  In the case above, did the client 'recover' automatically or did you need to remount?

    I did have to manually remount all my NFS shares to get them to work again.

     

    5 hours ago, limetech said:

    Changes to support these customizations will have to wait until after 6.10 has been released (unfortunately, I cannot hold back the release for the time it will take to implement and test; hopefully everyone understands this).

    Understood, just wanted to put the thought out there.  Honestly I'm just over the moon at having v4 support and can easily live with the current situation.

     

    -Greg


    Just wanted to finish this out with the bit of performance benchmarking that I promised last weekend.  I've attached a markdown file to this post with my testing details, since I didn't want to make this post too long.

    This testing has been an odyssey for me, as I now understand that I've had my NFS mounts configured in ways that caused some strange issues I didn't realize were related to my mount settings.  I also learned that using systemd for mounts lets me recover from stale file handle errors much more easily.
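
    By 'more easily' I just mean something along the lines of the following, using the placeholder mount point from the original post rather than one of my real units:

    sudo systemctl restart mnt-ADrive.mount    # tears the mount down and re-establishes it in one step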

     

    I'll summarize by saying that performance on a 10G network is not consistently improved when using v4, though things do seem to be much more robust with v4.  I still have to experiment with mount options and NFS server options, but I think I'm done for now unless I get a specific request.

     

    -Greg

     

     

    NFSv4 Testing.md

    Edited by gcolds
    On 8/30/2021 at 5:58 PM, gcolds said:

    Stale file handle errors only occurred 1 time this whole weekend. With hard links on,  I did all the things I recall would reliably cause the issue and the only time it occurred was when I stopped and then restarted the Array without remounting my NFS shares on my desktop and then doing an ls in one of the share directories.  I'm not ready to declare stale file handles a non-issue after only one weekend of testing but it certainly seems to be an improvement.

    Stale file handle errors happen on all my VMs for no apparent reason all the time. I know it's been a couple of months since your post, but can you comment back on your mileage since August with NFSv4 in Unraid? The problem you describe (stale file handle errors while and after restarting the array) isn't an issue for me (I'm only using NFS for VMs hosted on Unraid, so rebooting the array means rebooting the VMs in my case).


    Sorry for taking so long to respond; I just noticed your post.  I have not had a single stale file handle error on my system since I disabled hard links.  Disabling them meant I had to find an alternative backup solution for my Linux desktop (I moved from BackInTime to Borg with Vorta), but nothing else has been a problem since.

     

    -Greg


    @gcolds thank you for testing this! Since @limetech asked a while ago why NFSv4: I've been planning to build a very simple Kubernetes cluster in my homelab for a while, but a lot of deployments would need a persistent volume. I'd like to use Unraid as my central storage, and the most sensible way would be to use an NFS storage driver in the cluster. This obviously requires a stable NFS solution in Unraid, and before 6.10 all the reports about unstable mounts kept me from actually trying to implement this. Thank you for getting NFSv4 into 6.10!
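
    To give an idea of what that would look like on the cluster side, it boils down to a PersistentVolume pointing at an Unraid export, roughly like this (server address, share path, and size are placeholders, not a working config from my lab):

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: unraid-nfs-pv
    spec:
      capacity:
        storage: 100Gi
      accessModes:
        - ReadWriteMany
      nfs:
        server: x.x.x.x                # Unraid server
        path: /mnt/user/k8s-volumes    # exported share backing the volume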




