  • 6.12.11 two issues after upgrading from 6.12.10


    warpspeed
    • Minor

    After upgrading from 6.12.10 to 6.12.11, I found I had two issues:

     

    1. This error started spewing in my syslog after a while:

     

    Jul 20 16:55:11 unraid nginx: 2024/07/20 16:55:11 [alert] 12022#12022: worker process 671 exited on signal 6
    Jul 20 16:55:12 unraid nginx: 2024/07/20 16:55:12 [alert] 12022#12022: worker process 738 exited on signal 6
    Jul 20 16:55:13 unraid nginx: 2024/07/20 16:55:13 [alert] 12022#12022: worker process 790 exited on signal 6
    Jul 20 16:55:14 unraid nginx: 2024/07/20 16:55:14 [alert] 12022#12022: worker process 791 exited on signal 6
    Jul 20 16:55:15 unraid nginx: 2024/07/20 16:55:15 [alert] 12022#12022: worker process 1098 exited on signal 6
    Jul 20 16:55:16 unraid nginx: 2024/07/20 16:55:16 [alert] 12022#12022: worker process 1167 exited on signal 6
    Jul 20 16:55:18 unraid nginx: 2024/07/20 16:55:18 [alert] 12022#12022: worker process 1335 exited on signal 6
    Jul 20 16:55:18 unraid nginx: 2024/07/20 16:55:18 [alert] 12022#12022: worker process 1396 exited on signal 6
    Jul 20 16:55:19 unraid nginx: 2024/07/20 16:55:19 [alert] 12022#12022: worker process 1440 exited on signal 6
    Jul 20 16:55:20 unraid nginx: 2024/07/20 16:55:20 [alert] 12022#12022: worker process 1478 exited on signal 6
    Jul 20 16:55:20 unraid nginx: 2024/07/20 16:55:20 [alert] 12022#12022: worker process 1494 exited on signal 6
    Jul 20 16:55:21 unraid nginx: 2024/07/20 16:55:21 [alert] 12022#12022: worker process 1516 exited on signal 6
    Jul 20 16:55:22 unraid nginx: 2024/07/20 16:55:22 [alert] 12022#12022: worker process 1700 exited on signal 6
    Jul 20 16:55:22 unraid nginx: 2024/07/20 16:55:22 [alert] 12022#12022: worker process 1746 exited on signal 6

     

    2. NFS mounts from an Ubuntu 22.04 LTS machine using autofs wouldn't work.

     

    Regarding this, I tried rebooting both the Ubuntu server and the Unraid server. A manual mount worked; it was just autofs that failed.

     

    So I downgraded back to 6.12.10 and now it's working again.

     

    For NFS, here are the export options I use:

     

    *(ro) 192.168.1.201(sec=sys,rw,anonuid=99,anongid=100,all_squash) 192.168.1.205(sec=sys,rw,anonuid=99,anongid=100,all_squash) 192.168.1.199(sec=sys,rw,anonuid=99,anongid=100,all_squash)
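    As a sanity check, the exports can be verified from a client with showmount (a sketch; 192.168.1.200 is the server address taken from the mount output below):

    # Ask the Unraid server which shares it exports
    # (this goes through the RPC mountd service)
    showmount -e 192.168.1.200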

     

    Here's the mount config and options on the Ubuntu machine, in /etc/auto.nfs:

    sharename		-users,rw,auto,noatime,async,hard,rsize=32768,wsize=32768		192.168.1.200:/mnt/user/sharename
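    For context, a map like this is normally referenced from /etc/auto.master; a minimal sketch (the /nfs mount point is inferred from the mount output below, and the timeout value is an assumption):

    # /etc/auto.master (assumed entry wiring up the map above)
    /nfs    /etc/auto.nfs    --timeout=300

    # Apply the change on Ubuntu
    sudo systemctl restart autofs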

     

    Here's what the output of mount looks like on Ubuntu with the share mounted:

     

    192.168.1.200:/mnt/user/sharename on /nfs/sharename type nfs4 (rw,nosuid,nodev,noexec,noatime,vers=4.2,rsize=32768,wsize=32768,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.1.205,local_lock=none,addr=192.168.1.200)
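    For reference, the manual mount mentioned above would look roughly like this (a sketch; /mnt/test is a placeholder mount point, and the options mirror the autofs map):

    sudo mkdir -p /mnt/test
    sudo mount -t nfs -o rw,noatime,async,hard,rsize=32768,wsize=32768 192.168.1.200:/mnt/user/sharename /mnt/test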

     




    User Feedback

    Recommended Comments

    joggs

    Posted

    Same here.

    I think this was the same error we had a year ago or so, where the GUI also starts to fail: the Docker page does not load properly, and the CPU stats etc. on the dashboard also fail.

    The temporary fix back then was to run:

    /etc/rc.d/rc.nginx restart
    /etc/rc.d/rc.nginx reload

    but it was eventually fixed.

     

     

    dlandon

    Posted

    Quote

    2. NFS mounts from an Ubuntu 22.04 LTS machine using autofs wouldn't work.

    Regarding this, I tried rebooting both the Ubuntu server and the Unraid server. A manual mount worked; it was just autofs that failed.

     

    So I downgraded back to 6.12.10 and now it's working again.

    Are there any log entries on Ubuntu that relate to NFS mount failures?

     

    If you restart autofs on Ubuntu, will the mounts work?

    warpspeed

    Posted

    No obvious log entries on the Ubuntu machine. When I 'cd' to the NFS mount, it just came up as directory not found.

    I did have NFS mounts from that Ubuntu machine to another machine, and those continued to work just fine.

    I tried restarting the Ubuntu machine, restarting autofs, and restarting Unraid, and none of that worked.

    But manually mounting did work, even with exactly the same settings as I had for autofs, which was a bit of a head-scratcher for me.

    As soon as I downgraded Unraid, the autofs mounts worked again.

    murkus

    Posted (edited)

    I had a similar NFS problem. After upgrading to 6.12.11, my OSMC box running Kodi was unable to mount via NFSv3 (which worked under 6.12.10).
    However, mounting manually from the shell on that machine worked, and that automatically used NFSv4.
    When I configured Kodi to use NFSv4 for its NFS client, Kodi worked again.

    I conclude that NFSv3 no longer seems to be supported by default. This will break a lot of other things in my environment.
    Is it possible to re-enable NFSv3?
     

    Edited by murkus
    cakes044

    Posted

    I'm also having NFS issues. I'm still not sure if it's related to the upgrade, but I'm going to downgrade to confirm whether they still occur.

    I am using autofs on Debian; the mounts were working correctly, then stopped working after the server had been up for a while. I restarted the NFS service and still couldn't reconnect; it kept timing out.

    I tried both autofs and a manual mount with no positive results. It felt like once the NFS service stopped working, I couldn't get it working again until I restarted the entire server/array.

     

    Will report back if I still have this issue after downgrading, but I was definitely getting this daily.

     

    murkus

    Posted (edited)

    I found the problem:

    NFSv3 clients go through the portmapper to reach nfsd.

    Up to Unraid 6.12.10, nfsd was registered with the portmapper, so NFSv3 clients worked.

    Starting with 6.12.11, nfsd isn't registered with the portmapper (the registration gets lost when the array starts), so NFSv3 clients fail to mount shares from the Unraid nfsd.

    NFSv4 clients do not go through the portmapper to reach nfsd, which is why NFSv4 clients still work with 6.12.11.
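    This is easy to check from a client with rpcinfo (a sketch; 192.168.1.200 and /mnt/test stand in for the server address and a local mount point):

    # Ask the server's portmapper which RPC services are registered;
    # on an affected 6.12.11 server the 'nfs' and 'mountd' entries are missing
    rpcinfo -p 192.168.1.200

    # Force the NFS version to see the difference directly
    sudo mount -t nfs -o vers=3 192.168.1.200:/mnt/user/sharename /mnt/test   # fails on 6.12.11
    sudo mount -t nfs -o vers=4 192.168.1.200:/mnt/user/sharename /mnt/test   # still works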

     

    However, the Kodi NFSv4 client seems to be buggy: it works for video files, but it doesn't work for many music files.

    So with Kodi you really want to use NFSv3, but you can't with Unraid 6.12.11.

     

    I have filed a bug report accordingly.

    I have downgraded to 6.12.10, and NFSv3 works fine because all RPC daemons remain registered with the portmapper after the array starts.

     

    Edited by murkus
    • Thanks 1
    ziggy99

    Posted

    On 8/9/2024 at 10:14 PM, murkus said:

    I found the problem:

    NFSv3 clients go through the portmapper to reach nfsd.

    Up to Unraid 6.12.10, nfsd was registered with the portmapper, so NFSv3 clients worked.

    Starting with 6.12.11, nfsd isn't registered with the portmapper (the registration gets lost when the array starts), so NFSv3 clients fail to mount shares from the Unraid nfsd.

    NFSv4 clients do not go through the portmapper to reach nfsd, which is why NFSv4 clients still work with 6.12.11.

     

     

    Do you know if there is a workaround to get NFSv3 working again? Or is downgrading the only option?

    dlandon

    Posted

    8 hours ago, ziggy99 said:

    Do you know if there is a workaround to get NFSv3 working again? Or is downgrading the only option?

    A user has reported that removing Tailscale has resolved their issue with NFSv3.

    ziggy99

    Posted

    1 minute ago, dlandon said:

    A user has reported that removing Tailscale has resolved their issue with NFSv3.

     

    Yes, that user was actually me 😀 Removing the Tailscale plugin might be what fixed it for me. It seems to work for now.

    ziggy99

    Posted

    Just now, murkus said:

    I don't have Tailscale.

     

    My issue was that when I booted, everything worked fine, and I could see NFS when I ran rpcinfo -p.

    But after some minutes it suddenly stopped working, and rpcinfo no longer showed those NFS ports.

     

    I saw such things in my log:

    Aug 24 18:31:57 stardustunraid network: reload service: rpc

    and

    Aug 24 18:32:01 stardustunraid network: reload service: nfsd

     

    I removed the Tailscale plugin and have not had this issue since. I am not sure that was the issue, but it might have triggered something. I think Tailscale might run in some Docker container, so that might be the trigger.

    I would check whether your log file has these 'reload service' entries and try to find out why it reloads.
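    A quick way to look for those entries and to watch the registrations disappear (a sketch; Unraid keeps its system log at /var/log/syslog):

    # Find service reloads triggered by the network scripts
    grep 'reload service' /var/log/syslog

    # Poll the local portmapper; on an affected server the nfs/mountd
    # lines vanish a few minutes after boot
    watch -n 60 'rpcinfo -p | grep -E "nfs|mountd"'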

     

    EDACerton

    Posted (edited)

    On 8/24/2024 at 3:05 PM, ziggy99 said:

     

    Yes, that user was actually me 😀 Removing the Tailscale plugin might be what fixed it for me. It seems to work for now.

     

    The Tailscale plugin runs the Unraid-provided "reload services" script (/usr/local/emhttp/webGui/scripts/reload_services) when Tailscale connects. This is required so that the Unraid services will start listening on the Tailscale IP.

     

    Unfortunately, a service reload is also what triggers the NFS bug.

     

    As an interim solution, you could change the "Unraid services listen on Tailscale IP" to "No" until the NFS bug is fixed. That will prevent the plugin from doing the service reload and triggering the bug. (It will also make Unraid services unavailable on the Tailscale IP -- you'll be able to get to containers, but the WebGUI/SMB/NFS/SSH won't be accessible.)

     

    Was this fixed in .12/.13?

    Edited by EDACerton
    • Thanks 1
    ziggy99

    Posted

    1 hour ago, EDACerton said:

     

    The Tailscale plugin runs the Unraid-provided "reload services" script (/usr/local/emhttp/webGui/scripts/reload_services) when Tailscale connects. This is required so that the Unraid services will start listening on the Tailscale IP.

     

    Unfortunately, a service reload is also what triggers the NFS bug.

     

    As an interim solution, you could change the "Unraid services listen on Tailscale IP" to "No" until the NFS bug is fixed. That will prevent the plugin from doing the service reload and triggering the bug. (It will also make Unraid services unavailable on the Tailscale IP -- you'll be able to get to containers, but the WebGUI/SMB/NFS/SSH won't be accessible.)

     

    Was this fixed in .12/.13?

     

    Thanks for a good explanation! Unraid support said that the same issue occurred if the array was not set to autostart; I guess starting the array would also trigger a run of this "reload services" script.

    This also tells me that this will be an issue not only with the Tailscale plugin, but in fact with any plugin that triggers the reload services script.

     

    For now, I have removed the Tailscale plugin until the issue is fixed.

     

    The same error continues in .12/.13; it's not fixed there.

    warpspeed

    Posted

    On 8/26/2024 at 2:30 PM, ziggy99 said:

     

    The same error continues in .12/.13; it's not fixed there.

    @dlandon, related to the other thread I just tagged you in: I'm curious whether any fixes for this are on the radar for the next release?

    ziggy99

    Posted

    @dlandon Any news on a 6.12.14 release soon? This NFS bug is preventing us from using Tailscale. I see now that Unraid has closer cooperation with Tailscale, so this should really be fixed soon.

    Karyudo

    Posted

    I think I'm a victim of this broken aspect of 6.12.11 onwards. Plex can't connect from an Ubuntu 24.04 LTS box to files shared from an Unraid 6.12.13 server via autofs. Guess I'll have to downgrade to 6.12.10... but it'd be nice if there were an ETA on the 6.12.14 update that will fix this!

    dlandon

    Posted

    On 10/28/2024 at 3:43 AM, ziggy99 said:

    @dlandon Any news on a 6.12.14 release soon? This NFS bug is preventing us from using Tailscale. I see now that Unraid has closer cooperation with Tailscale, so this should really be fixed soon.

    We have worked on several things related to the NFS issues. We are still working on getting Unraid 7.0 released and will then get back to the 6.12.14 release.

     

    The issue NFS is having is that our 'rc.nfsd stop' script no longer works, because the worker tasks will not die when a kill command is used. To be clear, this came from a Linux kernel change in one of the updates in the 6.x series.

     

    I know several users are changing the ports NFS is to use and then doing either an 'rc.nfsd restart', or an 'rc.nfsd stop' and 'rc.nfsd start' sequence, so the port changes will take effect. Once Unraid has booted, don't do anything that would execute an 'rc.nfsd stop' command. Instead, use the 'rc.nfsd update' or 'rc.nfsd reload' command.
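    In other words, something like this (a sketch; rc.nfsd lives in /etc/rc.d like the rc.nginx script mentioned above, and 'update'/'reload' are the subcommands named in the post):

    # Avoid after boot: the worker tasks no longer die on kill,
    # so a stop/start sequence leaves nfsd broken
    /etc/rc.d/rc.nfsd stop
    /etc/rc.d/rc.nfsd start

    # Use instead: applies config/port changes without stopping nfsd
    /etc/rc.d/rc.nfsd reload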

    • Thanks 1
    ziggy99

    Posted

    @dlandon I have now upgraded to 6.12.14 and NFS seems to work again. Can you confirm that .14 has an NFS fix?

    dlandon

    Posted

    Changes were made to address the NFS situation.





