  • 6.12.11 two issues after upgrading from 6.12.10


    warpspeed
    • Minor

    After upgrading from 6.12.10 to 6.12.11, I found I had two issues:

     

    1. This error started spewing into my syslog after a while:

     

    Jul 20 16:55:11 unraid nginx: 2024/07/20 16:55:11 [alert] 12022#12022: worker process 671 exited on signal 6
    Jul 20 16:55:12 unraid nginx: 2024/07/20 16:55:12 [alert] 12022#12022: worker process 738 exited on signal 6
    Jul 20 16:55:13 unraid nginx: 2024/07/20 16:55:13 [alert] 12022#12022: worker process 790 exited on signal 6
    Jul 20 16:55:14 unraid nginx: 2024/07/20 16:55:14 [alert] 12022#12022: worker process 791 exited on signal 6
    Jul 20 16:55:15 unraid nginx: 2024/07/20 16:55:15 [alert] 12022#12022: worker process 1098 exited on signal 6
    Jul 20 16:55:16 unraid nginx: 2024/07/20 16:55:16 [alert] 12022#12022: worker process 1167 exited on signal 6
    Jul 20 16:55:18 unraid nginx: 2024/07/20 16:55:18 [alert] 12022#12022: worker process 1335 exited on signal 6
    Jul 20 16:55:18 unraid nginx: 2024/07/20 16:55:18 [alert] 12022#12022: worker process 1396 exited on signal 6
    Jul 20 16:55:19 unraid nginx: 2024/07/20 16:55:19 [alert] 12022#12022: worker process 1440 exited on signal 6
    Jul 20 16:55:20 unraid nginx: 2024/07/20 16:55:20 [alert] 12022#12022: worker process 1478 exited on signal 6
    Jul 20 16:55:20 unraid nginx: 2024/07/20 16:55:20 [alert] 12022#12022: worker process 1494 exited on signal 6
    Jul 20 16:55:21 unraid nginx: 2024/07/20 16:55:21 [alert] 12022#12022: worker process 1516 exited on signal 6
    Jul 20 16:55:22 unraid nginx: 2024/07/20 16:55:22 [alert] 12022#12022: worker process 1700 exited on signal 6
    Jul 20 16:55:22 unraid nginx: 2024/07/20 16:55:22 [alert] 12022#12022: worker process 1746 exited on signal 6

     

    2. NFS mounts from an Ubuntu 22.04 LTS machine using autofs wouldn't work.

     

    Regarding this, I tried rebooting the Ubuntu server and the Unraid server. I tried a manual mount, and the manual mount worked; it's just autofs that failed.

     

    So I downgraded back to 6.12.10 and now it's working again.

     

    For NFS, here are the export options I use:

     

    *(ro) 192.168.1.201(sec=sys,rw,anonuid=99,anongid=100,all_squash) 192.168.1.205(sec=sys,rw,anonuid=99,anongid=100,all_squash) 192.168.1.199(sec=sys,rw,anonuid=99,anongid=100,all_squash)

     

    And here's the mount config and options on the Ubuntu machine in /etc/auto.nfs:

    sharename		-users,rw,auto,noatime,async,hard,rsize=32768,wsize=32768		192.168.1.200:/mnt/user/sharename

     

    Here's what the output of mount looks like on Ubuntu with the share mounted:

     

    192.168.1.200:/mnt/user/sharename on /nfs/sharename type nfs4 (rw,nosuid,nodev,noexec,noatime,vers=4.2,rsize=32768,wsize=32768,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.1.205,local_lock=none,addr=192.168.1.200)
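
    For reference, a manual test mount with the same options as that autofs entry would look roughly like this (the exact command isn't shown above; /mnt/test is just an example mount point):

    sudo mkdir -p /mnt/test
    sudo mount -t nfs -o rw,noatime,async,hard,rsize=32768,wsize=32768 192.168.1.200:/mnt/user/sharename /mnt/test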

     





    Recommended Comments

    Same here.

    I think this is the same error we had a year ago or so, where the GUI also starts to fail. The Docker page does not load properly, and the CPU stats etc. on the dashboard also fail.

    The temporary fix was to run:

    /etc/rc.d/rc.nginx restart
    /etc/rc.d/rc.nginx reload

    but the underlying issue was then fixed.

     

     

    Link to comment
    Quote

    2. NFS mounts from an Ubuntu 22.04 LTS machine using autofs wouldn't work.

     

    Regarding this, I tried rebooting the Ubuntu server and the Unraid server. I tried a manual mount, and the manual mount worked; it's just autofs that failed.

     

    So I downgraded back to 6.12.10 and now it's working again.

    Are there any log entries on Ubuntu that relate to NFS mount failures?

     

    If you restart autofs on Ubuntu, will the mounts work?
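
    On Ubuntu 22.04 that check would typically be something like this (standard systemd/autofs commands, just as a sketch):

    sudo systemctl restart autofs                  # restart the automounter
    journalctl -u autofs --since "10 minutes ago"  # check for recent autofs errors
    ls /nfs/sharename                              # accessing the path should trigger the automount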

    Link to comment

    No obvious log entries on the Ubuntu machine. When I 'cd' to the NFS mount, it just came up with 'directory not found'.

    I did have NFS mounts from that Ubuntu machine to another machine and they continued to work just fine.

    I tried restarting the Ubuntu machine, restarting autofs, and restarting Unraid, and none of that worked.

    But manually mounting did work, even with exactly the same settings I had for autofs, which was a bit of a head-scratcher for me.

    As soon as I downgraded Unraid, the autofs mounts worked again.

    Link to comment

    I had a similar NFS problem. After upgrading to 6.12.11, my OSMC box running KODI was unable to mount via NFSv3 (which worked with 6.12.10).
    However, mounting manually from the shell on that machine worked, and that automatically used NFSv4.
    When I configured KODI to use NFSv4 for the NFS client, KODI worked again.

    I conclude that NFSv3 no longer seems to be supported by default. This will break a lot of other stuff in my environment.
    Is it possible to re-enable NFSv3?
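
    As a quick way to see which protocol versions the server still answers, a manual test mount can force the NFS version explicitly (standard nfs-utils mount options; server, share, and mount point below are placeholders):

    sudo mount -t nfs -o vers=3 <server>:/mnt/user/<share> /mnt/test    # NFSv3 - fails after the 6.12.11 upgrade
    sudo mount -t nfs -o vers=4.2 <server>:/mnt/user/<share> /mnt/test  # NFSv4 - still works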
     

    Edited by murkus
    Link to comment

    I'm also having NFS issues. I'm still not sure if it's related to the upgrade, but I'm going to downgrade to confirm whether they still occur.

    I am using autofs on Debian. The mounts were working correctly, then stopped working after the server had been up for a while. I restarted the NFS service and still couldn't reconnect; it kept timing out.

    I tried both autofs and a plain manual mount with no positive results. It felt like once the NFS service stopped working, I couldn't get it working again until I restarted the entire server/array.

     

    I'll report back if I still have this issue after downgrading, but I was definitely hitting this daily.

     

    Link to comment

    I found the problem:

    NFSv3 clients go through the portmapper to reach nfsd.

    Up to Unraid 6.12.10, nfsd was registered with the portmapper, so NFSv3 clients worked.

    Starting with 6.12.11, nfsd isn't registered with the portmapper (the registration gets lost when the array starts), so NFSv3 clients fail to mount shares from the Unraid nfsd.

    NFSv4 clients do not go through the portmapper to reach nfsd, which is why NFSv4 clients still work with 6.12.11.
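
    This is easy to check with rpcinfo (also mentioned further down in this thread); as a sketch, on a healthy server the nfs and mountd programs show up in the portmapper listing:

    rpcinfo -p localhost | grep -E 'nfs|mountd'
    # on 6.12.10: entries for program 100003 (nfs) and 100005 (mountd) are listed
    # on 6.12.11 after the array starts: those entries are gone, so NFSv3 mounts fail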

     

    However, the KODI NFSv4 client seems to be buggy. It works for video files but doesn't work for many music files.

    So with KODI you really want to use NFSv3, but you can't with Unraid 6.12.11.

     

    I have filed a corresponding bug report.

    I have downgraded to 6.12.10, and NFSv3 works fine because all RPC daemons remain registered with the portmapper after the array starts.

     

    Edited by murkus
    • Thanks 1
    Link to comment
    On 8/9/2024 at 10:14 PM, murkus said:

    I found the problem:

    NFSv3 clients go through the portmapper to reach nfsd.

    Up to Unraid 6.12.10, nfsd was registered with the portmapper, so NFSv3 clients worked.

    Starting with 6.12.11, nfsd isn't registered with the portmapper (the registration gets lost when the array starts), so NFSv3 clients fail to mount shares from the Unraid nfsd.

    NFSv4 clients do not go through the portmapper to reach nfsd, which is why NFSv4 clients still work with 6.12.11.

     

     

    Do you know if there is a workaround to get NFS3 working again? Or is downgrading the only option?

    Link to comment
    8 hours ago, ziggy99 said:

    Do you know if there is a workaround to get NFS3 working again? Or is downgrading the only option?

    A user has reported that removing Tailscale resolved their issue with NFSv3.

    Link to comment
    1 minute ago, dlandon said:

    A user has reported that removing Tailscale resolved their issue with NFSv3.

     

    Yes, that user was actually me 😀 Removing the Tailscale plugin might be what fixed it for me. It seems to work for now.

    Link to comment
    Just now, murkus said:

    I don't have Tailscale.

     

    My issue was that when I booted, everything worked fine, and I could see NFS when I ran rpcinfo -p.

    But after a few minutes it suddenly stopped working, and rpcinfo no longer listed those NFS ports.

     

    I saw entries like these in my log:

    Aug 24 18:31:57 stardustunraid network: reload service: rpc

    and

    Aug 24 18:32:01 stardustunraid network: reload service: nfsd

     

    I removed the Tailscale plugin and have not had this issue since. I am not sure whether that was the cause, but it might have triggered something. I think Tailscale might run in some Docker container, so that might be the trigger.

    I would check whether your log file has these 'reload service' entries and try to find out why it reloads.
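
    On Unraid that check could look something like this (a sketch based on the log lines quoted above; Unraid writes to /var/log/syslog):

    grep 'reload service' /var/log/syslog        # look for "reload service: rpc" / "reload service: nfsd" entries
    rpcinfo -p localhost | grep -E 'nfs|mountd'  # confirm whether nfsd is still registered with the portmapper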

     

    Link to comment
    On 8/24/2024 at 3:05 PM, ziggy99 said:

     

    Yes, that user was actually me 😀 Removing the Tailscale plugin might be what fixed it for me. It seems to work for now.

     

    The Tailscale plugin runs the Unraid-provided "reload services" script (/usr/local/emhttp/webGui/scripts/reload_services) when Tailscale connects. This is required so that the Unraid services will start listening on the Tailscale IP.

     

    Unfortunately, a service reload is also what triggers the NFS bug.

     

    As an interim solution, you could change the "Unraid services listen on Tailscale IP" to "No" until the NFS bug is fixed. That will prevent the plugin from doing the service reload and triggering the bug. (It will also make Unraid services unavailable on the Tailscale IP -- you'll be able to get to containers, but the WebGUI/SMB/NFS/SSH won't be accessible.)
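
    For anyone who wants to confirm that a reload is what drops the registration, something like this should reproduce it (a sketch; the script path is the one mentioned above, rpcinfo is standard):

    rpcinfo -p localhost | grep -E 'nfs|mountd'       # before: nfs and mountd are registered
    /usr/local/emhttp/webGui/scripts/reload_services  # the same reload the plugin triggers
    rpcinfo -p localhost | grep -E 'nfs|mountd'       # after: on affected versions the nfs entries disappear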

     

    Was this fixed in .12/.13?

    Edited by EDACerton
    • Thanks 1
    Link to comment
    1 hour ago, EDACerton said:

     

    The Tailscale plugin runs the Unraid-provided "reload services" script (/usr/local/emhttp/webGui/scripts/reload_services) when Tailscale connects. This is required so that the Unraid services will start listening on the Tailscale IP.

     

    Unfortunately, a service reload is also what triggers the NFS bug.

     

    As an interim solution, you could change the "Unraid services listen on Tailscale IP" to "No" until the NFS bug is fixed. That will prevent the plugin from doing the service reload and triggering the bug. (It will also make Unraid services unavailable on the Tailscale IP -- you'll be able to get to containers, but the WebGUI/SMB/NFS/SSH won't be accessible.)

     

    Was this fixed in .12/.13?

     

    Thanks for a good explanation! Unraid support said that the same issue occurred if the array was not set to autostart - I guess starting the array would also trigger a run of this "reload services" script.

    This also tells me that this will be an issue not only with the Tailscale plugin, but in fact with any plugin that triggers the reload services script.

     

    For now, I have removed the Tailscale plugin until the issue is fixed.

     

    The same error continues in .12/.13 - not fixed there.

    Link to comment
    On 8/26/2024 at 2:30 PM, ziggy99 said:

     

    The same error continues in .12/.13 - not fixed there.

    @dlandon, related to the other thread I just tagged you in - curious whether any fixes for this are on the radar for the next release?

    Link to comment




