nraygun Posted September 14, 2020
I only started having this "stale file handle" problem when I switched to using mounts in /etc/fstab. Prior to that I was just using shortcuts in Thunar, which I think go through gvfs. I noticed that the shares were faster when I specified them in /etc/fstab and browsed to them under /mnt. Other than specifying SMB1 in /etc/fstab, has anyone found a better solution that uses SMB3?
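For readers comparing notes, a minimal sketch of the kind of /etc/fstab entries under discussion; the server name, share, mount point, and credentials file are all placeholders:

```
# SMB1 workaround (avoids the stale handles in this thread, but uses the legacy protocol)
//tower/media  /mnt/media  cifs  vers=1.0,credentials=/root/.smbcreds,uid=1000,gid=1000  0  0

# SMB3 equivalent, which is the configuration that triggers the "stale file handle" errors
//tower/media  /mnt/media  cifs  vers=3.0,credentials=/root/.smbcreds,uid=1000,gid=1000  0  0
```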
Adam H Posted September 24, 2020
I just encountered this issue as well. I was running Unraid 6.8.2 for a long while, mounting NFS shares via fstab on Linux Mint 20, with no issues. I just updated to 6.8.3 and I'm experiencing the same behavior as everyone else: as soon as a file is written to a cache-enabled share, the share dies, and subsequent access attempts give me the "stale file handle" error. I tested a share with CIFS and it does the same thing. Ultimately I disabled the cache on each share and it works, like others have mentioned. It's far from an ideal solution though.
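For reference, the NFS side of such an fstab setup would look roughly like this; the host name and export path are assumptions (Unraid exports user shares under /mnt/user):

```
# NFS mount of an Unraid user share
tower:/mnt/user/media  /mnt/media  nfs  defaults,_netdev  0  0
```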
gm.cinalli Posted September 25, 2020
10 hours ago, Adam H said:
I just encountered this issue as well. [...]
So it seems to be a problem only in 6.8.3? Really strange. In my case, however, I don't have this problem with SMB.
nraygun Posted October 16, 2020
Any further updates on this issue? I'm about to post a related issue. I can't believe this is a problem for my fairly simple setup. Right now I'm using fstab and SMBv1.
trapexit Posted October 16, 2020
The complexity of one's setup isn't, generally speaking, relevant to this issue. It's due to out-of-band changes to the underlying storage, and NFS or Samba losing track of the files. I don't know enough about Unraid's union filesystem, but from what I gather some people's issues are likely directly related to it. Others might just be casual apps making out-of-band changes.
nraygun Posted October 17, 2020
Generally speaking, agreed, but in this specific case I believe a description of the simple use case versus a more involved one is relevant.
Fremulon Posted October 31, 2020
It also applies to me, on 6.8.3. I get stale file handles all the time and have to restart my Docker containers.
trurl Posted October 31, 2020
2 hours ago, Fremulon said:
restart my Docker containers
Ansuz Posted November 18, 2020
To anyone still having this problem: I managed to resolve it by setting Tunable (support Hard Links) in Settings -> Global Share Settings to No.
autumnwalker (Author) Posted November 18, 2020
7 minutes ago, Ansuz said:
To anyone still having this problem: I managed to resolve it by setting Tunable (support Hard Links) in Settings -> Global Share Settings to No.
My server is currently offline so I will have to validate later, but I'm pretty sure I have this off on my box as well. I'll double-check and confirm.
klogg Posted November 28, 2020
On 11/18/2020 at 9:52 AM, Ansuz said:
To anyone still having this problem: I managed to resolve it by setting Tunable (support Hard Links) in Settings -> Global Share Settings to No.
This solved it for me, no need to regress to SMB v1.0. Thank you @Ansuz!!
autumnwalker (Author) Posted December 1, 2020
On 11/18/2020 at 11:00 AM, autumnwalker said:
My server is currently offline so I will have to validate later [...]
Turns out this feature is enabled! I could have sworn I disabled it. Once the parity check is finished I'll try disabling it and see if it solves the issue for me. I'm still a ways away from getting my homelab back up, so I won't be able to fully test until later.
comboy Posted December 12, 2020
Same problem here. Setting Tunable (support Hard Links) in Settings -> Global Share Settings to No is not a solution for me, as it breaks the Back In Time backups from my desktop machine (an rsync-based backup with hardlink versioning): Back In Time can no longer make hard links via SSH. This occurs for both NFS and CIFS shares.
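For context, hardlink-versioned backups of the sort Back In Time performs come down to rsync's --link-dest; a rough sketch, with paths and host as placeholders:

```
# Each snapshot hardlinks unchanged files against the previous snapshot on the
# destination. If the server refuses link() because hard link support is
# disabled on the share, versioned backups like this degrade or fail.
rsync -a \
  --link-dest=../2020-12-11 \
  /home/user/ \
  tower:/mnt/user/backups/2020-12-12/
```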
RinkyDinkBear Posted January 2, 2021
I wish we could get some guidance on this issue. Using SMB v1.0 or turning off the cache hurts in many ways. Similarly, turning off hard links is not practical for many Unraid users' setups. Add me to the list of those affected.
trapexit Posted January 2, 2021
There simply isn't a practical way to manage the situation, which might be why there isn't much of a response (though there perhaps should be). Unraid is in the same boat as any other filesystem: if you change things out of band from SMB or NFS, they get unhappy. Union filesystems make this more likely, but it is a difference of degree, not kind. The statelessness of older protocols was given up in favor of stateful ones to increase performance. Could SMB and NFS have been designed to hide some of the issues from out-of-band changes when detected? Perhaps. But that's not the world we live in, and attempting to present a consistent world view to SMB / NFS when the union filesystem itself doesn't know is very non-trivial. mergerfs uses some optional tricks to help with the situation, but they aren't perfect. NFS and the kernel can notice certain situations where an inode should have changed and didn't, or shouldn't have and did.
Ntouchable Posted January 15, 2021
I am also affected by this issue. I have several shares which I mount via NFS in fstab on Ubuntu 20.04. When I access my Windows VM and mount a share using SMB, it creates a stale file handle on Ubuntu. What are the downsides of turning off hard links?
autumnwalker (Author) Posted January 15, 2021
On 12/1/2020 at 9:04 AM, autumnwalker said:
Turns out this feature is enabled! [...]
I flipped hard links off a couple of weeks ago and it seems to have resolved my issues, though I note this is not a workable solution for others. At least we are getting somewhere with the root cause.
SuberSeb Posted April 11, 2021
Same issue for me. Is there any solution to this? SMBv1, turning off hard links, or turning off the cache is not an option for me.
noja Posted May 14, 2021
On 4/11/2021 at 1:52 AM, SuberSeb said:
Is there any solution to this? SMBv1, turning off hard links, or turning off the cache is not an option for me.
Don't think so. I think I understand what trapexit is arguing, but there has to be a better solution. I've been using autofs on an Ubuntu setup (sketched below) and it still ends up with stale file handles all the time. I should note that I have hard links off and no cache for the share. The only other option I can see is the Tunable (fuse_remember) setting under Settings -> NFS, but the warning about out-of-memory errors has me a little skittish about setting that to -1.
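For anyone reproducing the autofs approach, a hedged sketch of such a setup; the map file name, mount root, and server are assumptions:

```
# /etc/auto.master.d/unraid.autofs
/mnt/unraid  /etc/auto.unraid  --timeout=60

# /etc/auto.unraid -- one line per share
media  -fstype=nfs,rw  tower:/mnt/user/media
```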
trapexit Posted May 14, 2021
I'm not really familiar with Unraid's filesystem, so I can't speak to what this "hardlink off" thing is or does. There is no such feature in mergerfs; unless there is some faking of hardlinks, I'm not sure I see how it'd matter here.
FUSE's "remember" or "noforget" is about the internal node value: the 2x64-bit node + generation value used between the kernel and the FUSE server to keep track of entries in the filesystem. Normally the kernel will tell the server to forget an entry/node when it no longer references it. It might ask about it again in the future, in which case you use new node + generation values. However, as I understand it, NFS may store that info and ask for it even if the kernel told the FUSE server to forget about it. So those options cause libfuse to keep the values around for a certain amount of time, or forever, *just in case* someone asks about them. If NFS asks and that entry is no longer known... stale file handle error. Theoretically it's possible to store those values to disk so as to limit memory consumption, but that's not a feature of the library, and there would be other tradeoffs there.
If a user-facing inode value changes out of band, that too can cause the stale file error, since from the perspective of NFS shit just changed out of band and it can't trust the situation. This happens in mergerfs when someone has a pool and they move a file from, say, a local disk to an rclone mount. From your perspective it's the same file, but the change was out of band. Neither mergerfs nor NFS knows it's safe to treat it the same; the inode changing means it's a different file, even if other values like size and timestamps are the same. mergerfs just doesn't care. It reports what it sees.
One idea is to optionally cache inode values: whatever value is seen the first time, always use going forward. There is an increased memory cost, of course, to keep track of that, as well as a slight compute cost, but it would probably help in these out-of-band move situations. Adding it to mergerfs of course wouldn't help Unraid, but it could be something they may look to add (assuming that's the/a cause here).
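To make those "optional tricks" concrete: in mergerfs they are exposed as mount options. A hedged fstab-style sketch; the branch paths are placeholders, and the option names should be checked against the mergerfs documentation for your version:

```
# noforget keeps FUSE node entries alive so NFS lookups don't go stale;
# inodecalc=path-hash derives the inode from the file's path, so a file
# moved between branches out of band keeps the same user-facing inode.
/mnt/disk1:/mnt/disk2  /mnt/pool  fuse.mergerfs  cache.files=off,noforget,inodecalc=path-hash  0  0
```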
mangocrysis Posted June 9, 2021
Facing this issue as well. I've turned hard links off; even if that were an acceptable option, it doesn't seem to help.
dlandon Posted June 9, 2021
Mounting remote shares with UD (Unassigned Devices) using SMB does not cause the stale file handles; it has had a fix to prevent that issue. NFS stale file handles cannot be prevented unless hard links and/or the cache for the share are disabled. This is not an Unraid issue; it is the nature of NFS. There is a possibility that NFSv4 can provide a fix for the stale handles: it uses virtual file handles that might prevent the stale file handle problem. There isn't enough information available to know if it will fix the issue, and I can't do any testing until Unraid provides NFSv4 support. I recommend that you use UD and mount all your remote shares with SMB to prevent the stale file handle problem.
dlandon Posted August 25, 2021
Please see this post:
capp3 Posted August 27, 2021
On 8/25/2021 at 1:10 PM, dlandon said:
Please see this post:
I am unable to see your post; it tells me I don't have permission!
touz Posted December 1, 2021
I'm just posting this here: I didn't want to enable SMB1 or turn off hardlinking, and I was experiencing the stale file handle issue while mounting shares as SMB3. It turns out that adding the parameter noserverino to the mount command seems to solve the issue for me. I've been using it for a few days and haven't encountered the issue. I don't know if it could introduce other issues; I'm by no means a pro regarding this, but I hope this solves the issue for other people.
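noserverino is a standard mount.cifs option: the client generates its own inode numbers instead of using the server-supplied ones, which sidesteps server-side inode changes. A minimal fstab sketch, with server, share, and credentials file as placeholders:

```
# SMB3 mount with client-generated inode numbers
//tower/media  /mnt/media  cifs  vers=3.0,noserverino,credentials=/root/.smbcreds,uid=1000,gid=1000  0  0
```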