Ansuz Posted April 1, 2020: Having the same issue here. Using the vers=1.0 flag in fstab "resolves" the issue, but this isn't a long-term fix due to security issues with SMBv1.
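For anyone trying the vers=1.0 workaround mentioned above, an fstab entry might look roughly like the following. The server name, share name, mount point, credentials path, and uid/gid are all placeholders to adapt to your own setup:

```
//tower/media  /mnt/tower/media  cifs  credentials=/home/user/.smbcredentials,vers=1.0,uid=1000,gid=1000  0  0
```

Note that this forces the SMBv1 protocol, which carries the security issues discussed in this thread, so treat it as a diagnostic step rather than a permanent configuration.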
autumnwalker (Author) Posted April 1, 2020: Reflecting further on this - I don't think the issue is actually with CIFS. I have the same behavior using NFS if cache is enabled on the share. So perhaps vers=1.0 does something with CIFS to suppress the problem, but it's not strictly a CIFS issue.
Dave1337 Posted April 1, 2020: I have multiple shares mounted, and it seems this is only happening on the ones using cache. Going to turn off the cache drive for these and see what happens. Edit: Disabling the cache drive on these shares seems to have fixed the issue. Not ideal, but it works for me in this case. Edited April 1, 2020 by Dave1337
Chad Kunsman Posted April 9, 2020: Same issue happening to me. On 6.8.0, with SMB shares mounted on Ubuntu. Switched to vers=1.0 for now to see how it goes.
autumnwalker (Author) Posted April 23, 2020: So it has been close to one month now with cache disabled on the shares. I have not had a single instance of "stale file handle" across CIFS or NFS. Clearly there is a relation to the cache / mover.
danioj Posted April 25, 2020: I had an issue like this. On my backup unraid server, I have two important shares mounted from my main unraid server that I use to store my important documents and photos. The purpose of these shares is so that Duplicati can use them to run scheduled backups of those shares onto the backup server. Most of the time the setup works fine, but I have recently been experiencing the stale share "issue", which has caused the scheduled backup to fail. I have followed most of the suggestions listed in this thread but it still happens. Yesterday morning, when I checked whether Duplicati had run at 1am - you guessed it - backup failed, stale share.

I decided to band-aid the issue. I wrote the following - very inelegant - script, which checks if the shares are alive and mounted, and if not, mounts them. If a share is mounted, it does nothing. I use the User Scripts plugin (available in CA) to schedule this to run in the background on a custom 5-minute schedule (*/5 * * * *). It ran all day yesterday and kept the shares mounted, and yes, my backups were successful. Checking the unraid log, the script had to mount them 3 times throughout the day without any obvious reason why.

Like I said, it is very inelegant and could be improved greatly by feeding the monitored shares into an array and looping through them, but I didn't want to spend that much time on it as it was just for me and those two shares. For those with a little bash scripting knowledge, amending the script to meet your needs should be straightforward. Sharing in the hope that someone with a similar situation and setup can use it.

#!/bin/bash

# shares the script is monitoring to keep alive.
documents_share=/mnt/disks/UNRAID_documents/
photos_share=/mnt/disks/UNRAID_photos/

# test if the shares are alive by seeing if ls errors, but suppress that error.
documents_share_isalive=$(ls $documents_share 2> /dev/null)
photos_share_isalive=$(ls $photos_share 2> /dev/null)

# set the need-to-mount variable to 0 as it will be tested below.
needtomount=0

if [ -z "$documents_share_isalive" ]
then
    echo "$documents_share is either unmounted or stale."
    # unmount the documents share. the uad command to remount won't work yet if it's stale.
    # if it is already unmounted then it will just error. if it is stale then it will be unmounted.
    umount $documents_share 2> /dev/null
    # set a flag to mount later
    needtomount=1
else
    echo "$documents_share is mounted and alive. nothing to do here."
fi

if [ -z "$photos_share_isalive" ]
then
    echo "$photos_share is either unmounted or stale."
    # unmount the photos share. the uad command to remount won't work yet if it's stale.
    # if it is already unmounted then it will just error. if it is stale then it will be unmounted.
    umount $photos_share 2> /dev/null
    # set a flag to mount later
    needtomount=1
else
    echo "$photos_share is mounted and alive. nothing to do here."
fi

if [ $needtomount = "1" ]
then
    echo "one of the monitored shares was identified as being either unmounted or stale and was therefore unmounted."
    echo "lets try mounting them again ..."
    /usr/local/emhttp/plugins/unassigned.devices/scripts/rc.unassigned mount auto
    echo "done. all monitored shares are now mounted and alive."
else
    echo "there was nothing for me to do, all monitored shares are mounted and alive."
fi

Edited April 25, 2020 by danioj
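The array-and-loop version the author mentions but didn't write could be sketched as follows. This is only an illustration: the share paths are examples to replace with your own Unassigned Devices mount points, and the rc.unassigned call assumes that plugin is installed. It also checks ls's exit status rather than its output, so an empty but healthy directory is not misread as stale:

```shell
#!/bin/bash
# Keep-alive sketch: check each monitored share, unmount any stale ones,
# then ask Unassigned Devices to remount everything that is missing.

# Example paths - replace with your own mounts.
shares=(
  /mnt/disks/UNRAID_documents
  /mnt/disks/UNRAID_photos
)

needtomount=0
for share in "${shares[@]}"; do
  # Use ls's exit status, not its output, so an empty directory
  # does not look like a stale mount.
  if ls "$share" > /dev/null 2>&1; then
    echo "$share is mounted and alive. nothing to do here."
  else
    echo "$share is either unmounted or stale."
    # A stale mount must be unmounted before it can be remounted;
    # if it is already unmounted this just errors harmlessly.
    umount "$share" 2> /dev/null
    needtomount=1
  fi
done

if [ "$needtomount" -eq 1 ]; then
  echo "remounting monitored shares ..."
  /usr/local/emhttp/plugins/unassigned.devices/scripts/rc.unassigned mount auto
fi
```

Scheduled every 5 minutes via the User Scripts plugin, as in the original, this behaves the same way but adding a share is a one-line change to the array.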
Manipulate Posted May 2, 2020: Having the same problem on Pop!_OS 20.04 with shares automatically mounted via fstab. When the stale file handle issue happened, the only change I made was disabling the cache drive for the particular share I was having problems accessing, and it started working immediately without remounting the share, rebooting, or even closing GNOME Files. I'll try this share without the cache drive enabled for a bit and see whether it works consistently or not.
fysmd Posted May 18, 2020: I have seen the same on an Ubuntu 18 host AND on a 2nd unraid server using Unassigned Devices to mount. I spent ages thinking it was a problem with the Unassigned Devices plugin... I notice that the web UI on the server machine runs VERY slowly once this issue begins. Nothing obvious in any logs anywhere. Switching to vers=1.0...
BrianAz Posted May 22, 2020: I'm having this issue as well... I have two Ubuntu 16.04 servers, and both will throw an error after a while on my most-used folders. I have the mover enabled (FYI), but haven't tied any of this to it yet.

Quote: ls: cannot access '/mnt/Tower/TV': Stale file handle

This had not been a problem until recently. I guess I'll try to use 1.0 and see if it fixes it. Very frustrating.
JonathanM Posted May 22, 2020: 15 hours ago, BrianAz said: "I guess I'll try to use 1.0 and see if it fixes it." It will.
fysmd Posted May 23, 2020: So I've been stable since changing to force vers=1.0. Is this a bug? As there's nothing logged, how do we report it (usefully!)?
sonic6 Posted June 11, 2020: Got the same problem with my SMB and NFS shares to my Plex Ubuntu machine, every time the SABnzbd Docker was writing to that share. I found this: But for now I will try to deactivate the cache on these shares, because I don't understand what the Tunable Hardlinks setting is for. I am worried about breaking functionality on my unraid. Edited June 11, 2020 by sonic6
FlamongOle Posted June 11, 2020: 3 hours ago, sonic6 said: "Got the same problem with my SMB and NFS shares to my Plex Ubuntu machine... I am worried about breaking functionality on my unraid." I have run unraid with this setting set to "No" with cache enabled since January 3rd, and it works. But yes, unraid documentation is pretty appalling. Using the "Help" function at "Global Share Settings" will give you some hints about what happens. From the help function:

Quote: If set to Yes then support the link() operation. If set to No then hard links are not supported. Notes: Setting to Yes may cause problems for older media and dvd/bluray players accessing shares using NFS. No matter how this is set, the mover will still properly handle any detected hard links.

Soooo, I would say: just try. Doubt you'll lose any real functionality.
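For anyone, like sonic6, unsure what the Hardlinks tunable actually toggles: a hard link is a second directory entry created with the link() operation named in the help text above, pointing at the same underlying file (inode) as the original name. A quick local illustration, using a throwaway temp directory:

```shell
# Demonstrate what the link() operation does: two names, one inode.
dir=$(mktemp -d)
echo "data" > "$dir/original"

# ln without -s performs link(); "hardlink" is not a copy but a
# second name for the same file.
ln "$dir/original" "$dir/hardlink"

# The file's link count goes from 1 to 2 (GNU stat shown here).
links=$(stat -c %h "$dir/original")
echo "link count: $links"
```

Disabling the tunable makes that ln call fail on unraid shares; as the help text says, the mover handles any existing hard links either way.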
gberg Posted July 9, 2020: So... Is this something we simply have to live with? If we access unraid shares from a Linux system, do we have to disable cache so that the mover doesn't cause "stale file handle" errors? Or is there something that can be done on the Linux client side to stop this?
skois Posted July 16, 2020: I have the same problem as the OP, but I only have it on ONE share! I have like 10 mounted shares, but only 1 has this problem. This share is the one that I have mapped to HandBrake. It is a cached share, using SMB only. I unmounted the share and remounted it, then ran the mover. I can still access the share. It seems to disconnect mostly after HandBrake runs a scan on the watched folder inside the share, but not always.
trapexit Posted August 12, 2020: Stale file handle errors occur when NFS or Samba loses track of a file it was referencing. Discounting bugs, that means out-of-band changes. The cache feature of unraid (with the moving of that data) is exactly that. I'm no expert on the SMB/CIFS protocol, but I'm guessing v1.0 is more stateless and therefore not as prone to the problem. I created a workaround in mergerfs for this, but it does have downsides. When dealing with out-of-band changes, it's not that easy to manage this situation.
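Building on that explanation: a monitoring script that wants to tell a genuinely stale handle apart from an ordinary missing path can inspect stat's error text, since a plain `ls`/`ls -z` emptiness check cannot distinguish the two. This is a rough sketch; it matches the "Stale file handle" wording printed by GNU coreutils on Linux, and the example path is hypothetical:

```shell
# Succeed (exit 0) only when the kernel reports ESTALE for the path,
# i.e. the server no longer recognises the client's file handle.
is_stale() {
  # Send stat's stderr down the pipe and its stdout to /dev/null,
  # then look for the ESTALE error text.
  stat "$1" 2>&1 > /dev/null | grep -q "Stale file handle"
}

if is_stale "/mnt/Tower/TV"; then
  echo "stale handle detected - unmount and remount the share"
fi
```

A healthy path and a merely nonexistent path both return non-zero here; only an actual ESTALE from the filesystem triggers the remount branch. Matching on error text is fragile (it is locale-dependent), so treat this as a diagnostic aid rather than a robust solution.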
SuberSeb Posted September 1, 2020: I have this issue with unraid 6.8.3. I have 2 servers: J3455Unraid and R7Unraid. I have mounted a share from R7Unraid on J3455Unraid. Sometimes I get the "stale file handle" error, so I need to manually unmount and mount this share. How can I fix that? SMB1 is not an option. I also tried disabling hard links in Global Share Settings.
gm.cinalli Posted September 3, 2020: On 9/1/2020 at 2:13 PM, SuberSeb said: "Sometimes I got this error 'stale file handle' so I need to manually unmount and mount this share. How can I fix that?" Hi, do you have the cache enabled on this share?
SuberSeb Posted September 3, 2020: 6 hours ago, Gian Marco Cinalli said: "Hi, do you have the cache enabled on this share?" Cache is disabled, and I don't have a cache at all.
gm.cinalli Posted September 3, 2020: 34 minutes ago, SuberSeb said: "Cache disabled and I don't have cache at all." In my case that was the problem, so I don't know what to tell you!
Guido Posted September 3, 2020: 9 hours ago, Gian Marco Cinalli said: "Hi, do you have the cache enabled on this share?" It is on a new RAID set that will only be available as a "cache". It is set to cache-only. I need it to be like this because I didn't want it to be part of the main RAID setup.
gm.cinalli Posted September 3, 2020: 29 minutes ago, Guido said: "It is on a new raid set that will only be available as a 'cache'..." Have you ever tried the Unassigned Devices plugin? It does exactly that.
Guido Posted September 3, 2020: 2 hours ago, Gian Marco Cinalli said: "Have you ever tried the Unassigned Devices plugin? It does exactly that." I have, and it doesn't as far as I can see... I need a RAID set, and with the Unassigned Devices plugin I can only share a single drive. Or if it is possible to put a few drives into a RAID mode, I haven't found out how... a pointer in the right direction would be nice in that case.
gm.cinalli Posted September 4, 2020: 16 hours ago, Guido said: "I need a raid set, and with the Unassigned Devices plugin I can only share a single drive..." How did you go about setting up a RAID that is not part of the array?
Guido Posted September 4, 2020: 7 hours ago, Gian Marco Cinalli said: "How did you go about setting up a raid that is not part of the array?" This is done by creating a 2nd (or 3rd, or whatever) Pool Device - a new function in the beta versions. I therefore have a main RAID set plus another RAID set in the pool device (in my case a 4-disk btrfs RAID 1 set), while the main set is a 3-disk RAID 5 (xfs).