miccos (Author) Posted December 15, 2016
They were not on the cache.
Squid Posted December 15, 2016
miccos wrote: "They were not on the cache"
Then it's 100% definitely an issue with your Sonarr setup.
miccos (Author) Posted December 15, 2016
It's Sickbeard, not Sonarr, and I have shown screenshots of my settings; they all point to the share. There is no cache option in Sickbeard, and yet when I turn 'use cache' ON on the share and run the mover, it moves the files to where they should be. The files, even though on the cache, are counted towards the share and show as files on the share as well. There are no settings in Sickbeard to 'use cache'; all this functionality is coming from unRAID. I do appreciate everyone's help, thanks.
remotevisitor Posted December 16, 2016
You have previously mentioned that you use SAB and Sickbeard... so where is SAB configured to download to? I am wondering if it is SAB that is configured to download to the cache drive, and Sickbeard is just performing a move operation from wherever SAB has downloaded the file, which leaves the file on the cache disk. [Note: I don't use, and never have used, either SAB or Sickbeard, so I could be completely misunderstanding how they are configured.]
miccos (Author) Posted December 16, 2016
SAB goes to a folder on the cache under its own folder structure, ...\Completed. Sickbeard monitors this folder and is supposed to move the show from ...\Completed to /mnt/user/TV series. It has been the same setup for 3-4 years; all this has only started happening within the last month or so.
remotevisitor Posted December 16, 2016
I suspect that this change in behaviour started to happen when support for linking files was added to user shares. This means that the move operation is now done by linking the file to the new name and then removing the link to the old name; the by-product is that the file remains on the same disk, in your case the cache drive. It is a very efficient operation, as it just involves updating a few directory entries; the file contents are never actually moved. Previously the link operation would have failed, and the move would instead have been done by physically copying the file, which resulted in your old behaviour of the file moving from the cache drive onto the data disk.
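A minimal way to see the behaviour described here, using a scratch directory rather than real shares (the folder names only mirror the ones in this thread):

```shell
# Create a throwaway tree standing in for the download and TV folders.
demo=$(mktemp -d)
mkdir -p "$demo/Completed" "$demo/TV series"
printf 'video data' > "$demo/Completed/episode.mkv"

# Record the file's inode, then "move" it. Within a single filesystem,
# mv is just a rename(): only directory entries are rewritten.
before=$(stat -c %i "$demo/Completed/episode.mkv")
mv "$demo/Completed/episode.mkv" "$demo/TV series/episode.mkv"
after=$(stat -c %i "$demo/TV series/episode.mkv")

# Same inode before and after: the data blocks never moved, which is why
# a rename on a fused user share can leave the file on the cache disk.
echo "$before $after"
```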
miccos (Author) Posted December 16, 2016
remotevisitor wrote: "support for linking files was added to user shares... the move operation is now done by linking the file to the new name... the file remains on the same disk... in your case the cache drive"
How do I check if this is the case? When I run the mover, it does delete the files from the cache.
Squid Posted December 16, 2016
remotevisitor wrote: "the move operation is now done by linking the file to the new name... the file remains on the same disk... in your case the cache drive"
Mover doesn't work like that (and if it did, it would be rather pointless). It physically moves the file. If anything, it's actually far more inefficient since hardlink support came into effect with 6.2.
nick5429 Posted December 16, 2016
The responses here are centered around "OP has something misconfigured", but I am hitting this too. I just noticed a similar problem this morning with my Crashplan share; it appears a bug was introduced somewhere here. The common element is Docker, but that doesn't necessarily mean it's the source. I use the Crashplan docker, and my Crashplan share is not, and has not recently been, configured to write to the cache drive. In the unRAID server UI, I have "included disks = disk4" and "use cache = no" for my Crashplan share to keep all of my backups contained on one disk, and that's it. The docker is passed a mountpoint of /mnt/user/; nowhere do I give it any method by which the docker or the app would even be capable of writing to the cache drive, and yet I have ~250GB of recently-written files in /mnt/cache/Crashplan. It's got to be the underlying unRAID mechanism that determines where to write files.

root@nickserver:/mnt/user# du -hs /mnt/cache/Crashplan/
278G    /mnt/cache/Crashplan/
root@nickserver:/mnt/user# du -hs /mnt/disk4/Crashplan/
713G    /mnt/disk4/Crashplan/
root@nickserver:/mnt/user# du -hs /mnt/user/Crashplan/
1.4T    /mnt/user/Crashplan/
root@nickserver:/mnt/user# du -hs /mnt/user0/Crashplan/
1.2T    /mnt/user0/Crashplan/

# ls -lh /mnt/cache/Crashplan/503826726370413061/cpbf0000000000013241371
total 2.4G
-rw-rw-rw- 1 nobody users   23 Oct 13 20:44 503826726370413061
-rw-rw-rw- 1 nobody users 2.4G Dec 11 15:37 cpbdf
-rw-rw-rw- 1 nobody users 1.8M Dec 12 11:51 cpbmf

There are still files being correctly written to /mnt/disk4/Crashplan though, so the failure is apparently intermittent. unRAID should not be writing files to /mnt/cache/Crashplan, ever, and the docker/app/me aren't doing it manually. Attached screenshots show the docker and share configuration.
nick5429 Posted December 16, 2016
Investigating further, I see the same issue on a share (/mnt/user/Nick) which is only ever accessed over SMB or the command line, and where I definitely never manually specified /mnt/cache/Nick. Share "Nick" is set "cache = no, excluded disks = disk1". There is plenty of space on the array and on the individual relevant array drives for both these shares.

root@nickserver:/mnt/user# df -h /mnt/user
Filesystem      Size  Used Avail Use% Mounted on
shfs             23T   19T  4.3T  82% /mnt/user
root@nickserver:/mnt/user# df -h /mnt/disk*
Filesystem      Size  Used Avail Use% Mounted on
/dev/md1        1.4T   22G  1.4T   2% /mnt/disk1
/dev/md10       2.8T  2.6T  175G  94% /mnt/disk10
/dev/md11       1.9T  1.5T  337G  82% /mnt/disk11
/dev/md12       1.9T  1.5T  338G  82% /mnt/disk12
/dev/md4        1.9T  1.5T  403G  79% /mnt/disk4
/dev/md5        1.9T  1.7T  174G  91% /mnt/disk5
/dev/md6        1.9T  1.7T  175G  91% /mnt/disk6
/dev/md7        2.8T  2.6T  161G  95% /mnt/disk7
/dev/md8        2.8T  2.6T  171G  94% /mnt/disk8
/dev/md9        2.8T  2.6T  175G  94% /mnt/disk9
root@nickserver:/mnt/user# df -h /mnt/user0
Filesystem      Size  Used Avail Use% Mounted on
shfs             22T   18T  3.4T  85% /mnt/user0
remotevisitor Posted December 16, 2016
I wasn't clear enough: my statement was about how the file is moved/renamed between SAB and Sickbeard, not about how mover copies the file from the cache disk to the data disk.
Squid Posted December 16, 2016
Actually, it doesn't even matter there either. Because /mnt/user/TV Shows and /mnt/user/Downloads are different mount points, the system will always do a copy/delete; no linking involved. But that might be the key here. The vast majority of users add multiple paths to a docker app instead of just passing in /mnt/user. So now you've got a single mount point passed through to a docker app that ultimately contains multiple mount points. I would surmise that the docker system is puking at that and just doing the rename instead of actually following the rules for following mounts. The solution is to pass separate mounts of /downloads and /tv to all the docker apps involved, make all the internal references point to them, and ditch the mapping of /mnt or /mnt/user. If this works, then it's probably also @nick5429's issue, as he is apparently also passing through /mnt/user, and then it's an actual issue with Docker, not unRAID per se.
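The suggested separate-mounts setup amounts to container mappings along these lines (the image name and host paths are illustrative only, not anyone's actual configuration):

```shell
# Instead of one broad mapping such as -v /mnt/user:/mnt/user, give the app
# one bind mount per share it needs:
docker run -d --name sickbeard \
  -v /mnt/cache/appdata/sickbeard:/config \
  -v "/mnt/user/Downloads:/downloads" \
  -v "/mnt/user/TV Shows:/tv" \
  some/sickbeard-image
```

Inside the app, the post-processing and TV directories would then be set to /downloads and /tv, so a cross-share move can never be satisfied by a plain rename inside one passed-through mount.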
nick5429 Posted December 16, 2016
Perhaps my posts should be split off into a new post/defect report (mods??) with a reference from this thread as an additional data point, but there's no way my report (or this one, presuming the same problem) is a "docker issue". Docker hadn't been given any reference to the cache drive; unRAID is the only thing that could have made the decision to write to /mnt/cache/<SHARE>. Also, I noted the same problem on a share that Docker has never touched.
gubbgnutten Posted December 16, 2016
Pretty sure last time this was discussed it was considered "by design" and not a defect, although I must admit that my memory is far from perfect. If a file on a user share is moved locally, it will not be moved to another disk, only moved within the same disk. With operations conducted directly on the server this covers all of /mnt/user, as opposed to over the network, where it only applies to moves within the same share. This is why the cache location from one share can "leak" to another locally. Copy and delete the files rather than move them.
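The workaround in that last sentence can be sketched in a scratch directory (the paths here stand in for real shares such as Downloads and TV):

```shell
demo=$(mktemp -d)
mkdir -p "$demo/Downloads" "$demo/TV"
printf 'video data' > "$demo/Downloads/episode.mkv"

# Copy, then delete the original. cp allocates a brand-new file, so on a
# real user share unRAID's include/exclude and cache settings decide where
# it lands; mv within one mount would instead degenerate into a rename.
cp "$demo/Downloads/episode.mkv" "$demo/TV/episode.mkv" \
  && rm "$demo/Downloads/episode.mkv"
ls "$demo/TV"
```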
lionelhutz Posted December 16, 2016 Share Posted December 16, 2016 i suspect that this change in behaviour started to happen when support for linking files was added to user shares .... This means that the move operation is now done by linking the file to the new names and then the link to the old name is removed; the by product is that the file remains on the same disk .... In your case the cache drive ... And is a very efficient operation as it just involves updating a few directory entries and the file contents are never actually moved. Mover doesn't work like that. (And if it did, it would be rather pointless) It physically moves the file. If anything its actually far more inefficient since hardlink support came into effect with 6.2 I wasn't clear enough ... my statement was about how the file is moved/renamed between SAB and Sickbeard, not how mover copies the file from the cache disk to the data disk. Actually doesn't even matter there either. Because /mnt/user/TV Shows and /mnt/user/Downloads are different mount points, the system will always do a copy / delete. No linking involved. But, that might be the key here. Vast majority of users add multiple paths to a docker app instead of just passing in /mnt/user So now you've got a single mount point passed through to a docker app that ultimately contains multiple mount points. I would surmise that the docker system is puking at that and just doing the rename instead of actually following the rules for following mounts. Solution is to pass separate mounts to all the docker apps involved of /downloads and /tv and make all the internal references point to them and ditch the mapping of /mnt or /mnt/user If this works, then its probably also @Nick5429's issue as he also is apparently passing through /mnt/user and then its an actual issue with docker, not unRaid per se. I have tried it with my TV_Shows share set to use cache and not use it and it works correctly both ways for me. 
I only pass through the lower level directory to my dockers. No /mnt or /mnt/user gets passed through. For example Sickbeard uses, /downloads -> /mnt/cache/appdata/Downloads/ /tv -> /mnt/user/TV_Shows/ /config -> /mnt/cache/appdata/Sickbeard/ Quote Link to comment
Squid Posted December 16, 2016
Toggle some setting in the share's settings. I have seen, off and on with unRAID upgrades, that for some reason it messes up reading the settings even though it displays them correctly.
miccos (Author) Posted December 16, 2016
Thanks Nick, so I am not the only one experiencing this issue then.
Squid: for me, Sickbeard and SABnzbd are both plugins. SABnzbd writes to:
/mnt/cache/.custom/sabnzdb/config/Downloads/complete
miccos (Author) Posted December 23, 2016
No other thoughts or theories?
miccos (Author) Posted December 23, 2016
Gave up on this, and checked this morning to see how much needed to be moved. Everything is working as normal again. Strange! Thanks for everyone's assistance; hopefully it doesn't happen again.
miccos (Author) Posted January 4, 2017
And it's happening again. I haven't changed any settings at all. Still open to suggestions if anyone has new ideas.