b0mb Posted January 25, 2018 (edited)
Hi! I'm running the latest unRAID 6.4 with the latest Emby Docker container. The filesystem is btrfs. With Emby I use the so-called Auto-Organize plugin to sort and organize new series and seasons. The plugin has an option to either copy or move the files being processed. I prefer move, because new episodes have always been processed within seconds: everything happened on my cache drive (a Samsung SSD).

A few weeks ago the files stopped being moved and started being copied instead, so organizing new files now takes ages and generates a lot of traffic on my cache drive. At first I thought this might be a bug in Emby or unRAID, but the problem is still present. Today I opened a thread in the Emby forum, where an admin said they did not change anything in the plugin and that it moves files fine on other systems. This is the code of the plugin; you might have a look.

So, can anybody help me with this problem? Maybe something has changed in unRAID, especially in how files on the cache drive are handled. By the way, when I move files within /mnt/user/ using Midnight Commander, they are moved and not copied. Thanks in advance for any help! b0mb
Edited January 26, 2018 by b0mb
b0mb Posted January 26, 2018 (Author)
OK, this might be an unRAID problem. I've just tried the FileBot container, and it acts exactly like Emby when moving files within the cache disk: it behaves as if it were moving the files from drive 1 to drive 5 instead of doing a fast move on the same disk. So it's effectively copying rather than moving. I would appreciate any kind of help.
Squid Posted January 27, 2018
If you're moving from one mount point to another (i.e. from one host path to a second host path), then what you're seeing is both expected and correct.

Say your mappings on the container are something like /downloads mapped to /mnt/user/downloads and /movies mapped to /mnt/user/movies. When you then move a file from /downloads to /movies, the system (actually every OS ever made) will always do a copy/delete operation, because the source and destination are not contained within the same mount point (regardless of the fact that they both exist within /mnt/user and could in fact both reside on the cache drive).

MC doesn't suffer from this, because you're moving from /mnt/user/downloads to /mnt/user/movies, which are both within the same mount point (/mnt/user), hence the operation is basically a rename.

If this really annoys you, the alternative is to pass a single mapping of /unRAID to /mnt/user and specify source and destination within FileBot using paths under /unRAID.
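The mechanism behind this can be sketched in a few lines of Python (a minimal illustration of the general OS behaviour, not anything unRAID- or FileBot-specific; the paths are made up for the demo):

```python
import errno, os, shutil, tempfile

def move(src: str, dst: str) -> None:
    """Move a file, using a cheap rename when possible."""
    try:
        # Fast path: within one mount point this is a metadata-only
        # rename and finishes instantly, even for huge files.
        os.rename(src, dst)
    except OSError as e:
        if e.errno != errno.EXDEV:
            raise
        # Slow path: src and dst are on different mount points, so the
        # kernel refused the rename (EXDEV) and every byte has to be
        # copied before the original can be deleted.
        shutil.copy2(src, dst)
        os.remove(src)

# Demo on two directories that share one filesystem -> fast path taken.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "downloads"))
os.makedirs(os.path.join(root, "movies"))
src = os.path.join(root, "downloads", "episode.mkv")
dst = os.path.join(root, "movies", "episode.mkv")
with open(src, "w") as f:
    f.write("data")
move(src, dst)
print(os.path.exists(dst), os.path.exists(src))  # True False
```

Two separate container mappings are two separate mount points inside the container, so a mover hits the EXDEV path even when both directories sit on the same SSD on the host; a single mapping keeps everything on the rename path.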
b0mb Posted January 27, 2018 (Author)
Thanks for explaining. What's confusing me is that this way of handling the files was working without a problem, and then sometime in December it suddenly stopped. Would this be the right way to solve my problem? Host = /mnt/user/Downloads, Container = /Downloads. It might be possible that I had set the mount points for the container before I set up my unRAID system from scratch in December.
Sent from my Redmi Note 3 using Tapatalk
Squid Posted January 27, 2018
7 hours ago, b0mb said: Would this be the right way to solve my problem? Host = /mnt/user/Downloads, Container = /Downloads
As long as all moves happen within /Downloads, then yes. If a move has to go from /Downloads to somewhere else, then no. In that case map /unRAID -> /mnt/user and set the container up to reference everything within /unRAID.
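As a sketch, the single-mapping setup could look like this (the image name and share paths are illustrative placeholders; adapt them to your own template and shares):

```shell
# One mapping that covers everything under /mnt/user, so any move the
# container performs stays inside a single mount point and becomes a
# cheap rename instead of a copy + delete:
docker run -d \
  --name filebot \
  -v /mnt/user:/unRAID \
  <filebot-image>

# Inside the container, point FileBot at paths under /unRAID, e.g.
#   source:      /unRAID/Downloads
#   destination: /unRAID/Movies
```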
b0mb Posted January 27, 2018 (Author)
I've tried several if not all ways of mounting my shares into the container, but none worked. The problems started in December 2017; before that I never had this problem. Crazy.
b0mb Posted January 29, 2018 (Author)
Meanwhile I've found out that this seems to be a problem related to the SMB handling of the official Docker container. Switching to the linuxserver.io container solved the problem.
Sent from my Redmi Note 3 using Tapatalk