unRAID Server Release 6.0-beta14b-x86_64 Available


limetech


Btrfs is pretty stable.

 

^^^ THIS IS TRUTH ^^^

 

Why isn't it working, then?

 

It is working.

 

This is all good, but can we at least agree that it has caused a lot of issues with unRAID v6, given the inclusion of the multi-drive pool feature? I'd bet that isn't one of the features being implemented in enterprise systems at the moment.

The multi-drive cache pool feature has nothing to do with the btrfs bug, so I don't understand your point.

 

Cache drives are useless at the moment; they cause massive slowdowns in drive speed, since unRAID doesn't understand that it can just move files on the cache drive instead of copying them.

 

What happens when a file is on the cache drive and a program moves it to a cached share? The file should just move quickly from one folder to another on the same drive. Instead, unRAID reads and writes the file on the same hard drive, causing the drive to use all of its speed. When this happens, any app running on the drive, like Plex, ends up buffering.

 

If unRAID can fix it so the files just get moved, cache drives would be useful, but right now I can't use them.

 

Let's not label a feature useless just because your use case has an issue.  Cache drives improve write performance to user shares during real-time write operations, then migrate data to array devices on a schedule.  Moving data from one area of the cache to another is not the typical function of a cache.

 

Dockers run on the cache drive, no? And when an app needs to move a file so that it will be migrated to the array, we get slowdowns and unnecessary wear on the drive.

 

unRAID should be able to know the file is already on the cache drive and move it instead of copying it over to another folder.

Why does the file need to be moved to begin with?  Why aren't you setting Docker volumes to write data to cache-enabled user shares directly?  You should not be saving files inside the containers themselves.

 

What I mean is: if a Docker app like nzbget downloads something, it needs to put the file on the array, but I'd rather keep it on the cache until the mover moves it. So when nzbget puts the file on a cached share like /mnt/user/cachedshare, unRAID starts reading and writing the file back to the cache drive into the cachedshare folder, when it could instead just move the file into that folder. When this happens it reads at 50 MB/s and writes at 50 MB/s, killing the performance of anything else that is using the cache drive.

Link to comment
  • Replies 476

Isn't that because ultimately you're telling nzbget to move files from one container path to another container path (two separate mount points)?  The base OS in the container has no clue that they're on the same drive because of the two different mount points.

 

It's really no different than using Windows to move a file from a network share backed by local drive D: to another folder on drive D: without referencing the network share name.  It will also take forever.



 

If I understand, the issue is basically moving data from a cache location to a user share that is cache-enabled? Couldn't you just fix this by pointing the move to /mnt/cache/share, or would that cause problems?



If the issue is what I believe it is (two separate container paths, e.g. /mnt/cache/downloads and /mnt/cache/sharename), you would fix it by using a single container path (/mnt/cache) mapped to something like /cachedrive, and then point nzbget at /cachedrive/downloads and /cachedrive/sharename.  That way the base OS will realize they are all on the same drive and the move will be instantaneous.

 

Mind you, I've never tried this because, #1, the slowdown to me is negligible, and I personally don't like mapping complete drives into a Docker container; it removes one of the benefits of running Docker in the first place.
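Whether a move can be a pure rename comes down to whether the source and destination sit on the same mounted filesystem, which is exactly what a single container path guarantees. A minimal sketch of that check (the directory names are hypothetical), using Python's `os.stat`:

```python
import os
import tempfile

def same_filesystem(path_a: str, path_b: str) -> bool:
    """True when both paths are on the same device/mount,
    i.e. a rename() could move between them without copying data."""
    return os.stat(path_a).st_dev == os.stat(path_b).st_dev

# Demo: two folders inside one temp directory are always on the same filesystem.
with tempfile.TemporaryDirectory() as root:
    downloads = os.path.join(root, "downloads")
    share = os.path.join(root, "cachedshare")
    os.makedirs(downloads)
    os.makedirs(share)
    print(same_filesystem(downloads, share))  # True
```

When the two container volumes map to different mount points, this check comes back false inside the container, and the move degrades to a copy.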



I could if Sonarr would let me, but it only lets you choose the destination folder, so it will always move the file to the cached user share, as that's where the app looks for and moves the files.

 

For example, even if I tell nzbget to put the file inside /mnt/cache/share, the other app is looking for the files at /mnt/user/share, so it will try to move it there, and this causes the cache drive to read and write the file back to the same location.

 

Is there a way to make the cache folder also show what's in the user folder? Maybe this would fix my problem, or would it mess up the mover?



No, this is an issue with unRAID. You can test this in the console using mv, for example:

Use mv to move a file around on the cache drive: this will be instant.

Then use mv to move a file from the cache drive to a cached share like /mnt/user/cachedshare: this will cause a read and write.


I'm going to have to disagree with you on that one.

 

You're using something like:

 

mv /mnt/cache/downloads/somefile /mnt/user/cachedshare/somefile

 

You're referencing two different mount points in the mv command (cache and user).  It's not unRAID's fault; it's how all OSes out there work.
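For the curious, what `mv` does under the hood can be sketched in a few lines: attempt an atomic `rename()`, and when the kernel refuses with `EXDEV` ("cross-device link", meaning the two paths are on different mount points, like /mnt/cache vs. /mnt/user), fall back to a copy followed by a delete. This is a sketch of the general mechanism, not unRAID's or coreutils' actual code:

```python
import errno
import os
import shutil

def move(src: str, dst: str) -> str:
    """Move src to dst: an instant rename on the same mount,
    a full copy+delete when the paths cross a mount boundary."""
    try:
        os.rename(src, dst)          # atomic; no file data is copied
        return "renamed"
    except OSError as e:
        if e.errno != errno.EXDEV:   # only handle the cross-device case
            raise
        shutil.copy2(src, dst)       # full read of src + full write of dst
        os.remove(src)
        return "copied"
```

The "copied" branch is where the 50 MB/s read plus 50 MB/s write on the same disk comes from.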



The issue has nothing to do with cache.  When you traverse /mnt/user/cachedshare, as you said, it will cause a read/write.  I think we can clear this up pretty quickly: can you give us an actual example of one of your containers right now and the volume mappings you have defined for it in Docker?



Oh, I didn't know that.




Then let's stay on the current kernel and work toward a stable release.

Morten,

 

I already said that was an avenue we are pursuing at this point (patching the existing 3.19 kernel in lieu of an upgrade to 4.0).  Did you miss that part of my post before you replied initially?

 

Now, hold your horses. The existing kernel in your published beta is not 3.19, it is 3.18.5. Why not stay on that?

 

I looked over the btrfs wiki, and it seems to indicate the deadlock-on-mount issue only exists in 3.14.35+, 3.18.9+ and 3.19.1+. In other words, you introduced this bug into unRAID by wanting to 'update' the kernel from 3.18.5 in the first place.

https://btrfs.wiki.kernel.org/index.php/Gotchas

 

All the copy-on-write stuff is cool, but most people who own a Pro license have gazoodlez of gigabytes of storage, and the COW features are far from a must-have. Same with the RAID-1 cache pool: nice to have, but ranking way below all the nice features you have already brought us.

 

I wish you good luck with the further development. I hope you will think of both your current and potential future customers, who want to see the VM features and 64-bit OS released in a stable version.

 

 

..... like trying to fit a cylinder through a square hole.

 


 

just saying....

 

Right, we need a better analogy :-)

 

Like trying to fit an earthworm through a straw. You could do it by sucking on the straw, but it would be unpleasant for both you and the worm, and take a long time.



Is this possible?

For example, /mnt/cache/share would also show the files from /mnt/user/share? That way the app would see the files and move them within /mnt/cache without reading/writing. Would this mess up the mover, as it would think all the files have to be moved?



Yes, but if you at some point download a duplicate file, you have to decide whether you want the mover or your app to run into that issue. I vote for facing it up front in your Docker app: move from /mnt/user/DLfolder to /mnt/user/StorageFolder. Your appdata will be visible in the user shares, so just move from there.



No.

 

You could pass through a single container path of /unRaidFiles mapped to a host path of /mnt/user and then reference everything off of that base path, and it *should* work.  I'm not a fan of that approach because then the Docker container has access to all of your files (even though it's probably a negligible risk), so I couldn't really tell you.

 

The thing to try is this:

 

mv /mnt/user/downloads/somefile /mnt/user/cachedshare/somefile

 

If it's not instantaneous (assuming the rules for putting a file onto the cache drive and not the array are met), then there is going to have to be some tweaking of unRAID's underlying md driver (which I realistically wouldn't expect until after the v6 final).



No.  /mnt/cache/share is only going to show the contents of the share folder that are currently on the cache drive.


For example

 

If downloads is a cache-only share, it is going to always be at /mnt/cache/downloads.

 

If Movies is a cache-enabled user share, it is at /mnt/user/Movies. Sometimes it may have files at /mnt/cache/Movies until they get moved. And it will have files on one or more array disks, which will also show up at /mnt/user0/Movies.

 

While it would be possible to mv /mnt/cache/downloads to /mnt/cache/Movies, I'm not sure what would happen if you already have some file paths in /mnt/user0/Movies that are the same as those in /mnt/cache/Movies. It's probably deterministic, based on the details of the mover script (it would either replace those files, not replace them and leave them on cache, or something else), but it might not be exactly what you want.
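The layout described above amounts to a union: /mnt/user/Movies is the merged view of whatever is on cache (/mnt/cache/Movies) plus whatever is on the array (visible at /mnt/user0/Movies). A toy sketch of that merge, with hypothetical directories standing in for the real mounts:

```python
import os

def union_listing(cache_dir: str, array_dir: str) -> set:
    """Names visible in the merged user-share view: everything on
    cache plus everything on the array, with duplicates collapsed
    into a single entry (which is where the duplicate-file hazard
    mentioned above comes from)."""
    def names(d):
        return set(os.listdir(d)) if os.path.isdir(d) else set()
    return names(cache_dir) | names(array_dir)
```

A file placed directly into the cache side that already exists on the array side shows up only once in the merged view, even though two physical copies now exist.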



Use /mnt/user/downloads and /mnt/user/Movies; no chance of running into duplicates that way.  But then we're back to the original question of the slowdown in copying files from one to the other in a container.  To avoid the slowdown you can only pass through to the container a single volume and host path, /mnt/user; otherwise you're referencing two separate mount points in the container and the issue will still be there (because then it's the base OS of the container doing the move).


Why not two cache-only shares, referenced as /mnt/cache/share1 and /mnt/cache/share2?

 

E.g. for downloads and movies.

 


 


I'm assuming that you're talking about creating two container volumes here.

 

At that point you will have two different mount points inside the Docker container.  Its base OS (Phusion or Debian or whatever) is going to see that and do a copy/delete to move the file, regardless of the fact that the files are within the same mount point on the unRAID side of things.

 

I don't believe that cache-only will make a difference here at all.  I could, however, be completely wrong, since I've never experimented that far into it.  The copy/delete process doesn't affect my use case at all, and I don't believe in passing the entire /user mount over to any of my containers.


not sure if this applies here but this is what I do with docker:

 

The sab docker downloads movies to /mnt/cache/downloads/complete/movies/ and runs a post-script, "movies.sh", that does mkdir /mnt/cache/movies (it doesn't matter if it fails; the idea is to ensure /mnt/cache/movies is always there when a movie is downloaded).

 

The couchpotato docker then "renames" /mnt/cache/downloads/complete/movies/Movie_X to /mnt/cache/movies/Movie_X.

 

Because I'm exporting /mnt/cache to couchpotato, it sees /mnt/cache/downloads/complete/movies/ and /mnt/cache/movies/ as being on the same device, and it's an instant rename/mv for the downloaded movie.  Couch bugs out if the destination path doesn't exist, which is why I use the post-script in sab, as /mnt/cache/movies/ is removed when the mover completes.

 

Unfortunately this doesn't work for sickbeard/sonarr, as everything is based on the path to your entire TV archive; they won't let you specify a secondary path for newly imported shows.
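A minimal sketch of the "movies.sh" post-script described above (only the script name and the /mnt/cache/movies path come from the post; the function wrapper and default-argument handling are assumptions for illustration). mkdir -p exits 0 whether or not the directory already exists, which is the "doesn't matter if it fails" behaviour the post relies on:

```shell
# movies.sh -- recreate the cache-side movies folder so CouchPotato's
# rename target always exists, even after mover has removed it.
ensure_movies_dir() {
    # -p: create parents as needed, succeed if the dir is already there
    mkdir -p "${1:-/mnt/cache/movies}"
}
```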

 


not sure if this applies here but this is what I do with docker:

 

sab docker downloads movies to /mnt/cache/downloads/complete/movies/ and runs a post-script "movies.sh" that mkdirs /mnt/cache/movies (it doesn't matter if it fails; the idea is to ensure /mnt/cache/movies is always there when a movie is downloaded)

 

couchpotato docker "renames" /mnt/cache/downloads/complete/movies/Movie X to /mnt/cache/movies/Movie X

 

Because I'm exporting /mnt/cache to couchpotato, it sees /mnt/cache/downloads/complete/movies/ and /mnt/cache/movies/ as being on the same device, and it's an instant rename/mv for the downloaded movie.

You may or may not run into it, but an issue with that is duplicate files.  Since you're referencing /mnt/cache/movies, the same file may already exist at /mnt/user/movies.  Wouldn't it still be instantaneous if you reference /mnt/user instead?

not sure if this applies here but this is what I do with docker:

 

sab docker downloads movies to /mnt/cache/downloads/complete/movies/ and runs a post-script "movies.sh" that mkdirs /mnt/cache/movies (it doesn't matter if it fails; the idea is to ensure /mnt/cache/movies is always there when a movie is downloaded)

 

couchpotato docker "renames" /mnt/cache/downloads/complete/movies/Movie X to /mnt/cache/movies/Movie X

 

Because I'm exporting /mnt/cache to couchpotato, it sees /mnt/cache/downloads/complete/movies/ and /mnt/cache/movies/ as being on the same device, and it's an instant rename/mv for the downloaded movie.

You may or may not run into it, but an issue with that is duplicate files.  Since you're referencing /mnt/cache/movies, the same file may already exist at /mnt/user/movies.  Wouldn't it still be instantaneous if you reference /mnt/user instead?

 

Possibly, if you export /mnt to the container... something about doing that irked me, but it might be worth looking into.

 

 

*edit* actually you could do something like:

 

/mnt/cache/downloads/ -> /unraid/downloads/

/mnt/user/movies/ -> /unraid/movies/

 

I just started using v6 last week, so I haven't gotten my head fully around how to best leverage docker vs separate VMs.
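The two mappings sketched above could be wired up like this (the container name and image are hypothetical, not from the post). Note the tradeoff: referencing the movies share through /mnt/user avoids duplicates, but it gives the container two separate mountpoints, so a move between them becomes a copy/delete rather than an instant rename:

```shell
# Two volumes, two mountpoints inside the container: downloads come
# straight from cache, the movies share goes through /mnt/user.
docker run -d --name couchpotato \
  -v /mnt/cache/downloads/:/unraid/downloads/ \
  -v /mnt/user/movies/:/unraid/movies/ \
  some/couchpotato-image
```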


not sure if this applies here but this is what I do with docker:

 

sab docker downloads movies to /mnt/cache/downloads/complete/movies/ and runs a post-script "movies.sh" that mkdirs /mnt/cache/movies (it doesn't matter if it fails; the idea is to ensure /mnt/cache/movies is always there when a movie is downloaded)

 

couchpotato docker "renames" /mnt/cache/downloads/complete/movies/Movie X to /mnt/cache/movies/Movie X

 

Because I'm exporting /mnt/cache to couchpotato, it sees /mnt/cache/downloads/complete/movies/ and /mnt/cache/movies/ as being on the same device, and it's an instant rename/mv for the downloaded movie.

You may or may not run into it, but an issue with that is duplicate files.  Since you're referencing /mnt/cache/movies, the same file may already exist at /mnt/user/movies.  Wouldn't it still be instantaneous if you reference /mnt/user instead?

 

Possibly, if you export /mnt to the container... something about doing that irked me, but it might be worth looking into.

I just don't know enough about how unRAID handles duplicate files in a share (stored both in the array and on the cache drive), beyond it complaining in the syslog.  Which file does it play? What will the mover do, etc.?  I guess the only time you may see it is if you have Couch set to look for better copies of the movie and don't append the quality onto the filename.
