unRAID Server Release 6.0-beta14b-x86_64 Available


limetech


The problem is an old one and has been reported for quite some time.

 

On most systems the OS can tell when a file mv stays within a single physical device, regardless of other factors, and the user doesn't need to care. unRAID unfortunately cannot always do this.

 

It is easy to replicate: simply try to mv a file from a FUSE share to a disk share (say, within a cache drive, a very common occurrence). Even if the file never leaves the same physical drive, unRAID will copy, confirm, and delete rather than doing a simple rename. Obviously this is massively slower and will wear out an SSD sooner.

 

This is not intuitive at all, but at this time it is unavoidable.
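A quick way to see what is going on underneath (share and file names here are only examples, assuming a cache-enabled user share named Media): the FUSE mount at /mnt/user and the direct disk mount at /mnt/cache report different device IDs, so the rename(2) syscall fails with EXDEV and mv falls back to copy + delete.

    # Same underlying file, but the kernel reports two different device IDs:
    stat -c '%d  %n' /mnt/cache/Media/file.mkv /mnt/user/Media/file.mkv

    # Because the device IDs differ, mv cannot do an instant rename;
    # it copies the data and then deletes the source:
    mv /mnt/user/Media/file.mkv /mnt/cache/Media/moved.mkv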

 


We have some big plans for user shares in a future release (post-6.0), but it is premature to discuss them here.  One of those plans involves a change that should solve this issue once and for all.

 

For now, this issue isn't causing problems, just a bit of inefficiency. That isn't to say it shouldn't be fixed, but it does not make it a requirement for 6.0.


That is obviously fine; we just need to find a better way to get this information out there, as even in this thread of very active forum users we are going over this gotcha for the umpteenth time as if it were the first.

 

Moving files within a cache drive is not an edge case or an uncommon use case; far from it.


Not sure if this applies here, but this is what I do with docker:

 

The sab docker downloads movies to /mnt/cache/downloads/complete/movies/ and runs a post-script, "movies.sh", that runs mkdir /mnt/cache/movies (it doesn't matter if it fails; the idea is to ensure /mnt/cache/movies is always there when a movie is downloaded).
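For reference, a minimal sketch of what such a post-script might contain ("movies.sh" is the name used above; the exact contents are my assumption):

    #!/bin/bash
    # movies.sh -- SABnzbd post-processing script.
    # Recreate the destination folder in case the mover removed it,
    # so couchpotato's rename never hits a missing path.
    # mkdir -p succeeds even if the folder already exists.
    mkdir -p /mnt/cache/movies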

 

The couchpotato docker "renames" /mnt/cache/downloads/complete/movies/Movie_X to /mnt/cache/movies/Movie_X.

 

Because I'm exporting /mnt/cache to couchpotato, it sees /mnt/cache/downloads/complete/movies/ and /mnt/cache/movies/ as being on the same device, so it's an instant rename/mv for the downloaded movie.  Couch bugs out if the destination path doesn't exist, which is why I use the post-script in sab, as /mnt/cache/movies/ is removed when the mover completes.

 

Unfortunately this doesn't work for sickbeard/sonarr, as everything is based on the path to your entire TV archive; they won't let you specify a secondary path for newly imported shows.

 

That's my current setup. It works with Couchpotato because it has separate destination and move folder settings ("from" and "to"). Sonarr only has the location folder, so it will always do a move to /mnt/user/tv shows/; and since that's a cache-enabled share and the file is already on the cache drive, it slows down the drive for everything else.

 

So I thought it was an unRAID problem, but it looks like, given how the Linux OS works here, it will always do a copy instead of a move.


 

Sounds good. I didn't mean to offend you guys; I just thought it was something unRAID was doing, not the OS.

 

Thanks.


 

Agreed.  It probably belongs somewhere in the unRAID wiki about volume mappings with Docker.  The moving of files within the cache is primarily the result of containers, agreed?


 

No, this is a problem with the OS. I had this even on version 5; you can test it in a terminal with the mv command. It's good that we now know this is something they will be able to fix in a future unRAID version.
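For anyone who wants to try it, something like this shows the difference (paths are examples, assuming a cache-enabled share named Media and a large file already on the cache drive):

    # Instant: a rename within a single mount point
    time mv /mnt/cache/Media/big.iso /mnt/cache/Media/big2.iso

    # Slow: crossing the FUSE boundary forces a full copy + delete,
    # even though both paths live on the same physical drive
    time mv /mnt/cache/Media/big2.iso /mnt/user/Media/big.iso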


 

Docker would be the best bang for your buck to get the info out.

 

I would suggest adding something to the docker GUI that pops up a warning when a user mixes disk/cache and FUSE shares in volume mounts. Short of a complete fix, nothing covers every base, but this would help most people.
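The detection itself would be cheap. A rough sketch of the idea in shell ("mycontainer" is a placeholder; a real check would live inside the docker GUI code):

    # Warn when a container mixes FUSE (/mnt/user) and direct
    # (/mnt/cache, /mnt/diskN) host paths in its volume mappings.
    binds=$(docker inspect -f '{{range .HostConfig.Binds}}{{println .}}{{end}}' mycontainer)
    if echo "$binds" | grep -q '^/mnt/user' && echo "$binds" | grep -Eq '^/mnt/(cache|disk[0-9]+)'; then
        echo "Warning: mixed FUSE and disk shares; mv between them copies instead of renaming."
    fi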


 

To be fair, this is a FUSE problem, or at least a problem with unRAID's unique implementation of it. It certainly needs fixing, as typical users cannot and should not need to concern themselves with this level of detail. If they do, we're doing something wrong.


My sab for media saves has a root of /mnt/user/XBMC-Media/Downloads;

then beyond that it's TV-Shows and Movies.

My couch picks up /mnt/user/XBMC-Media/Downloads/Movies

and moves to my media folder /mnt/user/XBMC-Media/Movies/Movies,

and sickbeard picks up from /mnt/user/XBMC-Media/Downloads/Tv-Shows

and moves to /mnt/user/XBMC-Media/Tv-Shows.

Incomplete downloads for any sab type are on a cache-only share, and my media share uses the cache.

I've never had any issues whatsoever.


I rewrote the sab docker for my setup, though (that was before I realised you could just add in any volume mappings you like in the template of dockerman).

 

 

A little tip for download shares:

 

Put a hidden file with junk in it in the root folder where they look for downloads (i.e. put a "." as the first character of the file name) and the sickbeard, headphones, or couch movers won't delete the root.
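In other words, something like this (the file name is arbitrary; only the leading dot matters):

    # A hidden placeholder file keeps the movers from pruning the root folder
    touch /mnt/user/XBMC-Media/Downloads/.keep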

 

 



You probably just don't notice it; it has nothing to do with docker, and it was already explained why the read/write, check, and delete happens.


Seems like another reason to dislike sonarr to me, lol.


It's the best at what it does; there's nothing else like it out there.


Personally I've never liked it, going back to the nzbdrone days. It looks pretty, though.


Btrfs is pretty stable.

 

^^^ THIS IS TRUTH ^^^

 

Why isn't it working, then?

 

It is working.

 

Then let's stay on the current kernel and work toward a stable release.

Morten,

 

I already said that was an avenue we are pursuing at this point (patching the existing 3.19 kernel in lieu of an upgrade to 4.0).  Did you miss that part of my post before you replied initially?

 

Now, hold your horses. The existing kernel in your published beta is not 3.19, it is 3.18.5. Why not stay on that?

 

I looked over the BTRFS wiki, and it seems to indicate the deadlock mount issue only exists in 3.14.35+, 3.18.9+ and 3.19.1+. In other words, you introduced this bug into unRAID by wanting to 'update' the kernel from 3.18.5 in the first place.

https://btrfs.wiki.kernel.org/index.php/Gotchas

 

All the copy-on-write stuff is cool, but most people who own a Pro license have gazoodles of gigabytes of storage, and the COW features are far from being a must-have. Same with the cache raid1 pool: nice to have, but ranking way below all the nice features you have already brought us.

 

I wish you good luck with the further development. I hope you will think of both your current and future potential customers who want to see the VM features and 64-bit OS released in a stable version.

 

At the risk of getting back on topic, what is the reason we are moving away from the 3.18.5 kernel?

 

Jon, if you are ignoring me I do apologize. I am only asking again, in case this got lost in the chatter.


Not ignoring you; I didn't realize this was not understood.  There are loads of fixes to btrfs in the newer kernel (among other fixes as well). Other bugs should not be introduced with the new kernel build, as it is still a stable release build of the kernel. The entire Linux kernel is a behemoth, but we only use a portion of its capabilities. We have been running on 3.19 for some time on internal builds. Going to 4.0 fixed one major btrfs bug that is not yet fixed in the 3.19 kernel (the fix is slated for official inclusion in 3.19.5), but it also caused a lot of issues because there isn't a btrfs-progs release for 4.0 yet.  We are now wrapping up testing on a version of 3.19.4 to which we applied that patch manually, and all seems well.
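For anyone who wants to check their own system against the kernel ranges quoted from the btrfs wiki earlier in the thread, the commands are standard (output values are only examples):

    uname -r        # running kernel, e.g. 3.18.5
    btrfs version   # userspace btrfs-progs, e.g. Btrfs v3.18.2
    # Per the btrfs wiki, the mount deadlock affects 3.14.35+, 3.18.9+ and 3.19.1+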


This is the reason? BTRFS is prompting a kernel update in the 14th beta, and we are supposed to consider it stable/mature, or even "mostly stable"?

 

More relevantly, and out of genuine interest: what are these BTRFS bugs that needed fixing, is there a reason they are not talked about on the BTRFS wiki, and are there any defect reports relating to them?

(And will you guys please consider running a proper bug tracker?)

 

I did search the defect reports forum and did not find a lot of new issues from the last couple of months that beta 14 has been in the wild. I am genuinely interested, despite the seemingly annoying questions.

 


 

I've made my points clear about btrfs stability.  I have not lost any data, and for the most part, our implementation works as it should.  A lot of recent work has gone into refining how unRAID handles pool operations in our implementation.  There are lots of details here that I am not going to get into because, as much as I appreciate your interest, time spent explaining is time not spent developing features, creating guides, marketing, etc.

 

There aren't many folks reporting issues on btrfs because, for the most part, it's very stable, and when folks set up a cache pool, they leave it alone.  The problem is the edge cases: things like moving devices from the array to the cache or vice versa, adding devices to expand the pool, or removing devices to drop the pool down to a single-disk configuration.  Much of that is greatly improved thanks to Tom's work and will be seen in beta 15.  These are things that we test internally before pushing a release out, and I don't expect many of our beta testers to do this kind of testing, given the sheer amount of time and effort it takes.
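For context, the pool reshaping operations described above boil down to btrfs commands like these (device names are placeholders; unRAID drives this through its own GUI rather than expecting users to run them by hand):

    # Expand a single-device cache into a two-device raid1 pool
    btrfs device add /dev/sdX1 /mnt/cache
    btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/cache

    # Drop back down to a single-device pool
    btrfs balance start -dconvert=single -mconvert=dup /mnt/cache
    btrfs device delete /dev/sdX1 /mnt/cache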

 

This is the last time I will comment on the kernel upgrade.  The simple truth is that upgrading the kernel again isn't an issue, and to say that it is, without justification or proof that problems will occur as a result, is just rabble-rousing for no good reason.

 

As xamindar has illustrated, the btrfs mailing list can be a good source of information on btrfs-specific development and known issues.  Also, the particular bug we experienced in 3.19, which is patched in 3.19.5, is directly highlighted on the btrfs "gotchas" page:  https://btrfs.wiki.kernel.org/index.php/Gotchas
