I think I found a bug in the mover


Element115


Scenario is this. I have two shares, "Downloads" and "Media". I also have two cache pools, "Download" and "Cache".


The cache pool for the share "Downloads" is "Download". The cache pool for the share "Media" is "Cache".

 

After downloading files to the "Downloads" share via the "Download" cache pool, I moved them via Krusader to the "Media" share.

 

Now when I invoke the mover, the files in the "Media" share that sit on the "Download" cache pool stay on the "Download" cache pool. Only after I change the cache pool of the "Media" share from "Cache" to "Download" does the mover take action.

5 minutes ago, trurl said:

Not really a mover bug. Linux (Krusader) didn't actually move the files to another disk, it just renamed them to a new path on the same disk. See #2 here:

 

I never expected Krusader to move the files to another disk. I used Krusader to go into the share UNRAID/user/downloads and moved the files to the share UNRAID/user/media. I expect the mover to do the moving between disks.

 

Unraid knows the files are on a cache pool. The mover just does not check every cache pool for files to move, only the one set in the share as the designated cache.
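For anyone following along: on Unraid each pool's physical contents are visible under /mnt/&lt;pool&gt;/&lt;share&gt;, while /mnt/user/&lt;share&gt; is the merged view, so you can check where files actually sit and move stranded ones by hand. A runnable sketch of that workaround, with the paths simulated under /tmp and the pool/share names taken from this thread:

```shell
# Simulated layout (real Unraid paths would be /mnt/<pool>/<share>;
# everything lives under /tmp here so the sketch runs anywhere).
root=/tmp/unraid-demo
mkdir -p "$root/Download/Media" "$root/Cache/Media"
touch "$root/Download/Media/movie.mkv"   # file stranded on the wrong pool

ls "$root/Download/Media"                # shows the stranded file
# Manual workaround: move it to the share's designated pool. On a real
# server this crosses filesystems, so it is a true copy + delete.
mv "$root/Download/Media/movie.mkv" "$root/Cache/Media/"
ls "$root/Cache/Media"                   # file now on the right pool
```

After the manual move, mover will pick the file up normally, since it now lives on the pool the share is configured to use.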

 

Respectfully, if it's not a bug, it's an oversight that should be fixed.

1 minute ago, trurl said:

Krusader renamed the files to another path on the same disk, so they stayed on the same pool they were on before the move.

 

Mover only moves from the designated pool. That is by design.

 

Krusader works as expected. This is not the issue!

 

The issue is what you describe as a design choice.

 

Why not let the mover move from all pools and not just the designated one? No harm in that, I (naively) assume?
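To make the two behaviors concrete, here is a sketch of the design difference being debated. `move_to_array` is a hypothetical stand-in for the real mover logic, and the pool names are the ones from this thread; this is not the actual mover script:

```shell
# Hypothetical helper standing in for the real mover logic.
move_to_array() { echo "would move files under $1 to the array"; }

share="Media"
designated="Cache"

# Current behavior: mover scans only the share's designated pool.
move_to_array "/mnt/$designated/$share"

# Proposed behavior: mover scans every pool for files of the share.
for pool in Download Cache; do
  move_to_array "/mnt/$pool/$share"
done
```

The proposed loop would have caught the files stranded on the "Download" pool; the current single-pool scan is why they were skipped.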

 

Please don't brush this off; take it under consideration. Thank you!


The behavior you are asking about is basically the "orphans" that some people use to keep some of a user share on a pool for performance reasons instead of moving it to the array. With multiple pools it is perhaps less justified than it was. The FAQ below was written before there were multiple pools, but the principle is the same.

 

8 minutes ago, Element115 said:

take it under consideration

Since it is by design, maybe make a Feature Request to have it changed. The developers are unlikely to notice it here among the hundreds of threads that get posted every week.

 

 

1 minute ago, trurl said:

The behavior you are asking about is basically the "orphans" that some people use to keep some of a user share on a pool for performance reasons instead of moving it to the array. With multiple pools it is perhaps less justified than it was. The FAQ below was written before there were multiple pools, but the principle is the same.

 

I respectfully disagree that I created orphans: I moved files between shares, and the mover worked as soon as I changed the pool.

 

1 minute ago, trurl said:

Since it is by design, maybe make a Feature Request to have it changed. The developers are unlikely to notice it here among the hundreds of threads that get posted every week.

 

I will, thank you! Do you know where I can do that?


@jonp

 

Not sure if you have been involved in any of the discussions about the 2 User Share "surprises" I summarize in the post I linked at the top of this thread. These are easily replicated. I'm pretty sure Tom is aware of #2, and I know he has been involved in discussions about #1.

 

There doesn't seem to be any solution to these without recoding Linux cp and mv (or maybe shfs?), and even that might not catch some other ways Linux might do things if it isn't using cp or mv.

 

Back in the day when Unraid was just a NAS and most users were only working with files over the network these weren't often encountered since the source and destination were not the same "mount" as far as SMB was concerned.

 

We have always dealt with these by just telling the users how to avoid them.

 

And I think #2 might be avoided with docker as long as the source and destination are separate container paths in the mappings, though I have not tested this.

 

In the particular example this user has (#2 scenario in my link), there is a Downloads share using pool Download, and a Media share using pool Cache.

 

The user does mv (through Krusader) from /mnt/user/Downloads to /mnt/user/Media. Since these are both the same mount, Linux renames the path on the disk instead of copying from source to destination and then deleting from source, with the result that the files stay on the Download pool even though the Media share is configured to use the Cache pool instead. Then, when mover runs, it doesn't move these files from the Download pool because the Media share doesn't use the Download pool.
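This rename behavior is easy to demonstrate: within a single filesystem (and /mnt/user is one FUSE mount), mv is a rename, so the data blocks never move. A runnable sketch simulated under /tmp; any single Linux filesystem behaves the same way:

```shell
# Simulate two "shares" on the same filesystem under /tmp.
mkdir -p /tmp/mv-demo/Downloads /tmp/mv-demo/Media
echo payload > /tmp/mv-demo/Downloads/file

before=$(stat -c %i /tmp/mv-demo/Downloads/file)   # inode before the move
mv /tmp/mv-demo/Downloads/file /tmp/mv-demo/Media/file
after=$(stat -c %i /tmp/mv-demo/Media/file)        # inode after the move

# Same inode number: the file was renamed in place, not copied anywhere.
[ "$before" = "$after" ] && echo "same inode: rename, not copy"
```

Only when source and destination are on different filesystems does mv fall back to copy-then-delete, which is why the files never left the Download pool.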

 

The user proposes to redesign mover so it always moves from all pools, but I think some users would find that surprising behavior as well, and some may rely on the existing behavior to ensure mover ignores certain files they have intentionally put on other pools (the orphans mentioned in that FAQ I linked above).

16 hours ago, trurl said:

The user proposes to redesign mover so it always moves from all pools, but I think some users would find that surprising behavior as well, and some may rely on the existing behavior to ensure mover ignores certain files they have intentionally put on other pools (the orphans mentioned in that FAQ I linked above).

 

@jonp

 

I do not concur. I moved the files to a share that is set to "Use cache pool (for new files/directories): YES". This leads me to expect two things.

 

1. Some files may be unprotected, because they are put on a cache first.

2. Once the mover is invoked all files are moved to the array.

 

Files not being moved to the array is the unexpected behavior for that share.

 

I think an additional option might be useful in the share settings: "Have mover check all cache disks/pools and move files onto the array: YES/NO".

 

At the very least, clear up the help text: "Mover only moves from the specified cache disk/pool; others are ignored" or something.

On 9/16/2021 at 10:32 PM, Element115 said:

At the very least, clear up the help text: "Mover only moves from the specified cache disk/pool; others are ignored" or something.

 

Yes, we will update this text, which did not get modified properly when we introduced the multiple pool feature. The behavior you are seeing is by design, as @trurl has pointed out. When we release the "multiple unRAID array" feature, the situation will be a little different. The share storage settings will change to reflect the concept of a "primary" pool and a "secondary" pool for a share. You could, for example, have a btrfs primary pool of HDDs and a single-device XFS NVMe secondary pool.
