mv from a cache-only share to a share that uses no cache leaves the file on the cache, creating the share's folder on the cache



Hi guys, I've got a weird behaviour here and would like to know if this is a bug or if I've done something wrong.

 

I have a share called movies. The share is configured to live only on the array (no cache). The config looks like this:

[Screenshot: movies share settings (cache disabled)]

 

I have another share, called tmp, which lives only on the cache:

[Screenshot: tmp share settings (cache-only)]

 

 

I `ssh` into the NAS and move a file from tmp to movies:

mv /mnt/user/tmp/jdownloader/movie.mp4 /mnt/user/movies/.

 

The end result is that a movies folder is created on the cache and the file ends up at `/mnt/cache/movies/movie.mp4`.

The expected behaviour would have been for the file to be moved to the designated folder on the array, since that share explicitly does not use the cache, and as such the mover is never expected to run for it.
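
To double-check where the file actually ended up, you can look at the underlying mounts directly (a quick sketch; paths taken from this example):

# The real disks sit under /mnt/cache and /mnt/disk*; /mnt/user merges them.
ls -l /mnt/cache/movies/movie.mp4
ls -l /mnt/disk*/movies/movie.mp4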

Is this a bug or have I misunderstood something?
 


This is a by-product of the way the underlying Linux system implements move. It first tries a 'rename' if it thinks source and target are on the same mount point, and only if that fails does it fall back to a copy/delete. In this case both paths appear to Linux to be under /mnt/user, so it tries the rename, which succeeds, and the file is left on the cache. In such a case you either need to set the target share to Use Cache=Yes so that mover later moves it to the array, or do an explicit copy/delete yourself.
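
For reference, a minimal sketch of that explicit copy/delete, using the paths from the post above (untested; adapt to your own shares):

# Both user-share paths report the same FUSE mount, which is why mv
# attempts a rename instead of a copy:
df /mnt/user/tmp/jdownloader /mnt/user/movies

# Explicit copy then delete: the copy is a fresh write through the user
# share layer, so it honours the movies share's no-cache setting.
cp /mnt/user/tmp/jdownloader/movie.mp4 /mnt/user/movies/ && rm /mnt/user/tmp/jdownloader/movie.mp4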


I think most people take the easy way out and simply change the share to Use Cache=Yes, letting mover handle getting the file onto the array when it runs at a later point in time. A 'benefit' of the mv behaviour you describe is that, from a user perspective, it completes almost instantly, whereas a copy/delete takes much longer; the user never sees the time mover later takes to get the file onto the array, as that typically happens outside prime time.

 

You DO get the behaviour you want if the move is done by accessing the shares over the network - a move between two different network shares is performed by the client as a copy followed by a delete, so the new write honours the share settings. It is only moving files locally from within the server that exhibits this behaviour.

 


Changing that share to Use Cache=Yes would actually do me more harm than good.

 

I have a limited amount of cache (SSD drives) and they're primarily used with shares related to video-editing.

If I set the movies share to Use Cache=Yes, it would easily saturate the cache, causing issues for the shares that really need it.

 

Regarding the network-based approach: moving files from disk to disk within a machine is significantly faster than doing it over Samba through another machine. The same file that I can move on the server in a minute takes significantly longer over Finder with the share mounted as network storage (Samba).

 

Given there's no "native" file explorer in Unraid, another temporary solution would perhaps be to "monitor" the FS for changes and, when this behaviour is detected, invoke the mover for the particular files - see the sketch below (and god forbid your array runs out of space in the meantime 😂).
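
A minimal sketch of that watcher idea, assuming inotify-tools is installed and that the stock mover script lives at /usr/local/sbin/mover (both assumptions; note the mover normally only processes shares set to Use Cache=Yes, so this illustrates the idea rather than being a working fix):

#!/bin/bash
# Hypothetical watcher: react when files appear on the cache under a
# share that is not supposed to use the cache, then kick off the mover.
inotifywait -m -r -e create -e moved_to --format '%w%f' /mnt/cache/movies |
while read -r path; do
    echo "Detected $path on the cache; invoking mover"
    /usr/local/sbin/mover
done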

As this behaviour could cause issues and headaches for people unaware of it (I only noticed it in time because the "Fix Common Problems" plugin spotted it), in my opinion this should be addressed at the OS level so the user settings are applied 100% in real time.

