mover - schedule by % free disk space



That's an interesting idea now that it's possible to have redundancy via a cache pool.  Previously we wanted to get data off the cache as soon as practical, because if the cache device fails you lose all the data on it.  But with a multi-device cache pool it's not so critical to get data off.  I like it.

Link to comment

Also solves the problem of the  cache drive getting full and therefore not allowing any writing to the share

As long as you have Min Free Space set to be more than the largest file you want to copy to the share, then unRAID already handles this.  Once the free space falls below the Min Free Space value, unRAID starts writing directly to the array data drives.
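For anyone wondering what that looks like in practice, here is a minimal sketch of the decision being described (illustrative only, not Unraid's actual code; the mount points and the 10 GiB threshold are placeholders):

# Minimal sketch, not Unraid's real allocation code: divert new writes to the
# array once the cache drops below the Min Free Space threshold.
import shutil

MIN_FREE_BYTES = 10 * 1024**3    # placeholder Min Free Space value (10 GiB)
CACHE_MOUNT = "/mnt/cache"       # assumed cache mount point
ARRAY_MOUNT = "/mnt/disk1"       # assumed array disk mount point

def pick_target_for_new_file() -> str:
    """Return the mount point a new file should be created on."""
    if shutil.disk_usage(CACHE_MOUNT).free < MIN_FREE_BYTES:
        # Cache is below the threshold: bypass it and write to the array.
        return ARRAY_MOUNT
    return CACHE_MOUNT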

Link to comment

There is at least one outstanding bug where this does not actually happen in at least some cases.

I believe that there is a bug in the current beta 7 GUI code to do with recognizing suffixes when trying to set the min free space value. Not sure if that is relevant if the value is already set (or you set it manually via a text editor).

Link to comment

I think the unit bug is fixed in b7. However, there is a bug where, if 0 is not set as the minimum, a non-infinite value is used.

 

Regardless of this, I often see the cache dir run out of space, being filled up with small files, i.e. another bug.

 

I only point it out as currently what is supposed to happen (and has happened in the past) doesn't always work.

 

This ticket is still a good idea, although to be fair it has been raised at least twice before. I say this just to point out how well this new process is working and that it shouldn't be lost again.

Link to comment

Regardless of this, I often see the cache dir run out of space, being filled up with small files, i.e. another bug.

 

After reading this post and having a moment of lucidity I think it's dawned on me how this can happen  ;D  If new files are being written to a cached share, and those files are beyond the 'split level' in the directory hierarchy, the code that prevents splitting kicks in and forces new files to stay on the device where the parent directory exists, in this case the cache.  The free space is not taken into consideration in this case, but with the cache disk it should be; that's the bug (sketched below).

 

Sound like this is your scenario?

 

And I was just about to release -beta8....
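To make the described behaviour concrete, here is a rough sketch of the allocation decision (illustrative pseudologic only, not the real user-share/shfs code; the parameters are invented for the example):

# Rough sketch of the split-level behaviour described above; not Unraid code.
def choose_device_for_new_file(path_depth, split_level, parent_device,
                               devices_with_space):
    if path_depth > split_level:
        # Split-level enforcement: keep the file with its parent directory,
        # even if that device (e.g. the cache) is nearly full.  Free space is
        # never checked on this branch - that is the bug being described.
        return parent_device
    # Within the split level the normal allocation method runs and can take
    # free space (and Min Free Space) into account.
    return devices_with_space[0]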

Link to comment

Sounds like that could be it. Get b8 out and we can look at this for b9. Nice one.

Link to comment

Looks as though the fix was easy once the cause was identified as it appears to have made beta 8.

Link to comment

Keep in mind the "min free space" setting (when working correctly) only stops you from writing to the cache, and then either (I can't remember which) fails the copy operation or defaults to writing directly to the array.

 

That is "OK" but it would be a much more elegant and seemless option (yes option, not forced choice) if the mover script could kick off after you've hit that min-free-space (or %) limit so the user doesn't have to deal with it.

 

I'd also say this should have been, and should be, the functionality all along, regardless of the cache's new redundancy ability. Consider that "moving stuff off the cache as quickly as possible" makes sense if you are of the mindset that the cache might take days to fill.  But what about after large writes (and/or if you're running with a smaller SSD as a cache) that can fill it well before the standard daily mover run?  Of course nothing stops us setting the mover to run more often, but that seems a bit kludgy after all :)

 

In fact, if you want to talk about getting the data off the cache as quickly as possible, then doing it automatically after a huge amount of data has been copied to it seems like a no-brainer.
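Until something like that is built in, the idea can be approximated with a small watchdog, sketched below. Assumptions, not stock behaviour: /mnt/cache as the pool mount, /usr/local/sbin/mover as the mover script location, and a 10% trigger; adjust to taste.

#!/usr/bin/env python3
# Sketch of a free-space watchdog: kick off the mover once the cache pool
# drops below a chosen free-space percentage.
import shutil
import subprocess
import time

CACHE_MOUNT = "/mnt/cache"      # assumed cache pool mount point
MIN_FREE_PCT = 10               # start moving when under 10% free
CHECK_INTERVAL = 300            # seconds between checks

while True:
    usage = shutil.disk_usage(CACHE_MOUNT)
    free_pct = usage.free / usage.total * 100
    if free_pct < MIN_FREE_PCT:
        # Assumed mover location on Unraid; change if yours differs.
        subprocess.run(["/usr/local/sbin/mover"], check=False)
    time.sleep(CHECK_INTERVAL)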

Link to comment
  • 4 years later...

Yes please.

 

The cache needs to be transparent, and never interfere with file operations.

Set cache high and low watermarks, e.g. as a % of space or as absolute space:

  • High watermark: start moving files, e.g. when less than 10% is free.
  • Low watermark: stop moving files, e.g. when more than 50% is free.

Pick files to move by age and access count, e.g. move the least accessed files first.
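A minimal sketch of that watermark scheme, assuming a /mnt/cache mount and leaving the actual move-to-array step as a caller-supplied function (none of this is an existing Unraid feature):

# Sketch of high/low watermark moving, oldest/least-accessed files first.
import os
import shutil

CACHE = "/mnt/cache"        # assumed cache mount point
START_MOVING_BELOW = 0.10   # "high watermark": start when under 10% free
STOP_MOVING_ABOVE = 0.50    # "low watermark": stop once over 50% free

def free_fraction(path):
    usage = shutil.disk_usage(path)
    return usage.free / usage.total

def files_by_access_age(root):
    """All files under root, least recently accessed first."""
    paths = [os.path.join(d, f) for d, _, names in os.walk(root) for f in names]
    return sorted(paths, key=lambda p: os.stat(p).st_atime)

def run_mover_pass(move_file_to_array):
    if free_fraction(CACHE) >= START_MOVING_BELOW:
        return                                   # enough room, nothing to do
    for path in files_by_access_age(CACHE):
        move_file_to_array(path)                 # caller-supplied move step
        if free_fraction(CACHE) >= STOP_MOVING_ABOVE:
            break                                # low watermark reached, stop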

 

Many well behaved apps and bulk copy apps, e.g. robocopy, will reserve space before copying file contents.

This is an ideal opportunity for "thin provisioning" systems to allocate the storage in a physical location with enough space.

E.g. with min free space set to 2 GB: the app creates a new file, the app sets the file size to 4 GB, and before the app starts writing the content the file creation is moved to a drive with space, so that when the write starts happening there is enough space.

In this scenario the only failure case would be: create, write, write, out of space.
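A sketch of that proposal, purely hypothetical (Unraid has no such hook today; the function name and parameters are invented for illustration):

# Hypothetical hook: when an app pre-sizes a still-empty file, re-home it to
# a device that actually has room before any data is written.
import os
import shutil

def on_set_file_size(path, requested_size, current_device, all_devices):
    if os.path.getsize(path) == 0:                        # nothing written yet
        if shutil.disk_usage(current_device).free < requested_size:
            # Pick a device with enough space, e.g. an array disk instead of
            # the nearly full cache; fall back to where the file already is.
            target = next((d for d in all_devices
                           if shutil.disk_usage(d).free >= requested_size),
                          current_device)
            if target != current_device:
                # Relocate the empty placeholder before any content arrives.
                shutil.move(path, os.path.join(target, os.path.basename(path)))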

 

 

Link to comment
8 hours ago, ptr727 said:

Many well behaved apps and bulk copy apps, e.g. robocopy, will reserve space before copying file contents.

And there lies the problem.

 

You see this as a single operation, but in reality it is 2 operations at the file system level:

  • create/open file
  • seek to file position required to allocate file of given size.

In order to create the file, Unraid has to choose which drive it is going to create it on, so by the time the file seek operation comes along the disk has already been selected; hence the need for the minimum free space setting.
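For illustration, those two steps look roughly like this at the filesystem level (the path and the 4 GB size are just placeholders):

import os

# Step 1: create/open the file.  By the time this call returns, the user-share
# layer has already had to pick a physical disk - without knowing the size.
fd = os.open("/mnt/user/share/bigfile.bin", os.O_CREAT | os.O_WRONLY, 0o644)

# Step 2: only now does the application declare how big the file will be.
os.ftruncate(fd, 4 * 1024**3)
os.close(fd)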

Link to comment
On 5/3/2019 at 11:53 PM, remotevisitor said:

And there lies the problem.

 

You see this as a single operation, but in reality it is 2 operations at the file system level:

  • create/open file
  • seek to file position required to allocate file of given size.

In order to create the file, Unraid has to choose which drive it is going to create it on, so by the time the file seek operation comes along the disk has already been selected; hence the need for the minimum free space setting.

I didn't say it is easy, but it can be done (time, money, resources), e.g. an SMB handle is different from a FS handle and can be remapped as needed.

A cache that needs reserved space in anticipation of a large file is wasted space, e.g. a thin-provisioned VM image grows beyond its size and fails permanently, or copying a file that is too big fails permanently.

An alternative is obviously to support SSD drives as data storage; then there is no longer a need to use a cache of SSDs when the main array is made of SSDs.

Link to comment
  • 11 months later...
4 minutes ago, ryperx said:

Sorry to reanimate this old thread, but was this feature implemented?

I mean that the mover starts copying files the moment a certain % is reached?

As far as I know, the minimum free space setting helps stop the cache from running completely full, but it wouldn't trigger the mover job, or am I wrong?

Have you looked at the Mover Tuning plugin?  If I understand what you want, the plugin should do the job.

Link to comment
