A share's "Minimum free space" setting should not apply if it uses the cache drive



If a share is not set to cache-only and a minimum free space is set, I think that setting should only apply to free space on the HDDs, not to free space on the cache.

I only want to fill up my drives to a certain amount, but I always want to utilize the faster write speed of the cache drive. However, if the cache fills up and has less free space than the share setting, writes go straight to an HDD and are much slower.

Making it an option would work too.

 

Perhaps a better explanation:

4 hours ago, itimpi said:

At the moment, if you have a Minimum Free Space setting on both a share and on the cache drive, then the larger of the two values is used to decide whether the cache drive can be used for the file. I think what is being requested is that the Minimum Free Space value on the User Share should be ignored when deciding whether the cache drive can hold the file, and only the one set for the cache should be used.

1 minute ago, GoChris said:

if the cache fills up and has less free space than the share setting, writes go straight to an HDD and are much slower.

What is the alternative?

 

I'm not understanding the scenario and what you want to change.

33 minutes ago, GoChris said:

If a share is not set to cache-only and a minimum free space is set, I think that setting should only apply to free space on the HDDs, not to free space on the cache.

I only want to fill up my drives to a certain amount, but I always want to utilize the faster write speed of the cache drive. However, if the cache fills up and has less free space than the share setting, writes go straight to an HDD and are much slower.

Making it an option would work too.

Sounds to me like your mover schedule doesn't run often enough and/or your cache pool is too small for the amount of data that needs to be cached.

Also, do you know that filling an SSD close to the brim has a detrimental effect on its lifespan?

I think you might be trying to put a bandaid on COVID-19.

4 hours ago, jonathanm said:

What is the alternative?

 

I'm not understanding the scenario and what you want to change.

The alternative would be a separate setting for how much free space you want to maintain on the cache, just like you can set for a share. Cache drives are not going to be as big as data drives (let's assume we're talking about 99% of setups), so chances are the cache will drop below a share's minimum while still having more than enough space for file transfers. For example, I might have my share minimum set to 150GB, but files written to multiple shares could have reduced the cache drive to just under 150GB free. If I then want to write another file (size is irrelevant as long as it fits in the free space), it skips the cache and goes to the share directly. That's obviously slower, and there's really no reason it couldn't go to the cache (before the mover catches up).

 

1 hour ago, GoChris said:

The alternative would be a separate setting for how much free space you want to maintain on the cache, just like you can set for a share.

Settings, Global Share Settings, Cache Settings, Min Free Space.

13 hours ago, GoChris said:

The alternative would be a separate setting for how much free space you want to maintain on the cache, just like you can set for a share. Cache drives are not going to be as big as data drives (let's assume we're talking about 99% of setups), so chances are the cache will drop below a share's minimum while still having more than enough space for file transfers. For example, I might have my share minimum set to 150GB, but files written to multiple shares could have reduced the cache drive to just under 150GB free. If I then want to write another file (size is irrelevant as long as it fits in the free space), it skips the cache and goes to the share directly. That's obviously slower, and there's really no reason it couldn't go to the cache (before the mover catches up).

 

Any particular reason why you set it at 150GB?

That is an extremely high value for minimum free space. No wonder your SSD runs out of free space before the mover runs.

2 minutes ago, testdasi said:

Any particular reason why you set it at 150GB?

That is an extremely high value for minimum free space. No wonder your SSD runs out of free space before the mover runs.

At the moment, if you have a Minimum Free Space setting on both a share and on the cache drive, then the larger of the two values is used to decide whether the cache drive can be used for the file. I think what is being requested is that the Minimum Free Space value on the User Share should be ignored when deciding whether the cache drive can hold the file, and only the one set for the cache should be used.
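In other words, the current rule and the requested rule differ by one `max()`. Here's a rough Python sketch; the function and variable names are made up for illustration and are not Unraid internals:

```python
GB = 1024 ** 3

def cache_allowed(cache_free, share_min, cache_min):
    """Current behavior: the larger of the two minimums gates the cache."""
    return cache_free > max(share_min, cache_min)

def cache_allowed_requested(cache_free, share_min, cache_min):
    """Requested behavior: only the cache's own minimum applies
    (share_min is kept in the signature but deliberately ignored)."""
    return cache_free > cache_min

# Share minimum 150GB, cache minimum 20GB, cache has 100GB free:
print(cache_allowed(100 * GB, 150 * GB, 20 * GB))            # False: write bypasses the cache
print(cache_allowed_requested(100 * GB, 150 * GB, 20 * GB))  # True: write lands on the cache
```

With the current rule the 150GB share setting blocks the cache even though the cache has 100GB free; with the requested rule only the 20GB cache setting matters.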

4 hours ago, itimpi said:

At the moment, if you have a Minimum Free Space setting on both a share and on the cache drive, then the larger of the two values is used to decide whether the cache drive can be used for the file. I think what is being requested is that the Minimum Free Space value on the User Share should be ignored when deciding whether the cache drive can hold the file, and only the one set for the cache should be used.

Correct. I will update my original post to be clearer; I guess I wasn't as clear as I thought.

On 3/29/2020 at 11:40 AM, itimpi said:

At the moment, if you have a Minimum Free Space setting on both a share and on the cache drive, then the larger of the two values is used

Well, this explains a lot! I've had a similar issue but it was more of an annoyance and so I ignored it. Now that I understand this I've tweaked some settings and should be able to avoid any overflow onto the array moving forward. Thanks!


I second this request.
I have 8TB drives, so a healthy 90% full (to allow for faster write speeds) means 800GB of free space.

An SSD that would work with that would be 1TB, leaving me only 200GB of working drive space; that makes no sense.

Even better would be individual per-disk free space settings, since I also have some smaller 4TB drives, and you currently cannot keep both a larger 8TB drive and a smaller 4TB drive at 90% full; you would have to choose one value. Unraid is all about mixing drives, so how has this not been implemented? :)

  • 1 month later...

Huge +1 on this. The cache already has its own global "Minimum Free Space" option under the "Cache Settings" header. Unraid should not ignore this in favor of a share's free space settings, which should be a value optimized for the array disks. Overriding the global cache value with something share-specific leads to negative, unexpected behavior.

 

Separate to that issue, I'm also going to echo what ados said above: when you have an array with multiple disk sizes, a static number of bytes to keep free doesn't make sense. Keeping enough free space for a large drive means a small percentage of usable space on a small drive.

 

Instead, I propose we be allowed to set a percentage in the "Minimum Free Space" fields. Setting a value of, say, 10% would reserve 1 TB free on a 10 TB drive but only 500 GB on a 5 TB drive. By having to set a fixed value in bytes, you're either losing usable space on smaller drives OR risking filling a disk too much and slowing it down due to fragmentation. Neither of these is a good option.
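A percentage option could sit alongside the existing absolute values. Here's a minimal sketch of the idea; the "%" syntax and the helper name are hypothetical, since Unraid only accepts absolute values today:

```python
GB = 1024 ** 3
TB = 1024 ** 4

def min_free_bytes(disk_size, setting):
    """Resolve a Minimum Free Space setting that is either an absolute
    byte count or a percentage string like "10%" (hypothetical syntax,
    integer percentages only for simplicity)."""
    if isinstance(setting, str) and setting.endswith("%"):
        percent = int(setting[:-1])
        return disk_size * percent // 100
    return setting

print(min_free_bytes(10 * TB, "10%") == 1 * TB)      # True: 1TB reserved on a 10TB drive
print(min_free_bytes(5 * TB, "10%") == TB // 2)      # True: 500GB-ish reserved on a 5TB drive
print(min_free_bytes(8 * TB, 150 * GB) == 150 * GB)  # True: absolute values still work
```

The same field could then scale sensibly across a mixed-size array instead of forcing one byte count onto every disk.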

 

(I also realize that the field's help text, "Choose a value which is equal or greater than the biggest single file size you intend to copy to the share.", is intended to prevent fragmentation, but in the case of a share where files are deleted and written a lot, fragmentation can happen even if all the files are under that threshold.)

On 5/7/2020 at 9:29 PM, gregnostic said:

I also realize that the field's help text, "Choose a value which is equal or greater than the biggest single file size you intend to copy to the share.", is intended to prevent fragmentation, but in the case of a share where files are deleted and written a lot, fragmentation can happen even if all the files are under that threshold.

Actually, it has nothing to do with fragmentation. In Unraid each array disk is a discrete file system, and a file must fit on a single disk; it is never split across disks. Once Unraid selects a disk for a new file it will not change its mind, and will give an 'out-of-space' error if the file does not fit on the drive. By setting the Minimum Free Space to be larger than the largest file you intend to write, you ensure that you do not get this out-of-space error.

On 5/7/2020 at 4:29 PM, gregnostic said:

I propose we be allowed to set a percentage in the "Minimum Free Space" fields.

This makes no sense at all because of how Unraid actually uses the Minimum Free setting. It is all about deciding whether a disk can be chosen for writing a file, not about keeping a certain amount of space unused.

 

itimpi already explained, but here is an example:

 

Minimum Free is set to 10GB. A disk has 11GB free. That disk can be chosen for a new file because it has more than the minimum. A 9GB file is written to the disk. Now it has only 2GB remaining, which is less than the minimum, so the disk won't be chosen again.

 

But note that the disk winds up having less free space than the minimum. The setting has nothing to do with keeping part of the disk unused.
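The same example, written out as the bare arithmetic (just the numbers from the post above, not Unraid code):

```python
GB = 1024 ** 3
minimum_free = 10 * GB
disk_free = 11 * GB

# The disk can be chosen: its free space exceeds the minimum.
assert disk_free > minimum_free

# A 9GB file is written; Unraid never re-selects a disk mid-write.
disk_free -= 9 * GB

# Only 2GB remain, below the minimum, so the disk won't be chosen again...
assert disk_free < minimum_free
# ...yet it ended up with *less* free space than the "minimum".
print(disk_free // GB)  # 2
```

The minimum is a gate checked before the write starts, not a floor the disk is guaranteed to stay above.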


Clearly my mental model of this, based on how it's described in the Unraid UI, was way off. I understand now why Unraid behaves the way it does and why the share value of "Minimum Free Space" overrides the global cache value.

 

That's not to say I agree with or like this behavior, but at least I understand it now. Thanks.


And in fact, it isn't really possible for Unraid to guarantee a certain amount of unused space: it doesn't know in advance how large a file will become when it chooses a disk to write it to.


Even worse for predictions, files already in place can grow over time.

 

You pretty much have two choices: set a minimum free space large enough that you should theoretically never overrun it and use good choices for allocation method and split level, or micromanage each disk by manually moving things around.

 

The default high water allocation is a decent way to keep things working well automatically, since it fills each drive sequentially until each threshold is met. By the time you fill all your disks more than 3/4, it's probably a good idea to start shopping around for more capacity by replacing or adding drives.
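As a rough illustration of the idea (a sketch of how high-water behaves as described above, not Unraid's actual implementation): the water mark starts at half the largest disk's size, writes go to the first disk still above the mark, and when no disk qualifies the mark is halved.

```python
TB = 1024 ** 4

def pick_disk_high_water(free_per_disk, largest_disk_size):
    """Return the index of the disk a new file would go to, or None.
    Rough sketch of high-water allocation: fill disks down to a
    shrinking threshold rather than one disk completely at a time."""
    mark = largest_disk_size // 2
    while mark > 0:
        for i, free in enumerate(free_per_disk):
            if free > mark:
                return i  # first disk still above the current water mark
        mark //= 2  # no disk qualifies: lower the mark and try again
    return None

# Two 8TB disks: one nearly full (1TB free), one mostly empty (7TB free).
# With the mark at 4TB, writes land on the emptier disk.
print(pick_disk_high_water([1 * TB, 7 * TB], 8 * TB))  # 1
# Once both fall below 4TB free, the mark drops to 2TB and disk 0 fills again.
print(pick_disk_high_water([3 * TB, 2 * TB], 8 * TB))  # 0
```

The effect is what the post describes: drives fill in stages rather than one after another, which keeps writes spread out without any manual shuffling.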

 

If you micromanage and move things around manually, you can afford to run much closer to the space limits on the drives before you add capacity.

 

I personally tend not to let total array free space fall below the size of my largest data disk, but that's because I like to know that in a pinch I can totally free up my largest drive.

