Do not allow Minimum Free Space to be 0.


Recommended Posts

It seems to me that there is never a reason you WANT the Minimum Free Space value for a User Share or a pool to be 0 (unless I have missed some special action triggered by 0 :( ).
 

I would suggest that if it is found to be 0 then Unraid should change it to some sort of non-zero default.    A suggestion would be something like 10% of the total size for a pool.   I am not sure of the best value for a User Share, so suggestions are welcome - maybe something in the range of 10-100GB?

 

To get the current behaviour a user could always set this to a very small non-zero value.   Resetting it back to 0 should then trigger setting it back to whatever default Unraid uses.

 

Thoughts?

 

  • Like 2
Link to comment

You are right, I don't think I've EVER seen or recommended that someone set minimum free space to be 0 to solve some issue or change an unwanted behaviour.

 

I think the problem is finding a default value that will upset the fewest people.

 

If Unraid were a huge company, I'd be all over suggesting a setup wizard that walked you through all the settings and explained them screen by screen, with suggestions for optimizing based on usage: media server, office documents, backup server, etc. However, since the company's resources are stretched as it is, and I'm not willing to volunteer to program it, the setup wizard will have to remain a dream for now.

Link to comment
10 minutes ago, itimpi said:

Personally I think ANY sensible default would be an improvement.

I agree; the problem, IMHO, is finding a sensible default. For user shares I would use around 50GB; for pools it's more complicated - maybe 2% of the pool size, but with a 10GB minimum and a 50GB maximum.
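
To make that concrete, here is a minimal sketch in Python (purely illustrative - the function name and the use of bytes are mine, not anything Unraid exposes) of the 2%-with-clamps idea:

GiB = 1024 ** 3

def suggested_pool_minimum_free(pool_size_bytes: int) -> int:
    # 2% of the pool, clamped to a 10GB floor and a 50GB ceiling
    candidate = int(pool_size_bytes * 0.02)
    return max(10 * GiB, min(candidate, 50 * GiB))

# Example: a 2TiB pool -> 2% is ~41GiB, which sits inside the 10-50GB window
print(round(suggested_pool_minimum_free(2 * 1024 ** 4) / GiB, 1))  # 41.0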

Link to comment
35 minutes ago, Squid said:

OK. Now I'm starting to feel bad for never having bothered to set it.

That's OK, I doubt it's ever been set on a huge number of machines, for two reasons.

 

First, and this is my excuse: I'm OCD about which files are where, and I manually move items to different drives to satisfy my need to keep the drives as tidy as possible. If you never allow a drive to even get close to full, minimum free space is never an issue.

 

Second, and this is the one that bites most users: ignorance of what the setting does, how it works, and how important it is.

 

50GB sounds reasonable to me as a value to set as a default.

 

I think it's wise to leave a fair bit more free to allow file system checks room to work well, but what I think is prudent others will find wasteful.

 

Maybe FCP can bitch about 0KB?

 

If we have a good FAQ entry to point to, the onslaught of posts should be somewhat manageable.

Link to comment
Just now, JonathanM said:

Maybe FCP can bitch about 0KB?

 

Only if I can list your personal cell phone number as the way to get help resolving it.  :) There would be a metric ton of posts regarding that.

 

But, if you can convince me that it's the proper way to set things up I'll add it in.

 

Link to comment
1 minute ago, Squid said:

if you can convince me that it's the proper way to set things up I'll add it in.

I already told you all my servers have 0KB set, I'm not trying to convince you. :)

 

However, I do think it would reduce the number of posts complaining about getting backed into a corner with free space.

 

I think the users who care enough to install FCP deserve to see a message; they can either fix it, ignore it, or complain about it. 🤣

 

I believe it truly is a "Common Problem".

Link to comment

If FCP adds a check on Minimum Free Space being 0, perhaps it would make sense to initially only do the check for pools, as completely filling a pool seems to be the most common problem with the most severe consequences.   On an array drive it tends to only be a problem if the Fill-up allocation method is being used.    On array drives the most common issue seems to be an overly restrictive Split Level setting causing drives to fill up, which would be much harder for FCP to detect sensibly.
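
For what it's worth, the check I'm imagining is nothing more than this kind of thing (a rough sketch in Python - the pool list and the way the setting is read are placeholders, not FCP's or Unraid's actual code):

def pools_with_zero_floor(pools):
    # pools maps pool name -> configured Minimum Free Space in bytes
    return [name for name, floor in pools.items() if floor == 0]

for name in pools_with_zero_floor({"cache": 0, "fastpool": 50 * 1024 ** 3}):
    print(f"Warning: pool '{name}' has Minimum Free Space set to 0")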

 

 

Link to comment

We had an internal discussion about this subject, and @jonp came up with the bright idea to make dynamic settings based on the largest file present in each share.

 

I turned his idea into a new plugin Dynamix Share Floor. This plugin creates a cronjob which can run at a given interval and scans each existing share for the largest file present.

It then calculates a (rounded) share floor based on the scan result and updates the share settings if the current floor value is different.

The scan routine works with a minimum floor of 500 MB; if a share is empty or has only small files it will get this minimum set.
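
In rough pseudo-code the scan boils down to something like this (simplified for illustration - the 100 MB rounding granularity and the helper names are placeholders rather than the exact implementation):

import os

MIN_FLOOR = 500 * 1000 ** 2      # 500 MB minimum floor
ROUND_STEP = 100 * 1000 ** 2     # assumed rounding granularity of 100 MB

def largest_file_bytes(share_path):
    largest = 0
    for root, _dirs, files in os.walk(share_path):
        for name in files:
            try:
                largest = max(largest, os.path.getsize(os.path.join(root, name)))
            except OSError:
                pass                                    # skip files that vanish mid-scan
    return largest

def share_floor(share_path):
    size = largest_file_bytes(share_path)
    rounded = -(-size // ROUND_STEP) * ROUND_STEP       # round up to the nearest step
    return max(rounded, MIN_FLOOR)                      # never below the 500 MB minimum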

 

The new feature can be found under Settings -> Scheduler -> Share Floor Settings

 

There is an Update Now button to perform a manual scan and update immediately.

 

  • Like 1
  • Upvote 2
Link to comment
8 minutes ago, bonienl said:

We had an internal discussion about this subject, and @jonp came up with the bright idea to make dynamic settings based on the largest file present in each share.

 

I turned his idea into a new plugin Dynamix Share Floor. This plugin creates a cronjob which can run at a given interval and scans each existing share for the largest file present.

It then calculates a (rounded) share floor based on the scan result and updates the share settings if the current floor value is different.

The scan routine works with a minimum floor of 500 MB; if a share is empty or has only small files it will get this minimum set.

 

The new feature can be found under Settings -> Scheduler -> Share Floor Settings

 

There is an Update Now button to perform a manual scan and update immediately.

 

Sounds like a good start :) 
 

Does this plugin only do something if the value is 0 so that the user can set a value smaller than the largest file if so desired?
 

Does this plugin do anything about pools? It may be more important to have a floor set there, as completely filling a pool often seems to lead to file system corruption.     Also, it may not be appropriate to look for the largest files, as there may well be vdisks there and using their size would probably give a larger value than the user wants.

Link to comment

It is a first start indeed, may need tweaking going forward :)

 

The plugin will change the floor value if the new value is different from the old value, regardless of what the old value is (including 0).

The idea is that the floor value gets automatically adjusted based on the content stored in the share. This way it is a set and forget feature for the user.

 

All shares are taken into account, including shares in pools.

 

A share with vdisks is the exception to the rule; currently there is no way to treat such a share differently. Good point, though.

 

Give it a try, and let me know your observations and possible improvements!

 

Link to comment
26 minutes ago, bonienl said:

The idea is that the floor value gets automatically adjusted based on the content stored in the share. This way it is a set and forget feature for the user.

I can see a potential problem with this if the user occasionally has very large files that are copied directly into position on a drive because they exceed the size the user wants to use for the floor value on either a pool or a share (the share-level equivalent of having large vdisk files on a pool).  Not sure how best to handle this case :(   Perhaps allow a maximum value to be specified that will never be exceeded?   As always, it is catering for the edge cases that ends up causing problems :) 
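
What I have in mind is no more than an extra clamp, something like the sketch below (illustrative only - max_floor would be a new per-share setting that does not exist today):

def capped_floor(scanned_floor, max_floor=None):
    # max_floor would be an optional, user-specified ceiling for the share
    if max_floor is not None:
        return min(scanned_floor, max_floor)
    return scanned_floor

GiB = 1024 ** 3
# A 200GB vdisk-sized scan result capped to a user-chosen 50GB ceiling
print(capped_floor(200 * GiB, max_floor=50 * GiB) // GiB)   # 50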

 

Link to comment
2 minutes ago, trurl said:

Whatever the plugin does, users will still have to install it, and will still not understand how Minimum Free works.

 

If you understand Minimum Free, you don't need the plugin.

 

Of course this plugin is not for everyone.

As with most things - if you understand what you are doing, there are no problems :) 

 

Link to comment
2 minutes ago, trurl said:

Whatever the plugin does, users will still have to install it, and will still not understand how Minimum Free works.

 

If you understand Minimum Free, you don't need the plugin.


I agree, but if the plugin can cater for 90+% of use cases and allow some sort of override for special cases, maybe the need to understand it will largely be removed. 
 

Once it has settled down maybe it will make sense for it to be built in as standard to future Unraid releases.

 

Just a thought - can the plugin add an entry to the share's .cfg file so we can see that the plugin has set the floor value when we look at diagnostics?
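
Something as simple as the following would do (a sketch only - the key name shareFloorSetBy is made up, and I am just assuming the usual key="value" layout of the files under /boot/config/shares):

def tag_share_cfg(cfg_path, plugin_name="dynamix.share.floor"):
    # Append (or refresh) a marker so diagnostics show who set the floor.
    # "shareFloorSetBy" is a hypothetical key, not one Unraid defines.
    with open(cfg_path) as fh:
        lines = [line for line in fh if not line.startswith("shareFloorSetBy=")]
    lines.append(f'shareFloorSetBy="{plugin_name}"\n')
    with open(cfg_path, "w") as fh:
        fh.writelines(lines)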

Link to comment
3 hours ago, bonienl said:

It is a first start indeed, may need tweaking going forward :)

 

The plugin will change the floor value if the new value is different from the old value, regardless of what the old value is (including 0).

The idea is that the floor value gets automatically adjusted based on the content stored in the share. This way it is a set and forget feature for the user.

 

All shares are taken into account, including shares in pools.

 

A share with vdisks is the exception to the rule; currently there is no way to treat such a share differently. Good point, though.

 

Give it a try, and let me know your observations and possible improvements!

 

 

Sounds like a good idea... I installed the plugin and ran Update Now.  I am getting the output below, and the free space does not update for the shares.  Let me know if you want me to move this to your plugin thread - I don't want to hijack this thread.

 

Scanning Cell Phone ...
Warning: file_get_contents(http://localhost/update.htm): failed to open stream: HTTP request failed! in /usr/local/emhttp/plugins/dynamix.share.floor/scripts/share_floor on line 40
updated - new floor setting: 2.6 GB
Scanning Downloads ...
Warning: file_get_contents(http://localhost/update.htm): failed to open stream: HTTP request failed! in /usr/local/emhttp/plugins/dynamix.share.floor/scripts/share_floor on line 40
updated - new floor setting: 41.2 GB
Scanning Movies ...
Warning: file_get_contents(http://localhost/update.htm): failed to open stream: HTTP request failed! in /usr/local/emhttp/plugins/dynamix.share.floor/scripts/share_floor on line 40
updated - new floor setting: 41.2 GB
Scanning Music ...
Warning: file_get_contents(http://localhost/update.htm): failed to open stream: HTTP request failed! in /usr/local/emhttp/plugins/dynamix.share.floor/scripts/share_floor on line 40
updated - new floor setting: 500.0 MB

Link to comment
7 hours ago, JonathanM said:

How about a rule that minimum free space won't go above a ceiling value of some percentage of the total size? I have a pool that hosts a single VM taking up 90% of the space, but it is also used for appdata.

 

Alternatively exclude vdisk image files from all calcs.

 

I could see us making this plugin ignore all system shares by default (domains, appdata, and system).  These shares aren't exposed over SMB by default either, and frankly the only data being written to them should be data coming from within the system itself (not over SMB or any other method).  I definitely agree that if you have a domains share that has, say, 2TB free and you create your first vdisk and make it 1TB, then with this "floor" plugin you would never be able to create another vdisk, which would be unexpected and frankly confusing behavior.

 

Maybe it's even simpler: by default, the plugin only applies this floor calculation to shares that are exposed over SMB.  Meaning if SMB = No, the "floor" isn't managed by the plugin and instead becomes a user-managed variable.
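
Roughly speaking the selection rule would be something like this (just a sketch - how the plugin actually reads share settings isn't shown, and smb_exported stands in for whatever flag marks a share as exported over SMB):

SYSTEM_SHARES = {"appdata", "domains", "system"}

def shares_to_manage(shares):
    # shares maps share name -> a dict of its settings
    return [name for name, cfg in shares.items()
            if cfg.get("smb_exported", False) and name not in SYSTEM_SHARES]

print(shares_to_manage({
    "Movies":  {"smb_exported": True},
    "domains": {"smb_exported": True},   # system share: skipped even if exported
    "backup":  {"smb_exported": False},  # SMB = No: floor stays user-managed
}))   # ['Movies']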

  • Like 1
Link to comment
  • 2 months later...
On 5/17/2022 at 11:05 PM, JonathanM said:

Hmm. I was thinking of the minimum free space of pools. The setting that tries to keep BTRFS pools used in mover operations from getting filled to the brim and corrupting, regardless of whether the files come from SMB or simply writes to /mnt/user/share that go to the pool.

So was I :)   As long as you only set something like the 10% value on a pool if it is currently 0, I see little downside.   The worst case is that a User Share unexpectedly overflows to the main array because 10% is too much, but this is not a serious issue, and the user could then change the setting to any non-zero value they prefer.

Link to comment

With a "minimum free space" of 0 on a cache drive, what happens when beyond available free space?  Any problems, or just an "out of space" warning message? 

 

Example scenarios:

A)  You transfer a file, and the drive goes to zero.

B)  You are performing a task at the OS level (e.g. building a docker container).

C)  Performing multiple file transfers/tasks that consume the storage at the same time.

 

Link to comment

I think I have this all right - maybe someone will correct me - but it should be good enough for deciding about Minimum anyway.

 

Whether minimum is zero, or some other value, it works the same.

 

If a disk has more than minimum, Unraid can choose the disk, and if the file won't fit, it will give an error when the disk runs out of space.

 

If a disk has less than minimum Unraid will choose another disk. For cache, choosing another disk means overflowing to the array. So zero means the disk can always be chosen regardless of how much space remains.

 

Also note that only cache:yes and cache:prefer shares will overflow. cache:only shares go to cache regardless.

 

The Minimum for the user share applies when choosing an array disk for overflow or cache:no shares.

 

But other factors may come into play - more about that after some examples.

 

Minimum Free is 20G, disk has 30G remaining, you write a 25G file. Since the disk has more than minimum, Unraid can choose the disk. After the 25G file is written, the disk has less than minimum and won't be chosen again.

 

Minimum Free is 20G, disk has 25G remaining, you write a 30G file. Since the disk has more than minimum, Unraid can choose the disk. Since the file won't fit, after the disk fills up you get an error.

 

Other factors

 

If a file already exists on a disk, and you replace it, the replacement goes to the disk the file is already on, even if it won't fit.

 

Split Level has precedence over Minimum, so if split level says a file belongs on the same disk as other related files, that is where it will go even if it won't fit.

 

For array disks, Allocation Method might cause Unraid to choose another disk anyway, and Include/Exclude will also restrict which disk can be chosen.

 

In the general case, Unraid doesn't know how large a file will become when it chooses a disk for it, so it just follows the rules, and it follows those rules in a certain order. When a rule says it should choose a disk, it doesn't check the remaining rules. Minimum is the last rule.
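
Putting it all together as a condensed sketch (a paraphrase of the above in Python, not Unraid's actual code - replace-in-place and Split Level win outright, Include/Exclude narrows the candidates, Allocation Method orders them, and Minimum is checked last):

from dataclasses import dataclass

@dataclass
class Disk:
    name: str
    free: int                                # bytes remaining

def choose_disk(disks, minimum_free, existing_disk=None, split_level_disk=None,
                included=None):
    if existing_disk is not None:            # replacing an existing file:
        return existing_disk                 # same disk, even if it won't fit
    if split_level_disk is not None:         # Split Level has precedence over Minimum
        return split_level_disk
    candidates = [d for d in disks if included is None or d.name in included]
    candidates.sort(key=lambda d: d.free, reverse=True)   # stand-in for Allocation Method
    for d in candidates:
        if d.free > minimum_free:            # Minimum is the last rule checked
            return d                         # the write can still error if the file won't fit
    return None                              # nothing eligible (overflow or error elsewhere)

GiB = 1024 ** 3
disks = [Disk("disk1", 30 * GiB), Disk("disk2", 15 * GiB)]
print(choose_disk(disks, minimum_free=20 * GiB).name)     # disk1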

Link to comment
