How much room to leave on a disk?


gtroyp

Recommended Posts

TL;DR: What free space, if any, should always be left on a drive in the array?

 

I have just reached my case's drive capacity (11/12 3.5in bays; 4 2.5in bays) and am making the jump to 8TB drives, one per month. The first new drive (8TB) is replacing my 4TB parity drive, which will enter the array as a new data drive as soon as I get the big one stress tested.

 

At the same time, my array is at 98% capacity, and I started wondering: what is a safe amount of free space to leave on a drive in the array? I ask because I might need to move some current data onto the new/old drive. Also, once I have three 8TB drives in place, I am going to move to dual parity, which will mean taking a drive out of the array by moving all of the data from two 2TB drives onto one 8TB. But I want to put 8TB of cold storage on that drive, or as close to that as I safely can, and free up some relatively new (and fast) 4TB drives for new data.

 

Is there a need for any free space, or can I just fill to capacity?

 

Link to comment

It depends...

 

If the data is written once and never deleted, changed, or grown, you can fill a filesystem very full. However, once you begin making changes (punching holes, fragmenting), things can become very ugly. Depending on the filesystem, you may notice it as early as 80% full, but certainly in the 90+% range the filesystem begins doing a lot of extra work if files are being changed.
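
If you'd rather script that check than eyeball it, here's a minimal sketch in Python; the /mnt/disk1 mount point and the 90% comfort threshold are assumptions for illustration:

```python
import os

def percent_used(path):
    """Return how full the filesystem containing `path` is, in percent."""
    st = os.statvfs(path)
    total = st.f_blocks * st.f_frsize
    # f_bavail = blocks available to unprivileged users (excludes root reserve)
    avail = st.f_bavail * st.f_frsize
    return 100.0 * (total - avail) / total

# Hypothetical mount point; stop appending once you pass your comfort zone.
usage = percent_used("/mnt/disk1")
if usage > 90.0:
    print(f"Disk is {usage:.1f}% full - expect extra filesystem work on rewrites")
```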

 

Also, almost all filesystems keep improving in this area.

 

That said, since you want to put 8TB of cold storage on it (what about atime, etc.?), you can go very full, 99%; just be sure it is truly cold. I would even single-thread the copy to avoid fragmentation, but that is really just a read-speed thing.
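
By single-threading the copy I mean something like this sketch (the source and destination paths are hypothetical): one file at a time, no parallel jobs, so the filesystem can lay each file out contiguously:

```python
import shutil
from pathlib import Path

# Hypothetical source and destination; copy sequentially so extents
# from parallel writers don't get interleaved on the target disk.
src, dst = Path("/mnt/disk3/cold"), Path("/mnt/disk12/cold")
for f in sorted(src.rglob("*")):
    if f.is_file():
        target = dst / f.relative_to(src)
        target.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(f, target)  # copy2 preserves timestamps
```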

Link to comment

I did some research on this same question a month ago.

 

First, it depends on your filesystem type. I'll assume XFS, because IMHO that's really the only one worth picking.

 

The best recommendation I could find on other sites was 5% free space, because the default optimal inode size of the filesystem is something like 4Gb or 4GB. It doesn't matter which, since either is small relative to the size of the disks we're using, especially at 8TB. If I find the link again I'll post it; you may also find it in an earlier post I made.

 

The point is, 5% is a huge amount of space on an 8TB drive, and it was recommended as the percentage that leaves enough room for defragging XFS filesystems, based on that inode size.

 

I don't care about defragging at this point, so that aspect doesn't matter to me. I only mention all this because you wanted to know, as I did, and that's all I found while searching for the answer to this same question. I expected there'd be more science or a solid number to recommend, but there wasn't. I found no real nugget of knowledge, so if anyone else found something, I'd be happy to know as well.

 

My advice, based on finding nothing and on simple experimentation over the last two months with this very question: at 8TB drives, I set 1% free as a target. That's roughly 120GB IIRC, which I set on my shares as the minimum free space limit to enforce. In Disk Settings I also set 99% as the warning level and 100% as the critical level, simply because I don't want to see the color red in the GUI. I wish this setting allowed a decimal place, because I'd set 99% as the warning and 99.5% as the critical level if I could.
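
For anyone who wants the arithmetic behind that wish, a quick sketch (assuming decimal units, 1TB = 1000GB):

```python
# How many GB of headroom do the warning/critical percentages
# leave on a drive of a given size?
drive_tb = 8
for threshold in (99, 99.5, 100):
    free_gb = drive_tb * 1000 * (100 - threshold) / 100
    print(f"{threshold}% threshold -> {free_gb:.0f} GB still free when it trips")
# 99% -> 80 GB, 99.5% -> 40 GB, 100% -> 0 GB
```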

 

Long post, but I hope there was something of value in it if you read this far.

  • Like 2
Link to comment

I try to leave space available based on what I have stored on the drive. If I have a lot of series that could expand, then I tend to leave 500GB+ free. If it is movies (which span many drives), I leave 250GB or less, but I always leave some; I probably don't need to now that my arrays are all XFS and not ReiserFS.

Link to comment
13 hours ago, Lev said:

First, it depends on your filesystem type. I'll assume XFS, because IMHO that's really the only one worth picking.

 

My advice, based on finding nothing and on simple experimentation over the last two months with this very question: at 8TB drives, I set 1% free as a target. That's roughly 120GB IIRC, which I set on my shares as the minimum free space limit to enforce. In Disk Settings I also set 99% as the warning level and 100% as the critical level, simply because I don't want to see the color red in the GUI. I wish this setting allowed a decimal place, because I'd set 99% as the warning and 99.5% as the critical level if I could.

Thanks for the XFS endorsement; as a dev I can't really come here and promote it myself.

1% of 8TB is 80GB; your 120GB is 1.5%, which you probably chose to avoid the warning at 99%.

 

I :+1: your request for the setting to allow decimals, as I am constantly dealing with people who claim they are 100% out of space, yet have hundreds of GB available. When you have room for thousands or millions more files of your average size, you're not out of space just because you see 100% from df.
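
A minimal sketch of the difference between the rounded percentage and the real bytes (the mount point is hypothetical):

```python
import math
import shutil

# df rounds its Use% up to a whole percent, so "100%" can still hide
# hundreds of GB on a big drive.
usage = shutil.disk_usage("/mnt/disk1")
pct_used = usage.used / usage.total * 100
print(f"exact: {pct_used:.2f}% used, {usage.free / 1e9:.0f} GB free")
print(f"df-style: {math.ceil(pct_used)}% used")  # shows 100% at 99.01%
```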

Edited by c3
  • Like 1
Link to comment
  • 2 months later...
On 9/9/2017 at 2:14 PM, c3 said:

I :+1: your request for the setting to allow decimals

 

Thanks, I did some research on this.

 

OK, first the problem. I attached a picture below to illustrate it. Warning is set to 99% and Critical to 100%, using an 8TB drive as an example. 1% seems to be equal to roughly 40GB based on how unRAID is calculating it. The GUI then receives this 'critical' state and shows red.

 

40GB doesn't seem like that big of a deal for a single drive, but multiply this across all my drives, potentially up to the unRAID max drive limit (28? 30? I forget). Let's just use 28 for the math: 28 * 40GB = 1.1TB of free space remaining, yet I'd be in a critical state.

 

That sounds like a lot, but if I had 28 drives at 8TB each, I'd have 224TB of total space. Am I really going to care about only having 223TB out of 224TB due to this loss?
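
The same math as a quick sketch, using the numbers above:

```python
# Per-drive reserve scaled across a full array.
drives = 28        # unRAID's upper drive count, per the post above
drive_tb = 8
reserve_gb = 40    # roughly what 1% works out to per drive here
total_tb = drives * drive_tb
reserved_tb = drives * reserve_gb / 1000
print(f"{reserved_tb:.2f} TB reserved out of {total_tb} TB "
      f"({reserved_tb / total_tb:.2%})")
# -> 1.12 TB reserved out of 224 TB (0.50%)
```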

 

I think the decimal place is still a good idea. It's curious that it's not a setting in /config/disk.cfg; how does this work, @limetech?

 

 

 

 

WarningCritical.JPG

Link to comment

For a drive you just fill once, you can go really, really full with most file systems.


But note that a file system using copy-on-write (CoW), like Btrfs, needs additional disk space even to update file access times.

 

And you might want to store additional information later, possibly checksum information for the files. Or you might later want to switch to a different file system; different file systems require different amounts of hidden storage for bookkeeping.

 

Anyway, keeping 0.5% free on every drive represents just 0.5% of the purchase cost of the drives and 0.5% of the electricity to run them. So there isn't any strong economic incentive to push the limit, especially since fragmentation can become a really big problem way earlier than this, depending on usage pattern.

  • Like 1
Link to comment

Another thing to consider is space for any kind of file recovery operation. The recommended minimum free space setting is 2x the size of the largest file on the disk: room for one more file to copy plus room to recover a file. Like I said, this is the bare minimum.
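
A minimal sketch of that rule of thumb in Python; the /mnt/disk1 mount point is hypothetical, and the 2x factor is the one from this post:

```python
import os

def min_free_bytes(mount, factor=2):
    """Bare-minimum free space: `factor` x the largest file on the disk."""
    largest = 0
    for root, _dirs, files in os.walk(mount):
        for name in files:
            try:
                largest = max(largest, os.path.getsize(os.path.join(root, name)))
            except OSError:
                pass  # skip files that vanish or can't be stat'd
    return factor * largest

print(f"{min_free_bytes('/mnt/disk1') / 1e9:.1f} GB minimum free recommended")
```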

  • Like 1
Link to comment
  • 5 years later...

Hi everyone, this is really interesting. All of my drives are at 90% use; they range from 2TB to 10TB. I use the server as a media repository for movies and TV series. Files are never changed, only added. Does that mean I can safely set the limit to 1%?

Will this affect the server's read performance? I don't really care much about the write side, as I only write each file once.

My free space currently is just under 5TB.

PS: my file system is reiserfs

Edited by gnollo
Link to comment
42 minutes ago, gnollo said:

Does that mean I can safely set the limit to 1%?

Yes, with xfs or btrfs for that type of use. With reiserfs you will likely get a performance penalty for browsing and reads. Also note that reiserfs is deprecated and is going to be unsupported by the Linux kernel in the near future, so you should start converting to xfs.

Link to comment
18 minutes ago, Michael_P said:

 

This is wrong. There's no benefit in a write-once, read-many use case to leaving 20% of its capacity unused, where fragmentation isn't a thing. Fill 'er up.

I partially agree with this, but I would go for leaving some fixed work space free (such as 20-50GB) rather than a percentage of the drive.

  • Upvote 1
Link to comment
8 minutes ago, Michael_P said:

 

That's the rub: if it's just fill and read, there's no need to 'work' with the files, so any space left over is simply wasted. 50GB over 24+ drives really adds up.

The point is that, should the filesystem get corrupted for any reason, the repair process needs space to work with, or it'll just fail and your whole FS is toast.

Better to keep 50-100GB free to be able to repair the other 12TB than to clog it up and risk everything breaking.

Link to comment
2 hours ago, Kilrah said:

The point is that, should the filesystem get corrupted for any reason, the repair process needs space to work with, or it'll just fail and your whole FS is toast.

Better to keep 50-100GB free to be able to repair the other 12TB than to clog it up and risk everything breaking.

Yes, I understand, but is there a science to this that can confirm how much we need to leave? On my largest drive, 10% is over a terabyte. How much free space does unRAID need to rebuild a disk, etc.?

Link to comment

There doesn't seem to be much documentation that gives a clear number for that on xfs. Someone suggested leaving a bit more than the largest file on the drive, in case the repair needs to relocate that entire file; no idea if that's a good number to go with, but it seems reasonable.

I go with 100GB regardless of drive size, not percentages. 

Edited by Kilrah
Link to comment
On 9/9/2017 at 10:47 AM, Lev said:

I did some research on this same question a month ago.

 

First, it depends on your filesystem type. I'll assume XFS, because IMHO that's really the only one worth picking.

 

The best recommendation I could find on other sites was 5% free space, because the default optimal inode size of the filesystem is something like 4Gb or 4GB. It doesn't matter which, since either is small relative to the size of the disks we're using, especially at 8TB. If I find the link again I'll post it; you may also find it in an earlier post I made.

 

The point is, 5% is a huge amount of space on an 8TB drive, and it was recommended as the percentage that leaves enough room for defragging XFS filesystems, based on that inode size.

 

I don't care about defragging at this point, so that aspect doesn't matter to me. I only mention all this because you wanted to know, as I did, and that's all I found while searching for the answer to this same question. I expected there'd be more science or a solid number to recommend, but there wasn't. I found no real nugget of knowledge, so if anyone else found something, I'd be happy to know as well.

 

My advice, based on finding nothing and on simple experimentation over the last two months with this very question: at 8TB drives, I set 1% free as a target. That's roughly 120GB IIRC, which I set on my shares as the minimum free space limit to enforce. In Disk Settings I also set 99% as the warning level and 100% as the critical level, simply because I don't want to see the color red in the GUI. I wish this setting allowed a decimal place, because I'd set 99% as the warning and 99.5% as the critical level if I could.

 

Long post, but I hope there was something of value in it if you read this far.

Thanks, it was useful for me.

Link to comment
