(6.2.4) Cache Drive reporting "No Space Left On Disk"



I have been trying to get to the bottom of this for a few weeks now.  I have a 500GB Cache drive that has about 300GB of free space on it.  For some reason Unraid is reporting it as full.  It is stopping SAB and pretty much all of my dockers at the moment.  From everything I can tell there is plenty of space available.

 

 

Here is the part that is really confusing me.  If I go and delete a partial download from SAB, say 2 GB or so, the free space being reported in SAB jumps to 300GB+.  As soon as I refill that 2 GB of space it flips to zero and stops.  I believe I have set all minimums to 0 so that should not be an issue.

 

I'm at a loss.  Any help is appreciated.


Thanks Jonnie.  Here they are-

 

Also post the output of:

btrfs fi show /mnt/cache
btrfs fi df /mnt/cache

 

btrfs fi show /mnt/cache
Label: none  uuid: 44e6e86c-5e0e-40a5-9ead-4137b7b72fcf
        Total devices 1 FS bytes used 139.81GiB
        devid    1 size 465.76GiB used 465.76GiB path /dev/sdg1

btrfs fi df /mnt/cache
Data, single: total=463.75GiB, used=138.33GiB
System, single: total=4.00MiB, used=80.00KiB
Metadata, single: total=2.01GiB, used=1.49GiB
GlobalReserve, single: total=512.00MiB, used=0.00B

 

 


It might be helpful to read my thread on the subject.

 

http://lime-technology.com/forum/index.php?topic=56096.0

 

I had to remove everything from the cache drives, format to xfs (with 1 drive), then bring both drives back in as a BTRFS cache pool. Finally, I moved my "appdata" back to the newly formatted cache drive and recreated all my dockers.

 

I hope I don't need to go this route.  That is a PIA.  I swapped my old cache for this one about 3 months ago.  It's been fine till recently.


This is the problem:

 

devid    1 size 465.76GiB used 465.76GiB path /dev/sdg1

 

All space on the device is allocated, so no new chunks can be created. Try deleting something and then running a balance:

 

btrfs balance start -dusage=5 /mnt/cache

 

If you get an out of space error, try deleting more files (the bigger the better); if the balance succeeds you'll get the allocated but unused space back.
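If this turns into a recurring chore, the retry can be scripted. A rough sketch, not anything official: `-dusage=N` only rewrites data chunks that are at most N% full, so starting low keeps each attempt cheap. The function name, mount point default, and threshold list are all assumptions, adjust for your setup:

```shell
# Sketch (hypothetical helper): retry the balance with increasing -dusage
# thresholds until one succeeds. Low thresholds touch only nearly-empty
# chunks, so early attempts are cheap and may already free enough space.
balance_cache() {
  mnt="${1:-/mnt/cache}"   # assumed mount point
  for pct in 0 5 10 20; do
    if btrfs balance start -dusage="$pct" "$mnt"; then
      echo "balance succeeded at -dusage=$pct"
      return 0
    fi
  done
  echo "balance still failing; free more space and retry" >&2
  return 1
}
```

Call it as `balance_cache /mnt/cache` after deleting some files.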



Thanks for the help Jonnie.  I dumped a decent amount of stuff that wasn't needed, maybe 30GB, executed the balance command, and now I get this...

 

btrfs fi show /mnt/cache

Label: none  uuid: 44e6e86c-5e0e-40a5-9ead-4137b7b72fcf

        Total devices 1 FS bytes used 98.36GiB

        devid    1 size 465.76GiB used 325.76GiB path /dev/sdg1

 

 

So what is happening here?  Why is it showing a value that is not accurate?

So what is happening here?  Why is it showing a value that is not accurate?

It's complicated

https://btrfs.wiki.kernel.org/index.php/FAQ#Why_is_free_space_so_complicated.3F

So, in general, it is impossible to give an accurate estimate of the amount of free space on any btrfs filesystem. Yes, this sucks. If you have a really good idea for how to make it simple for users to understand how much space they've got left, please do let us know, but also please be aware that the finest minds in btrfs development have been thinking about this problem for at least a couple of years, and we haven't found a simple solution yet.
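For what it's worth, newer btrfs-progs (3.18 and later) include `btrfs filesystem usage`, which breaks down allocated vs. used space per chunk type and prints a "Free (estimated)" line. A small sketch of pulling that estimate out; the function name is hypothetical and the parsing assumes that output format:

```shell
# Sketch (hypothetical helper): print btrfs's own free-space estimate.
# Assumes btrfs-progs >= 3.18, whose `btrfs filesystem usage` output
# contains a "Free (estimated):" line in its Overall section.
free_estimate() {
  btrfs filesystem usage "$1" | awk '/Free \(estimated\)/ { print $3; exit }'
}
```

e.g. `free_estimate /mnt/cache` prints something like `101.49GiB`.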

This is related to how btrfs allocates chunks (mainly data and metadata) before writing: data chunks are usually 1GiB each, metadata 256MiB. Problems like yours arise when all device space is allocated:

 

devid    1 size 465.76GiB used 465.76GiB path /dev/sdg1

 

and metadata usage is high:

 

Metadata, single: total=2.01GiB, used=1.49GiB

 

So there wasn't room for any new chunks, and btrfs failed to create a new metadata chunk for new writes, which is why it gave an out of space error. This shouldn't happen, and the behaviour should improve in the future (update to 6.3, which has a newer kernel), but you're not the first. Keep an eye on the used (allocated) space, and whenever it gets really close to the maximum, run a balance like before. You can use a higher -dusage value to reclaim more of the allocated but unused chunks, but due to the nature of how cache works, constantly filling and emptying, new chunks will be allocated again soon.
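The "keep an eye on it" part can be automated. A rough sketch that parses the `devid ... size X used Y` line of `btrfs fi show` and warns when allocation passes a threshold; the function name, mount point, 90% cutoff, and GiB-only units are assumptions (the outputs in this thread all use GiB):

```shell
# Sketch (hypothetical helper): warn when chunk allocation on the cache
# device nears the device size, based on the "devid ... size X used Y"
# line of `btrfs fi show`. Assumes both values are reported in GiB.
check_allocation() {
  btrfs fi show "$1" | awk '
    /devid/ {
      size = $4; used = $6
      gsub(/GiB/, "", size); gsub(/GiB/, "", used)
      pct = used / size * 100
      printf "%.0f%% of the device is allocated\n", pct
      if (pct > 90)
        print "allocation is high; consider: btrfs balance start -dusage=5"
    }'
}
```

Something like `check_allocation /mnt/cache` from a cron job would give early warning before writes start failing.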

