BTRFS utilizing remaining free space issue



Currently trying to chase down an issue with a couple of BTRFS formatted drives.

 

Most of my drives were BTRFS-formatted in 6.8.3, or maybe the early 6.9 series, and I've been able to fill them down to the last 100MB in some cases. However, I have a couple of drives where I can't use the last ~200GB of space.

I've done the typical tricks of balances with high usage values and scrubs, which has worked for all the other drives, but for these two drives, formatted in later Unraid versions, there seems to be some other constraint.
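
For reference, the kind of commands I mean are along these lines (the disk number is just a placeholder):

btrfs balance start -dusage=90 -musage=90 /mnt/diskN
btrfs scrub start /mnt/diskN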

 

Is there some format option that changed, or some free-space guard in the kernel that I can bypass?

Data is easily replaceable and is WORM, so the free space buffer is not needed for future use.

 

Any ideas?

 

 

btrfs filesystem df for the offending drives (disk13, then disk14):

Data, single: total=16.28TiB, used=16.04TiB
System, DUP: total=6.00MiB, used=1.72MiB
Metadata, DUP: total=19.00GiB, used=18.36GiB
GlobalReserve, single: total=512.00MiB, used=0.00B

Data, single: total=16.33TiB, used=16.13TiB
System, DUP: total=8.00MiB, used=1.80MiB
Metadata, DUP: total=19.00GiB, used=18.48GiB
GlobalReserve, single: total=512.00MiB, used=0.00B

 

versus an example drive with good utilization:

Data, single: total=16.33TiB, used=16.33TiB
System, DUP: total=8.00MiB, used=1.84MiB
Metadata, DUP: total=21.00GiB, used=19.60GiB
GlobalReserve, single: total=512.00MiB, used=0.00B

 

Link to comment
2 hours ago, tjb_altf4 said:

However, I have a couple of drives where I can't use the last ~200GB of space.

This happens when the fs runs out of space for a new metadata chunk; it's much less likely to happen with newer kernels. One way to get around it is to free up some space, pre-allocate some metadata, then delete that pre-allocation when the filesystem is getting full again. I can post detailed instructions if you want, but first you'd need to free up some space; 100 or 200GB should be enough.

Link to comment
1 hour ago, JorgeB said:

This happens when the fs runs out of space for a new metadata chunk; it's much less likely to happen with newer kernels. One way to get around it is to free up some space, pre-allocate some metadata, then delete that pre-allocation when the filesystem is getting full again. I can post detailed instructions if you want, but first you'd need to free up some space; 100 or 200GB should be enough.

If you could post some details it would be greatly appreciated!

One HDD already has 328GB free, the other 219GB, but I can delete a little more if you think it's needed.

 

 

Link to comment
1 hour ago, JorgeB said:

There only needs to be some unallocated space. For both disks, post the output of:

btrfs fi usage -T /mnt/disk#

 

 

Ah OK, I see, I'll make some additional space... here is the current output:

 

root@jaskier:~# btrfs fi usage -T /mnt/disk13
Overall:
    Device size:                  16.37TiB
    Device allocated:             16.32TiB
    Device unallocated:           51.02GiB
    Device missing:                  0.00B
    Used:                         16.07TiB
    Free (estimated):            305.82GiB      (min: 280.31GiB)
    Free (statfs, df):           305.82GiB
    Data ratio:                       1.00
    Metadata ratio:                   2.00
    Global reserve:              512.00MiB      (used: 0.00B)
    Multiple profiles:                  no

             Data     Metadata System              
Id Path      single   DUP      DUP      Unallocated
-- --------- -------- -------- -------- -----------
 1 /dev/md13 16.28TiB 38.00GiB 12.00MiB    51.02GiB
-- --------- -------- -------- -------- -----------
   Total     16.28TiB 19.00GiB  6.00MiB    51.02GiB
   Used      16.04TiB 18.36GiB  1.72MiB            

 

root@jaskier:~# btrfs fi usage -T /mnt/disk14
Overall:
    Device size:                  16.37TiB
    Device allocated:             16.37TiB
    Device unallocated:            1.01MiB
    Device missing:                  0.00B
    Used:                         16.17TiB
    Free (estimated):            204.13GiB      (min: 204.13GiB)
    Free (statfs, df):           204.13GiB
    Data ratio:                       1.00
    Metadata ratio:                   2.00
    Global reserve:              512.00MiB      (used: 0.00B)
    Multiple profiles:                  no

             Data     Metadata System              
Id Path      single   DUP      DUP      Unallocated
-- --------- -------- -------- -------- -----------
 1 /dev/md14 16.33TiB 38.00GiB 16.00MiB     1.01MiB
-- --------- -------- -------- -------- -----------
   Total     16.33TiB 19.00GiB  8.00MiB     1.01MiB
   Used      16.13TiB 18.48GiB  1.80MiB    

 

Link to comment
1 hour ago, JorgeB said:

There only needs to be some unallocated space. For both disks, post the output of:

btrfs fi usage -T /mnt/disk#

 

OK, deleted just over 100GB (one file) on each and gave each a balance up to 90; the output looks like this now:

 

root@jaskier:~# btrfs fi usage -T /mnt/disk13
Overall:
    Device size:                  16.37TiB
    Device allocated:             16.17TiB
    Device unallocated:          203.02GiB
    Device missing:                  0.00B
    Used:                         15.97TiB
    Free (estimated):            407.27GiB      (min: 305.77GiB)
    Free (statfs, df):           407.27GiB
    Data ratio:                       1.00
    Metadata ratio:                   2.00
    Global reserve:              512.00MiB      (used: 0.00B)
    Multiple profiles:                  no

             Data     Metadata System              
Id Path      single   DUP      DUP      Unallocated
-- --------- -------- -------- -------- -----------
 1 /dev/md13 16.13TiB 38.00GiB 12.00MiB   203.02GiB
-- --------- -------- -------- -------- -----------
   Total     16.13TiB 19.00GiB  6.00MiB   203.02GiB
   Used      15.94TiB 18.23GiB  1.73MiB 

 

root@jaskier:~# btrfs fi usage -T /mnt/disk14
Overall:
    Device size:                  16.37TiB
    Device allocated:             16.27TiB
    Device unallocated:          103.01GiB
    Device missing:                  0.00B
    Used:                         16.07TiB
    Free (estimated):            305.57GiB      (min: 254.07GiB)
    Free (statfs, df):           305.57GiB
    Data ratio:                       1.00
    Metadata ratio:                   2.00
    Global reserve:              512.00MiB      (used: 0.00B)
    Multiple profiles:                  no

             Data     Metadata System              
Id Path      single   DUP      DUP      Unallocated
-- --------- -------- -------- -------- -----------
 1 /dev/md14 16.23TiB 38.00GiB 16.00MiB   103.01GiB
-- --------- -------- -------- -------- -----------
   Total     16.23TiB 19.00GiB  8.00MiB   103.01GiB
   Used      16.04TiB 18.37GiB  1.80MiB   

 

Link to comment

This is what I do to get around this issue:

 

- create a temp folder on the disk, e.g.:

mkdir /mnt/disk13/x

then cd to it and type:

dd if=/dev/urandom bs=1024 count=1050000 | split -a 6 -b 2k - file.

This will create just over 1GiB in very small files, and because they are so small they will use metadata chunks, not data chunks; they will occupy at least part of 2 metadata chunks. It will take a couple of minutes because of the small file sizes. Once it's done, start writing the normal data, and once you've written a few GBs, just enough to create some new metadata, you can delete the temp folder; the previously used metadata space on those two chunks will then be free and available for your data.
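
Putting that together as a rough sketch, using disk13 from the example above (the temp folder name and the exact point at which you delete it are up to you):

mkdir /mnt/disk13/x
cd /mnt/disk13/x
# ~1GiB of 2KiB files; small enough to be stored in metadata chunks rather than data chunks
dd if=/dev/urandom bs=1024 count=1050000 | split -a 6 -b 2k - file.

# ...now write a few GBs of your normal data so new metadata chunks get created...

# then release the metadata space the small files were holding
cd /
rm -r /mnt/disk13/x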

Link to comment
On 10/13/2022 at 9:49 PM, JorgeB said:

This is what I do to get around this issue:

 

- create a temp folder on the disk, e.g.:

mkdir /mnt/disk13/x

then cd to it and type:

dd if=/dev/urandom bs=1024 count=1050000 | split -a 6 -b 2k - file.

This will create just over 1GiB in very small files, and because they are so small they will use metadata chunks, not data chunks; they will occupy at least part of 2 metadata chunks. It will take a couple of minutes because of the small file sizes. Once it's done, start writing the normal data, and once you've written a few GBs, just enough to create some new metadata, you can delete the temp folder; the previously used metadata space on those two chunks will then be free and available for your data.

I've been through the full process on one of the disks, followed by balancing, and was able to fill that disk all the way up, with about 150MB to spare!
Thanks @JorgeB :)

Link to comment
