[SOLVED] BTRFS snapshots deleted but free space not showing properly?



So I've been moving data around and ended up deleting around 1TB of data from a drive. Naturally the snapshots still had the data, so after making sure I didn't need them I cleared all snapshots in hopes of freeing up the space.
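
To be clear, "cleared all snapshots" just means deleting the snapshot subvolumes, along these lines (the snapshot directory here is illustrative, not my exact layout):

# delete every snapshot subvolume on the disk
for snap in /mnt/disk4/snaps/*; do
    btrfs subvolume delete "$snap"
done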

 

Sadly the free space did not change after doing this.

 

In Krusader I checked the used space and sure enough, it shows the correct amount of used space. The free space, however, is still missing around 1TB relative to the used vs. total space.

 

In the past restarting seems to update it, but is there a way to update the used space without restarting the server?

 

I have things like this happen semi-regularly, generally when the server is working on long-running tasks, so I can't restart it.

Link to comment

Data from deleted snapshots (not referenced anywhere else) is deleted immediately, though it can take a few seconds, or even minutes for very large filesystems, to clean up. Note that depending on the btrfs pool config used and the Unraid version, it can show incorrect used space, free space, or both for some btrfs pools, depending on profile and number of devices.
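
If you want to be sure the cleaner has actually finished rather than guessing, something like this should work (a quick sketch, pointed at the disk in question):

btrfs filesystem sync /mnt/disk4   # commit any pending transactions
btrfs subvolume sync /mnt/disk4    # block until deleted subvolumes are fully cleaned up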

Link to comment
3 minutes ago, JorgeB said:

Data from deleted snapshots (not referenced anywhere else) is deleted immediately, though it can take a few seconds, or even minutes for very large filesystems, to clean up. Note that depending on the btrfs pool config used and the Unraid version, it can show incorrect used space, free space, or both for some btrfs pools, depending on profile and number of devices.

In this case it is array drives; I am not currently snapshotting any pools.

 

Yeah, I would have expected a bit of a delay in updating, and the numbers to maybe be off a little, but when I remove snapshots it never seems to register at all.

 

Today I deleted a bit under 1TB of data off one array drive, then I cleared all snapshots. The free and used space didn't change in either Krusader or Unraid.

 

In Krusader I still have a ~1TB gap between the occupied space (found using the "calculate occupied space" option on all folders on the drive) and the used space as reported at the top of the window, or against the total space on the drive.

 

I have noticed this in the past on a much smaller scale and it seemed to correct itself after a reboot, but I don't want to do that this time if it can be avoided.

 

I didn't know if there was a way to force a refresh while the array was online.
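
For reference, these are the read-only checks I'm comparing against the GUI; they just show what the OS and btrfs themselves report for the disk:

df -h /mnt/disk4                  # what the OS reports
btrfs filesystem df /mnt/disk4    # what btrfs reports per allocation type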

Link to comment
7 minutes ago, JorgeB said:

In that case space is correctly reported, and as mentioned, snapshots are immediately deleted and free space reclaimed, so that's not your problem.

In that case, why did deleting 1TB of data not translate into any usable space?

 

What do I trust? Either the free space is not being reported correctly, or the used space is not, or the files are not really deleted? Not sure how that last one would even be possible.

 

Pretty sure that if I reboot the free space will show up correctly, based on past experience. I dealt with this last month as well, but it was only ~100GB then. The server crashed a few days later, forcing a reboot, and when I got it back online the free space was reported correctly.

 

Worst case I can live with this and just reboot to update things; it's just really inconvenient.

Edited by TexasUnraid
Link to comment
btrfs sub list /mnt/disk4
ID 269 gen 3957 top level 5 path Media Server

 

btrfs fi usage -T /mnt/disk4
Overall:
    Device size:                  10.91TiB
    Device allocated:              8.55TiB
    Device unallocated:            2.36TiB
    Device missing:                  0.00B
    Used:                          8.46TiB
    Free (estimated):              2.45TiB      (min: 1.27TiB)
    Data ratio:                       1.00
    Metadata ratio:                   2.00
    Global reserve:              512.00MiB      (used: 0.00B)
    Multiple profiles:                  no

                   Data    Metadata System               
Id Path            single  DUP      DUP       Unallocated
-- --------------- ------- -------- --------- -----------
 1 /dev/mapper/md4 8.54TiB 20.00GiB  16.00MiB     2.36TiB
-- --------------- ------- -------- --------- -----------
   Total           8.54TiB 10.00GiB   8.00MiB     2.36TiB
   Used            8.44TiB  9.11GiB 944.00KiB            

 

Here is the Krusader calculation of the occupied space, which is the number I expect used space to show after deleting the data. This is the only share on this drive.

 

[screenshot: Krusader's "calculate occupied space" result for the drive, ~7.5TB]

 

10.9TB - 2.5TB = 8.4TB is what it claims is used at the top, and this IS what was used before I deleted the data.

 

As we can see from the calculation above, there is only 7.5TB of data currently on the drive, which is what it should be after removing a bit under 1TB of data.
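
As a btrfs-native cross-check on Krusader's math (assuming the share path matches the subvolume from the sub list output above), this sums actual on-disk usage, including shared extents:

btrfs filesystem du -s "/mnt/disk4/Media Server"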

 

I can't explain it, I just expected to see 1TB less used space after removing the files and snapshots.

Edited by TexasUnraid
Link to comment
Just now, JorgeB said:

I don't use encryption; wonder if that can make a difference. See if rebooting really reclaims the space; if yes, my guess is that it's related to LUKS.

Interesting idea, that could explain it.

 

I am trying to see how long I can get it to run without rebooting right now. Last time I started to run into some issues after a few weeks; trying to see if they were a fluke or some kind of deeper issue.

 

I will try to remember to update this thread when I do reboot.

Link to comment
2 hours ago, JorgeB said:

Just used my test server for a quick test, and with an encrypted filesystem the space was also recovered immediately after deleting a snapshot (it took a few seconds like it usually does), so it appears not to be that.

That is very strange then; I am not really doing anything out of the ordinary with the array drives. Just run-of-the-mill encrypted BTRFS drives, all handled by Unraid.

 

I do have a script set up to take snapshots daily, but it uses basic BTRFS commands to do it.
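
Roughly, it boils down to a single snapshot command per day; the destination path here is illustrative, not my exact layout:

# daily read-only snapshot of the share subvolume
btrfs subvolume snapshot -r "/mnt/disk4/Media Server" "/mnt/disk4/snaps/$(date +%Y-%m-%d)"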

 

I have seen this once before for sure, and a few other times where I was not 100% sure but it sure seemed like it (the amount of space was small enough to go mostly unnoticed).

 

On one of the other drives the same thing happened: I deleted a few hundred GB of data and it also still shows the same free space as before I deleted it. So it is not limited to this drive.

 

Luckily I still have some room, so it is not a big deal at the moment, and I think it will correct itself when I reboot, but it is a strange bug.

 

I am on the beta, if that makes any difference; I don't remember if I noticed this issue when I was on 6.8 or not.

Link to comment

I'm also using the beta, and I would think the most likely explanation is that everything is working correctly and there's just some mistake or confusion. Also, Krusader showing the wrong used space is likely a Krusader issue; if btrfs weren't freeing the space, I don't see how Krusader could calculate that. Please try this:


 

btrfs sub create /mnt/disk4/test
fallocate -l 50G /mnt/disk4/test/file

 

Wait a couple of seconds and confirm used space changed by 50G, then:
 

btrfs sub snap -r /mnt/disk4/test /mnt/disk4/test/snap

rm /mnt/disk4/test/file

 

Now delete the snapshot and used space should go down by 50G after 30 seconds or so.

 

btrfs sub del /mnt/disk4/test/snap

 

Afterwards, don't forget to delete the test subvolume:

 

btrfs sub del /mnt/disk4/test
  • Like 1
Link to comment

Ok, ran all of the above commands; usage increased by 50GB as expected in both Krusader and Unraid.

 

Removed the file and the snapshot, and it has been a few minutes now, but the space has not been reclaimed.

 

You can see that used space has increased by ~50GB from the earlier output (I have not done anything of note to the drive since then):

btrfs fi usage -T /mnt/disk4
Overall:
    Device size:                  10.91TiB
    Device allocated:              8.61TiB
    Device unallocated:            2.30TiB
    Device missing:                  0.00B
    Used:                          8.52TiB
    Free (estimated):              2.39TiB      (min: 1.24TiB)
    Data ratio:                       1.00
    Metadata ratio:                   2.00
    Global reserve:              512.00MiB      (used: 0.00B)
    Multiple profiles:                  no

                   Data    Metadata System               
Id Path            single  DUP      DUP       Unallocated
-- --------------- ------- -------- --------- -----------
 1 /dev/mapper/md4 8.59TiB 20.00GiB  16.00MiB     2.30TiB
-- --------------- ------- -------- --------- -----------
   Total           8.59TiB 10.00GiB   8.00MiB     2.30TiB
   Used            8.50TiB  9.11GiB 944.00KiB     

Nothing in the system or drive logs either.
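
One more check while it's stuck: as far as I know, -d lists subvolumes that were deleted but not yet cleaned, so if the cleaner is wedged the deleted snapshot should still show up here:

btrfs subvolume list -d /mnt/disk4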

Edited by TexasUnraid
Link to comment

Very bizarre development just happened.

 

Just got an email out of the blue that disk 4 returned to normal usage?

 

Checked it and sure enough, everything matches up perfectly now and I regained the space from the snapshots.

 

I just checked the log and nothing is there at the time this happened. About an hour or two before this I disabled the File Activity plugin, which is the only thing I can think of that might have played a role in this.

 

No idea what to make of this but hey, the space is back!

 

Come to think of it, in the past when I thought this might have happened before, I was not sure, as the space seemed to reappear randomly (although after a matter of hours, not 5 days).

Link to comment

Highly unlikely, except possibly the File Activity plugin?

 

No programs I use are even aware of the snapshots' existence. The only way I can access them is manually via Krusader/Unraid, since I can't get the Windows restore files option to work with them over SMB.

 

Although even then, I can't think of anything that would even know they exist, much less use them, and all the other computers in the house are turned off every night.
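
If something were still pinning the filesystem, I'd expect it to show up with something like this (not certain it catches kernel-side watchers like the plugin, though):

fuser -vm /mnt/disk4    # list processes with open files on the mount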

Link to comment

Did some more testing and googling and have no idea what this could be; I can't find anyone else complaining of something similar. I've been using snapshots together with send/receive to back up most of my Unraid servers for years now. Most times during the backup I delete older snapshots from every data disk, and the space recovery always starts after a few seconds; in fact, sometimes it even causes the server to perform slowly while it's deleting snapshots from multiple disks simultaneously, but it has never taken more than a few minutes to recover all the space.

Link to comment
1 hour ago, JorgeB said:

It's possible; you'd need to test without it running.

Yeah, I actually just did that after posting, and I think it was indeed the issue. Deleted the snapshots and the space was recovered within a minute or so (and while I'm not sure of the exact amount of space that was supposed to be recovered, it looks right).

 

So looks like that mystery is solved!

  • Like 1
Link to comment
  • JorgeB changed the title to [SOLVED] BTRFS snapshots deleted but free space not showing properly?
