koopatroopa8 Posted January 12

BLUF: My pool (named cache) is reporting 702 GB (653 GiB) used, but the total size of all files in /mnt/cache is only 472 GB (440 GiB). I want to know why 200+ GB are in use when I don't have that many files.

I have a problem I've found some references to before, but nothing I have tried so far has resolved it. btrfs reports 653 GiB used for /mnt/cache, which matches the Unraid GUI's 702 GB once you convert GiB to GB. However, when I compute the size of /mnt/cache in either Krusader or qdirstat, I get 440 GiB. That matches the sizes of my appdata and system shares (the only folders in /mnt/cache), which the GUI reports as 451 GB and 22.5 GB respectively: 440 GiB = 472 GB, so those numbers line up. As far as I can tell, there is no other location adding to /mnt/cache.

What I want to know is why over 200 GiB of "used" space on my cache drive cannot be found or accessed. I have tried balancing and scrubbing the btrfs filesystem, and neither improved the usage numbers. Worst case, I can move everything in appdata to the array, rebuild the pool, and move it back, but I'd like to know the cause so I can either prevent it from happening again or fix it outright without moving everything.

I have attached diagnostics and a few screenshots showing what I am seeing. Any help or additional areas/things to check would be appreciated.

btrfs usage.txt
koopatower-diagnostics-20240111-2236.zip
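For reference, the numbers above came from comparisons along these lines (roughly; Krusader/qdirstat are effectively doing the same sum as du):

    btrfs filesystem usage /mnt/cache    # filesystem-level usage, matches the GUI's 653 GiB / 702 GB
    btrfs filesystem du -s /mnt/cache    # per-file total as btrfs sees it
    du -sh /mnt/cache/*                  # appdata + system, ~440 GiB combined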
JorgeB Posted January 12

Any vdisks on the pool? They can grow if not trimmed; defrag can also help if you don't use snapshots.
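If there are vdisks, something along these lines covers the trim and defrag side (adjust the path; just a sketch):

    fstrim -v /mnt/cache                          # discard unused blocks, if the pool devices support trim
    btrfs filesystem defragment -r -v /mnt/cache  # recursive defrag; skip this if you use snapshots, since it breaks reflinks and can increase usage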
koopatroopa8 Posted January 12

No vdisks that I'm aware of; I haven't ever made any VMs on this system, only 7 fairly standard docker containers. The only "non-traditional" thing I think I have done is load ~400 GB of data into my mariadb database.
JorgeB Posted January 12

The docker vdisk can also grow over time. You can recreate it to see if that's the cause; if not, you can defragment the filesystem or move everything to the array to confirm what is using the extra space.
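To see whether the docker vdisk itself is the culprit, comparing its apparent size with the space it actually occupies is a quick check (the path below is the usual Unraid default and may differ on your setup):

    du -h --apparent-size /mnt/cache/system/docker/docker.img   # size the file claims to be
    du -h /mnt/cache/system/docker/docker.img                   # space actually allocated on disk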
koopatroopa8 Posted January 12

So I moved everything to the array, and it did look like significantly less data there. I then moved it back, and the pool is now only using 455 GB. I don't understand why that would fix it, but apparently that's all that was needed.
JorgeB Posted January 12

Some files grow with time on btrfs (and zfs) because of COW.
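For heavy database writes (like the mariadb data mentioned above), one common mitigation is disabling COW for the database directory. A sketch with assumed paths; note that +C only affects files created after the flag is set:

    filefrag /mnt/cache/appdata/mariadb/*.ibd   # very high extent counts point to COW fragmentation
    chattr +C /mnt/cache/appdata/mariadb        # NOCOW for new files in this directory; existing files keep COW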