UncleStu Posted November 23, 2022

For the last 6+ months, my NVMe cache drive has been slowly filling up. For the last 4 weeks or so it has been pretty flat at ~58%. I built a new VM to replace an existing one, so I expected the cache drive to fill up somewhat; the existing VM was 500 GB and the new VM is 300 GB. What was odd is that after I deleted the 500 GB VM, the used space on the NVMe cache was 67%. I wasn't expecting a ~10% increase when ultimately swapping in a smaller VM.

I decided to move all my VMs onto the array in hopes that something was stuck or (no pun intended) cached in there. With no domains on my NVMe cache, it is showing 33% in use. When I check the disk usage (du) for /mnt/nvme_cache, I see only 142G, compared to 305G used according to df; Compute space shows 151G in use. Where is this extra ~160G sitting? I've tried balancing, and I scrubbed a few days before I built the new VM, but I don't know where to look next.

unraid-diagnostics-20221123-1337.zip
itimpi Posted November 23, 2022

When you create vdisks for a VM, they are created as Linux 'sparse' files, meaning only sectors that have actually been written to occupy physical space. In other words, the physical space occupied can be less than the logical size of the vdisk file. However, over time as the VM runs, you can expect the physical space to grow: as more parts of the vdisk file are written to, the allocated physical space increases.
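The sparse-file behaviour is easy to demonstrate on any Linux box. A minimal sketch (the file path and sizes here are just an example, not anything from this server):

```shell
# Create a 1 GiB sparse file: the logical size is set,
# but no blocks are allocated yet
truncate -s 1G /tmp/sparse-demo.img

# Logical (apparent) size, as reported by ls/stat
stat -c %s /tmp/sparse-demo.img   # 1073741824

# Blocks actually allocated on disk -- essentially zero so far
du -k /tmp/sparse-demo.img

# Writing into the file allocates real space only for the written region;
# conv=notrunc keeps the 1 GiB logical size intact
dd if=/dev/zero of=/tmp/sparse-demo.img bs=1M count=10 conv=notrunc
du -k /tmp/sparse-demo.img        # now roughly 10 MB allocated
```

A VM's vdisk behaves the same way: its on-disk footprint creeps toward the full logical size as the guest writes to previously untouched regions.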
UncleStu Posted November 23, 2022

26 minutes ago, itimpi said:
"physical space occupied can be less than the logical size of the vdisk file"

Similar to thin provisioning? So basically, not all of the space is allocated at once. That makes sense. However, I moved all my VMs to the array, and all that was left on my NVMe cache was appdata and system files. And there is still that 160 GB delta between df, du, and the calculated space.
JorgeB Posted November 24, 2022

du is not reliable with btrfs; the GUI/df will show the correct used space.
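To see where the space on a btrfs pool has actually gone, the filesystem's own tools are more trustworthy than du. A few diagnostic commands (mount point taken from this thread; adjust as needed, and note the qgroup commands only work once quotas are enabled):

```shell
# Overall picture: data/metadata/system chunk allocation vs. what is
# actually used inside those chunks, plus unallocated device space
btrfs filesystem usage /mnt/nvme_cache

# Older, simpler per-chunk-type summary
btrfs filesystem df /mnt/nvme_cache

# Space referenced per subvolume (requires quotas enabled first)
# btrfs quota enable /mnt/nvme_cache
# btrfs qgroup show /mnt/nvme_cache
```

A large gap between "allocated" and "used" in the first command's output points at chunk-level overhead rather than at files, which du can never see.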
UncleStu Posted November 24, 2022

On 11/24/2022 at 12:22 AM, JorgeB said:
"GUI/df will show correct used space."

Thanks. There still seems to be a discrepancy in the free/used space: Calculate Usage shows 89 GB used where the Main page shows 263 GB. I think I just need to move my appdata and system data off this pool, blow away the pool, and recreate it, unless there is another way to reclaim this space that I'm not aware of.
JorgeB Posted November 24, 2022

Any image-type file (vdisks, the docker image, etc.) can grow with time. It looks like you've already moved the vdisks, so try recreating the docker image if it's there. There are also reports that defragmenting the filesystem helps, as long as you are not taking snapshots.
UncleStu Posted November 24, 2022

Didn't even know defragging within unRAID was a thing. Am I correct that I need to run "btrfs filesystem defragment /mnt/nvme_cache"? I am not doing any snapshots and no longer have vdisks on my NVMe; you are correct that I moved them all. Do I need to stop my dockers and the docker service before running the defrag, though?

EDIT: "btrfs filesystem defragment -r /mnt/nvme_cache" made no difference. My docker image (which I have not deleted yet) is also only 40 GB, and my libvirt file (which I did delete) was only 1 GB.
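For what it's worth, a bare recursive defragment uses a small default target extent size, so many larger extents are left untouched. A hedged sketch with the -t option from btrfs-filesystem(8); whether it reclaims anything depends entirely on the workload:

```shell
# Recursive, verbose defragment with a larger target extent size.
# Caution: on a filesystem with snapshots or reflinked copies, defrag
# can *increase* usage by un-sharing extents -- no snapshots exist in
# this thread, but it is worth stating for other readers.
btrfs filesystem defragment -r -v -t 256M /mnt/nvme_cache
```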
JorgeB Posted November 25, 2022

I would recommend deleting and recreating the docker image; it is very easy to do and can make a difference.
UncleStu Posted November 25, 2022

Done, but unfortunately not much changed. I did change my advanced docker settings to enable log rotation, but I had already manually truncated my logs. Calculated space still shows 89 GB used where the pool shows 263 GB, so there is still an unknown 174 GB being used somewhere.
JorgeB Posted November 26, 2022

That would point to appdata being the problem; you can move it to the array share by share to see what it is.
UncleStu Posted November 30, 2022 (Solution)

I moved my appdata and system data off to the array, and my NVMe drive still had 155 GB reported in use. Calculate space showed ~800 KB, which was some lingering appdata from my Plex server: subtitles, metadata, etc. I ultimately stopped the array, removed the NVMe drives from the pool, restarted the array, stopped it again, added the drives back to the pool, and restarted the array once more. 3.5 MB was now reported in use, as expected with nothing on it. It is still a mystery what was taking up the ~155 GB; the usage changed over time, as you can see through this thread.
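For future readers hitting the same "df says full, the files say empty" symptom: before rebuilding the pool, it is often worth trying a filtered balance, which rewrites and frees data chunks that are allocated but mostly empty. A sketch, assuming the same mount point as this thread:

```shell
# Compare chunk allocation against actual use first
btrfs filesystem usage /mnt/nvme_cache

# Rewrite data chunks that are <= 75% full, returning the reclaimed
# chunks to unallocated space; start lower (e.g. -dusage=10) and work
# upward on a slow or nearly full pool
btrfs balance start -dusage=75 /mnt/nvme_cache

# Check progress from another shell
btrfs balance status /mnt/nvme_cache
```

This is gentler than a full balance and usually much faster, since chunks above the usage threshold are skipped.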