Good evening everybody,
I'm a relatively new Unraid user (less than two months in), and I'm still learning the ins and outs of the system. So far I'm loving it; it's a definite upgrade from running some Docker containers haphazardly on Ubuntu.
I'm having an issue that's interesting (to me) and that I've been unable to resolve. I've searched these forums as best I can, and I've also Googled a bit, but I can't find a solution. I may have missed it being posted here somewhere, so my apologies if I've overlooked it.
Last night, I removed one of my two cache drives (before the removal, the pool had 2x 250GB SSDs). I found a walkthrough (found here) and followed it step by step. One of those two 250GB drives remains; it was the first drive I ever added to the machine, and I ran with just it for a while before putting a second one in. I added the second because I wanted more cache space, not understanding that's not how it works, and just decided to leave well enough alone. That second drive, along with another caching drive that I used [edit, 9:22pm 8/6: in a separate cache pool for transcoding], was removed to allow for two more drives in the array (I need a new PSU; I'm short on SATA power cables).
The issue I'm currently having is that my remaining cache drive shows about 70-75GB of usage after the mover has run and nothing else is being written to the cache. Before the second cache drive was removed, it sat somewhere between 20-25GB of usage (from recollection). I have a script that checks once an hour and invokes the mover if the cache is 75% or more full; prior to yesterday/today, this script worked as advertised without issue. Manually invoking the mover does not get it below this 70-75GB point.
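I don't have the exact script in front of me, but it does roughly the following (the 75% threshold and the `/mnt/cache` path are from memory, and `mover start` is my understanding of how Unraid's mover gets invoked):

```shell
#!/bin/bash
# Rough sketch (from memory) of my hourly cache-check script.
# /mnt/cache is the standard Unraid cache mount; 75 is my configured threshold.
THRESHOLD=75
MOUNT=/mnt/cache

# df prints something like " 73%" for the mount; strip spaces and the % sign.
USED=$(df --output=pcent "$MOUNT" 2>/dev/null | tail -n 1 | tr -d ' %')

if [ -n "$USED" ] && [ "$USED" -ge "$THRESHOLD" ]; then
  /usr/local/sbin/mover start   # kick off Unraid's mover
fi
```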
My cache pool uses btrfs, and after I removed the drive and restarted the array, it began "doing its thing". However, the auto-mover script ran once during that process; I completely forgot to turn it off before the drive removal. I don't know if that caused an issue during the removal, but I didn't know how to stop the mover once it was running and felt it best to just leave everything alone to finish so I hopefully wouldn't screw anything up [further]. The removal process took around 2 hours to finish; I don't know if that's normal, or if the mover running delayed its completion.
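If the two-hour "doing its thing" was btrfs relocating data off the removed drive, then I assume something like the following would have shown progress at the time, and would now confirm the pool's state (this is my guess from reading around; I haven't verified the output myself):

```shell
# Guarded so it only runs if the Unraid cache mount actually exists.
if [ -d /mnt/cache ]; then
  btrfs balance status /mnt/cache   # reports any balance/relocation in progress
  btrfs device usage /mnt/cache     # should now list only the one remaining SSD
fi
```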
It may just be my inexperience with Unraid, but I don't know how to determine where all this extra usage is coming from. I looked in unBALANCE to see if any folders look incorrect, and nothing sticks out. I also tried looking through things in Krusader, and nothing stands out to me there either (though I might not know what to look for). I'm not knowledgeable enough to know what commands to run in the CLI, so maybe there's something there I can check?
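If it would help, these are my best guesses (pieced together from other threads) at commands I could run from the terminal; if someone can confirm they're the right ones, I'll post the output:

```shell
# Guarded so it only runs if the Unraid cache mount actually exists.
MOUNT=/mnt/cache
if [ -d "$MOUNT" ]; then
  df -h "$MOUNT"                    # overall usage as the OS sees it
  du -sh "$MOUNT"/* 2>/dev/null     # size of each top-level share on the cache
  btrfs filesystem usage "$MOUNT"   # btrfs's own allocation accounting
fi
```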
I've attached the diagnostics report to this post; how to dig into it and decipher anything is beyond me. You may notice an unused 10TB disk in the report; that's intended to be my parity drive in the immediate future. I just got the two 10TB drives hooked up to the machine yesterday.
I've included all the relevant details I can think of at the moment.
Any assistance that any of you can provide would be infinitely appreciated. If I've missed including anything that would shed light on my issue, please let me know and I will promptly provide it to you if I can.
Thank you in advance for your time.
-Chris/Sumi
unlazarusraid-diagnostics-20210806-2052.zip