volcs0 Posted February 26: I have about 400 GB free on my 1 TB cache drive (NVMe). Today I was running a Python script to copy video files from my photo library to my photos share on Unraid. The total amount to be copied will be about 1 TB. About halfway through, the script crashed because it ran out of space: the cache drive was full. I had to invoke the Mover and wait until it cleared some files out before I could start the script again. Is this the expected behavior? Do I just need to restrict downloads and copy functions to the available space on the cache drive? Thanks for the help. Diagnostics attached. tower-diagnostics-20240225-1944.zip
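(For anyone hitting the same crash: a script-side guard is possible too. Below is a minimal sketch, not the OP's actual script; the function name, the 10 GiB margin, and the paths in the usage note are all assumptions. It checks the destination's free space with `shutil.disk_usage` before each copy and stops cleanly instead of filling the cache.)

```python
import shutil
from pathlib import Path

# Hypothetical headroom margin, mirroring a 10 GB Minimum Free Space setting.
MIN_FREE = 10 * 1024**3


def copy_with_headroom(files, dest_dir, min_free=MIN_FREE):
    """Copy each file into dest_dir, but refuse to start a copy that would
    push the destination filesystem below min_free bytes of free space."""
    dest = Path(dest_dir)
    copied = []
    for src in map(Path, files):
        free = shutil.disk_usage(dest).free
        # Leave room for this file plus the safety margin.
        if free < src.stat().st_size + min_free:
            raise OSError(
                f"stopping: only {free} bytes free on {dest}, "
                f"need {src.stat().st_size + min_free}"
            )
        shutil.copy2(src, dest / src.name)
        copied.append(src.name)
    return copied
```

Usage would look like `copy_with_headroom(video_paths, "/mnt/user/photos")` (path hypothetical). This only protects one script, though; the Minimum Free Space setting discussed below fixes it server-side for all writers.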
itimpi Posted February 26 (Solution): You need to set the Minimum Free Space value for the cache pool, as described in the online documentation accessible via the 'Manual' link at the bottom of the GUI or the DOCS link at the top of each forum page. The free space dropping below this value is what tells Unraid to stop writing to the cache and start writing directly to the array instead.
volcs0 Posted February 26 (Author), quoting itimpi: "You need to set the Minimum Free Space value for the cache pool as mentioned in the online documentation accessible via the 'Manual' link at the bottom of the GUI or the DOCS link at the top of each forum page." Thanks. Just did this; I had to stop the array first. I made it 10 GB, which should work pretty well for most things.
itimpi Posted February 26, quoting volcs0: "Just did this - had to stop the array first. I made it 10GB." Bear in mind that the value is only checked when a new file is created, and it does not take into account the size of the file about to be written, so you may want a little more headroom. The normal recommendation is twice the size of the largest file you expect to write during day-to-day operation. BTRFS file systems seem a bit fragile when getting close to full.
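(That rule of thumb is easy to turn into a number. The sketch below is an illustration, not part of Unraid: the function name is made up, and it simply scans a directory tree and reports twice the largest file size found, which you could then round up and enter as Minimum Free Space.)

```python
from pathlib import Path


def suggested_min_free(share_root):
    """Walk share_root and return a suggested Minimum Free Space value:
    twice the size of the largest file found (0 if the tree is empty)."""
    largest = max(
        (f.stat().st_size for f in Path(share_root).rglob("*") if f.is_file()),
        default=0,
    )
    return 2 * largest
```

For example, if the biggest video in the library is 20 GB, this suggests a 40 GB setting, which is comfortably above the 10 GB chosen above only if your files stay small.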
volcs0 Posted February 26 (Author), quoting itimpi: "The normal recommendation is twice the size of the largest file you expect to write during day-to-day operation." OK, terrific. Thanks for the tip.