How do I keep cache from filling up during intensive copy operations?


Solved by itimpi

I have about 400 GB free on my 1 TB cache drive (NVMe).

Today, I was running a Python script to copy video files from my photo library to my photos share on Unraid. The total amount to be copied will be about 1 TB. About halfway through, the script crashed because it said it was out of space. The cache drive was full. I had to invoke the mover and wait until it cleared some out before I could start the script again.

Is this the expected behavior? Do I just need to restrict downloads and copy functions to the available space on the cache drive?

Thanks for the help. Diagnostics attached.

tower-diagnostics-20240225-1944.zip

  • Solution

You need to set the Minimum Free Space value for the cache pool as mentioned in the online documentation accessible via the ‘Manual’ link at the bottom of the GUI or the DOCS link at the top of each forum page.  The free space dropping below this value is what tells Unraid to stop writing to the cache and start writing directly to the array instead.

10 hours ago, itimpi said:

You need to set the Minimum Free Space value for the cache pool as mentioned in the online documentation accessible via the ‘Manual’ link at the bottom of the GUI or the DOCS link at the top of each forum page.  The free space dropping below this value is what tells Unraid to stop writing to the cache and start writing directly to the array instead.

Thanks. Just did this - had to stop the array first. I made it 10 GB, which should work pretty well for most things.

3 hours ago, volcs0 said:

Thanks. Just did this - had to stop the array first. I made it 10GB, which should work pretty well for most things.

Bear in mind that the value is only checked when a new file is created and does not take into account the size of the file about to be written, so you may want a little more headroom. The normal recommendation is twice the size of the largest file you expect to write during day-to-day operation. BTRFS file systems seem a bit fragile when getting close to full.
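Since a copy script can't rely on Unraid checking the file's size up front, you can also guard on the script side. Here is a minimal sketch (the helper name `copy_with_headroom` and the 10 GB default are hypothetical, not from Unraid) that uses Python's standard `shutil.disk_usage` to skip a copy that would leave less than a chosen amount of free space on the destination:

```python
import os
import shutil

def copy_with_headroom(src, dst, min_free_bytes=10 * 1024**3):
    """Copy src to dst only if the destination filesystem would still
    have at least min_free_bytes free after the copy (hypothetical helper)."""
    size = os.path.getsize(src)
    # free space on the filesystem that holds the destination directory
    free = shutil.disk_usage(os.path.dirname(dst) or ".").free
    if free - size < min_free_bytes:
        raise OSError(
            f"Skipping {src}: copying {size} bytes would leave "
            f"less than {min_free_bytes} bytes free on the destination"
        )
    shutil.copy2(src, dst)  # preserves timestamps like cp -p
```

A script could catch the `OSError`, invoke the mover, and retry, instead of crashing halfway through a 1 TB batch.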

34 minutes ago, itimpi said:

Bear in mind that the value is only checked when a new file is created and does not take into account the size of the file about to be written so you may want a little more headroom.  The normal recommendation is twice the size of the largest file you expect to write during day-to-day operation.  BTRFS file systems seem a bit fragile when getting close to full.

OK - Terrific. Thanks for the tip.

