Cache disk almost full after update to 6.9.2



So this has probably been answered many times but I cannot find anything that would make sense to my issue.

 

Last week I upgraded from 6.8.(don't remember) to 6.9.2. Right after the update, my single 500 GB cache drive was almost full (64 GB available).

I didn't modify any settings on the shares regarding cache use.

 

Does 6.9.X modify anything that could cause this?

 

If anyone has any idea what is happening here, I would really appreciate it. Happy to share any log/config required.

 

Thanks community!

1 minute ago, JorgeB said:

Nothing should change with a single-device cache. It will change if you're using multiple devices of different capacities, since that was reported wrongly before. You should post the diags: Tools -> Diagnostics

 

 

Which files from the diags would help? All of them?

3 minutes ago, Parigot said:

So my media share is set to No for 'Use Cache Pool', but I still saw 151 GB of files on the cache. Deleted them and now it's back to normal.

 

Thanks for the help!

You might want to work out why the files ended up there, to stop it happening again. It can happen if something is run that bypasses the UnRaid User Share system.
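As a starting point for working out what ended up on the cache, you can look at the cache drive directly. This is a sketch that assumes the standard Unraid layout, where the cache pool is mounted at /mnt/cache and each top-level folder corresponds to a user share; adjust the path if your setup differs.

```shell
# List how much space each share's folder occupies on the cache pool,
# largest first. The /mnt/cache mount point is the Unraid default;
# override CACHE if your pool is mounted elsewhere.
CACHE=${CACHE:-/mnt/cache}
du -sh "$CACHE"/* 2>/dev/null | sort -rh
```

Any share folder that shows up here with a 'Use Cache' setting of 'No' is a candidate for the kind of bypass described above.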

 

You mentioned you deleted the files, which removes them from the ‘media’ share, as even with a ‘Use Cache’ setting of ‘No’ UnRaid would treat them as part of the User Share for read purposes. If you wanted to keep the files instead, you could have changed the ‘Use Cache’ setting to ‘Yes’ and then run mover, which would have transferred them onto the array.

Just now, itimpi said:

You might want to work out why the files ended up there, to stop it happening again. It can happen if something is run that bypasses the UnRaid User Share system.

 

You mentioned you deleted the files, which removes them from the ‘media’ share, as even with a ‘Use Cache’ setting of ‘No’ UnRaid would treat them as part of the User Share for read purposes. If you wanted to keep the files instead, you could have changed the ‘Use Cache’ setting to ‘Yes’ and then run mover, which would have transferred them onto the array.

Yeah, I realized this after I deleted them. There weren't that many anyway. But I'll keep your suggestion in mind for next time.

 

To your first question, I wonder if this happens when sub-folders of a share are created through the CLI or using a UI like Krusader.

8 minutes ago, Parigot said:

Yeah, I realized this after I deleted them. There weren't that many anyway. But I'll keep your suggestion in mind for next time.

 

To your first question, I wonder if this happens when sub-folders of a share are created through the CLI or using a UI like Krusader.


The easiest way for this to happen is to use the ‘mv’ command from the CLI (or an equivalent from Krusader), because the Linux system underlying Unraid does not understand User Shares. Linux tries to optimise the move operation if it thinks both source and target are under the same mount point, by first trying a rename and only doing a copy/delete operation if that fails. This means the rename can succeed, leaving the files on the same drive but under a different folder. Using an explicit copy/delete avoids this.
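The rename-versus-copy distinction above can be demonstrated with a throwaway example: within one filesystem, `mv` is a rename, so the file keeps its inode (and therefore stays on the same underlying disk), whereas an explicit copy creates a brand-new file. The paths below are temporary directories for illustration, not Unraid share paths.

```shell
# Set up a throwaway source and destination on the same filesystem.
demo=$(mktemp -d)
mkdir -p "$demo/src" "$demo/dst"
echo hello > "$demo/src/file"
before=$(stat -c %i "$demo/src/file")

# mv within one filesystem is a rename: the inode does not change,
# which is why the data never moves to a different disk.
mv "$demo/src/file" "$demo/dst/file"
after_mv=$(stat -c %i "$demo/dst/file")

# An explicit copy followed by a delete creates a new file (new inode),
# which is the behaviour you want when relocating data between disks.
cp "$demo/dst/file" "$demo/src/file"
rm "$demo/dst/file"
after_cp=$(stat -c %i "$demo/src/file")

[ "$before" = "$after_mv" ] && echo "mv kept the same inode"
[ "$before" != "$after_cp" ] && echo "copy/delete created a new file"
```

On Unraid the same rename happens under the `/mnt/user` FUSE mount, which is why `mv` can leave files sitting on the cache.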

 

It might be easier to simply have the ‘media’ share set to use the cache, with the Minimum Free Space setting on the cache high enough that you never completely fill it.
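For anyone curious what such a check amounts to, here is a rough sketch. The /mnt/cache path and the 20 GiB threshold are illustrative assumptions, not Unraid's own implementation of the ‘Minimum Free Space’ setting.

```shell
# Compare free space on the cache mount against a chosen threshold.
# MOUNT and MIN_FREE are assumptions for this sketch.
MOUNT=${MOUNT:-/mnt/cache}
MIN_FREE=$((20 * 1024 * 1024 * 1024))   # 20 GiB in bytes, illustrative

avail=$(df -B1 --output=avail "$MOUNT" 2>/dev/null | tail -n 1)
if [ -n "$avail" ] && [ "$avail" -lt "$MIN_FREE" ]; then
  echo "cache is below the minimum free space threshold"
fi
```

Unraid applies its own setting automatically when shares write to the cache; the point is simply to set the threshold larger than your biggest expected single write.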

