Files not automatically written to array when cache full



  • 2 months later...
7 minutes ago, Jaybau said:

When my cache is full, it doesn't switch to writing to the array, even with minimum free space set.

How are you writing to the cache?  If it is from a Docker container, are you sure that you have the mapping set to a User Share rather than directly to the pool (thus bypassing the User Share settings)?
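
For example, with a share called "downloads" (the name here is just an illustration), the host side of the container's path mapping is what decides this:

    /mnt/user/downloads   -> goes through the User Share, so Minimum Free Space and the other share settings apply
    /mnt/cache/downloads  -> writes directly to the cache pool and bypasses the User Share settings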

 

BTW: you are likely to get better-informed feedback if you attach your system's diagnostics zip file to your next post in this thread.

Link to comment
3 minutes ago, trurl said:

Do you mean the minimum free for the share you are writing to, or do you mean the minimum free set for cache? You need to pay attention to both.

 

Thanks for pointing out that there are two separate minimum free space settings.

 

In my scenario, I have the minimum free space set to the same value for both the cache and the user share.

Link to comment

Here is the recreated scenario:

 

[Four screenshots attached showing the share and cache Minimum Free Space settings, the cache drive usage, and the failing copy.]

 

We expect the files being copied to overflow to the array, which has enough free space.

 

Other notes:

1) The cache drive says 86.0 KB free, which is below the minimum limit of 100 MB.

2) I'm surprised the guest OS doesn't recognize that the file being copied is larger than the available space and prevent the transfer.  I'm not sure if/how Unraid reports available storage to a guest OS.

3) The file copy keeps repeating over and over, automatically retrying and producing constant reads/writes.

 

tower-diagnostics-20220812-0924.zip

Edited by Jaybau
Attached additional screenshots.
Link to comment

OK, looking at what you posted: you set your share's minimum free space limit to 100 MB. Did you set the cache's Minimum Free Space to a value larger than 0? It needs to be higher as well.

 

Share limits are designed to make sure that drives running low on space send new files to other drives in the array.

I don't know why your SSD wouldn't follow those rules too, since it's kind of part of the share but not part of the protected array.

You need to go into your SSD's settings by clicking on it and look at its Minimum Free Space. I bet it's zero. You need to stop the array and raise that value too.
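
To recap, there are two separate floors to check (exact menu labels may differ slightly between Unraid versions):

    Shares -> (your share) -> Minimum free space: controls when the share skips a nearly-full data disk and picks another.
    Main -> (your cache pool/SSD) -> Minimum free space: controls when new writes skip the pool and go straight to the array.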

Link to comment

The above is likely your issue. Say you set the minimum free space to 100 MB, there is 200 MB free, and your client sends a 300 MB file: there is more than the minimum free at the start of the transfer, so the file gets put on the cache, but it's too big, so the transfer fails part way. But now that partial file is there, so every subsequent retry will still try to update it in place, and that will fail again since it's still too big.

 

A sensible Minimum Free Space is more like a few tens of GB than 100 MB. It should be higher than the biggest file you expect to be sending.
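
To illustrate why, here is a minimal sketch in Python (a simplified model of the behaviour described above, not Unraid's actual code; the function name and numbers are just for illustration):

    # The target is chosen when the file is created, before its final
    # size is known, so only free space vs. the floor is compared.
    MB = 1024 * 1024
    GB = 1024 * MB

    def choose_target(pool_free, pool_min_free):
        """Return where a new file starts being written."""
        return "cache" if pool_free > pool_min_free else "array"

    # Floor of 100 MB: 200 MB free is above it, so a 300 MB file
    # starts on the cache and fails once the pool fills up.
    print(choose_target(200 * MB, 100 * MB))   # cache

    # Floor of 20 GB: 200 MB free is below it, so the file goes
    # straight to the array and completes.
    print(choose_target(200 * MB, 20 * GB))    # array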

Edited by Kilrah
Link to comment
  • 6 months later...
3 hours ago, saue0 said:

Should the default not be 0, since it gives an error?

Nobody can agree on a useful default. It depends on the type of files written to the user share or pool.

 

For each user share, you must set Minimum Free to larger than the largest file you will write to the share. For each pool, you must set Minimum Free to larger than the largest file you will write to the pool.
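
For example (the numbers are just an illustration): if the largest files you copy are roughly 40 GB, set Minimum Free to something like 50 GB on both the share and the pool, so a new file is never started on a device that cannot hold it.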

 

There is a plugin to help with this: Dynamix Share Floor.

Link to comment
