BreakfastPurrito Posted May 31, 2022
On all my shares I have "Use cache pool" set to Yes. According to the documentation, that should mean that when the cache is full, data gets written directly to the array. However, when I transfer more than the cache can hold via SMB, I just get a disk-space error and the transfer stops. What is going wrong?
JorgeB Posted May 31, 2022
The share needs an appropriately set minimum free space; we usually recommend setting it to twice the maximum file size you expect to copy to that share.
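The advice above follows from how the overflow decision works: Unraid cannot know a file's final size when a write starts, so it only compares the target's current free space against the "Minimum free space" setting. A minimal sketch of that decision (an illustration with made-up numbers, not actual Unraid code):

```python
# Simplified model of the cache-overflow decision under "Use cache pool: Yes".
# This is an illustrative sketch, NOT Unraid source code; sizes are in MB.

def choose_target(cache_free: int, min_free: int) -> str:
    """Pick where a new file lands when a write begins."""
    if cache_free > min_free:
        return "cache"   # looks fine now, even if the incoming file won't fit
    return "array"       # below the floor, so overflow to the array

# 100 MB floor, 200 MB free: a 300 MB file is still sent to cache and fails
assert choose_target(cache_free=200, min_free=100) == "cache"

# With the floor raised above the largest expected file (say 500 MB),
# the same transfer overflows safely to the array instead
assert choose_target(cache_free=200, min_free=500) == "array"
```

This is why the recommended floor is tied to the largest file you expect to copy, not to some fixed small value.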
BreakfastPurrito Posted May 31, 2022
Thanks for the tip. I'll give it a try and see how it goes.
Jaybau Posted August 12, 2022
When my cache is full, it doesn't switch to writing to the array, even with minimum free space set.
itimpi Posted August 12, 2022
7 minutes ago, Jaybau said:
When my cache is full, it doesn't switch to writing to the array, even with minimum free space set.
How are you writing to the cache? If it is from a Docker container, are you sure the mapping is set to a User Share rather than going directly to the pool (thus bypassing the User Share settings)? BTW: you are likely to get better-informed feedback if you attach your system's diagnostics zip file to your next post in this thread.
trurl Posted August 12, 2022
11 minutes ago, Jaybau said:
When my cache is full, it doesn't switch to writing to the array, even with minimum free space set.
Do you mean the minimum free for the share you are writing to, or the minimum free set for the cache? You need to pay attention to both.
Jaybau Posted August 12, 2022
3 minutes ago, trurl said:
Do you mean the minimum free for the share you are writing to, or the minimum free set for the cache? You need to pay attention to both.
Thanks for pointing out that there are two separate minimum-free settings. In my scenario, I have the minimum set on both the cache and the user share, to the same value.
Jaybau Posted August 12, 2022 Share Posted August 12, 2022 (edited) Here is the recreated scenario: We are expecting the files being copied to use the array, which has enough storage. Other notes: 1) The cache drive says 86.0 KB free, which is below the minimum limit of 100MB. 2) I'm surprised the guest OS doesn't recognize the file being copied is larger than available space, and prevent the transfer. I'm not sure if/how Unraid reports available storage to a guest OS. 3) The file copy action will keep repeating over and over, automatically retrying, and constant read/writes. tower-diagnostics-20220812-0924.zip Edited August 12, 2022 by Jaybau Attached additional screenshots. Quote Link to comment
kizer Posted August 12, 2022
OK, looking at what you posted: you set your share limit to 100 MB. Did you set your cache Minimum Free Space to a value larger than 0? It needs to be higher as well. Share limits are designed to make sure drives that are low send files to other drives in the array. I don't know why your SSD wouldn't follow those rules too, since it's kind of part of the share, but not part of the protected array. You need to go into your SSD's settings by clicking on it and look at the Minimum Free Space. I bet it's zero. You need to stop the array and raise its value too.
Jaybau Posted August 12, 2022
Quote: Did you set your Cache Minimum space to a larger value than 0?
Yes, 100 MB.
itimpi Posted August 12, 2022
Just a thought: have you checked that there is not an (incomplete) copy of the file already on the cache from a previous failed attempt? If Unraid finds the file already exists, it will try to update it in situ and thus use the drive it is on, regardless of the other settings.
Kilrah Posted August 13, 2022 Share Posted August 13, 2022 (edited) The above is likely your issue. Say you set free space to 100M, there is 200MB free, your client sends a 300MB file - There is less than the minimum free at the start of the transfer so the file gets put there but it's too big so it fails part way. But now that file is there so every subsequent retry will still try to update it in place but that'll fail since it's still too big. A sensible Min Free space is more a few tens of GBs than 100M. It should be higher than the biggest file you're expecting to be sending. Edited August 13, 2022 by Kilrah Quote Link to comment
saue0 Posted February 28, 2023 Share Posted February 28, 2023 (edited) Should the default not be 0, since it give error . Edited February 28, 2023 by saue0 Quote Link to comment
trurl Posted February 28, 2023
3 hours ago, saue0 said:
Should the default not be 0, since it gives an error?
Nobody can agree on a useful default; it depends on the type of files written to the user share or pool. For each user share, you must set Minimum Free to larger than the largest file you will write to the share. For each pool, you must set Minimum Free to larger than the largest file you will write to the pool. There is a plugin to help with this: Dynamix Share Floor.
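To pick a concrete value per the advice above (floor larger than the largest file, ideally twice it), one hedged approach is to scan a share for its current largest file. The path below is an example placeholder; substitute your own share's mount point:

```python
# Sketch: find the largest file under a share to size Minimum Free.
# The path "/mnt/user/Movies" is a hypothetical example, not from the thread.
import os

def largest_file_bytes(root: str) -> int:
    """Return the size in bytes of the largest file under root (0 if none)."""
    sizes = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            try:
                sizes.append(os.path.getsize(os.path.join(dirpath, name)))
            except OSError:
                pass  # skip files that vanish or are unreadable mid-scan
    return max(sizes, default=0)

# Following the "twice the max file size" rule of thumb from this thread:
# suggested_floor = 2 * largest_file_bytes("/mnt/user/Movies")
```

This only reflects files already present; if you expect to copy bigger files in the future, size the floor to those instead.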