
Allocation Method/Minimum Free Space Not Being Honored

3 posts in this topic


I have been messing around with this for a while now and I'm not sure if it is behaving as intended or not...I can at least say that it isn't behaving as I would expect. I'll do my best to explain.

I have six data disks, five of which I've had for quite some time. They were getting quite full, with one near 100%. I installed a new 3TB drive expecting the array to start using the free space on it as needed, knowing that because of how share splitting works some of the other drives would continue to get writes. As time went on, my nearly full drive filled completely, and of course I started getting errors and so forth. I quickly diagnosed the issue and moved some files to make room on the drive. The annoying part was that it broke my CrashPlan installation and I had to fix that...not that big of a deal, but I wanted to prevent it from happening again.

So I went in and set every share's minimum free space to 5GB and the allocation method to "Most-Free". My understanding of these settings, and my expectation, is that a disk would have to have at least 5GB available before unRAID would write to it, and that unRAID would pick the drive with the most room to spare, but that certainly hasn't been the behavior.
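To make the expectation concrete, here is a small sketch (hypothetical, not unRAID's actual code) of how a "Most-Free" allocation method with a minimum-free-space floor would be expected to pick a target disk; the disk names and sizes are made up for illustration:

```python
# Hypothetical sketch of "Most-Free" allocation with a minimum-free-space
# floor. This is an illustration of the expected behavior, not unRAID's
# actual implementation.

MIN_FREE = 5 * 1024**3  # the 5GB minimum free space set on the share

def pick_disk(disks):
    """disks: dict of disk name -> free bytes.
    Return the disk with the most free space among those still above the
    minimum-free floor, or None if every disk is below it."""
    eligible = {name: free for name, free in disks.items() if free > MIN_FREE}
    if not eligible:
        return None
    return max(eligible, key=eligible.get)

disks = {
    "disk1": 2 * 1024**3,     # 2GB free -- below the floor, should be skipped
    "disk2": 120 * 1024**3,   # 120GB free
    "disk6": 2800 * 1024**3,  # the new, mostly empty 3TB drive
}
print(pick_disk(disks))  # -> disk6
```

Under this logic the nearly full disk would never be chosen again once it dropped below 5GB free, which is what the settings seemed to promise.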


After I initially made the adjustments and got the disk below the 5GB mark, things stayed that way for a while, but the disk eventually filled again. I'm convinced that what is filling it up is CrashPlan. I'm currently shifting things around so that it will only be sending data to a single-disk share, which I think will resolve the immediate issue for a while, but I think there is still an underlying problem with preventing shares and disks from becoming full and causing havoc. I would think this would be a fairly basic management capability of any NAS.


I have read several postings and explanations on the share allocation methods and so forth, and it is still less than clear given the behavior I'm experiencing. The explanations/definitions of the feature don't jibe with the behavior. I get the notion of defining the allocation method at the share level, but it seems like there should also be some parameter at the disk level to prevent overfilling, or at least a warning when a disk reaches some threshold.


So what am I looking for? I'd like to know whether what I'm experiencing is correct behavior or not. And if it is correct...what am I doing wrong, and how do I prevent my drives from filling to the point that apps begin to fail? Don't get me wrong...I'm a die-hard unRAID fan...just hoping this discussion can improve something...even if it is just to make me smarter.  ;)


If split level is preventing a directory from being split then unRAID will continue to write files to the disk the directory is already on.
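In other words, the split-level check comes first and can override both the allocation method and the minimum-free-space floor. A hypothetical sketch (again, not unRAID's actual code) of that precedence:

```python
# Hypothetical sketch of why split level can trump both the allocation
# method and the minimum-free-space setting: if the directory already
# exists on a disk and may not be split further, that disk keeps
# receiving writes no matter how full it is. Not unRAID's actual code.

def pick_disk(disks, dir_location=None, splittable=True, min_free=5 * 1024**3):
    """disks: dict of disk name -> free bytes."""
    # Split-level override: an unsplittable directory stays put,
    # even on a disk that is below the minimum-free floor.
    if dir_location is not None and not splittable:
        return dir_location
    # Otherwise fall back to Most-Free among disks above the floor.
    eligible = {n: f for n, f in disks.items() if f > min_free}
    return max(eligible, key=eligible.get) if eligible else None

disks = {"disk3": 1 * 1024**3, "disk6": 2800 * 1024**3}
# The directory already lives on the nearly full disk3 and can't be split:
print(pick_disk(disks, dir_location="disk3", splittable=False))  # -> disk3
```

This matches the symptom described above: the nearly full disk kept filling even though "Most-Free" and a 5GB minimum were set on the share.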


I have been digesting your statement since you posted, and I went back and looked again: that may well have been the case. I think I had the split level set one lower than it should have been. I'm still of the mindset, though, that the system should be smart enough to protect us from ourselves...especially in the case of a filled disk. Currently the only thing that happens is that things stop working and you get some errors in the log that may or may not be clear to everyone.


Thanx for the information...

