adamfl Posted March 2

Hello all. I have 3x 18TB disks and 2x 14TB disks in use right now, and I'm slowly filling them with data from an external share. The first disk filled to about halfway, as expected under the High Water setting I'm using for all shares. But now that my fifth disk is starting to fill, I've noticed some additional data being written to one of the fuller disks, say going from 8.5 TB free down to 8.48 TB. The other disks that were at 8.5 TB then also drop to around 8.48 TB, and only after that does data start flowing back to the nearly empty disk. Shouldn't any new data, for any share, go directly to the nearly empty fifth disk, given that none of my shares exclude any drives? I'm trying to understand why High Water doesn't seem to be followed. Thanks.

Attached is an example. Yesterday the top four disks were all over 8 TB free, and they keep going down; earlier this evening they were all around 7.9 TB free. Here are screenshots taken about 30 minutes apart.
JorgeB Posted March 2

Please post the diagnostics and let us know the name of the share you are using.
adamfl Posted March 2 (Author)

10 hours ago, JorgeB said: Please post the diagnostics and let us know the name of the share you are using.

Attached, thanks! Note that I initially thought this was a split level issue, but it continued even after changing the split level to "any" on all of the shares I was writing to or had written to. rick-diagnostics-20240302-1330.zip
itimpi Posted March 2

You might give the name of a share showing the unexpected behaviour so we can check it. I do notice that some of the shares I looked at had very large values for Minimum Free Space. You might want to check these and set them to values that match how you expect to use the system.
adamfl Posted March 2 (Author)

2 hours ago, itimpi said: I do notice that some of the shares I looked at had very large values for Minimum Free Space. You might want to check these and set them to values that match how you expect to use the system.

The share in question is the media share. Whenever I try to blank out Minimum Free Space to 0, it replaces it with 1.4 TB as a "calculated free space value", so I'm not sure what to do about that.

Edit: I set a bunch of them to 100GB instead of 0 (even though appdata and the other system shares are set to 0), and that seemed to work. Thanks.
itimpi Posted March 3

11 hours ago, adamfl said: Whenever I try to blank out Minimum Free Space to 0,

A value of 0 is treated as meaning "set it to a value based on my disk sizes". Setting it to any value other than 0 should retain whatever you set. It used to default to 0, but so many people ran into problems with the setting at 0 that at some point it was changed so that 0 is replaced with a calculated value.
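To illustrate the point about the Minimum Free Space floor, here's a minimal sketch (hypothetical code, not Unraid's implementation; the function name and toy sizes are invented) of how such a floor filters which disks are eligible for new files:

```python
# Hypothetical sketch of how a Minimum Free Space floor affects disk
# selection; not Unraid's actual code. Names and sizes are invented.

def eligible_disks(disks, min_free):
    """A disk can receive new files only while its free space stays
    above the floor. The allocator does not know a file's final size
    up front, so a floor of 0 lets writes start on a disk that may
    run out of room mid-file; a floor larger than your biggest
    expected file avoids that."""
    return [d for d in disks if d["free"] > min_free]

GB = 10**9
disks = [
    {"name": "disk1", "free": 120 * GB},
    {"name": "disk2", "free": 60 * GB},
]
# With a 100 GB floor, only disk1 still qualifies for new files.
ok = eligible_disks(disks, 100 * GB)
```

This is why a very large floor can also surprise you: disks get skipped long before they look full in the UI.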
adamfl Posted March 4 (Author)

Understood. Does anything in the logs or config indicate why this is happening with my long-running rsync task? At this point all of the first four disks are down to around 5.78 TB free. Thanks.
JorgeB Posted March 4

2 minutes ago, adamfl said: Does anything in the logs or config indicate why this is happening with my long-running rsync task?

Not really; it's behaving like Most Free instead of High Water. Try changing the allocation method to a different one and then back to High Water, just in case there's some glitch.
adamfl Posted March 4 (Author)

Thanks, I just did that. It almost seems like it's behaving as Most Free would if it were ignoring the fifth disk; otherwise it doesn't really make sense to me. Here are the current levels, taken just after I made the change and switched back to High Water. Is there some percentage threshold or calculation where, since I'm generally moving larger multi-GB files, High Water keeps trying to make the other four disks "equal" whenever they drift outside some range like 0.05%? Maybe they keep bouncing in and out of that "equal" range?
JorgeB Posted March 5

11 hours ago, adamfl said: it's behaving as Most Free would if it were ignoring the fifth disk

Right, there's also that; normal Most Free would not ignore that disk. Honestly, this is starting to look like a bug to me, but I haven't been able to reproduce it so far. I will try to do some more testing.
JorgeB Posted March 6

So, as I suspected, this is a bug. I can reproduce it with an array layout similar to yours, and thankfully it's also present with smaller devices. As with your array, it's basically working as Most Free, except for the last disk, which keeps twice the free space of the other ones. I noticed the same 2:1 ratio in your numbers: 7.7 × 2 = 15.4, and 5.6 × 2 = 11.2. So it's definitely a bug. Now I still have to find out which layouts cause the issue, and whether it's filesystem-dependent.
adamfl Posted March 6 (Author)

@JorgeB Wow, thanks for reproducing it; I didn't even catch the 2:1 ratio. Thanks for digging into this. The pattern is continuing as shown.
JorgeB Posted March 6 (Solution)

I have confirmed it only happens with ZFS. It works fine with two devices, but with three (and most likely more) it doesn't: it works correctly until the first two devices drop below 50%, and once they both reach 50% it starts the almost-Most-Free behaviour, keeping the last disk with twice the free space of the others. I'll report this issue to LT and it should be fixed in the next release.
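For context, the intended High Water behaviour (absent this bug) can be sketched roughly as follows. This is a simplified illustration based on the commonly described halve-the-mark scheme, not Unraid's real implementation, and the sizes are toy numbers:

```python
# Rough sketch of how High Water allocation is meant to pick a disk
# (simplified illustration, not Unraid's real code). Free-space values
# below are toy numbers in TB, loosely mirroring the array in this
# thread.

def pick_disk_high_water(disks, largest):
    """Pick a target disk under a halve-the-mark scheme: the water
    mark starts at half the largest disk's size, and new files go to
    the first disk whose free space is above the mark. When no disk
    qualifies, the mark is halved and the scan repeats."""
    mark = largest / 2
    while mark >= 1:
        for d in disks:
            if d["free"] > mark:
                return d
        mark /= 2
    # Below the smallest mark, fall back to the most-free disk.
    return max(disks, key=lambda d: d["free"])

disks = [{"name": f"disk{i + 1}", "free": free}
         for i, free in enumerate([8.5, 8.5, 8.5, 8.48, 13.9])]
target = pick_disk_high_water(disks, largest=18.0)
# The mark starts at 18 / 2 = 9 TB; only disk5 is above it, so a
# correctly working High Water sends all new files there first.
```

The buggy behaviour described above instead spreads writes across the first disks while keeping the last one at roughly twice their free space.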
adamfl Posted March 6 (Author)

3 hours ago, JorgeB said: I have confirmed it only happens with ZFS. [...] I'll report this issue to LT and it should be fixed in the next release.

Awesome, thanks! Since it's functioning as Most Free with a 2:1 ratio, are you able to test what will happen once it hits the share's minimum free space value on a given disk? I'm worried it may behave oddly if I need to use more of the free space on disk5.
JorgeB Posted March 7

14 hours ago, adamfl said: are you able to test what will happen once it hits the share's minimum free space value on a given disk?

Yes, I did test that, and it still works correctly: once minimum free space is reached on the top disks, it writes exclusively to the last one until that disk also reaches the floor.
JorgeB Posted March 27

This issue should be fixed in v6.12.9.