-Daedalus Posted September 20, 2018
I know some of this can be handled with user scripts, but I'd like to see more native settings for the Mover. The main one I'm thinking of is treating the cache drive more like the cache on a RAID controller: once a file gets written to the cache, it is immediately moved to the array. Useful for those who have small SSDs, or who run a single SSD rather than a pool and don't want to leave important files sitting unprotected on the cache for hours, or who simply want the write performance increase and aren't worried about the power consumption/noise of the array disks not spinning down. (Ideally, I'd love to have the cache act like this on an unassigned device, and leave VMs and Docker on the pool.)
DZMM Posted September 20, 2018
You can do this now with the Mover Tuning plugin, where you can set it to move to the array on a custom schedule (e.g. hourly) or at a certain usage level.
-Daedalus Posted September 20, 2018 (Author)
I'm aware, however it's not quite the same thing. You could set the mover to run at 10%+ utilisation, and if your cache drive is always above this, it'll do what I want, but I would imagine a more elegant solution is to check if new files are added, and move only them.
DZMM Posted September 20, 2018
There's also the cron schedule to force a move, e.g. you could set it to run every x minutes.
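For illustration, the cron approach amounts to an entry like the following. The mover path shown is the stock location on Unraid, but treat it as an assumption and check your own install:

```shell
# Run Unraid's mover every 15 minutes (path assumed; verify on your system).
*/15 * * * * /usr/local/sbin/mover &> /dev/null
```

Fifteen minutes is an arbitrary example; the interval is whatever tradeoff you want between time-at-risk on the cache and how often the array disks get woken up.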
trurl Posted September 20, 2018
1 hour ago, -Daedalus said: "check if new files are added, and move only them."
This bit is a little confusing. Mover moves everything that is supposed to be moved when it runs, without checking anything. Why would you want it to skip moving some things?
-Daedalus Posted September 21, 2018 (Author)
I'll be completely honest and say that I don't know the low-level stuff of the mover; I don't know what functions it calls, how much overhead is involved, etc. I was only asking to do it on a per-file basis, as soon as a file is copied, because I imagined that would be less expensive than having the mover continually called just in case something new was added to the cache.
trurl Posted September 21, 2018
1 hour ago, -Daedalus said: "I'll be completely honest and say that I don't know the low-level stuff of the mover; I don't know what functions it calls, how much overhead is involved, etc."

Forget about the low-level mover stuff. Just think about the way the disks work. Moving immediately is actually likely to give you worse write performance than simply writing directly to the array and not caching. This is why mover is normally scheduled for when there is likely to be less activity.

You write a file to cache, and it immediately begins copying to the array and then deleting from cache (moving between disks is a copy to the destination followed by a delete from the source). While this is going on, another file is written to cache, so you have multiple changes to the cache happening at the same time. And then when that next file is immediately moved, the whole thing starts again, but the first move probably hasn't completed, so those writes to the data disk and parity will be competing with each other and probably causing a lot of seeking, since they likely won't be at contiguous sectors. And then another file gets cached, etc.

Or, if the writes to the array wait on the previous one so they don't compete, then of course you are back to waiting on the array, and so there was no point in caching.
trurl Posted September 21, 2018
Maybe you should reconsider whether or not you cache some of your user shares. I have most of mine set to not use cache.
-Daedalus Posted September 21, 2018 (Author)
Valid point; I didn't think about the parity disk here. If you end up in a situation where you're writing to more than one array disk, the parity disk will be having a hard time of it. I wouldn't be so concerned about multiple things happening on the cache, given it'll outpace the disks by miles, but the parity-disk thrashing is a valid point, and probably renders the idea moot, unless you were to limit the mover to moving files sequentially as they come in.
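The "move files sequentially as they come in" compromise can be sketched as a queue drained by a single worker, so only one file is ever in flight to the array (and parity) at a time. This is a hypothetical illustration, not anything the mover actually implements; `move_fn` stands in for whatever actually performs the cross-disk move.

```python
import queue
import shutil
import threading

# Newly cached files are queued here instead of being moved at once.
move_queue = queue.Queue()

def _serial_mover(move_fn):
    """Drain the queue with exactly one in-flight move at a time,
    so the array and parity only ever see one sequential write."""
    while True:
        src, dst = move_queue.get()
        try:
            move_fn(src, dst)
        finally:
            move_queue.task_done()

def start_serial_mover(move_fn=shutil.move):
    """Start the single background worker; returns the thread."""
    t = threading.Thread(target=_serial_mover, args=(move_fn,), daemon=True)
    t.start()
    return t
```

Writers call `move_queue.put((src, dst))` as files land on the cache; because there is one worker, moves complete in arrival order and never compete with each other for the parity disk, though each individual move still pays the usual parity-update cost.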
This topic is now archived and is closed to further replies.