Mover mode - Immediate


-Daedalus


I know some of this can be handled with user scripts, but I'd like to see more native settings for the Mover.

 

The main one I'm thinking of is treating the cache drive more like the cache on a RAID controller: once a file gets written to the cache, it is immediately moved to the array.

 

Useful for those who have small SSDs; for those running a single SSD rather than a pool, who don't want to leave important files sitting unprotected on the cache for hours; or for those who simply want the write performance increase and aren't worried about the power consumption/noise of the array disks not spinning down.

 

(Ideally, I'd love to have the cache act like this on an unassigned device, and leave VMs and Docker on the pool.)
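For what it's worth, something close to this per-file behaviour can be approximated today with a user script. This is only a rough sketch under stated assumptions: the share paths are made up for illustration, and the watcher relies on `inotifywait` from inotify-tools, which is not something Unraid's mover itself uses.

```shell
#!/bin/bash
# Hypothetical "immediate mover" user-script sketch.
# The share paths below are assumptions, not real Unraid defaults.
CACHE=/mnt/cache/share
ARRAY=/mnt/disk1/share

# A move between disks is a copy to the destination followed by a
# delete from the source.
move_file() {
  local rel="$1"                        # path relative to the cache share
  mkdir -p "$ARRAY/$(dirname "$rel")"
  cp --preserve=all "$CACHE/$rel" "$ARRAY/$rel" && rm "$CACHE/$rel"
}

# Watch the cache share and move each file as soon as it is fully
# written (close_write). Commented out so the sketch stays inert:
# inotifywait -m -r -e close_write --format '%w%f' "$CACHE" |
#   while read -r path; do
#     move_file "${path#"$CACHE"/}"
#   done
```

Because the watcher pipeline reads one filename at a time, moves would happen in order of arrival rather than all at once.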


I'll be completely honest and say that I don't know the low-level details of the mover; I don't know what functions it calls, how much overhead is involved, etc.

 

I was only asking for this on a per-file basis, as soon as each file is copied, because I imagined that would be cheaper than having the mover called continually just in case something new had been added to the cache.

1 hour ago, -Daedalus said:

I'll be completely honest and say that I don't know the low-level details of the mover; I don't know what functions it calls, how much overhead is involved, etc.

 

I was only asking for this on a per-file basis, as soon as each file is copied, because I imagined that would be cheaper than having the mover called continually just in case something new had been added to the cache.

Forget about the low-level mover stuff. Just think about the way the disks work. Moving immediately is actually likely to give you worse write performance than simply writing directly to the array and not caching at all. This is why the mover is normally scheduled for times when there is likely to be less activity.

 

You write a file to the cache, and it immediately begins copying to the array and is then deleted from the cache (a move between disks is a copy to the destination followed by a delete from the source). While this is going on, another file is written to the cache, so you have multiple changes to the cache happening at once. When that next file is immediately moved, the whole process starts again, but the first move probably hasn't completed, so those writes to the data disk and parity will be competing with each other, probably causing a lot of seeking since they likely won't land on contiguous sectors. And then another file gets cached, and so on.

 

Or, if each write to the array waits for the previous one to finish so they don't compete, then of course you are back to waiting on the array, and there was no point in caching in the first place.


Valid point. I didn't think about the parity disk here. If you end up writing to more than one data disk at once, the parity disk will have a hard time of it.

 

I wouldn't be so concerned with multiple things happening on the cache, given that it'll outpace the disks by miles, but the parity disk thrashing is a valid point, and it probably renders the idea moot, unless you were to limit the mover to moving files sequentially as they come in.
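That last idea, only ever having one file in flight to the array so parity is never written from two moves at once, could be sketched with a simple lock. This is purely illustrative; the lock path is an assumption, and `flock` here is the util-linux command, not anything the real mover is known to use:

```shell
# Hypothetical sketch: serialize immediate moves with flock so only one
# file is ever being copied to the array (and hitting parity) at a time.
LOCKFILE=/tmp/immediate-mover.lock

serialized_move() {
  local src="$1" dst="$2"
  (
    flock 9                      # block until any in-flight move finishes
    # The move itself: copy to the destination, then delete the source.
    cp --preserve=all "$src" "$dst" && rm "$src"
  ) 9>"$LOCKFILE"
}
```

Each caller blocks on the lock, so moves queue up and hit the array one at a time instead of competing for the heads.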

