
Is there a way to tweak the mover?


Recommended Posts

Overall, I'm finding that using a cache drive for an array seems to cause more problems than it solves.  For example, the default appears to let the cache fill up most of the way before the mover even starts running.  That means if I upload 500 GB to my 1 TB cache drive today, and do the same thing the next day, the drive will still be holding 500 GB when I start the next upload, even though there was plenty of time for it to have completely cleared.

 

That means I get warnings I'm pretty much destined to ignore, and it can create issues when transferring large files.  For example, today I started a backup of disk image files from old computers.  I look and see 62 GB left in the cache, and a 400 GB image file transferring.  The mover is running, but the disk is filling faster than the mover can move.  Do I want to know what happens to my transfer if the cache drive fills up?  Yes.  But it is not an experiment I wish to try today.

 

Now the silly thing is, if I open a console window and run rsync -aP --remove-source-files /mnt/cache/<my share>/ /mnt/disk1/<my share>/, it runs much faster than my transfer, and much faster than the mover...

So I'm wondering: is there a way to give the mover higher priority, so my manual intervention is not necessary?
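One possible workaround, sketched below: kick off the mover by hand under ionice so it gets a higher I/O priority than the incoming transfer.  The mover path /usr/local/sbin/mover is an assumption based on a stock Unraid install, so adjust it if yours differs:

```shell
#!/bin/sh
# Sketch: run the mover at the highest best-effort I/O priority.
# The mover path below is assumed (stock Unraid); adjust as needed.
MOVER=/usr/local/sbin/mover

if [ -x "$MOVER" ]; then
    # ionice class 2 = best-effort, level 0 = highest priority in that class.
    ionice -c 2 -n 0 "$MOVER"
else
    echo "mover not found at $MOVER"
fi
```

The manual rsync copy can be demoted the same way (ionice -c 3 rsync ...) so it yields I/O to the mover instead of competing with it.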

 

59 minutes ago, docbillnet said:

Meaning if say I upload 500 GB to my 1 TB drive today, and the next day do the same thing,

Usually you set the mover to run overnight to move everything to the array.  Or was that not enough time?  It should have been for 500 GB.


Whatever you do to try and tweak the mover, make sure you have set the Minimum Free Space for the pool acting as cache to be larger than the biggest file you expect to cache (the normal recommendation is twice that size, to give some headroom).  This sets the threshold at which new files start bypassing the pool and going directly to the array.  Pools tend to exhibit problematic behaviour if they get too full, so you want to avoid this.
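As a concrete example of that rule of thumb, using the 400 GB disk image from earlier in the thread as the largest expected file (the figure is just for illustration):

```shell
# Rule of thumb: Minimum Free Space = 2 x the largest file you expect to cache.
largest_gb=400                      # biggest file you expect to write, in GB
min_free_gb=$((largest_gb * 2))     # recommended setting with headroom
echo "Set Minimum Free Space to at least ${min_free_gb} GB"
```

On a 1 TB cache that threshold would mean most writes bypass the pool anyway, which is itself a hint that files this large are better written directly to the array.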

 

If transferring very large files, it is normally better to write them directly to an array drive and bypass the cache altogether.


You might also consider changing the "Tunable (md_write_method)" parameter (SETTINGS >>> Disk Settings) to 'reconstruct write'.  That will roughly double the write speed to the array, but it will also spin up all of the array drives to do so.


Sometimes there is no benefit to caching transfers.  Cache is great when you are sitting in front of your desktop PC waiting for the transfer to complete.  But for automated tasks, such as backups, it is better to just write directly to the array.  My backup shares and my media shares do not use cache: both are written by automated tasks, and it makes little sense to write the data twice, first to cache and then to the array.
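For what it's worth, that per-share setting ends up in a config file on the flash drive.  A sketch of what the relevant line looks like; the path, the share name, and the key are assumed from a stock Unraid install, so treat this as illustrative rather than authoritative (the supported way to change it is the share's settings page in the GUI):

```shell
# /boot/config/shares/backups.cfg  (share name "backups" is hypothetical)
# "no" = write straight to the array, never to the cache pool.
shareUseCache="no"
```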

