docbillnet Posted May 14

Overall, I'm finding that using a cache drive for an array causes more problems than it solves. For example, the default behaviour seems to be to let the cache fill most of the way before the mover even starts. So if I upload 500 GB to my 1 TB cache drive today and do the same thing tomorrow, the drive will still be holding 500 GB when the next upload starts, even though there was plenty of time for it to have completely cleared. That means I get warnings I'm pretty much destined to ignore, and it can create issues when transferring large files.

For example, today I started a backup of disk image files from old computers. I see 62 GB left free on the cache and a 400 GB image file transferring. The mover is running, but the disk is filling faster than the mover can empty it. Do I want to know what happens to my transfer if the cache drive fills up? Yes. But it is not an experiment I wish to try today.

The silly thing is, if I open a console window and run rsync -aP --remove-source-files /mnt/cache/<my share>/ /mnt/disk1/<my share>/, it runs much faster than my transfer, and much faster than the mover. So I'm wondering: is there a way to give the mover higher priority, so my manual intervention isn't necessary?
JorgeB Posted May 14

59 minutes ago, docbillnet said: "Meaning if say I upload 500 GB to my 1 TB drive today, and the next day do the same thing"

Usually you set the mover to run overnight so everything is moved to the array. Or was that not enough time? It should have been enough for 500 GB.
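The nightly schedule is normally set from the Unraid GUI (Settings > Scheduler), but under the hood it amounts to a cron entry along these lines. This is a sketch: /usr/local/sbin/mover is the usual mover script location on Unraid, but treat the path and the chosen time as assumptions to verify on your own system.

```shell
# Hypothetical cron entry: kick off the Unraid mover at 03:40 every night,
# so the cache is empty before the next day's uploads begin.
# minute hour day-of-month month day-of-week  command
40 3 * * * /usr/local/sbin/mover
```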
itimpi Posted May 14

Whatever you do to tweak the mover, make sure you have set the Minimum Free Space for the pool acting as cache to be larger than the biggest file you expect to cache (the normal recommendation is twice that size, to give some headroom). This sets the threshold at which new files start bypassing the pool and going directly to the array. Pools tend to exhibit problematic behaviour if they get too full, so you want to avoid that.

If you are transferring very large files, it is normally better to write them directly to an array drive and bypass the cache altogether.
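The "twice the biggest file" rule of thumb above can be turned into a quick check. A minimal sketch, assuming GNU find (which Unraid, being Linux-based, provides); the function name is made up for illustration:

```shell
#!/bin/sh
# Sketch: suggest a Minimum Free Space value (in bytes) for a cache pool
# by finding the largest file under a path and doubling its size.
# Relies on GNU find's -printf; 0 is returned if the path holds no files.
suggest_min_free() {
    largest=$(find "$1" -type f -printf '%s\n' 2>/dev/null | sort -n | tail -n 1)
    largest=${largest:-0}
    echo $((largest * 2))
}
```

Run it against the data you plan to cache, then enter a value at least that large in the pool's Minimum Free Space setting.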
Frank1940 Posted May 17

You might also consider changing the "Tunable (md_write_method)" parameter (SETTINGS >>> Disk Settings) to 'reconstruct write'. That will roughly double the write speed to the array, but it will also spin up all of the array drives to do so.
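For reference, the same tunable is commonly reported to be switchable from a console as well. This is an assumption based on typical Unraid forum usage of the mdcmd utility, not something confirmed by this thread, so verify against your Unraid version before relying on it:

```shell
# Assumed Unraid CLI usage (verify on your version): switch the array
# write mode to 'reconstruct write' (turbo write)...
mdcmd set md_write_method 1
# ...and back to the default read/modify/write behaviour afterwards.
mdcmd set md_write_method 0
```

Note the change made this way does not persist across reboots; the Disk Settings page remains the persistent place to set it.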
ConnerVT Posted May 19

Sometimes there is no benefit to caching transfers at all. Cache is great when you are sitting in front of your desktop PC waiting for a transfer to complete, but for automated tasks such as backups it is better to write directly to the array. My shares for automated backups and for my media do not use the cache: both are written by automated tasks, and it makes little sense to write the data twice, first to cache and then to the array.