marcusone1 Posted December 12, 2019

I am setting up a new system and moving drives, so I thought it a great time to convert from ReiserFS to XFS.

The issue: I have a new Unraid 6.7.2 install running and working fine, with some new 6TB and 4TB Red Pro drives to replace some 4-9 year old 2TB Greens/Reds. I start the array WITHOUT a parity drive (I don't want to sit through the parity build on the new 6TB until after I've converted my drives). When I run the rsync command, e.g. `rsync -avPX /mnt/disk1/ /mnt/disk2/`, transfer speeds on large files drop as low as 1MB/s (yes, 1MB/s, sometimes even 800KB/s). This happens with even just one rsync running.

If I stop the array, remove all the drives from the assigned devices, and mount them manually, I get the expected 80-120MB/s read speed of the old drives. I can even copy from two old 2TB 5400rpm drives to a new 7200rpm drive at the same time and each gets about 80-100MB/s (the new drive tops out around 240MB/s), when manually mounted!

Why the slow transfers when the drives are in the array (WITHOUT a parity drive)? I tried setting md_write_method to reconstruct write, but that didn't change anything.

Thanks!
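For anyone following along, here is a minimal sketch of the copy command from the post, run against throwaway temp directories so the flags can be tried safely before pointing rsync at real disk mounts. The temp paths are stand-ins; on an actual conversion you would use the disk mount points (e.g. `/mnt/disk1/` and `/mnt/disk2/`), and the trailing slashes matter (copy the *contents* of the source, not the directory itself):

```shell
#!/bin/sh
# Safe dry-run of the rsync invocation from the post, using temp dirs
# instead of real array disks. Flags:
#   -a  archive mode (recursive, preserves permissions, times, symlinks)
#   -v  verbose
#   -P  show progress and keep partially transferred files
#   -X  preserve extended attributes
SRC=$(mktemp -d)
DST=$(mktemp -d)
echo "hello" > "$SRC/file.txt"

rsync -avPX "$SRC/" "$DST/"

cat "$DST/file.txt"   # -> hello
```

Note the `-X` flag only matters if your shares use extended attributes; it is harmless otherwise. Dropping `-v` and `-P` on a scripted run reduces console noise without changing what gets copied.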
JorgeB Posted December 13, 2019

Upgrade to v6.8; there are known performance issues with v6.7.x.