Arbadacarba Posted October 12, 2022

I tried looking for this but got so many confounding variables that I couldn't find the information I'm looking for. I have 3 pools:

Array - 6 disks plus parity, with individual shares locked to specific drives (television shows on Disk 1, music on Disk 3, etc., so that the movie disk can stay spun down when I'm mostly listening to music, and my audiobook disk might be spun down for weeks at a time)

Config - 1 2TB NVMe disk holding VMs and system files (I wish I had named this pool System, but I think I saw a reason not to do so)

Cache - 2 SSDs (containing my Download folder and, of course, cache)

My question is: what transfer speeds are reasonable if I tell Krusader to move a large file from my Cache over to the Array (etc.)? I'm assuming the write speed of the receiving pool/disk is the deciding factor. Everything seems to be working great, but I just want to know if I am getting the speeds I should expect. I assume when I move a file to the array, it goes to the Cache first...

I ran a few tests and have included the results below (speeds in MB/s):

Copy 40GB file from NVMe to Cache
Copy 40GB file from NVMe to Array (non-caching) - bounced between 60 and 85
Copy 40GB file from Cache to Array (non-caching) - bounced between 50 and 75
Copy 40GB file from Cache to NVMe
Copy 40GB file from Array (non-caching) to Array (non-caching, 2nd disk) - bounced between 63 and 81
Copy 40GB file from Array (non-caching) to Cache - bounced between 218 and 224
Copy 40GB file from Array (non-caching) to NVMe - bounced between 213 and 224

Methodology:
- I'm running the file copy from inside the Shares page.
- I see virtually the same results whether I Move or Copy, but if I run the tests from within Windows, I do see a difference between Move and Copy.
- I tested moving 1 large file versus several small files.
- I took the reading at about 50%, because the numbers had seemingly normalized by that point.

Are these numbers about right? Or have I got something horribly misconfigured? Again, it is working great... The only concern I have is the occasional CRC issue on the Cache disks.
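For what it's worth, a way to sanity-check a single disk's raw sequential write speed outside of any file manager is a dd run with a forced flush, so the page cache doesn't inflate the number. The target path below is just a placeholder - point it at a file on whichever disk or pool you want to test:

```shell
# Placeholder target - substitute e.g. /mnt/disk1/Backup/writetest for an array disk
TARGET=/tmp/dd_writetest

# conv=fdatasync forces a flush before dd reports its rate, so the figure
# reflects the disk rather than RAM; use a few GiB (e.g. count=4096) for real runs
dd if=/dev/zero of="$TARGET" bs=1M count=64 conv=fdatasync

# Sanity check: the file should be exactly 64 MiB
stat -c %s "$TARGET"

rm -f "$TARGET"
```

dd prints its own throughput figure at the end, which makes for an easy comparison against the numbers the copy dialog shows.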
JorgeB Posted October 13, 2022

You can enable turbo write for better array speeds, at the expense of all disks spinning up for writes.
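For reference, turbo write is the reconstruct-write mode: instead of reading back the old data and parity for every update, Unraid reads the other data disks and computes parity directly, which is why sequential array writes get much closer to single-disk speed. It can be toggled in Settings > Disk Settings under Tunable (md_write_method), or from the console - the command below is from memory, so double-check it on your release before relying on it:

```shell
# 1 = reconstruct write ("turbo"), 0 = read/modify/write (the default)
mdcmd set md_write_method 1

# revert when done, since turbo write keeps all data disks spun up for writes
mdcmd set md_write_method 0
```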
Arbadacarba Posted October 13, 2022

Hmm, I might try that... Would that just affect transfers to the array? And would it not spin them all up if I were just reading from one disk or another? But how about the numbers I'm getting? Are they reasonable?

Thanks, JorgeB
JorgeB Posted October 14, 2022 (marked as the solution)

11 hours ago, Arbadacarba said:
Would that just affect the transfers to the array?

Yes, and that's usually the slower one.

11 hours ago, Arbadacarba said:
And would it not spin them all up if I were just reading from one or another?

No, just for writes. And yes, they do look reasonable and about what I would expect.
Arbadacarba Posted March 7, 2023

Trying to do a little further testing... I want to repeat this process but get results that are as isolated as I can manage. My previous tests copied files from one drive to another... What if I stored a file on a RAM disk, say 20GB, and then copied it using the above methods to each storage type?

Since I am unfamiliar with the available tools, I am trying to get a share available on the RAM disk (/dev/shm), but I'm having no luck. I tried creating a softlink (ln -s /dev/shm/test/ /mnt/user/Backup/Test2) within another share to a folder at /dev/shm/test, but I get "invalid path" when I try to look at it on the device. Am I just barking up the wrong tree, or am I making a minor mistake somewhere?

WHY: I have upgraded my two SSDs to a pair of 4TB Samsung EVO drives... It occurs to me that I might be smarter keeping my system folder on that pool and moving my Cache to the 2TB NVMe drive... My gaming VM is using another 2TB NVMe, so I don't need that much raw speed for my pfSense router, Home Assistant server, and various test machines. But the added size, and the fact that they are duplicated, would add redundancy that I am currently going without.

Thanks for any help,
Arbadacarba
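In case it helps: as far as I can tell, the user shares are served through the shfs FUSE layer, which won't follow a symlink pointing outside the array, so the softlink trick is probably a dead end. It's simpler to skip the share entirely and stage the file in /dev/shm from the console (paths and sizes below are just examples):

```shell
# Stage a test file in RAM (tmpfs). 1 MiB here for illustration - on the server
# you'd use something like 20G (and need at least that much free RAM)
mkdir -p /dev/shm/test
head -c 1048576 /dev/urandom > /dev/shm/test/Test.bin

# Copy it out to a target; /tmp stands in for /mnt/disk1/Backup etc.
cp /dev/shm/test/Test.bin /tmp/Test.bin

# Verify the copy arrived intact
cmp /dev/shm/test/Test.bin /tmp/Test.bin && echo OK
```

Using /dev/urandom rather than /dev/zero avoids any compression or dedupe flattering the result on filesystems that do those.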
Arbadacarba Posted March 8, 2023

I think I found a solution... Copying a 25GB file to the RAM drive and then copying that file to each of my drives. I figure this gives me a fairly consistent view of the write performance of each drive:

Spinning rust drives:

rsync --progress --stats -v Test.mkv /mnt/disk1/Backup
25,775,133,173 100% 96.26MB/s 0:04:15 (xfr#1, to-chk=0/1) 100,512,382.22 bytes/sec

rsync --progress --stats -v Test.mkv /mnt/disk2/Backup
25,775,133,173 100% 105.59MB/s 0:03:52 (xfr#1, to-chk=0/1) 110,412,959.48 bytes/sec

rsync --progress --stats -v Test.mkv /mnt/disk3/Backup
25,775,133,173 100% 105.43MB/s 0:03:53 (xfr#1, to-chk=0/1) 110,412,959.48 bytes/sec

rsync --progress --stats -v Test.mkv /mnt/disk4/Backup
25,775,133,173 100% 100.77MB/s 0:04:03 (xfr#1, to-chk=0/1) 105,445,505.27 bytes/sec

rsync --progress --stats -v Test.mkv /mnt/disk5/Backup
25,775,133,173 100% 87.66MB/s 0:04:40 (xfr#1, to-chk=0/1) 91,585,882.91 bytes/sec

rsync --progress --stats -v Test.mkv /mnt/disk6/Backup
25,775,133,173 100% 93.33MB/s 0:04:23 (xfr#1, to-chk=0/1) 97,842,224.06 bytes/sec

Single cache SSD:

rsync --progress --stats -v Test.mkv /mnt/cache/Backup
25,775,133,173 100% 604.45MB/s 0:00:40 (xfr#1, to-chk=0/1) 636,578,420.72 bytes/sec

NVMe:

rsync --progress --stats -v Test.mkv /mnt/config/Backup
25,775,133,173 100% 1.78GB/s 0:00:13 (xfr#1, to-chk=0/1) 1,663,317,808.97 bytes/sec

I'll be combining the 2 SSDs together and testing them again.
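The per-disk runs above can also be scripted so every disk sees an identical workload back to back. A minimal sketch - the staged file and destination paths are placeholders; on the server they'd be the 25G file in /dev/shm and /mnt/disk1/Backup, /mnt/disk2/Backup, and so on:

```shell
# Hypothetical staged source file (tiny placeholder; use ~25G for a real test)
SRC=/dev/shm/Test.bin
head -c 1048576 /dev/urandom > "$SRC"

# Substitute /mnt/disk1/Backup /mnt/disk2/Backup ... for the real targets
for dest in /tmp/bench_d1 /tmp/bench_d2; do
  mkdir -p "$dest"
  # elapsed time from `time`, together with the file size, gives MB/s per disk
  time cp "$SRC" "$dest/Test.bin"
done
```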
Arbadacarba Posted March 11, 2023

Dual SSD pool (no longer used as cache...):

rsync --progress --stats -v Test.mkv /mnt/systems/Backup
25,775,133,173 100% 341.82MB/s 0:01:11 (xfr#1, to-chk=0/1) 360,579,385.22 bytes/sec

NVMe (just for a sanity check):

rsync --progress --stats -v Test.mkv /mnt/cache/Backup
25,775,133,173 100% 1.80GB/s 0:00:13 (xfr#1, to-chk=0/1) 1,778,029,382.28 bytes/sec

So I've halved the speed of my SSD pool... Hmm... I didn't expect that.
Arbadacarba Posted April 12, 2023

Dual SSD pool (configured in RAID 0):

rsync --progress --stats -v Test.mkv /mnt/cash/Backup
25,775,133,173 100% 1.11GB/s 0:00:21 (xfr#1, to-chk=0/1) 1,145,841,157.42 bytes/sec

NVMe (again):

rsync --progress --stats -v Test.mkv /mnt/cache/Backup
25,775,133,173 100% 1.81GB/s 0:00:13 (xfr#1, to-chk=0/1) 1,778,029,382.21 bytes/sec

So we are back to a noticeable improvement in speed for the SSDs...

So now... Do I set the NVMe drive up as cache, use the larger (but also unprotected) pool as my working folder, and just make sure it is backed up to the array in case of drive failure? Or do the opposite? I think I would be better off using the much larger pool, where I will have far more content: my system folder, my domains (VM drives), and maybe a Steam library?

Just to be funny - copying the file from one RAM drive to another:

rsync --progress --stats -v Test.mkv /tmp
25,775,133,173 100% 1.98GB/s 0:00:12 (xfr#1, to-chk=0/1) 2,062,514,083.36 bytes/sec

This is kind of an unexpected result... Am I spoiling the other tests in some way with my methodology?

Arbadacarba
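One thought on that last number: ~2 GB/s for a RAM-to-RAM copy is probably close to the ceiling of a single rsync stream on this CPU (rsync checksums data as it copies), so the NVMe results may be limited by the copy pipeline as much as by the drive. A quick check is to repeat the RAM-to-RAM copy with plain cp and see whether it comes out faster; the paths and size here are placeholders:

```shell
# Stage a file in tmpfs (1 MiB placeholder; use several GiB for a meaningful timing)
head -c 1048576 /dev/urandom > /dev/shm/src.bin

# Plain cp, RAM to RAM: approximates the best case for a single copy stream,
# with none of rsync's checksumming overhead
time cp /dev/shm/src.bin /tmp/dst.bin

# Confirm the copy is byte-identical
cmp /dev/shm/src.bin /tmp/dst.bin && echo identical
```

If cp is substantially faster than rsync on the same RAM-to-RAM copy, the rsync figures for the fast pools understate what the hardware can do.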
JorgeB Posted April 14, 2023

I've noticed some write performance degradation before with btrfs raid1 (compared with btrfs single device or 4-device raid10), but not always - it's possibly hardware dependent. You could try a zfs mirror; as a bonus, reads will also be faster, since zfs stripes reads across the mirror members.
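For anyone following along, creating such a mirror from the console looks roughly like this. The device names are placeholders (check lsblk first), and on recent Unraid releases you would normally build the pool from the GUI instead, so treat this purely as a sketch:

```shell
# WARNING: destroys existing data on both devices. /dev/sdX and /dev/sdY are
# hypothetical - substitute the real SSD device names. ashift=12 aligns to 4K sectors.
zpool create -o ashift=12 ssdpool mirror /dev/sdX /dev/sdY

# zfs stripes reads across mirror members, so read throughput can approach
# 2x a single device while writes stay at roughly 1x
zpool status ssdpool
```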