RAID 0 Not Improving Write Speeds



Hey everyone! I'm new to the world of Unraid, so excuse my ignorance! I've done a bit of research into cache pools and ended up purchasing two of these 240GB SSDs:

 

https://www.amazon.com/dp/product/B01N5IB20Q/

 

The write speed for each SSD maxes out at 350MB/s, so I assumed that with RAID0 (which I did configure in the cache settings; I'm planning to move to RAID10 once I get more SSDs) I'd see something closer to 500MB/s. However, my read/write speeds are no different than with RAID1: both give me somewhere around 300MB/s. I'm a bit lost as to where to go next. My motherboard is SATA II, which is why I opted for RAID0 rather than simply buying a larger, faster drive.
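(For reference, one way to double-check what link speed each SSD actually negotiated, assuming smartctl is available and the cache drives show up as /dev/sdb and /dev/sdc, which may differ on your system:)

smartctl -i /dev/sdb | grep -i 'SATA Version'
smartctl -i /dev/sdc | grep -i 'SATA Version'

A SATA II port will report something like "(current: 3.0 Gb/s)", which after encoding overhead works out to roughly 300MB/s per drive.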

 

I was previously using a FreeNAS system with my drives in a RAID10 configuration and got around 515MB/s writes, so I know my network setup is capable (I'm using 10Gbps Mellanox cards). I was hoping for similar performance after switching to Unraid, but haven't been able to achieve it just yet.

 

Any tips on what I might be doing wrong? 

 

Here is some info I figure might be helpful:

 

BTRFS "Balance Status"
Data, RAID0: total=4.00GiB, used=768.00KiB

System, RAID1: total=32.00MiB, used=16.00KiB

Metadata, RAID1: total=1.00GiB, used=112.00KiB

GlobalReserve, single: total=16.00MiB, used=0.00B
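(That's just what the Unraid GUI shows; the same profile and usage info can be pulled on the command line, assuming the pool is mounted at /mnt/cache as it is here:)

btrfs filesystem df /mnt/cache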

 

Disk Settings:

[Screenshot of cache pool disk settings]

Link to comment
17 hours ago, johnnie.black said:

Make sure jumbo frames and Direct I/O are enabled.

I did try jumbo frames and ran into some issues; I'm not sure if my NICs are capable of them. When I enabled Direct I/O, my writes went up a little, but my reads dropped to about half. I'll try both simultaneously when I get home and maybe it'll make a difference.

Link to comment

I did try both simultaneously last night, and jumbo frames were still giving me an issue. Every time I set the MTU to 9000 and rebooted, I couldn't even connect, which is the same issue I'd had before. With Direct I/O enabled, like I said, I get slightly faster writes (maybe 10-20%), but my reads drop to half the speed. Thanks for the suggestion. I'm guessing that if I could figure out the jumbo frames issue, it might help a little, but apart from that, I feel like there's a much larger bottleneck that I'm not seeing...

Link to comment
35 minutes ago, Takedown said:

Every time I set the MTU to 9000 and rebooted, I couldn't even connect, which is the same issue I'd had before.

This suggests your Ethernet hardware doesn't support jumbo frames. They need to be enabled everywhere, including on the switch if you're using one.
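A quick way to check whether jumbo frames actually make it end to end is a don't-fragment ping sized for a 9000-byte MTU (8972 bytes of data plus 28 bytes of IP/ICMP headers); the interface name and address below are only examples, substitute your own:

ip link show eth0 | grep mtu
ping -M do -s 8972 -c 4 192.168.1.10

If anything in the path is still at MTU 1500 you'll typically get "message too long" errors or no replies at all.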

 

Try this script to see if the pool itself is performing well while bypassing the network; copy it to the flash drive root and run it with:

 

/boot/write_speed_test.sh /mnt/cache/test.dat

 

You can test with a single-device vs. RAID0 pool, and while RAID0 won't double the speed, it should show a significant improvement.

write_speed_test.sh
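If you'd rather not use the attached script, a rough stand-in is a large dd write that forces the data to disk before reporting a speed (the path and size are only examples, and note that some controllers compress zeros, so treat the result as optimistic):

dd if=/dev/zero of=/mnt/cache/test.dat bs=1M count=8192 conv=fdatasync
rm /mnt/cache/test.dat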

Link to comment
43 minutes ago, johnnie.black said:

This suggests your Ethernet hardware doesn't support jumbo frames. They need to be enabled everywhere, including on the switch if you're using one.

  

Try this script to see if the pool itself is performing well while bypassing the network; copy it to the flash drive root and run it with:

  


/boot/write_speed_test.sh /mnt/cache/test.dat

  

You can test with a single-device vs. RAID0 pool, and while RAID0 won't double the speed, it should show a significant improvement.

write_speed_test.sh

I ran the script and I'm getting 190-250MB/s in RAID1 and 210-240MB/s in RAID0. That's strange to me, as when I run a disk speed test or simply transfer a file over the network, I get closer to 300MB/s. Does that mean I have something configured improperly with RAID0? As I mentioned before, the write speeds on these drives are supposed to be 350MB/s.

Link to comment

Those Kingston SSDs are slow TLC models; they can't sustain writes of much more than 100MB/s each. The high result with just one drive likely means you have a lot of RAM and it's caching the writes. Type this before the test and run it again:

 

sysctl vm.dirty_ratio=1
sysctl vm.dirty_background_ratio=1

 

9 hours ago, Takedown said:

which is rated at 550MB/s write

What model? Often they're only rated at those speeds for a short time, e.g., the 120GB 850 EVO can write at 500MB/s while it's filling its SLC cache, but that cache is just 3GB, and then it drops to a maximum sustained write speed of just 150MB/s.
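An easy way to see that fall-off on your own drives (just a sketch; adjust the path and make sure the pool has around 20GiB free) is to write a file well past the SLC cache size with direct I/O and watch the live rate, which will start high and then drop once the cache is exhausted:

dd if=/dev/zero of=/mnt/cache/slc_test.dat bs=1M count=20480 oflag=direct status=progress
rm /mnt/cache/slc_test.dat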

Link to comment
On 11/22/2018 at 12:52 AM, johnnie.black said:

Those Kingston SSDs are slow TLC models; they can't sustain writes of much more than 100MB/s each. The high result with just one drive likely means you have a lot of RAM and it's caching the writes. Type this before the test and run it again:

  


sysctl vm.dirty_ratio=1
sysctl vm.dirty_background_ratio=1

  

What model? Often they're only rated at those speeds for a short time, e.g., the 120GB 850 EVO can write at 500MB/s while it's filling its SLC cache, but that cache is just 3GB, and then it drops to a maximum sustained write speed of just 150MB/s.

Interesting! I honestly didn't even know that SSDs had caches as well. I guess I figured it could all be written at the same speed... I entered those commands and got a seemingly slower speed when testing afterwards. What's the optimal number to set these to if I'm using 8GB of RAM? Does it matter much?

 

Here's the model of the SSDs I use for video productions: https://www.amazon.com/dp/B00KHRYRLY/

Link to comment

I did some transfer tests after setting these ratios (and I also researched what I was actually modifying). Using RAID0 now does make a difference of about 80MB/s over RAID1. I'm still not getting the speeds I'd hoped for, though... Are all SSDs going to have a sustained write speed significantly lower than their rated sequential write speed?

 

Edit: I found https://ssd.userbenchmark.com/, which has an option to compare average sustained write speeds. If I'm doing the research correctly, it tells me that one of these https://www.amazon.com/dp/B0786QNS9B would be better than the two Kingstons in RAID0? (Eventually my goal is to get three more and run RAID10 to maximize performance.) Do you think that would get me over 300MB/s?

Link to comment
4 minutes ago, johnnie.black said:

The MX500 is much faster, and the Samsung 850/860 EVO are also fast. The 250GB 850 EVO can sustain 300MB/s writes and the 500GB model can sustain 500MB/s; I believe the 860 EVO performs similarly, and the MX500 should also have similar performance.

Alright, I'll be looking to upgrade my SSDs soon. I'll post an update when I do so! Thanks so much for your help. 

Link to comment
