
Big issue, unraid limited to write speed of a single SSD compared to Windows Server?



Hi all, I recently picked up a pretty cool 1U storage server to upgrade my homelab, fix some bad practices, and explore new software options for my NAS, so I'm quite new to unraid. Unfortunately, for budget reasons TrueNAS is off the list of options, since I can't afford to buy a bunch of high-capacity hard drives at this time.

Took a few weeks, but I was finally able to afford four 2TB SSDs and another 8TB drive for parity, so I finally started doing some proper testing after picking up a Pro license during the Black Friday sale, since I was near the end of my trial.

However, my testing quickly revealed a significant problem: the cache performance is, frankly, awful.

My old server has a 500GB NVMe drive as tiered storage with DrivePool, and I can easily saturate my 10 gig network with it. At first I was concerned that the SAS expander and backplane layout might be limiting the bandwidth available to my drives, so I fired up Windows Server as a sanity check.

With a Disk Management RAID 0 of the four SSDs I'm able to get 1,054 MB/s reads and 1,164 MB/s writes over the network in CrystalDiskMark (Q8T1), fantastic! That matches my other server.

But with unraid and a ZFS RAID 0 pool, I'm limited to 893 MB/s reads and only 434 MB/s writes.
With a BTRFS RAID 0 pool I see 703 MB/s reads and 449 MB/s writes.

I've enabled SMB multichannel on unraid and manually spun down the HDD array to ensure the only devices using any real bandwidth on the SAS channels are the SSDs, but these results are consistent and, unfortunately, kind of deal-breaking. The other sections of CrystalDiskMark are also significantly worse than Windows Server across the board.
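For reference, "enabling SMB multichannel" on unraid just comes down to an entry under Settings > SMB > Samba extra configuration, roughly like this (a sketch of the standard setting, nothing exotic):

server multi channel support = yes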

900 MB/s reads isn't bad, honestly, but only being able to write at ~400 MB/s is a substantial and noticeable downgrade from my current setup, and unfortunately this server has no way to add NVMe storage; the model that did was out of my price range, which is now looking very regrettable.

Now, I should actually be able to add a custom 5V supply internally and ghetto-rig another set of four SSDs, perhaps even eight off the Intel HBA chipset by removing the SSDs' shells, but that's another several hundred dollars and I'm not sure it would even help. It would be nice for capacity, but speed-wise, since four drives in RAID 0 or RAID 10 seem so limited, I really doubt it.

I was hoping to run my four SSDs in a pseudo RAID 5 array with single parity, so I'd have the storage space of three drives and still be able to saturate 10 gig, but it seems unraid can't do that, so I might need to use RAID 10 and lose half my SSD capacity, assuming this speed issue can be fixed at all.


One curious thing I just noticed: I shared the main array, but I also created a share only on the cache to make sure the test was only writing to those drives.

That share reports the overall capacity of the server, however, which is interesting. I had originally wanted the cache to be non-transparent and behave more like DrivePool, where the share shows the capacity of the cache as well as the spinning drives, but I'm wondering if this points to some sort of issue, though.

(screenshot)

 

I would ideally like to have a single share like this, but the quantassd share doesn't show files off the array, so that isn't going to work how I want at all, and it would even be misleading if I wanted a share that stayed on the SSDs, it seems...

 

edit: After changing secondary storage to none, the capacity changed, though it's still higher than it should be. Not a huge issue right now, since what I want to fix is the speed. I had the share set up with no secondary storage before and wanted to test again now; this did not affect speeds.

 



A slight bit of further testing: I enabled NetBIOS and disabled enhanced macOS compatibility.

With BTRFS RAID 0 I get 906 MB/s reads, a jump upwards, but only 440 MB/s writes yet again. ZFS gave results similar to the first test as well. I also enabled reconstruct write, but I don't think that affects the cache?

4 minutes ago, JorgeB said:

It doesn't, you can try using a disk share or an exclusive share.

Ok, that's what I figured, since you'd want the cache to be as snappy as possible. It's honestly probably a minor thing once I configure mover tuning.

I had originally wanted tiered storage, like I currently have set up in my mess of a server, which goes 500GB NVMe > 500GB SATA SSD > RAID array.

It's entirely possible now that I'll be ingesting a terabyte of raw video at a time, which is why I needed to upgrade, and I basically want files to stay on the SSD tier as long as possible before being shoved off. Mover tuning should get me close, I think, but I can't really start testing it properly until I get this speed issue figured out.

It will be a shame to lose the easy at-a-glance view of the SSD tier's capacity, but it seems like I just have to suck it up.

I was also thinking I could run a Windows Server VM on top of unraid with DrivePool to get that functionality, passing the array and cache through to it, but that's clunky. Maybe if mover tuning doesn't work how I want. Pinning specific folders would be nice, after all, but that's low priority since I can't even properly use the server currently.
 


I see matching speeds on the unraid server as well. I haven't tried running multiple tests; I'll do that when I get back home in a few hours.

 

I did try the DiskSpeed docker, but it doesn't seem to let me benchmark a single SSD regardless. Is there a particularly good way to benchmark the cache itself locally, within unraid?
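In the meantime, the quickest local sanity check I can think of is fio from the unraid console, pointed straight at the pool mount. Something like this sketch, assuming fio is installed and the pool is named "cache" (and --direct=1 may need to be dropped on ZFS, since it can refuse O_DIRECT):

# sequential 1M writes straight to the pool
fio --name=seqwrite --directory=/mnt/cache --rw=write --bs=1M --size=16G --ioengine=libaio --iodepth=8 --numjobs=1 --direct=1 --group_reporting
# matching sequential read
fio --name=seqread --directory=/mnt/cache --rw=read --bs=1M --size=16G --ioengine=libaio --iodepth=8 --numjobs=1 --direct=1 --group_reporting

That would at least show whether the pool can do ~1 GB/s locally, or whether the bottleneck exists before SMB is even involved.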

 

It makes zero sense, especially since the exact same hardware works perfectly with Windows Server. I can boot back into that, configure the drives, and immediately get the expected performance.


Yeah, I'm not sure why it's reading kind of low. That said, according to Task Manager I do hit 10 gig speeds transferring to my other machine, and CrystalDiskMark shows this, which is far more in line with expectations; it also holds these speeds for 64 GiB:

(screenshot)

Interestingly though, iperf in the other direction shows a significant difference. I thought I had run it both ways yesterday and saw matching figures, but I was fairly tired and on a mix of caffeine and allergy meds.

 



Alright, so, more iperf testing with four streams. I did single-stream testing with the Quanta server (the unraid box) booted into Windows and got similarly poor iperf results, but for some reason that doesn't affect transfer speeds there. Writing this a bit as I go.
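For reference, the runs are nothing fancy, basically stock iperf with parallel streams, along these lines (iperf3 syntax shown; the IP is just a placeholder for whichever box is listening):

# listener on one machine
iperf3 -s
# four parallel streams from the other machine, then the same again reversed with -R
iperf3 -c 192.168.1.50 -P 4
iperf3 -c 192.168.1.50 -P 4 -R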

 

From my desktop to my main server.

(screenshot)

 

Then from the main server to the desktop, more like what I should be getting.

(screenshot)

 

Now, from the Quanta server to my desktop, running Windows Server. I did notice the second 10 gig port being a little flaky for some reason, so I disconnected it; I need to investigate that.
(screenshot)
Then desktop to Quanta server running windows server.

(screenshot)


That all looks fine; my old server actually performs a little worse, but it's not noticeable in use.

So, now with unraid booted, from my desktop to the Quanta server.

(screenshot)

 

and the Quanta/unraid server back to my desktop.

(screenshot)



So, that all looks fine and dandy, back to testing with the fresh unraid install.

Interestingly, this time I saw more than 400 MB/s in the unraid UI, but my write speed is still trash and reads are back to where they were.

(screenshot)


So I enabled exclusive access and NetBIOS again, shared the cache with no secondary storage, and things are looking much better.

(screenshot)

I saw up to 1.6 GB/s in the unraid UI, and the final results are 1,022.43 MB/s reads and 810.9 MB/s writes.

It seems like the flaky network connection was causing some sort of issue. I don't know if it's a transceiver problem or something, but this is a massive step forward.

That being said, I'm still a bit under the performance of my other server. This is at least what I'd consider usable now, though. I'll run some tests with real-world file transfers next, but I need a screen break.
 


Sigh, this is really feeling like I wasted $160 right now. There's still a bunch of other cool stuff I want to check out with unraid, but there's no point when it looks like unraid just sucks at network storage for some reason. Fuck, this is frustrating.

Exact same hardware config, I've tried everything I can find by searching, and I still get massively worse performance on unraid than on Windows Server, even with a disk share.

SMB on unraid is supposed to be as good as anywhere else. I don't know, is there some sort of additional tweak for SMB multichannel in a config file or something?
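The only config-file tweak I've seen suggested is explicitly telling Samba the interface's speed and RSS capability, since multichannel over a single link supposedly only kicks in when the NIC advertises RSS. Something like the following in the Samba extra configuration (the address and speed are examples, and I haven't verified that it actually helps):

server multi channel support = yes
interfaces = "10.0.0.10;speed=10000000000,capability=RSS"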

Windows Server, yet again with the flaky transceiver pulled, easily saturating the network with a 17GB folder of files:

(screenshots)

 

But unraid? Half that, even with the disk share. And I'm still using ZFS RAID 0; if anything, you'd think ZFS would be faster than NTFS.

(screenshot)

 

Testing with ~200GB of files on an external NVMe SSD, the transfer takes four to five minutes longer on unraid than on Windows, even with Windows doing its classic dip-then-speed-up behavior. Scale that up to a terabyte and the difference gets even bigger.
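Back-of-the-envelope math on that, assuming the sequential numbers above hold for the whole transfer:

~200 GB at ~1,100 MB/s ≈ 180 s, call it 3 minutes
~200 GB at ~550 MB/s ≈ 365 s, call it 6 minutes

which is the same ballpark as the gap I'm seeing once the fluctuations are factored in, and scaled to a terabyte it becomes roughly 15 minutes versus 30.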


Disabling bonding, as expected, didn't help. So much for proper sleep; unfortunately I can't think of anything else to try at the moment.

I don't know if I can pass the array and cache drives through directly (not as a share) to a Windows Server VM to test that. Right now the issue seems to be unraid itself, so if a VM handling the network share could saturate the network, maybe that would fix things, but it seems like a stupid workaround to have to do...

