10GbE unRAID to unRAID (Peer-to-Peer)


TechMed

Recommended Posts

First off - thanks Johnnie for the continued help digging into this. 

 

My current setting is weekly for SSD Trim. Should that be bumped up to a more frequent setting? Is it simply the need to keep moving large chunks of data around that could be slowing this thing down? If so I'll try that. If not, any suggestions on drives that will perform at the level required? I can't seem to find solid sources for information on what drive speeds will perform at - a la the current choice based on some savings. I would have rather splurged if I had known I would have the right performance expectations. 

Link to comment
5 hours ago, 1activegeek said:

My current setting is weekly for SSD Trim. Should that be bumped up to a more frequent setting?

Depends on how many writes you do, but changing to daily is a good option; in any case, it won't do any harm.
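For anyone wanting to move from weekly to daily TRIM without waiting on the plugin schedule, a daily fstrim can be set up by hand. This is just a sketch; /mnt/cache is an assumed mount point, so adjust it to your setup:

```shell
# Run TRIM manually on the cache mount (-v reports how many bytes were trimmed)
fstrim -v /mnt/cache

# Or schedule it daily at 04:00 via the root crontab:
# 0 4 * * * /sbin/fstrim /mnt/cache
```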

 

5 hours ago, 1activegeek said:

Is it simply the need to keep moving large chunks of data around that could be slowing this thing down? If so I'll try that. If not, any suggestions on drives that will perform at the level required?

I found the write test sometimes generates lower speeds than an actual transfer, what seeds are you seeing during a transfer?

I can for example tell you that a 500GB Crucial MX500 can write at 500MB/s for the first few GB while using SLC cache and then slows down a little but it can still sustain around 400MB/s writes.
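For reference, a local write test along these lines can be reproduced with plain dd. This is a sketch: the target path /tmp/ddtest.bin is a placeholder (point it at the cache mount, e.g. somewhere under /mnt/cache/, to actually test the SSD), and conv=fdatasync makes dd include the final flush in the reported speed instead of just filling RAM cache. Note that /dev/zero is highly compressible, so controllers that compress data may look faster than they really are; /dev/urandom avoids that at some CPU cost.

```shell
# Write 256 MiB and force the data to disk before reporting throughput.
# Writing to /tmp here is just for illustration; use a path on the cache SSD.
dd if=/dev/zero of=/tmp/ddtest.bin bs=1M count=256 conv=fdatasync
```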

Link to comment

Hi guys, just reading this and thought I should share some of my SSD experience. I have used a wide variety of brands and types of SSD over the last few years, including in my unRAID server.

Some general things I would like to share: never completely trust 'theoretical' versus actual SSD performance at different sequential/random sizes and queue depths; test for bottlenecks using random/incompressible data to RAM disks (as suggested above); and be very wary of the differences in TRIM, wear-leveling algorithms, and secure erase features.

Generally speaking, PCIe/NVMe (stick-based) SSDs are an improvement over older SATA models, but not always. If your mainboard supports PCIe/NVMe SSDs (often in the M.2/2280 form factor), the better models like the Samsung 970 Pro can sustain sequential transfers above 1250MB/sec on large multi-gigabyte file transfers, which will occasionally saturate your 10 Gbps (1.25 GB/sec theoretical - 8 bits in a byte!) NICs.

One other critical performance factor to consider is thermal throttling: the faster models can get very hot very quickly and may slow to a crawl to keep cool, so a decent passive heatsink on them can help.

Finally, endurance and reliability of SSDs on Linux is a real factor worth considering. I have had around 20% of my old/early (mostly under 100GB) SSDs completely die on me; different brands, but with similar controllers and questionable wear-leveling. Don't let that put you off though; things have improved significantly, and I've also got a 10-year-old Samsung 128GB SATA SSD still going strong with barely 5% of its rated TBW endurance used :)

I have just purchased a 500GB Crucial MX500 (PCIe/NVMe model, not SATA) and am looking forward to testing it, though I do not expect it to perform anywhere near as well as a Samsung Pro. It has a 5-year warranty and 100 TBW rated endurance, which should be more than enough for my average 5GB per day of writes. In theory it should last me up to 20,000 days, or over 50 years, but remember what I said about theoretical versus actual.
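The two back-of-the-envelope numbers above are easy to sanity-check in the shell (integer math, so slightly rounded):

```shell
# 10 Gbps line rate in MB/s: 10,000 Mbit/s divided by 8 bits per byte
echo $(( 10000 / 8 ))        # 1250 MB/s theoretical ceiling

# Endurance: 100 TBW at 5 GB written per day (using 1 TB = 1000 GB)
echo $(( 100 * 1000 / 5 ))   # 20000 days, i.e. roughly 54 years
```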

Link to comment
2 hours ago, johnnie.black said:

Depends on how many writes you do, but changing to daily is a good option; in any case, it won't do any harm.

 

I found the write test sometimes generates lower speeds than an actual transfer, what seeds are you seeing during a transfer?

I can for example tell you that a 500GB Crucial MX500 can write at 500MB/s for the first few GB while using SLC cache and then slows down a little but it can still sustain around 400MB/s writes.

Ok, I'll try stepping up the TRIM level first to see if that helps at all and try running fresh tests. 

 

When you say seeds, what do you mean? At the very base, I would imagine I should be able to see writes at >125MB/s, dwindling on the larger files as well. We are talking about real-life file tests of 2 and 4 GB in size. I'm shocked that almost any SSD at this point can't transfer a 2-4GB file in about 8 seconds. I believe the 4GB files I tried sending across were taking almost 24 seconds (roughly calculated to about 124MB/s). And that was over the network, while your write test was local, so it really makes me wonder what could be limiting local writes to such low levels when it should be 4x that number (yes, theory, I know); even if I could see something like 250-300MB/s I'd be less concerned. At this point, I'm just as well off running my cache on my WD REDs, which can perform at the same level.
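As a quick sanity check on those figures (a sketch using decimal megabytes; copy dialogs sometimes report GiB instead, which shifts things slightly): 4 GB in 24 seconds works out nearer 166 MB/s, while ~124 MB/s corresponds to a transfer of roughly 32 seconds.

```shell
# Throughput for a 4 GB (4000 MB) file, integer math
echo $(( 4000 / 24 ))    # 166 MB/s for a 24-second transfer
echo $(( 4000 / 124 ))   # ~32 seconds at 124 MB/s
```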

 

EDIT: I think you may have had a typo and meant speeds, in which case I did outline what I was seeing on network transfers. They are right in line with your test, which again just has me scratching my head. I am confident the bottleneck is local to unRAID, because the iPerf3 tests validate that 10GbE connectivity is working, and local write tests on the other endpoint are much faster than those on the unRAID side.
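For anyone following along, the iperf3 sanity check looks like this (these are real iperf3 flags; <server-ip> is a placeholder, and the 9.4 Gbit/s used in the conversion is just an example figure typical of a healthy 10GbE link, not a number from this thread):

```shell
# On the receiving box:   iperf3 -s
# On the sending box:     iperf3 -c <server-ip> -P 4   # -P 4 = four parallel streams
# Convert a measured 9.4 Gbit/s to decimal MB/s (divide by 8 bits per byte):
awk 'BEGIN { printf "%d\n", 9.4 * 1000 / 8 }'   # 1175 MB/s
```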

Edited by 1activegeek
Link to comment
2 hours ago, 1activegeek said:

EDIT: I think you may have had a typo and meant speeds

Yes, I meant speeds. I asked because in a recent test, local speeds using the write test script were well below the actual transfer speed over LAN; e.g., this is the previously mentioned Crucial MX500:

 

writing 10240000000 bytes to: /mnt/disks/temp2/test.dat
1146062+0 records in
1146061+0 records out
1173566464 bytes (1.2 GB, 1.1 GiB) copied, 5.00033 s, 235 MB/s
2139461+0 records in
2139461+0 records out
2190808064 bytes (2.2 GB, 2.0 GiB) copied, 10.0027 s, 219 MB/s
3360510+0 records in
3360510+0 records out
3441162240 bytes (3.4 GB, 3.2 GiB) copied, 15.009 s, 229 MB/s
4545742+0 records in
4545742+0 records out
4654839808 bytes (4.7 GB, 4.3 GiB) copied, 20.0134 s, 233 MB/s
5625066+0 records in
5625065+0 records out
5760066560 bytes (5.8 GB, 5.4 GiB) copied, 25.0169 s, 230 MB/s
6686880+0 records in
6686880+0 records out
6847365120 bytes (6.8 GB, 6.4 GiB) copied, 30.0352 s, 228 MB/s
7778356+0 records in
7778356+0 records out
7965036544 bytes (8.0 GB, 7.4 GiB) copied, 35.0504 s, 227 MB/s
8863897+0 records in
8863897+0 records out
9076630528 bytes (9.1 GB, 8.5 GiB) copied, 40.0248 s, 227 MB/s
9904846+0 records in
9904846+0 records out
10142562304 bytes (10 GB, 9.4 GiB) copied, 45.0248 s, 225 MB/s
10000000+0 records in
10000000+0 records out
10240000000 bytes (10 GB, 9.5 GiB) copied, 45.4796 s, 225 MB/s
write complete, syncing

[attached image: raid1_slc.png]

 

Edited by johnnie.black
Link to comment

Well, I'll start by changing the TRIM setting and see if I can run a test tomorrow earlier in the day, before any writes have happened after the scheduled TRIM, or at least before many have.

 

Another thought - is it possible, since I have so much Docker stuff writing to the cache, that it could be interfering and/or chewing up resources? I wouldn't think that should be the case, but I'm just grasping at straws here. I just can't believe this drive is really this slow. 😕
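One way to test that theory is to watch per-device write activity while a transfer runs. This is a sketch using /proc/diskstats (field 3 is the device name, field 10 is sectors written; which sdX/nvme device is your cache SSD depends on your setup). If the sysstat package is available, `iostat -xm 2` gives the same picture live.

```shell
# Snapshot sectors written for every block device; run this twice during a
# transfer and diff the numbers to see which device is taking the writes.
awk '{ print $3, $10 }' /proc/diskstats
```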

Link to comment
  • 1 year later...

You guys ever figure this out?

 

I noticed that in most of your pics, you top out around ~300 MB/s uploading to unRAID. I just installed an NVMe PCIe x4 SSD cache drive and there is NO IMPROVEMENT; I am stuck at the same speeds I got from a standard SATA 6Gbps SSD cache drive. However, I am able to fully saturate my 10GbE NIC with sustained 1 GB/s writes under one very specific scenario. Please let me know. I would like to get this figured out once and for all. My post is below. Thanks!

 

 

Link to comment
