Drives running at SATA150 instead of SATA300/600



I'm new to UnRaid. I'm on the 3rd day of my 30-day trial, still figuring things out, getting everything set up, etc. I plan on using it to run Plex and Nextcloud. I currently have 2x 8TB drives (1 Seagate IronWolf and 1 WD Purple) as parity drives and 6x 2TB drives (5 WD Purple and 1 Seagate Barracuda) for storage. The parity drives are on the onboard SATA ports (since the LSI SAS card has a 2.2TB limitation) while the storage drives are on the LSI SAS ports. Both the SATA and SAS ports are rated for 3Gb/s and the drives themselves are SATA 6Gb/s drives, yet the drive info says the storage drives are running at 1.5Gb/s. Why? When this machine was set up as an Ubuntu server it reported the same thing, but when it ran Windows 10 Pro all the drives linked at 3Gb/s.



Here's the diag zip file. Also, I'm wondering why I top out at 40MB/s over SMB from my Windows 10 PC on a gigabit network, regardless of whether SSD cache drives are installed or not. I doubt it's related to the drives' interface being downgraded to 1.5Gb/s, since even that is roughly 150MB/s of usable bandwidth and should be enough to transfer closer to 100MB/s. But is it related?

ais-data-server-diagnostics-20200422-1636.zip

7 minutes ago, ElectroBlvd said:

Also I'm wondering why I top out at 40MB/s over SMB from my Windows 10 PC over a gigabit network regardless of having SSD cache drives or no cache drives

In those diagnostics you do not have a cache drive installed. 40MB/s is actually right around where you are going to be with the system in "read/modify/write" mode instead of "reconstruct write" (Settings → Disk Settings). That mode requires extra reads, which do take a hit on transfer rates, but it saves power by only requiring the single data drive and the parity drives to be spun up.
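A minimal sketch of toggling that mode from the console, assuming Unraid's mdcmd tool and the 0/1 values commonly cited on this forum (verify against your Unraid version before relying on it):

    # enable reconstruct ("turbo") write: parity is rebuilt from all the
    # data drives in the stripe, so no read-back of the parity drive is needed
    mdcmd set md_write_method 1

    # revert to the default read/modify/write behavior
    mdcmd set md_write_method 0

The trade-off is that reconstruct write needs every array drive spinning, while read/modify/write only touches the target data drive and parity.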

6 minutes ago, Squid said:

In those diagnostics you do not have a cache drive installed. 40MB/s is actually right around where you are going to be with the system in "read/modify/write" mode instead of "reconstruct write" (Settings → Disk Settings). That mode requires extra reads, which do take a hit on transfer rates, but it saves power by only requiring the single data drive and the parity drives to be spun up.

Yes, I realize I do not have cache drives installed at the moment. I originally had no cache drives and saw transfer rates bouncing around 35-40MB/s. I did some research and found articles explaining how cache drives work and the performance gains I could expect from them. Many people said they were achieving 100MB/s over their gigabit network; some said they saw increases to 60MB/s. Thinking I could at the very least double my speed to 60MB/s, I installed 2x 256GB SSDs as cache drives. I made sure things were transferring to the cache instead of the array (I watched the cache drives' used space go up while the array drives' did not, as verification). But my speed was still sitting around 35-40MB/s. I saw no point in shortening the lifespan of my SSDs if there were no performance gains, so I removed the cache drives.

 

EDIT: I did also install the Turbo Write plugin, made sure things were set to "reconstruct write", etc. I followed every tutorial I could find and still only got 35-40MB/s with the 2x 256GB SSD cache drives.

28 minutes ago, johnnie.black said:

Turbo write doesn't affect the cache pool. It's also a good idea to run an iperf test to check LAN speed.

Yes, I know that it doesn't affect the cache pool. In a nutshell, it keeps all the drives spinning so parity can be computed without the extra read cycle for the parity bits. I noted in an edit, as a separate paragraph, that I also tried that to speed up my write speed, with no change. I'll run an iperf test through the terminal, but I'm a field service engineer who works on copper and fiber networks daily for a living; I'm confident my network is fine.
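A quick sketch of that test, assuming iperf3 on both ends (on Unraid it is commonly added through the NerdPack plugin) and a placeholder server address of 192.168.1.10:

    # on the Unraid server
    iperf3 -s

    # on the Windows PC: single stream, then four parallel streams
    iperf3 -c 192.168.1.10
    iperf3 -c 192.168.1.10 -P 4

Comparing the single-stream and multi-stream results helps separate a per-connection bottleneck from a raw link problem.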


If you get the same 40MB/s writing to the array with or without turbo write, and also when writing to an SSD cache pool, the problem is likely LAN-related or on the source computer.

 

Also, regarding the original question: the disks on the onboard SATA ports are correctly linking at 3Gbps, and the Seagate on the LSI is also linking at 3Gbps. Only the WDs are linking at 1.5Gbps, so it's a controller/device compatibility issue. Make sure the LSI is running the latest firmware, but note that for those disks it won't make much difference, since their max read/write speed is only around 140-150MB/s anyway.
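Both points are easy to check from the terminal. smartctl reports the supported versus currently negotiated SATA link speed, and LSI's sas2flash utility (downloaded separately from the Broadcom/LSI support site) lists the HBA firmware version; /dev/sdX below is a placeholder for the actual device:

    # supported vs. negotiated link speed for one disk
    smartctl -i /dev/sdX | grep -i 'SATA Version'
    # example output: SATA Version is: SATA 3.0, 6.0 Gb/s (current: 1.5 Gb/s)

    # list LSI SAS2 controllers and their firmware versions
    sas2flash -listall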

27 minutes ago, johnnie.black said:

Test should be run with a single stream.

And that'll do it... From my main PC to my secondary PC I'm getting 919Mb/s on a single stream. From my main PC to UnRaid I'm getting 299Mb/s. So my network connection to my UnRaid server is basically capped at 40MB/s on a single stream (299Mb/s ÷ 8 ≈ 37MB/s). Should I open a new topic for that, or can I ask here: why is my network connection to my UnRaid server capping at 299Mb/s?




Is this perhaps because I'm on the trial at the moment? Or is it a driver issue in unRaid not utilizing the gigabit network adapter in my Z800 properly? The main page shows eth1 as 1000Mbps, full duplex, MTU 1500, so it doesn't make sense to me why I can't saturate that 1Gb connection. I want to purchase the Tier 3 Pro package, but I really want to work out any kinks before making that leap, and so far I believe I've got everything worked out except this transfer rate. I'm loving the usability and plan on implementing it in my business to store my clients' data off site, but with the number of clients who would be uploading to the server, 35-40MB/s just isn't going to cut it. If I can at least get to the 60MB/s I see a lot of people talk about, that would be great.

Link to comment

Tips and Tweaks is installed. I made changes to buffers, offloading, etc., and nothing changed. The router is an EdgeRouter 6P with no QoS, and everything is optimized for throughput. Besides, if it were a router issue it shouldn't be isolated to just the unRaid server. I did read somewhere that unRaid has driver issues with some built-in network controllers (mainly Realtek, but my Z800 has Broadcom), and that some built-in controllers lean on a CPU thread rather than doing their own legwork. I wouldn't think an HP server would have those issues, but then again it is an older machine. So I'll grab a PCIe gigabit card, drop it in, and report my findings.
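Before swapping hardware, it may be worth confirming which kernel driver the onboard NIC loaded and what the offload settings look like; a sketch with ethtool, using the eth1 name from the main page (substitute your interface):

    # driver in use (Broadcom onboard ports usually load tg3 or bnx2)
    ethtool -i eth1

    # negotiated speed and duplex
    ethtool eth1

    # current offload settings (TSO, GSO, checksumming, etc.)
    ethtool -k eth1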

Link to comment
  • 1 month later...

I never did get that PCIe gigabit adapter, but I did say I'd come back with any changes. Turns out my issue was a failing hard drive and not network related. I don't know why iperf showed 300Mbps on single-stream tests; I have read that the only way to saturate a 1G connection with iperf is to use multiple streams, but I don't know much about it, so I'm not even going to sit here and pretend. That's just what I read.

Either way, I continued to see transfer speeds of 30-40MB/s after I left back in April, with 40MB/s being the peak and the average between 30-35MB/s (obviously, since I hadn't changed anything in the server). That started to slow down to the 20-25MB/s range, and then one day I could no longer stream a 4K movie smoothly from the server when I had streamed it many times before. It would play the first 10-20 seconds, then buffer for minutes before playing another 10-20 seconds.

So I poked around and ran some tests. Out of the 6x 2TB data drives I was only using roughly 2.75TB, with the first 2 drives holding 1TB of data each and the 3rd drive holding the remaining 0.75TB. The first data drive showed over 200 in the reallocated sectors count, so I ran a SMART test. It failed at 33% with read errors. I moved all of the data off drive 1 to drive 4 and pulled the drive (I have not replaced it yet due to financial hardships caused by this pandemic). Now when I transfer to the server I hit ~60MB/s peak and average around 52-54MB/s. I'm happy with 50-60MB/s; it's a lot better than 30MB/s, and that's all I wanted to begin with, since I saw many people stating they could achieve 60MB/s. Stupid failing hard drive...
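For anyone hitting similar symptoms, those checks map to a couple of smartctl commands (again, /dev/sdX is a placeholder for the suspect disk):

    # SMART attributes; a rising Reallocated_Sector_Ct is a red flag
    smartctl -A /dev/sdX | grep -i reallocated

    # start an extended self-test, then read the log once it finishes
    smartctl -t long /dev/sdX
    smartctl -l selftest /dev/sdX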

59 minutes ago, johnnie.black said:

Good that you fixed it, but that's not correct; I can practically saturate a 10GbE connection with iperf and a single stream.

I read that on 3 different sites, so there's a good possibility someone was just repeating info they read, like I am. I can't find the other 2 now, but here's one of them. I know it's old, from 2015, but the other 2 I found weren't that old. Is it just older versions of iperf? Or just bad info?

 

https://community.spiceworks.com/topic/724397-iperf-gigabit-bandwidth-test

 

Would you mind explaining why I only got roughly 300Mbps on a single stream but could get over 900Mbps on multiple streams? I just don't see how the 300Mbps test result is accurate when the multi-stream test showed I could hit 1Gb, and after pulling the bad drive I'm now transferring at 60MB/s, which is approximately 500Mbps. What am I not understanding?

