EXTREMELY slow read speeds FROM array...


JimPhreak

Recommended Posts

I just upgraded both my UnRAID servers to the latest beta.  Everything seemed to be working great until I tried to initiate a backup from my main server to my backup server via SyncBack today.  I noticed extremely slow transfer speeds, so I stopped the backup to investigate.  In every test writing TO my main server's array I got roughly 100MB/s (cache drive).  However, any kind of READ I did from my array, whether from a disk share or a user share, got anywhere between 3MB/s and 14MB/s.  I have rebooted my server, VMs, and switch, but nothing has helped.  I've confirmed there is no other network load when I do my tests, and the fact that I can write to the cache drive over the network at 100MB/s tells me it's not a network issue.

 

What would cause slow transfer speeds ONLY when doing a read?
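One way to narrow this down is to measure raw local read speed on the server itself, bypassing the network entirely. The sketch below is self-contained (it creates and reads back a temp file); on a real server you would instead read a large existing file from the array, e.g. a hypothetical `/mnt/disk1/test.bin`. Note that rereading a freshly written file can be served from RAM cache, so real-disk numbers need caches dropped first (as root on Linux: `echo 3 > /proc/sys/vm/drop_caches`).

```shell
# Hedged sketch: measure local read throughput with dd, no network
# involved. A 64 MB temp file stands in for a real array file.
tmpfile=$(mktemp)
dd if=/dev/zero of="$tmpfile" bs=1M count=64 2>/dev/null

# Read it back; dd prints its transfer statistics on stderr.
read_stats=$(dd if="$tmpfile" of=/dev/null bs=1M 2>&1 | tail -1)
echo "local read: $read_stats"
rm -f "$tmpfile"
```

If local reads are fast but network reads are slow, the disks are effectively ruled out and the problem is on the network path.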

Link to comment

If you were writing to cache but possibly reading from array, maybe there is a disk problem. See here

 

So I should run some SMART tests on all my drives and then post the log?  Read speeds are equally slow when transferring from all 6 disks, so I can't imagine them all being bad.

Link to comment

Post the results of

ethtool eth0

from both servers just to make sure they are both networking at full speed.
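The key fields to look for in that output are Speed and Duplex. The sketch below uses canned sample text so it runs anywhere; on each real server you would pipe the live output instead, e.g. `ethtool eth0 | grep -E 'Speed|Duplex'`.

```shell
# Hedged sketch: extract Speed/Duplex from ethtool-style output.
# The sample below stands in for real `ethtool eth0` output.
sample='Settings for eth0:
	Speed: 1000Mb/s
	Duplex: Full
	Link detected: yes'

speed=$(printf '%s\n' "$sample" | awk -F': ' '/Speed/ {print $2}')
duplex=$(printf '%s\n' "$sample" | awk -F': ' '/Duplex/ {print $2}')
echo "Speed=$speed Duplex=$duplex"
# A healthy gigabit link should report 1000Mb/s / Full; 100Mb/s or
# Half duplex points at a cabling or auto-negotiation problem.
```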

 

I'm not clear from your description whether your testing involves both servers, or one server and another computer. If it involves both servers, then what you are interpreting as slow reads from one may actually be slow writes on the other.

 

Are you actually trying to read from disk shares or user shares?

 

Post a complete syslog, perhaps from both servers. Maybe that will show whether any drive needs a smart report.

Link to comment

Are these transfers across your VPN?    Any chance the issue is a slow uplink speed at the main server?

Did I miss something in the OP? Are these on different networks over the internet? That would certainly explain a lot.

 

You didn't miss anything in this thread => but if you look at the OP's other threads you'll see that his main and backup servers are in different locations connected by a VPN.

 

Link to comment


Both servers are on the same LAN right now.  I was trying to do a server-to-server backup over my LAN before I bring my backup server to my second location when I noticed the slow speeds.  I then did all subsequent testing from two different PCs on my LAN, via disk shares and also via user shares, and got the same slow speeds.  But when I transfer from a PC to my UnRAID server I get very good speeds.

 

So no, there is no VPN involved as all transfers are being done between two hosts on the same local subnet.

Link to comment

Still not clear: you say you got slow speeds with a PC, and you got good speeds with a PC. Were the slow speeds with one unRAID and the good speeds with the other unRAID?

 

UnRAID Server #1 -> PC #1 or PC #2 = 3-15MB/s

UnRAID Server #2 -> PC #1 or PC #2 = 3-15MB/s

UnRAID Server #1 -> UnRAID Server #2 = 10-15MB/s

 

PC to UnRAID Server #1 or #2 -> 100+MB/s

 

Link to comment

VERY interesting ... and perplexing.

 

Note that high speeds in one direction and low speeds in the other CAN result from a discontinuity in one of the pairs in the cable somewhere along the way => but with the SAME results from two different servers and two different PCs, it seems exceptionally unlikely this could be the case.    Is there any common path that's used for all these transfers [e.g. perhaps a switch -> switch cable]?

 

[One counterpoint to this possibility:  if it WAS the case it's likely that the slow speed would be fairly consistently in the 10-11MB/s range ... since the discontinuous pair would result in a dropdown to 100Mb speeds]
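The 10-11MB/s figure comes from simple arithmetic: a link negotiated down to 100Mb/s tops out at 100/8 = 12.5MB/s raw, and protocol overhead (roughly 10-12% for TCP/SMB is a common rule of thumb, not an exact figure) brings practical throughput down to about 11MB/s.

```shell
# Back-of-the-envelope conversion from link speed (Mb/s) to file
# transfer throughput (MB/s). The 0.88 overhead factor is a rough
# rule-of-thumb assumption, not a measured constant.
link_mbps=100
raw_MBps=$(awk -v m="$link_mbps" 'BEGIN { printf "%.1f", m/8 }')
practical_MBps=$(awk -v m="$link_mbps" 'BEGIN { printf "%.0f", m/8*0.88 }')
echo "raw=${raw_MBps}MB/s practical~${practical_MBps}MB/s"
```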

 

 

Link to comment


I can't see it being a cable since they're all basically brand new, but I guess stranger things have happened.  I'll try replacing them tomorrow and see if that does anything.  Not much else I can do tonight with no other cables to swap in.

Link to comment

Unless there's a common cable involved in all of the transfers between the servers and your PC's [as I noted, perhaps a connection between two switches or between a switch and router], then it's very unlikely this is the cause.    I was just brainstorming a bit on possibilities.

 

Which Beta did you upgrade from?  ... and have you tried reverting to that version (perhaps just on one server for testing) to see if you still have the same speed issue?

 

Link to comment


I upgraded from 5.06 to the latest 6 beta.

 

 

Are these all small files?

 

I've only tested with video files between 500MB and 3GB.

Link to comment

Do you by any chance have the VPN already configured locally on unraid? Perhaps the slow traffic is actually going out to the internet and back in?

 

I have nothing installed on UnRAID besides UnMenu, so no VPN is configured on it.  It simply has a static IP address in my LAN subnet.  Also, my VPN maxes out at 9MB/s, so the fact that the read speeds sometimes get as high as 15-17MB/s proves it's not going out over the VPN.
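One quick way to confirm traffic to a server stays on the local subnet is `ip route get <server-ip>`, which shows the interface (and any gateway hop) the kernel would use. The sketch below parses canned sample output so it is self-contained; `192.168.1.50` is a hypothetical server address.

```shell
# Hedged sketch: find which interface the route to the server uses.
# On a real machine you would run:  ip route get 192.168.1.50
# The sample below stands in for that command's output.
sample='192.168.1.50 dev eth0 src 192.168.1.10 uid 1000
    cache'

dev=$(printf '%s\n' "$sample" | \
      awk '{for (i=1;i<NF;i++) if ($i=="dev") print $(i+1); exit}')
echo "route uses interface: $dev"
# If this showed a tun/VPN interface, or a "via <gateway>" hop, the
# transfer would be leaving the local subnet.
```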

Link to comment

Have you tried power cycling the switch or other network gear? I've seen routers and switches get confused and cause weird issues.

 

Power cycled my switch twice.  I'll try power cycling my ESXi box when I'm on site tonight, since it houses my pfSense VM, which controls external access to my network.  If that goes down and I can't get it back up I lose remote access, so I'll do that this evening.

Link to comment

UnRAID Server #1 -> PC #1 or PC #2 = 3-15MB/s

UnRAID Server #2 -> PC #1 or PC #2 = 3-15MB/s

UnRAID Server #1 -> UnRAID Server #2 = 10-15MB/s

 

PC to UnRAID Server #1 or #2 -> 100+MB/s

 

Looking at your speeds again, this is NOT a case of reverting to 100Mb/s, since even the slow transfers get above 12MB/s.    It's also interesting that the Server -> Server transfer doesn't get quite as slow as the Server -> PC transfers (this may just be a function of what you transferred in the tests).

 

Any chance you did these transfer tests while the servers were very actively using the disks [i.e. during a parity check] ?  (Just brainstorming a bit ... I doubt that was the case, but it IS a situation that will slow things down a lot - and writes wouldn't be impacted since they go to the cache)
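Disk activity during a transfer can be checked from `/proc/diskstats`: sampling the sectors-read counter for a drive twice, a second apart, gives its read rate. The sketch below uses two hypothetical counter values in place of real samples; on a real server each sample would come from something like `awk '$3=="sda" {print $6}' /proc/diskstats`.

```shell
# Hedged sketch: estimate a disk's read activity from two samples of
# its sectors-read counter taken one second apart. The counter values
# below are hypothetical stand-ins for real /proc/diskstats reads.
before=1000000   # first sample of the sectors-read counter
after=1204800    # second sample, 1 second later
# Counters are in 512-byte sectors -> convert the delta to MB/s.
mbps=$(awk -v a="$before" -v b="$after" \
       'BEGIN { printf "%.0f", (b-a)*512/1024/1024 }')
echo "disk read activity: ~${mbps}MB/s"
# A sustained high rate while your network copy crawls would suggest
# something else (e.g. a parity check) is hammering the disk.
```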

 

Have you made any changes/updates since moving to v6 that would preclude reverting to v5 to confirm whether or not you still have the issue there?

 

 

 

Link to comment


No parity checks were running and the disks weren't very active at all.  I'm starting to think this is more of a networking issue, though, either related to the cabling or to some other configuration on my network.  At the same time I upgraded my UnRAID servers, I also did a bit of a network overhaul by installing pfSense in a VM and tying the same physical NIC to both pfSense and the rest of the VMs on my local network.  That could be causing some kind of issue, but I won't know until I can get over to that site this evening to do some testing.  The first thing I'm going to do is swap out some cables and test with that.

Link to comment

... At the same time I upgraded my UnRAID servers I also did a bit of a network overhaul by installing pfSense in a VM and tying the same physical NIC to both pfSense and the rest of the VMs on my local network.  That could be causing some kind of issue ...

 

Definitely sounds like a likely source of the problem  :)

[At this point, I doubt that it's a cable issue]

Link to comment

Well, it looks like it was indeed a networking issue.  I replaced all my cables and also added a second physical NIC to the vSwitch on my ESXi host, so I now have two NICs shared between all my VMs, including the LAN interface on my pfSense box.  It seems to be working much better as of now.  I'll keep an eye on this over the next few days as I continue my initial backup from my main server to my backup server.

Link to comment

Archived

This topic is now archived and is closed to further replies.
