
Why are my Unraid file transfers so slow compared to others?

26 posts in this topic

I have a Gigabyte EP45-UD3P motherboard running Unraid 4.5 beta 6.  I am getting slow read and write transfer rates and I am trying to find out why.  I get the same speeds with or without a cache drive installed, and somewhat slower speeds on cache reads.  I have tried replacing the network cards in my desktop and the Unraid server with Intel PRO/1000 PT Server Adapters, with no better results.  I am seeing write transfer rates between 10MB/s and 18MB/s using TeraCopy, and read transfer rates between 10MB/s and 12MB/s.

I notice in the syslog that even though all of my hard drives are SATA II, some of them are defaulting to a 1.5Gbps link speed on the controller instead of 3.0Gbps.  All of the controllers are configured in the BIOS as AHCI.  I only tested with DVD files, for the best results.

I am going to attach my syslog to the thread.  I removed the Mover lines from the log because it was too big to attach otherwise.

I guess what I am trying to find out is whether anyone else is experiencing the same types of problems, and to get some suggestions on how to fix this.  Some of the threads I read stated that people are getting the same write speeds as mine without a cache drive installed, and I am fine with that, but on reads and writes with a cache drive installed some are claiming to get almost full disk speed, 60-80MB/s.  I am not seeing this when connected to Unraid.  I am seeing transfer rates between 80-100MB/s when transferring files between Windows XP, Windows 7, and Ubuntu 9 machines, and that is without jumbo frames enabled.

 


What does 80Mbps = 10MBps mean?  Are you saying they are lying about getting 80MB/s read speeds with a cache drive installed?

 

And if that is so, why are my speeds the same whether I disable the parity, add a cache drive, or leave parity intact and copy files directly into the array?

 

Also, why would my SATA ports be reporting "SATA link up 1.5 Gbps (SStatus 113 SControl 300)" instead of "SATA link up 3.0 Gbps (SStatus 123 SControl 300)" when I have all SATA II drives attached?


80 Mbit/s = 10 MByte/s.  TeraCopy is reporting MBytes/s (MBps).  Since your network appears to be 100 Mbit/s (Mbps), the maximum transfer rate across the network will be 100 Mbps / 8 bits-per-Byte = 12.5 MBps.  In the real world you'll never sustain that speed, so 10 MBps sustained is reasonable.
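The conversion is easy to sanity-check yourself. Here is a minimal Python sketch of the arithmetic above; the 0.8 efficiency factor is my own rough assumption for protocol overhead, not a measured value:

```python
def line_rate_to_mb_per_s(mbps, efficiency=0.8):
    """Convert a network line rate in megabits/s to an estimated
    sustained throughput in megabytes/s.  8 bits per byte; the
    efficiency factor is a rough allowance for protocol overhead."""
    return mbps / 8 * efficiency

print(line_rate_to_mb_per_s(100))       # Fast Ethernet: about 10 MB/s sustained
print(line_rate_to_mb_per_s(100, 1.0))  # theoretical ceiling: 12.5 MB/s
print(line_rate_to_mb_per_s(1000))      # gigabit: roughly 100 MB/s in a best case
```

So write speeds in the 10-12 MB/s range are exactly what a saturated 100 Mbps link looks like.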

 

A cache drive won't really help with network transfers on a 100 Mbps network; the network is the bottleneck.  Going to a 1000 Mbps network will (in theory) allow 125 MBps, and a cache drive would help in that situation.

 

The 1.5 Gbps vs. 3.0 Gbps SATA issue could be any number of things.  Check whether the drive has a jumper setting its speed to 1.5 Gbps (doubtful, but worth ruling out as a cause).  On a more practical level, that speed difference will only be significant on parity checks or disk-to-disk transfers inside the server.  It won't matter much, if at all, on transfers in or out of the server.
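If you'd rather not eyeball the whole syslog, a short script can pull out every negotiated link speed at once. This is only a sketch: the kernel message format is taken from the lines quoted in this thread, and the sample input below is made up for illustration (on a live server you would feed it your actual syslog).

```python
import re

# Matches kernel lines like:
#   "ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)"
SATA_LINK = re.compile(r"(ata\d+(?:\.\d+)?): SATA link up ([\d.]+) Gbps")

def sata_link_speeds(syslog_lines):
    """Return a dict of {port: negotiated link speed in Gbps}."""
    speeds = {}
    for line in syslog_lines:
        m = SATA_LINK.search(line)
        if m:
            speeds[m.group(1)] = float(m.group(2))
    return speeds

# Example against messages in the style quoted above:
sample = [
    "kernel: ata1: SATA link up 3.0 Gbps (SStatus 123 SControl 300)",
    "kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)",
]
for port, gbps in sata_link_speeds(sample).items():
    note = "  <-- check for a SATA150 jumper" if gbps < 3.0 else ""
    print(f"{port}: {gbps} Gbps{note}")
```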


Oops, just noticed your server network card is 1000 Mbps. Maybe something else on the network is limiting speed to 100 Mbps.


Also, there are reports that the stock Linux AHCI drivers do not perform well in some situations.... so I would not use AHCI mode.


I'm not sure this is what BubbaQ is referring to, but disabling NCQ has improved performance for some using AHCI.  Your syslog, however, does NOT have the set of lines disabling NCQ for your SATA drives, even though you are running v4.5-beta6!  The only thing I can think of is you may have changed the NCQ setting (on the Settings page) for "Force NCQ disabled" to NO.  If so, change it back to YES and reboot.

 

Razmajazz is right, you have 3 Seagate 500GB drives that probably have their SATA150 jumper still installed.  Please see the Improving unRAID Performance, Remove SATA150 Jumper section.  HOWEVER, I doubt if you will see a difference, the change will probably be negligible.  The bottleneck is almost always the network.  You also have a Seagate 120GB drive, but it is probably too old to support SATA II with 3.0 Gbps, so 1.5 Gbps is correct for it.

 

Your network looks like it is working correctly, at gigabit speeds with an Intel PRO/1000 card.  You also have dual gigabit onboard, and it is being fully set up.  That is not hurting anything, but you may want to disable the onboard gigabit networking, since you aren't using it.

 

In general, read speeds from your server, and writes to non-parity-protected drives such as the cache drive, should typically be between 22MB/s and 35MB/s.  Write speeds to parity-protected drives (not to the cache drive) should be between 8MB/s and 15MB/s.  There are many factors that affect these numbers, and it is not always possible to figure out why a given transfer runs at a somewhat different speed than another, even with the same machines.

 

Check out the Improving unRAID Performance wiki page for other ideas, such as the "Increase Read-Ahead Buffer" tip.  However, the info in that tip is for older versions; see this post instead for the correct loop.


This question of read and write speed often comes up, or some variation of it, so I finally wrote a broad FAQ entry for it:  "How fast is unRAID?"

 

This is such a subjective and variable-dependent issue that I decided to be very broad and very general, but include a little guidance near the end.  I'm quite sure none of us would write this the same way, and there is probably much to 'argue' in it, so feel free to edit or rewrite, or provide feedback here (constructive, please!).

 

I'd especially like to hear others' opinions on the speed numbers I have used.  Do these ranges sound right for the typical user to you?  I'm sure there are faster rates out there, and perhaps an extra note can be added about what some users have been able to accomplish.


I appreciate all of your replies.

 

So far I have checked the 500GB drives, and yes, they still had the 150 jumpers on them.  I feel like such an idiot because I thought I had already checked for those.  That is what I get for trying to look for something with limited lighting.  I checked the syslog and they are all showing up as 3.0Gbps except for the 120GB hard drive.  I already knew that drive was at 1.5Gbps.

 

I will play with the NCQ and AHCI settings, do some transfer tests, and report back.

 

I posted the updated syslog so you can see if there are any other changes since the removal of the 150 jumpers.


This is what I came up with!

 

With the 120GB SATA I drive installed as a cache drive, I was getting write speeds of 23MB/s.

 

Took the 1.5TB SATA II parity drive out of the array and added it as a cache drive.

 

Copied files directly to the cache share and received write transfer rates between 76MB/s and 102MB/s.

 

Copied files to the array and received write transfer rates between 18MB/s and 20MB/s.

 

For some reason I noticed there were no files on the cache drive from the copy to the array.  I think the files were copied directly to the array and bypassed the cache drive, even though the share has cache enabled.  Maybe moving the drives around without a reboot caused the problem.

 

Before rebooting the server I changed the cache back to the 120GB SATA I drive and did a speed test copying files directly to the cache share.

 

Copied files directly to the cache share and received write transfer rates between 38MB/s and 76MB/s.

 

Rebooted the server.

 

With the 120GB SATA I drive still installed as the cache drive, I copied files into the array and received transfer rates between 19MB/s and 22MB/s.

 

I did verify that the files went to the cache drive and not directly into the array.

I believe any time you take the array offline and move drives around, you should reboot immediately afterwards.

 

Changed the cache drive from the 120GB SATA I drive to the 1.5TB drive, rebooted the server, and received transfer rates of 20MB/s to 24MB/s.

 

So I came to the conclusion that speeds are faster when copying files directly to the disk than when copying files to the array shares.

 

So my question is, do I have to copy files only into the user shares (slower copy), or can I copy files directly to the disk shares (faster copy)?  Will I have a problem with split levels by doing that?  Though I don't think that is a problem, given that I am copying directly to the disk, so I am creating my own split level.  Finally, if it is okay to copy files directly to the disk, will that create a problem with the parity sync?  I am thinking not, because the parity sync happens at the block level and not the file, folder, or share level, but I am just making sure.

 

 

So my question is, do I have to copy files only into the user shares (slower copy), or can I copy files directly to the disk shares (faster copy)?  Will I have a problem with split levels by doing that?  Though I don't think that is a problem, given that I am copying directly to the disk, so I am creating my own split level.  Finally, if it is okay to copy files directly to the disk, will that create a problem with the parity sync?  I am thinking not, because the parity sync happens at the block level and not the file, folder, or share level, but I am just making sure.

 

You can copy directly to the disk shares, which will be faster, or you can copy to the User Shares, completely up to you.  The Split Level setting only applies to decisions that unRAID makes when you are saving files to a User Share, as to which drive to place the files on.  It does not affect what you yourself decide to do.  The parity protection is at the very lowest level, intercepts all writes to parity protected drives, no matter how you make those writes, and ensures that parity info is kept consistent.  There ARE ways to bypass that parity protection (and corrupt your parity info!), but it takes Linux experience and console commands.


This thread has been very informative for me, I'm glad I read it.  As for the speeds you mentioned in the wiki, I think they are accurate for most users.  However, maybe we should add a clause about copying to/from user shares being slower than copying to/from disk shares.  I didn't know that before now, and I've been using user shares exclusively up to this point.  I definitely get some writes that are 5 MB/s or slower, but I believe that is due to a combination of writing to a user share and being bottlenecked through the PCI bus. 


You are right.  Since I don't use User Shares, I forget about them.  For the same reason, I don't feel qualified to 'make up' a number range for them, what a typical user might expect.  We need some input from users who have a good idea what their average write speed is, to their user shares.


Writing to the user shares my rates are typically between 2 MB/s and 5 MB/s (according to Windows Explorer), even to disks that I know to be off the PCI bus (this is without a cache drive).  If I try to write several files at once, the speeds drop below 1 MB/s (so I learned that lesson!).  Other details: All SATAII, GigE LAN, Windows Vista x64.

 

This is just from memory, I'll double check these numbers tonight at home.


I took the parity drive out of the array and used it as the cache drive.  That is how I am able to obtain such high speeds now.  I am going to leave it this way until I finish ripping all of my movies, which is about another 600 discs or so.  For me, all 8 of the SATA II ports are on the motherboard.  I don't have any bus-based controllers installed yet.


RobJ: here are some stats from my server that you are welcome to use in the wiki if you like.  These are rough figures; I just watched the transfers and wrote down the min, max, and perceived average (whichever number I saw the most) for each one.  No statistical or mathematical analysis went into this.  My fastest drive is a 1TB WD Green over SATA II connected directly to the motherboard (hence, off the PCI bus).  My slowest drive is a 250GB Seagate, also over SATA II, but bottlenecked through the PCI bus (Promise card).  I used disk includes/excludes to make sure that files written to the user share were being written to the drive I intended.

 

For this set, my test file was a 1.09 GB single file (movie):

 

From desktop to user share on fastest drive: Range: 12MB/s - 18MB/s,  Average: 14MB/s

 

From desktop to disk share on fastest drive: Range: 14MB/s - 24MB/s, Average: 15MB/s

 

From desktop to user share on slowest drive: Range: 9MB/s - 14MB/s, Average: 13MB/s

 

From desktop to disk share on slowest drive: Range: 15MB/s - 25MB/s, Average: 13MB/s

 

 

For this set my test file was a 10.1 GB folder (entire TV show).  I cancelled each transfer after letting ~1GB of the transfer finish (I didn't want to wait 10+ mins for each trial).  The drives used were the same as above.

 

From desktop to user share on fastest drive: Range: 12MB/s - 15MB/s, Average: 14MB/s

 

From desktop to disk share on fastest drive: Range: 12MB/s - 32MB/s, Average: 13MB/s

 

From desktop to user share on slowest drive: Range: 4MB/s - 12MB/s, Average: 11MB/s

 

From desktop to disk share on slowest drive: Range: 14MB/s - 22MB/s, Average: 15MB/s

 

In summary, the difference in average write speed in each condition is negligible for these small write operations.  However, if I had several hundred GBs to transfer all at once, that little boost in speed from writing directly to the disk share would compound to create a more significant time savings.
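To put a rough number on that compounding effect, here is the back-of-the-envelope math in Python, using the 14 MB/s and 15 MB/s averages measured above; the 500 GB transfer size is just an illustrative assumption:

```python
def hours_to_copy(gigabytes, mb_per_s):
    """Hours needed to move a given number of GB at a sustained MB/s
    rate (using 1 GB = 1024 MB)."""
    return gigabytes * 1024 / mb_per_s / 3600

# Hypothetical 500 GB bulk transfer at the two measured averages:
user_share = hours_to_copy(500, 14)  # user share, roughly 10.2 hours
disk_share = hours_to_copy(500, 15)  # disk share, roughly 9.5 hours
print(f"saved: {(user_share - disk_share) * 60:.0f} minutes")
```

So even a 1 MB/s edge works out to around forty minutes saved on a transfer that size.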


You have probably already noticed that I did some benchmarking of UnRAID with IOZone:

http://lime-technology.com/forum/index.php?topic=3958.0

 

One part of the test was to verify the performance difference between disk and user shares. This is what I found with my setup (All drives Samsung Spinpoint F1 1TB models connected through MB/PCI-e):

Disk share performance was covered in Q3 (115MB/s read, 45MB/s write). User share read speed (chart 5h) is 70MB/s and writing speed 30MB/s. So the user share performance is around 60% of disk share performance.

Note that these are server-internal numbers; I don't yet have official numbers for the client side.  The writing speeds are very close to my real-life experience, though.

 

Rajahal, is your parity drive also a 1TB WD connected to the MB/PCI-e?  As the writing speeds for disk and user shares are almost identical, there definitely seems to be a bottleneck somewhere.

 

Edit: The average numbers are very close to 100Mbps network speed (though some peak values go way beyond it).  Do you have Gigabit Ethernet on all of the components?  Some ADSL modems/routers have only 100Mbps ports.  I for one have such a modem, so I use a separate 1Gb switch.


Rajahal, is your parity drive also a 1TB WD connected to the MB/PCI-e?  As the writing speeds for disk and user shares are almost identical, there definitely seems to be a bottleneck somewhere.

 

Edit: The average numbers are very close to 100Mbps network speed (though some peak values go way beyond it).  Do you have Gigabit Ethernet on all of the components?  Some ADSL modems/routers have only 100Mbps ports.  I for one have such a modem, so I use a separate 1Gb switch.

 

That's a good point, and one that has occurred to me.  Yes, my parity drive is also a 1TB WD Green connected directly to the mobo.  I know that my desktop and my unRAID server both have onboard Gigabit Ethernet ports, and I know that my router supports GigE as well.  Also, 100Mbps works out to about 10MB/s in practice, right?  Just last night I helped some friends transfer about 80 GB from computer to computer (both with GigE) through a router that was only 100Mbps.  The transfer stayed at a steady 10MB/s the whole time.  So if my transfers are greater than 10MB/s, I must be in GigE territory, right?  I'm using generic ethernet cables that I picked up for free, Cat5 I believe.  Would Cat6 cables or better quality Cat5 cables help?


I'm using generic ethernet cables that I picked up for free, Cat5 I believe.  Would Cat6 cables or better quality Cat5 cables help?

Cat5 is not rated for Gigabit Ethernet.  It is rated for 100MBit use. Cat5e and Cat6 are rated for Gigabit use.

 

See here:

http://en.wikipedia.org/wiki/Category_5_cable

and

http://en.wikipedia.org/wiki/Category_6_cable

 

It would be nearly impossible for most people to tell the difference between cat5 and cat5e just by looking at the cables.

 

A new cable or two would eliminate one possible source of your poor performance, and not cost too much to try.

(monoprice.com has 50 footers at under $5.00 here)

 

Joe L.


OK, well, that certainly sounds like the problem.  I'll take another look at my cables tonight (maybe they will explicitly say 'Cat5' or 'Cat5e', but I doubt it).  I'll probably buy new cables tonight or tomorrow.

 

So which do you suggest, Cat5e or Cat6?  Does it matter?


OK, well, that certainly sounds like the problem.  I'll take another look at my cables tonight (maybe they will explicitly say 'Cat5' or 'Cat5e', but I doubt it).  I'll probably buy new cables tonight or tomorrow.

 

So which do you suggest, Cat5e or Cat6?  Does it matter?

Not really.  You don't need to pay a premium for Cat6 cables... Cat5e will be fine.

 

It is not a guarantee to fix everything, but it will certainly eliminate one possible cause of poor performance.

 

Do you install your own connectors on the ends of the cables?  (I'm guessing you do not.)  If so, it is very critical to wire the connectors to the correct standard.

There are two standards: one for pairs of wires for telephone use, the other for LAN use.  Use the wrong standard and you'll be lucky to get any kind of performance at all.  If you purchase the cables pre-made, you'll be fine.

 

Do any of your LAN connections use older wiring in the walls of your house?  They might be suspect if not cat5e.

 

Joe L.

(I do make my own cables from time to time... My "crimper tool" has the telephone standard pasted on the inside of the lid of its case, clearly, it was originally marketed to telephone installers. It would be very bad to wire the connectors for LAN use following the supplied diagram in its lid  >:( )

 


I buy most of my cables from Monoprice (can't beat the prices).  I have a couple of 50 and 75 foot lengths of their Cat5e cable in my house.  I also have a 25 foot cable that I ran through the wall in my apartment (don't tell my landlord) to my basement so that I could hook up my server to my network.

 

You should have seen the look on my roommate's face when he walked in and half the furniture along the one wall was in the middle of the room, and there was a small hole in the wall where I was trying to fish the cable through.  When I buy my first house I am going to run Cat5e/Cat6 cable to all the rooms just to make sure I don't have to run cables around my baseboards again.


My opinion is a little different from Joe's here, but I'll admit I am not a networking expert.  My understanding is that Cat5e is considered adequate for gigabit networking, and Cat6 superior.  The price difference is small, and it gives you some headroom for the future.  It is the same amount of labor, hassle, time, and effort to install, and speeds are always increasing.  It seems preferable to me that if you are going to this amount of trouble, you might as well be ready for future requirements too, for perhaps another $10 for a set of ready-made cables.  Just my theory, but by running Cat6 now, you may be able to skip running Cat5e now and then having to run Cat6 (or Cat7!) in a couple of years.


RobJ's take seems reasonable to me, and the price difference is almost negligible at less than $5 ($8.01 for 100 ft Cat5e vs. $11.24 for 100 ft Cat6).

 

I don't crimp my own cables; I have never needed enough cable at one time to justify buying the $60 crimping kit.  It is a skill I would like to learn eventually, but for most situations it is cheaper to buy the pre-made cables.  Also, none of my cables are installed through walls or anything complicated, just desktop>router>unRAID server, so it will take me a mere minute to swap the cables out. 

 

Even though I don't need it right now, I'm thinking of buying a 100 ft cable for maximum flexibility in the future.  It would allow me to put my unRAID server in a back room or basement (if I had one).  Any reason not to?

 

...and then there's the really hard part - what color to choose?  I'm leaning towards purple or green because I've never had those colors in a cable before (I've had all the other colors, I believe).  Then again, gray or white blends in with the wall more....choices, choices.


RobJ's take seems reasonable to me, and the price difference is almost negligible at less than $5 ($8.01 for 100 ft Cat5e vs. $11.24 for 100 ft Cat6).

 

I don't crimp my own cables; I have never needed enough cable at one time to justify buying the $60 crimping kit.  It is a skill I would like to learn eventually, but for most situations it is cheaper to buy the pre-made cables.  Also, none of my cables are installed through walls or anything complicated, just desktop>router>unRAID server, so it will take me a mere minute to swap the cables out. 

 

Even though I don't need it right now, I'm thinking of buying a 100 ft cable for maximum flexibility in the future.  It would allow me to put my unRAID server in a back room or basement (if I had one).  Any reason not to?

 

...and then there's the really hard part - what color to choose?  I'm leaning towards purple or green because I've never had those colors in a cable before (I've had all the other colors, I believe).  Then again, gray or white blends in with the wall more....choices, choices.

If the price difference is that small, you won't find me arguing with the decision.  I know when I ran the cables in the walls of my home it was a royal pain to go from floor to floor.  I ran Cat5e, as Cat6 was not available.  I did run cable rated for in-wall use.  If I were to do it today, I'd use Cat6 in the walls.

 

As far as color goes... nice to know you have a choice.  I usually do not care.  Most of mine are gray  ;) ;)  (I purchased a 500 foot roll of gray cable when I stocked up, and I make cables as I need them.)

 

Joe L.

