Let's talk 100MB/Sec+ Read/Writes


chickensoup

Recommended Posts

What would it take to be able to sustain average read/write speeds over 100MB/Sec when accessing an unRAID server over LAN?

Some points to note, feel free to correct me if I'm wrong:

 

Hardware 

1: You WILL need a cache drive. Is a RAID/SSD cache drive required to achieve these speeds?

2: You WILL need a gigabit LAN and decent-quality CAT5E/CAT6 cable

3: You WILL need decent read & write speeds at the other end as well ("your PC")

4: Is an aftermarket NIC, such as an Intel Pro/1000, going to be beneficial?

5: Is an aftermarket controller card going to be beneficial/required?

 

Software

1: What version of unRAID would be most beneficial to copy speed; will it matter? 4.7 vs 5.0x

2: What is the most accurate way to record throughput? Terminal vs Windows vs Teracopy vs 3rd Party

3: Which operating system and/or filesystem offers the best performance? Linux/Windows

4: Assuming most unRAID servers are used for storing media, what settings would help optimise file transfers for larger files, e.g. 1GB-8GB in size?

 

I'm sure there's more to note, but if anyone is currently able to achieve these speeds, please post your setup below so others can get some ideas  ;D
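
For context, the quick maths on the network side: gigabit is 1000Mbit/sec, which is 125MB/sec before any overhead (1000 ÷ 8 = 125). Subtract Ethernet/IP/TCP framing and SMB overhead and the practical ceiling for a single transfer usually lands somewhere around 110-118MB/sec. So 100MB/Sec+ is achievable on a single gigabit link, but only just; every other part of the chain (source disk, cache drive, NICs, switch) has to keep up.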

 

Link to comment

My box in my sig has a 4-port PRO/1000 PT NIC connected in etherchannel mode (static LACP) to an HP ProCurve 1810G-24.  My main PC has a 2-port version of the same card, again trunked up in etherchannel.  The highest burst I've seen is around 160MB/sec; sustained typically sits around 120MB/sec or so.  That's with a Corsair Force 3 SSD on a SATA 3 port, RDM'ed through to the unRAID VM.  High-quality Molex CAT6 cable.
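
(For anyone doing the aggregation at the Linux level rather than in the ESXi vSwitch, the kernel bonding driver exposes its status under /proc; a quick sanity check looks something like the below. bond0, eth0 and eth1 are just placeholder interface names for your own setup.)

cat /proc/net/bonding/bond0              # bonding mode, LACP partner info, per-slave link state
ethtool eth0 | grep -E "Speed|Duplex"    # confirm each slave negotiated 1000Mb/s full duplex
ethtool eth1 | grep -E "Speed|Duplex"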

 

Internally, from the vmxnet3 NIC on my Win7 Sab/Sick/Couch box to the unRAID VM, again with a vmxnet3 NIC, it can burst even higher than that (vmxnet3s are 10Gb/sec adapters, so if everything remains internal... yum.)

 

Jumbo frames are also enabled on all NICs, the ProCurve and the vSwitches, and I don't use the 82579LM NIC on my board because of this (the user-created ESXi driver doesn't play nicely with jumbo frames.)  I'm sure faster is achievable with a bit of tuning etc., but it works great for me.
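
(A quick way to confirm jumbo frames are actually passing end-to-end is a don't-fragment ping at the jumbo payload size: 9000 bytes minus 28 bytes of IP/ICMP headers = 8972. The address below is just an example.)

ping -M do -s 8972 192.168.1.10    # Linux: don't-fragment ping with an 8972-byte payload
ping -f -l 8972 192.168.1.10       # Windows equivalent (-f = don't fragment, -l = payload size)

If those pings fail while normal pings work, something in the path isn't really passing jumbo frames.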

 

Link to comment

Hell, I would be happy to hit 60 to 80 MB/s.  The highest I hit is 35 MB/s, but after a few minutes it goes down.  I don't have a cache drive yet.  I used to have a cache drive, and the speeds would be about 50 to 60 MB/s with a green drive, but then transferring (writing) that to the array was slow, so I ended up changing the cache drive to a data drive.

 

However, I've been pondering using an SSD as the cache drive to see if that gets me higher transfer/write speeds.

Link to comment

chickensoup,

 

Thanks for the link.  I was able to enable jumbo frames on my W7 PC, and I also enabled jumbo frames on my server.  I noticed transfer speeds went up a little, by about 3 MB/s; my transfer speed remained constant between 35 and 36 MB/s.  The best part is that I no longer get any dropped packets or errors on eth0.  After 1.5 months of asking questions, buying a new NIC, reading countless posts, and finally resigning myself to living with the errors and dropped packets, I was able to fix it.
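
(For reference, the counters I was watching are visible straight from the unRAID console; eth0 is the usual interface name.)

ifconfig eth0 | grep -iE "errors|dropped"    # RX/TX error and drop counters
ip -s link show eth0                         # same information from the newer tools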

 

Thank you so much ;).

Link to comment

I have saturated my write speeds by using an SSD for a cache drive...

 

BUT. I have been playing with the new Seagate spinners that are 7200RPM with 1TB platters.

I really hate to say it, but these things fly. My 3TB model (ST3000DM001) has hit 190MB/s write bursts and a sustained 165MB/s when copying BR rips to it from an SSD in the same PC.

 

Mind you, this was an empty drive writing to the fastest part of the platters. It did not perform nearly as fast with about 100,000 10KB images, and its seek time was rather slow.
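
(If anyone wants to reproduce this kind of test, a rough sequential benchmark looks something like the below; /dev/sdX and the output path are placeholders, and the dd line will overwrite whatever file you point it at, so use a scratch location and delete the test file afterwards.)

hdparm -t /dev/sdX                                                     # buffered sequential read test
dd if=/dev/zero of=/mnt/cache/test.bin bs=1M count=4096 oflag=direct   # ~4GB sequential write, bypassing the page cache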

 

With only 3 platters it ran quieter and a LOT cooler...

 

For video use this looks like the new king of drives...

 

As a cache drive, you should be able to get close to saturating gigabit with a single spinner...

 

BUT..

 

This "but" is the showstopper: it is only rated for 2400 power-on hours. That's pretty sad; it amounts to only 100 days of 24/7 service...

You will need to have this puppy sleep/spin down whenever possible. Then, to top it off, it only has a 2-year warranty... I think Seagate knows it will burn itself out quickly...

Link to comment

I'm curious if anyone has done this yet, but my plan is to use a 3-drive striped RAID array as the cache drive.  That will at least provide high write speed.

 

(Any recommendations on a RAID card supported by unRAID for this purpose?)

 

As an experiment, I have run a RAID5 of 4x 7200RPM 500GB Samsung 2.5" laptop drives as a cache drive on an Areca RAID card. The array could get close to 500MB/s read/write.

 

But with that cost (~$800 for a cache drive) and the added power consumption, you might as well buy a 256GB Corsair Performance Pro SSD, unless you are looking for a protected cache drive.

 

(Striping opens up a whole new world of data-loss potential, BTW.)

Link to comment

I am not looking for crazy speeds, but it would be nice to have at least 60 to 70 MB/s transfer and write speeds to the array.  Will this be possible with a 1TB 7200RPM drive, or should I go for an SSD?

 

I think the most I've ever transferred from my W7 PC to my server at once was 120GB.  I mainly transfer one or two movies a day, which is less than 90GB.

Link to comment

As an experiment, I have run a RAID5 of 4x 7200RPM 500GB Samsung 2.5" laptop drives as a cache drive on an Areca RAID card. The array could get close to 500MB/s read/write.

 

But with that cost (~$800 for a cache drive) and the added power consumption, you might as well buy a 256GB Corsair Performance Pro SSD, unless you are looking for a protected cache drive.

 

(Striping opens up a whole new world of data-loss potential, BTW.)

 

The application is a backup storage array, so losing the current day's image backups on the cache drive is not a worry.

 

I'll need about 500GB of cache to hold a day's backups. I'll need to work the numbers again, but I think an SSD will be more costly, though maybe not by much.  Of course, an SSD does have the added benefit of simplicity over striping spinners.

Link to comment

I am not looking for crazy speeds, but it would be nice to have at least 60 to 70 MB/s transfer and write speeds to the array.  Will this be possible with a 1TB 7200RPM drive, or should I go for an SSD?

 

I think the most I've ever transferred from my W7 PC to my server at once was 120GB.  I mainly transfer one or two movies a day, which is less than 90GB.

 

The new 7200RPM 1TB-platter Seagates should get you the speed you want for a cache drive. If your array is based on 3TB drives, I would get a 3TB cache drive; that way, if a drive fails, you can use it as a warm spare (losing the ability to cache until you buy a new drive). I would NOT use it as an application drive (or boot drive for windoze), given its crappy lifespan.

 

An SSD is a luxury that is not really ideal for most unRAIDers. Look at my Atlas build, though, for the new Corsair drives and how they are ideal for unRAID if you do go SSD.

 

As an experiment, I have run a RAID5 of 4x 7200RPM 500GB Samsung 2.5" laptop drives as a cache drive on an Areca RAID card. The array could get close to 500MB/s read/write.

 

But with that cost (~$800 for a cache drive) and the added power consumption, you might as well buy a 256GB Corsair Performance Pro SSD, unless you are looking for a protected cache drive.

 

(Striping opens up a whole new world of data-loss potential, BTW.)

 

The application is a backup storage array, so losing the current day's image backups on the cache drive is not a worry.

 

I'll need about 500GB of cache to hold a day's backups. I'll need to work the numbers again, but I think an SSD will be more costly, though maybe not by much.  Of course, an SSD does have the added benefit of simplicity over striping spinners.

If you are doing 3 drives, a RAID5 could be possible, assuming your drives match. Hardware RAID5 is pretty quick with the right controller; it should saturate your gigabit link.

 

It sounds like an SSD would be too small for you. I have my mover run every 2 hours to clean off my SSD, but if you're copying all of your data at once, you would lose the benefit of the SSD once you hit the size of the drive.

 

Are you truly creating 500GB of new data each day, or copying mostly the same data over and over? Maybe you need to rethink your backup strategy. You could use rsync to copy just the changed data, or run the backup job at night when speed makes no difference because you're sleeping. We are getting a bit off topic now; I think you had another thread... I'll look over there.
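
(A minimal rsync along those lines, with placeholder paths, would be something like the below; the first run copies everything, and after that only changed files go over the wire.)

rsync -av /mnt/source/backups/ /mnt/user/backups/              # -a preserves attributes and recurses, -v shows what moved
# or schedule it overnight via cron, e.g. 2am daily:
# 0 2 * * * rsync -a /mnt/source/backups/ /mnt/user/backups/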

Link to comment

I am not looking for crazy speeds, but it would be nice to have at least 60 to 70 MB/s transfer and write speeds to the array.  Will this be possible with a 1TB 7200RPM drive, or should I go for an SSD?

 

I think the most I've ever transferred from my W7 PC to my server at once was 120GB.  I mainly transfer one or two movies a day, which is less than 90GB.

 

The new 7200RPM 1TB-platter Seagates should get you the speed you want for a cache drive. If your array is based on 3TB drives, I would get a 3TB cache drive; that way, if a drive fails, you can use it as a warm spare (losing the ability to cache until you buy a new drive). I would NOT use it as an application drive (or boot drive for windoze), given its crappy lifespan.

 

An SSD is a luxury that is not really ideal for most unRAIDers. Look at my Atlas build, though, for the new Corsair drives and how they are ideal for unRAID if you do go SSD.

 

 

All of my drives are currently 2TB only.  I'm waiting for the prices on the 3TB drives to come down so I can move up to those.  I think I am going to buy this 2TB 7200RPM drive from Seagate: http://www.amazon.com/Seagate-ST2000DM001-Barracuda-3-5-Inch-Internal/dp/B005T3GRN2/ref=sr_1_3?s=electronics&ie=UTF8&qid=1335713540&sr=1-3.

 

Thanks for your input.

Link to comment

For jumbo frames on unRAID, edit /boot/config/network.cfg and put a line at the bottom that says MTU=xxxx - I run everything at 9000.
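
(In other words, something like the below from the unRAID console; the grep afterwards just confirms the interface actually picked up the new MTU after a reboot.)

echo "MTU=9000" >> /boot/config/network.cfg    # append the setting to the bottom of the file
ifconfig eth0 | grep -i mtu                    # after rebooting, eth0 should report MTU:9000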

 

Note that for jumbo frames to work, EVERYTHING on your LAN must support them, otherwise you run the risk of actually worsening performance.  When I say everything, I mean your switch(es), and every network interface, physical or otherwise. 

 

The switch is the most important part though - it should support the maximum possible jumbo frame size on your network (the HP I use supports 9220, so no issue with everything set to 9000.)

 

 

Link to comment

I recently replaced my older WD4000 cache drive with a (slightly) newer Samsung HD502.  My write speeds have, more or less, doubled, from around 40MB/s to an 80MB/s average.  I am now seeing peak writes in excess of 100MB/s.  This is with 2-3GB files, using NFS.

 

My raw network speed (between Ubuntu desktop and unRAID), as reported by iperf, is slightly in excess of 110MB/s (Intel Pro/1000 network interface on both machines).
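
(For anyone who wants to repeat the raw-network measurement, iperf takes the disks out of the equation entirely; "tower" is just a placeholder hostname for the server.)

iperf -s                    # on the unRAID server: listen for test connections
iperf -c tower -t 30 -f M   # on the desktop: 30-second TCP test, reported in MBytes/sec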

 

Perhaps I could gain a slight improvement by installing a faster cache drive, but I'm getting close to the network limit, so channel bonding would be required to achieve any significant improvement.

 

Average write speeds are, I believe, being impacted because I often see pauses in the movement of the transfer progress bar on the Ubuntu desktop machine.  I'm not sure what causes these pauses and why they don't always occur, but if they could be eliminated, I reckon that my writes would be hitting 100MB/s on average.

Link to comment
  • 3 weeks later...

After reading through the replies here, I'm thinking about upgrading my cache drive in the not-too-distant future. I'll probably replace it with an SSD once the prices come down a little more, as that should help to eliminate any write/cache-read bottlenecks. Plus, this would help maximise the cache drive read speed when writing to the array, although by that point the bottleneck will be calculating parity.

 

For jumbo frames on unRAID, edit /boot/config/network.cfg and put a line at the bottom that says MTU=xxxx - I run everything at 9000.

 

Note that for jumbo frames to work, EVERYTHING on your LAN must support them, otherwise you run the risk of actually worsening performance.  When I say everything, I mean your switch(es), and every network interface, physical or otherwise. 

 

The switch is the most important part though - it should support the maximum possible jumbo frame size on your network (the HP I use supports 9220, so no issue with everything set to 9000.)

 

How much difference do jumbo frames make? It sounds like a fair bit of messing around for a minimal gain. When you say that every network interface has to support JF, do you mean any device writing to the array, or even devices which only ever read from it, such as my WD Lives and media PC? Also, would this affect other network devices which don't interact with the unRAID server at all, such as printers, phones, tablets, and other machines which only need internet access?

Link to comment

As someone who writes about 5TB of data to my primary unRAID server every month, at 50-60MB/sec to the cache and 25-35MB/sec to the array, I'm curious what application your server is being put to that makes you feel the need to increase write speeds. I'm pretty happy with the read/write speeds to my 5400RPM cache drive.

Link to comment

I honestly have no clue what my sustained write speeds are. Everything my server does, aside from the occasional transfer of pictures, it does automatically. For all I know it takes all day to transfer a 4GB file, but it doesn't impede streaming performance, so I don't really care either :).

 

My server only sees about 50-60GB of new data a day, sometimes lower, sometimes higher, so on average about 1.5TB a month. It works for me, but now I'm interested, so I'm off to see what my sustained speeds are. I would be willing to bet my tests won't go above 54Mbps though; I'll test it with my laptop over a wireless connection  :P

 

OK, apparently even my level of don't-care isn't this low; I'm going to have to try to get this speed up some. I was getting 3.3Mbps write speeds, with and without a cache drive. I'll test from a wired PC in a little bit, since getting the same speed with or without the cache makes me believe the bottleneck is my laptop, which is more than powerful enough, so I'm guessing it's the wireless connection. I wish I could at least test with Wireless N, but for compatibility with our smartphones I have to keep it on b/g; if I use N, the phones can't see the network.

Link to comment

OK, apparently even my level of don't-care isn't this low; I'm going to have to try to get this speed up some. I was getting 3.3Mbps write speeds, with and without a cache drive. I'll test from a wired PC in a little bit, since getting the same speed with or without the cache makes me believe the bottleneck is my laptop, which is more than powerful enough, so I'm guessing it's the wireless connection. I wish I could at least test with Wireless N, but for compatibility with our smartphones I have to keep it on b/g; if I use N, the phones can't see the network.

 

Don't test array write performance using wireless. Wireless performance is quite variable and you just don't need that in array testing.

 

Be careful in reporting 3.3Mbps vs 3.3MB/sec. If you are getting 3.3Mbps and not frustrated, you are a saint.
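
(The difference is a factor of eight: 3.3Mbps ÷ 8 ≈ 0.41MB/sec, whereas 3.3MB/sec × 8 ≈ 26.4Mbps, so which unit you mean changes the picture completely.)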

Link to comment

Yes, you're correct: MB/s, not Mbps. The reason I tested with wireless is that it's what I'm on most, and I was interested to see what I'd get. I know the desktop would do better, but I hardly ever use it. It's so... stationary. Plus, my recliner is a lot more comfortable than that computer chair.

 

I'm going to test from the desktop in a little while to see what I get there as well.

Link to comment

As someone who writes about 5TB of data to my primary unRAID server every month, at 50-60MB/sec to the cache and 25-35MB/sec to the array, I'm curious what application your server is being put to that makes you feel the need to increase write speeds. I'm pretty happy with the read/write speeds to my 5400RPM cache drive.

 

I am always interested in reducing bottlenecks and increasing performance, not just with my unRAID server but with any of the PCs I build. Why settle for 60MB/sec if I could be writing at 90MB/sec? I am very happy with my server and don't use it for anything seriously demanding (backups and streaming media to 2-3 machines at a time). I would say on average I probably only write 50-75GB/day (~2TB/month), although I will occasionally want to dump a heap of data onto the server, for whatever reason.

 

For interest's sake, I did a few copy tests. Results are based on the following setup:

 

Client PC: Win7 x64 w/ RAID0 (two 500GB single-platter 7200RPM drives) & Teracopy v2.27

File Copied: 1x 6.55GB MKV

Cache Drive: 150GB WD Raptor (older model)

LAN: Gigabit, JF disabled at present

 

Reported Speeds:

Array -> PC ~ 50MB/Sec

PC -> Cache ~ 74MB/Sec

PC -> Array ~ 31MB/Sec

 

I was surprised to see the difference in speed between writing to the cache drive and writing to my PC; I had actually assumed the fastest transfer would be reading from the server. I don't trust these figures entirely though, as one of the cables running to the server is an old (fairly worn) CAT5E running out the window and along the roof (temporary, I swear!), and I suspect this is the reason for my ~32,000 dropped packets  :o
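
(On the PC -> Array number: with unRAID's normal write method, every write is a read-modify-write. The target disk and the parity disk are both read, new parity is computed, and then both are written: four disk operations plus extra rotational latency per write. That's why array writes commonly land at roughly a third to a half of raw disk speed, which lines up with ~31MB/Sec to the array versus ~74MB/Sec to the unprotected cache drive.)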

Link to comment
