Dual gig worth it?


tucansam


I am in the process of updating my unRAID servers... I just saw that the 64-bit version is out, wow!

 

I initially ran sab/sb/etc on my unraid server, but had numerous problems.  I've settled on unraid being a standalone system for my use, server only, and I favor a clean install with minimal add-ons.  I have two now, one is a backup for the first (critical files), but in the future I will have two identical systems to provide backup and load balancing (nothing fancy, movies on one, tv shows on the other) to some degree.

 

I am planning on a pair of the new Silverstone cases, with 8 3.5" hot swap bays.  I'm looking at more modern motherboards with super low power CPUs... But socketed so I have an upgrade path if needed down the road.  Socket 1150 etc.

 

One that strikes me is a Gigabyte board that has dual gig NICs on it.  Using SATA3 ports, and eventually SATA3 drives, I am wondering if this would be a benefit as I could bond the pair together for faster link speeds to my switch.

 

The servers will serve as "My Documents" redirects for all Windows users in the house, and my own workstation will put it under a decent load, with 2GB Outlook PST files and the like.  The servers will also serve movies and tv shows, music, pictures, and will be a repository for a camera server.  So theoretically, traffic to each could get pretty high.  Nothing a single gigabit couldn't handle, most likely.  But with the My Docs redirect, I don't want interactive performance on my desktops to suffer.

 

Worst case scenario would be I have a server supporting three clients streaming a bluray via my Plex server, a couple of IP cameras are recording, I am running Outlook and other programs using large amounts of data from the server, my sab/sb system is unpacking a rar file, and running an rsync backup; all at the same time.  In this situation, would network speed become the limiting factor, assuming SATA3 drives?

 

So I'm wondering: is a bonded pair of gig NICs a worthy consideration, given my intended uses for the servers?

 

Thanks.


Your switch or router also needs to support link aggregation for it to work.  Most routers and switches don't.  You need a smart switch that supports 802.1AX, or the older version, 802.3ad.  I believe that if you do this without a switch that supports it, the bond will just act as a backup if one link gets unplugged or stops working.  I think there is an IEEE name for that function but I can't remember it.

 

I set up link aggregation on my WHS a few years ago but didn't see enough of an increase for it to be worth it.  The switch was over $100 and each of the NICs that supported it was almost $100, so all in all I spent about $500 on something that had no real benefit.  Newer NICs support 802.1AX now, but I don't think new routers support it yet.

 

Sent from my SCH-I545 using Tapatalk

 


IIRC unRAID can only address a single NIC in the box.  I'm sure this has come up on the board before; have you tried searching?

 

unRAID supports NIC teaming/bonding/aggregation...whatever you wish to call it.  But as the previous poster said, you do need a switch that can support it.


I just looked at the latest Asus AC routers and they don't support it.  And it is not just a firmware thing; it has to be supported by the hardware.

 

Sent from my SCH-I545 using Tapatalk

 

 

 

You need a "managed" gigabit switch.  I use HP 1800-series and Netgear GS724T switches in my house.


At the risk of being accused of a bit of heresy on the UnRAID forum (although I'm sure it's clear I'm a BIG supporter of UnRAID) ... with the usage you've described, it's not clear UnRAID is the right choice.

 

#1 => With all that activity, you clearly don't want to encounter spin-up delays, so the ability to selectively spin down disks isn't very useful.

 

#2 => Since you're building new servers, it's unlikely it matters that you can mix & match drive sizes.

 

#3 => With all of your data from all of your computers being redirected to this system, and all the other functions you've outlined, then even assuming you have a good backup strategy (perhaps nightly syncs with another fault-tolerant server -- and THAT could easily be UnRAID) ... I'd think you might want a higher level of fault tolerance.

 

...  in short, you may want to consider using a hardware RAID controller with RAID-6 for the primary server you've described.

 


garycase,

 

I appreciate the honesty.  I was thinking of the worst-case scenario; more likely I'll be on my PC and one or two people will be watching a stream, maybe doing something on their PC.  I'm the only "power user" in the house; most of the My Docs folders for the other family members are under 5GB.  I will be re-using my existing drives, really just transplanting my main server's data onto newer host hardware.  I was looking at NIC teaming as a future-proofing mechanism, to give me the most available bandwidth.

 

I can get an 1155 Gigabyte board with dual NICs and four SATA3 ports, plus an 8-port SATA3 add-on card and the new Silverstone 8/4-bay case, and I should have a pair of servers that will last me many years.

 

I'll look into switches that support the protocol.  I'm hoping I can get some benefit from teaming, given SATA3 disks.

 

 


Unless you are reading from a SATA3 SSD or a RAID array, you're not going to see a large enough increase in speed to warrant the price of a smart switch.  I can get about 80 MB/s reads from my unRAID server.  My fastest SATA3 drive reads at about 100 to 110 MB/s, so you may only see about a 20 MB/s increase when reading from one drive.
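The arithmetic behind this is easy to sketch.  The figures below are rough assumptions (wire speed minus a guessed protocol overhead, and a typical mid-drive read rate), not measurements from any hardware in this thread:

```python
# Rough back-of-envelope: does a single gigabit link bottleneck one
# spinning disk?  All figures are illustrative assumptions.

RAW_GIGABIT_MB_S = 1000 / 8      # 125 MB/s on the wire
PROTOCOL_OVERHEAD = 0.06         # assume ~6% lost to TCP/SMB framing

usable_link = RAW_GIGABIT_MB_S * (1 - PROTOCOL_OVERHEAD)   # ~117.5 MB/s
drive_read = 110                 # assumed MB/s for a green/red disk mid-platter

bottleneck = "network" if drive_read > usable_link else "disk"
print(f"usable link ~{usable_link:.1f} MB/s, drive ~{drive_read} MB/s -> {bottleneck}-bound")
```

With numbers like these the disk, not the single gigabit link, is the limit, which is why bonding buys little for one spindle.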

 

What you could do is put an SSD in your unRAID box and limit that disk to just your My Docs folder; then link aggregation would be beneficial on reads, but not writes, because you will be limited by your parity drive on writes.  Your other option is to install an SSD as a cache drive and set your My Docs folder to cache-only; then there will be increased speed on reads and writes, but no parity protection.

 

Also, with an SSD you don't have to wait for a disk to spin up.


I think unRAID would still fit your needs.  As Gary suggests, I would get a good, fast SSD for the cache drive.  Make sure it's 6Gb/s (SATA III).

Use that for the My Documents folder.  From there you can do an rsync hard-linked rotating backup to the array, and also rsync that to another machine.  I would get a machine capable of a lot of RAM.  With the right kernel tunings, the larger RAM can be a good buffer cache once unRAID x64 is out and stable.
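The "hard-linked rotating backup" idea mentioned above (what rsync does with `--link-dest`) can be sketched in pure Python: each snapshot hard-links any file unchanged since the previous snapshot, so rotations cost almost no extra space.  This is a minimal illustration of the technique, not the poster's actual setup; all paths and names are hypothetical.

```python
# Minimal sketch of an rsync --link-dest style rotating snapshot:
# unchanged files are hard-linked to the previous snapshot instead
# of being copied again.
from typing import Optional
import filecmp
import os
import shutil

def snapshot(src: str, dest_root: str, name: str, prev: Optional[str] = None) -> str:
    """Copy the src tree into dest_root/name; hard-link any file that is
    byte-identical to its copy in the previous snapshot (prev)."""
    dest = os.path.join(dest_root, name)
    for dirpath, _dirs, files in os.walk(src):
        rel = os.path.relpath(dirpath, src)
        os.makedirs(os.path.join(dest, rel), exist_ok=True)
        for fname in files:
            s = os.path.join(dirpath, fname)
            d = os.path.join(dest, rel, fname)
            old = os.path.join(dest_root, prev, rel, fname) if prev else None
            if old and os.path.isfile(old) and filecmp.cmp(s, old, shallow=False):
                os.link(old, d)       # unchanged file: share the inode, no extra space
            else:
                shutil.copy2(s, d)    # new or changed file: real copy
    return dest
```

In practice the equivalent rsync invocation does the same thing far more efficiently, but the sketch shows why a week of nightly snapshots of a mostly static My Documents share takes little more space than one copy.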

 

 

What you describe is how I used my machine, only I had heavy torrent traffic and very high My Documents and music share usage.
It was the NFS server for all of my source code, with all of my virtual machine images.

 

 

What I had going for me was a RAID0 parity drive to help speed things along.  Since you may be doing many writes to the array, do not skimp on a fast 7200 RPM parity drive.  I saw a benefit from using a caching Areca controller for a two-drive RAID0 parity.

 

 

Today I use a simple N40L with a 3TB 7200 RPM parity drive and the same drive for my home folder.  I'm getting 50-60 MB/s burst writes.


Unless you are reading from a SATA3 SSD or a RAID array, you're not going to see a large enough increase in speed to warrant the price of a smart switch.  I can get about 80 MB/s reads from my unRAID server.  My fastest SATA3 drive reads at about 100 to 110 MB/s, so you may only see about a 20 MB/s increase when reading from one drive.

 

Not entirely true.  Current-gen 7200 RPM 1TB/platter drives have sustained throughput in excess of 180 MB/s, so they can easily saturate a Gb link.  That said, even with dual-NIC bonding on unRAID, you are still limited to a single stream per NIC.  So bonding is beneficial when you have multiple clients simultaneously READING large amounts of data from the array quickly (i.e. file transfers, not media streams).

 

Bonding will not help with writes directly to the array, due to the parity overhead of writes.  It will help if you are using a fast SSD cache drive and are caching writes.  Most top-notch SSDs (the Samsung EVO, for example) can do sustained reads/writes around 500 MB/s.  So bonding could definitely help in this area, but again only during simultaneous writes from multiple clients.  A write from a single client is still limited to a single NIC and will therefore top out at ~120 MB/s.
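The "one stream per NIC" behavior described above can be sketched with a toy model.  The per-NIC figure is an assumption, and real LACP hashing assigns whole conversations to links (so this is a best-case, evenly balanced picture):

```python
# Toy model of a two-link bond: a single client's flow sticks to one
# physical NIC, so only multiple simultaneous clients see the benefit.
# NIC_MB_S is an assumed usable-throughput figure, not a measurement.

NIC_MB_S = 118        # assumed usable MB/s of one gigabit NIC
BOND_LINKS = 2

def per_client_throughput(n_clients: int) -> float:
    """Best-case MB/s each of n_clients sees from the bond."""
    aggregate = NIC_MB_S * BOND_LINKS
    # a single flow cannot be striped across links, so cap at one NIC
    return min(NIC_MB_S, aggregate / n_clients)

for clients in (1, 2, 4):
    print(f"{clients} client(s): {per_client_throughput(clients):.0f} MB/s each")
```

One client gets exactly one NIC's worth; two clients can each get a full NIC; beyond that they share the aggregate, which is where bonding pays off.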


I appreciate the help guys.  I'm not interested in putting an SSD anywhere near my servers.  I would nuke the life out of one with write cycles.  I'll stick with one gig connection for now; the MB I am looking at has two, so maybe I'll be somewhat future-proofing my servers.


I appreciate the help guys.  I'm not interested in putting an SSD anywhere near my servers.  I would nuke the life out of one with write cycles.  I'll stick with one gig connection for now; the MB I am looking at has two, so maybe I'll be somewhat future-proofing my servers.

 

You can always connect both and set the bonding mode to active-backup to provide failover capability.


Not entirely true.  Current gen 7200 rpm 1TB/platter drives have sustained throughput in excess of 180MB/s, so they can easily saturate a Gb link.

 

This may be true if using 7200 RPM drives, but I would suspect a good portion of us use Green and Red drives.  I'm sure some use all high-speed performance drives in their array, but it gets pretty expensive.

 

 


Not entirely true.  Current gen 7200 rpm 1TB/platter drives have sustained throughput in excess of 180MB/s, so they can easily saturate a Gb link.

 

This may be true if using 7200 RPM drives, but I would suspect a good portion of us use Green and Red drives.  I'm sure some use all high-speed performance drives in their array, but it gets pretty expensive.

 

Sent from my SCH-I545 using Tapatalk

 

While I have no doubt that a good portion of unRAID users utilize Green and Red drives, there are also a number of us that use the Seagate 1TB/platter drives.  I wouldn't consider them expensive, though; they are cheaper than WD Reds.

 

My comments weren't aimed solely at the Seagate 1TB/platter drives either, as MOST current-gen drives, including WD Reds, can easily saturate a Gb link.  Reds have a throughput of ~150 MB/s on the outer cylinders and still get ~120 MB/s mid-drive.


Archived

This topic is now archived and is closed to further replies.
