
New build for Unraid 6


dialmformumbojumbo


Hey everyone,

 

I'm looking to build a more serious server for Unraid 6 as opposed to the old PC that I'm still using for Unraid 5 (5.0.5).

 

I want to look into using Docker and possibly KVM as well, although I'm a little concerned that there's a risk of a Docker app or virtual machine going berserk, taking up too many resources, and interrupting the video or music that I'm streaming from the Unraid server. I'm sure KVM virtual machines can be restricted to only use whatever virtual hardware you define (I'm used to VMware, not KVM), but Docker is completely new to me. Is it possible to limit resources in Docker?

 

The docker apps that I'm interested in using are SABnzbd, Sickbeard, Plex, and perhaps a few more that I can't think of right now.

 

I have some hardware that I can possibly re-use. A Corsair HX650W power supply (http://www.corsair.com/en/professional-series-hx650-80-plus-bronze-certified-modular-power-supply) and a Highpoint RocketRAID 2740 card (http://www.highpoint-tech.com/USA_new/series_rr276x-rr274x.htm).

 

Then there are the disks. I currently have 8 disks in use but I'm looking to expand. The disks are: WD30EZRX (parity), WD30EZRS, WD20EARS, Samsung HD153WI, WD10EAVS, ST3000DM, WD2000JD, ST3000VN.

 

I'm really not sure what hardware would be the best option for my needs. What do I still need?

 

- Processor: Intel Xeon? Intel i5 or i7? AMD?

- Motherboard: should I go for a MB with a good PCI-E slot for the RocketRAID or would I be better off ditching the RocketRAID and getting a MB with a lot of onboard SATA ports? I like the idea of having a management interface though (like the Supermicro IPMI interfaces). Would I get a benefit from a MB with a bunch of network interfaces? Is it possible to assign interfaces to docker apps or VMs in KVM?

- Memory: will depend on which processor and MB I decide to get.

- Case: another big question mark. I would like to be able to house 12 to 16 disks. For some reason I seem to prefer tower cases over rackmount server cases although the server will be in the attic where noise isn't much of an issue (unless it gets REALLY loud). Best option that I've been able to find seems to be the Fractal Arc XL (http://www.fractal-design.com/home/product/cases/arc-series/arc-xl).

 

As you can see, I still have a bit of a journey ahead of me trying to find the best possible build. Any help would be really appreciated.

 

Thanks,

 

J.

 

 

 


I'm still poking along with unRAID v5 myself, so weigh the value of my comments appropriately.

 

I think you should consider whether you want to have your drives externally accessible in hot-swap cages. It seems like a very expensive proposition, but if you end up having to manage a boatload of drives, the benefits of being able to swap one out without having to open the case (and potentially mucking up the cabling on other drives) are said to be almost priceless.

 

The reason I say this is that there are far fewer models of suitable tower cases available today than there were a few years ago. A tower case that will accommodate three or four of the 5-in-3 drive cages needs to have 9 or 12 external 5.25" drive bays. A quick look at PCPartPicker (http://pcpartpicker.com/parts/case/) will show you the limited availability. The models with no prices shown are older models that you can probably acquire used.

 

I'll limit my comment to this topic, although there are plenty of areas ripe for further discussion.


Hi Chugiak,

 

Thanks for your reply.

 

Tower cases do seem to be hard to find. Whenever I come across one that I like it seems to be end-of-sale.

 

I don't think not having hot-swap would be much of an issue. I don't need to replace or add disks that often although I can see the benefit of not needing to mess with cabling.

 

I don't know, server cases or large towers and extra 5-in-3 cages are quite pricey over here (Belgium, Europe).

 

I find it hard to justify spending a lot of money on a case because it doesn't really do anything except hold all the parts that do. :-)

 

 


I'm still poking along with unRAID v5 myself, so weigh the value of my comments appropriately.

 

I think you should consider whether you want to have your drives externally accessible in hot-swap cages. It seems like a very expensive proposition, but if you end up having to manage a boatload of drives, the benefits of being able to swap one out without having to open the case (and potentially mucking up the cabling on other drives) are said to be almost priceless.

 

The reason I say this is that there are far fewer models of suitable tower cases available today than there were a few years ago. A tower case that will accommodate three or four of the 5-in-3 drive cages needs to have 9 or 12 external 5.25" drive bays. A quick look at PCPartPicker (http://pcpartpicker.com/parts/case/) will show you the limited availability. The models with no prices shown are older models that you can probably acquire used.

 

I'll limit my comment to this topic, although there are plenty of areas ripe for further discussion.

 

In the old days you could find a case with 9 external 5.25" bays and a clear opening that cages would slide right into. They aren't very popular anymore, and finding one like that is getting harder and harder. Even back in 2010, when I bought my Cooler Master 590, I had to take a hammer and bang down every little metal lip that would hold a CD-ROM drive so my HD cages could slide in. What a pain!

 


With respect to VMs and containers, you can limit their access to system resources (particularly CPU and RAM). With containers, limiting RAM is probably far less necessary, as most Linux apps are very well behaved, but it is possible. The biggest thing you can do to limit how different containers and VMs impact your experience is to isolate processor access through CPU pinning.
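To make that concrete, here's roughly what it looks like on the Docker side. This is only a sketch; the container names and images are placeholders, not real repositories, and the values are illustrative:

```shell
# Cap memory so a runaway container can't starve the rest of the server
# (--memory sets a hard limit enforced by the kernel)
docker run -d --name sab-example --memory=512m example/sabnzbd

# Pin a container to specific cores with --cpuset-cpus, leaving the
# remaining cores free for streaming and the array itself
docker run -d --name sb-example --cpuset-cpus="2,3" example/sickbeard

# --cpu-shares sets a relative weight instead of a hard cap:
# this container only yields CPU time when others are busy
docker run -d --name lowprio-example --cpu-shares=256 example/worker
```

KVM guests can be pinned similarly, e.g. `virsh vcpupin <domain> 0 2` ties the guest's vCPU 0 to host core 2.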

 

 


I'll chime in on the need for removable cages. The problem with having to open the case to swap out a failing drive is how easy it is to disturb the fragile SATA and power connections, which have little tolerance for even minor tweaks to cable routing. With 14 drives you have close to 50 individual connections between splitters and both ends of the data cables. Reaching in there without touching another cable is impossible.

 

So a drive fails, you replace it, and then the rebuild fails because another drive's connection was disturbed. You then spend hours trying to track down the loose cable, disturbing other cables in the process. Locking cables and discipline help, but I would personally never again have a server where I could not easily swap out a disk. I have wasted too many nights and weekends swearing like a sailor trying to get my array protected. I can now swap a disk out in 2 minutes. All my connections, burned in for hundreds or thousands of hours, are pristine and undisturbed. This is priceless IMO.


I'll chime in on the need for removable cages. The problem with having to open the case to swap out a failing drive is how easy it is to disturb the fragile SATA and power connections, which have little tolerance for even minor tweaks to cable routing. With 14 drives you have close to 50 individual connections between splitters and both ends of the data cables. Reaching in there without touching another cable is impossible.

 

So a drive fails, you replace it, and then the rebuild fails because another drive's connection was disturbed. You then spend hours trying to track down the loose cable, disturbing other cables in the process. Locking cables and discipline help, but I would personally never again have a server where I could not easily swap out a disk. I have wasted too many nights and weekends swearing like a sailor trying to get my array protected. I can now swap a disk out in 2 minutes. All my connections, burned in for hundreds or thousands of hours, are pristine and undisturbed. This is priceless IMO.

 

This is probably the biggest reason why I want to continue to use HD cages with my new build. Well worth the investment. I've gone through about 6 hard drive replacements and never had to open my case or touch any wiring at all. Having quick access to replace a drive on your NAS is very convenient. This is why I won't stop until I find the perfect case. Right now I'm leaning towards the Cooler Master Trooper.


Perfection is elusive. I love the Supermicro CSE-M35T. It keeps drives cool, and when a drive is snapped in place you KNOW it's securely connected. Its 92mm fans are better than the standard 80mm ones. Flawed designs cover a significant portion of the bottom (hottest) part of the drive; drilling a few holes isn't enough to enable good airflow over this part of the drive! Norco cages fall into this category IMO.

 

At the other extreme are the Rosewill 4-in-3s. Cheap at well under $50, these mostly plastic wonders make terrific external enclosures for expanding an already full array. You give up the build quality of the Supermicro, and you have to be sure the drive is fully inserted into its slot, but they do their job. And at something like $10 per slot, they won't blow the budget. I also like that these use 120mm fans for quiet and effective cooling.

 

The Cooler Master Trooper looks like a nice case. Depth (front to back) is an important dimension, so I would just confirm there's sufficient room for your cages. Curious what you're liking about it over the Antec?


Perfection is elusive. I love the Supermicro CSE-M35T. It keeps drives cool, and when a drive is snapped in place you KNOW it's securely connected. Its 92mm fans are better than the standard 80mm ones. Flawed designs cover a significant portion of the bottom (hottest) part of the drive; drilling a few holes isn't enough to enable good airflow over this part of the drive! Norco cages fall into this category IMO.

 

At the other extreme are the Rosewill 4-in-3s. Cheap at well under $50, these mostly plastic wonders make terrific external enclosures for expanding an already full array. You give up the build quality of the Supermicro, and you have to be sure the drive is fully inserted into its slot, but they do their job. And at something like $10 per slot, they won't blow the budget. I also like that these use 120mm fans for quiet and effective cooling.

 

The Cooler Master Trooper looks like a nice case. Depth (front to back) is an important dimension, so I would just confirm there's sufficient room for your cages. Curious what you're liking about it over the Antec?

 

I actually have 3 of the iStarUSA 5-in-3 cages. They've been running 24/7, 365 days a year since 2010 with no problems at all. They are really nicely built, and all the fans are still blowing hard. I looked at the Supermicro and Icy Dock cages at the time and just went with the iStarUSA for no particular reason. Glad I did. They've been trouble-free and maintenance-free. I'll either buy them again or try another brand. I know I'll be spending $300 for 3 cages; that's the cost of building a nice box. Why Newegg has the iStarUSA cages for $175 each is beyond me. I think it is an error: the red model is $174 and the blue model is $107. I usually stick to what I know works. They have trays, which I would rather have, since they slide in nicely and give lots of support.

Here are the blue ones for $107

http://www.newegg.com/Product/Product.aspx?Item=N82E16816215048

 

And the same ones but in red for $174?

http://www.newegg.com/Product/Product.aspx?Item=N82E16816215047

 

Also, there is no real reason I picked the Cooler Master Trooper over the Antec 1200. The only real reason was that the Trooper is roomier: the Antec is 22.9 inches deep and the Trooper is 23.9 inches deep. And from reading a little more, I saw the Antec 1200 can go up to 12 external 5.25" bays if needed. My current Antec 590 box is 20 inches deep and it is a tight squeeze with HD cages. Anything above 20 inches deep, I guess, would suffice. Since I may add additional SSDs and/or drives to my array, I just may look into the Antec, given it has 12 external 5.25" bays. I guess in my mind I was stuck on finding the deepest case, but 22.9 inches would be just fine.

 

Here is my current build with the three iStarUSA cages.

 

front.jpg


I'll chime in on the need for removable cages. The problem with having to open the case to swap out a failing drive is how easy it is to disturb the fragile SATA and power connections, which have little tolerance for even minor tweaks to cable routing. With 14 drives you have close to 50 individual connections between splitters and both ends of the data cables. Reaching in there without touching another cable is impossible.

 

So a drive fails, you replace it, and then the rebuild fails because another drive's connection was disturbed. You then spend hours trying to track down the loose cable, disturbing other cables in the process. Locking cables and discipline help, but I would personally never again have a server where I could not easily swap out a disk. I have wasted too many nights and weekends swearing like a sailor trying to get my array protected. I can now swap a disk out in 2 minutes. All my connections, burned in for hundreds or thousands of hours, are pristine and undisturbed. This is priceless IMO.

 

You make a good point. I'll keep this in mind, thanks!

- Processor: Intel Xeon? Intel i5 or i7? AMD?

 

With that sort of processor...

 

I'm a little concerned that there's a risk of a Docker app or virtual machine going berserk, taking up too many resources, and interrupting the video or music that I'm streaming from the Unraid server. I'm sure KVM virtual machines can be restricted to only use whatever virtual hardware you define (I'm used to VMware, not KVM), but Docker is completely new to me. Is it possible to limit resources in Docker?

 

this won't be an issue.

 

Check out my build in my sig. Been streaming Live TV today and watching stored media, whilst running about 20 Docker containers, running a parity check, and preclearing two 4TB drives simultaneously, with no problems with any of the streaming. Plus, I'm doing it over powerline networking, which is the bottleneck for my streaming.

 

Hope that helps put your mind at rest and good luck with the build.

 

I can recommend my case, as you can get ten 5.25" bays to slot hot-swap bays into. I've got sixteen hot-swap bays all ready and waiting.

 

EDIT: Oh, and I can run a couple of VMs as well (Windows 10 & Linux Mint with 4GB each), and 32GB is way overkill for memory really. I think 16GB would still be more than enough.

 


If your box runs well with those specs, I would imagine the one I'm building this weekend isn't going to skip a beat. The only bottleneck is actually the hard drives themselves.

 

Ok, 20 dockers? Whatcha' doing over there?

 

Actually, I just counted: I've got a total of 31, but not all of them are running (RDP app ones that I only use once in a while). 20 wasn't exactly accurate; it's actually 23...

 

I just love the functionality of them.  Can't resist a good docker.  ;D


If your box runs well with those specs, I would imagine the one I'm building this weekend isn't going to skip a beat. The only bottleneck is actually the hard drives themselves.

 

Ok, 20 dockers? Whatcha' doing over there?

 

Actually, I just counted: I've got a total of 31, but not all of them are running (RDP app ones that I only use once in a while). 20 wasn't exactly accurate; it's actually 23...

 

I just love the functionality of them.  Can't resist a good docker.  ;D

 

No doubt. I blame Lime-Tech for me spending beaucoup bucks on hardware, all because of version 6. Hey, at least I can take down my security workstation, my Tor workstation for ultimate privacy, and my Bitcoin wallet image. I've found that's the best way to eliminate 100% of the risk of anyone hacking my Bitcoin wallet. Can't log into a VM that's not powered on. :)

 

 


Been streaming Live TV today and watching stored media, whilst running about 20 Docker containers, running a parity check, and preclearing two 4TB drives simultaneously, with no problems with any of the streaming. Plus, I'm doing it over powerline networking, which is the bottleneck for my streaming.

 

Hope that helps put your mind at rest and good luck with the build.

 

Wow! Impressive! Consider my mind rested! Thanks for the info, CHBMB.

 

 


Been streaming Live TV today and watching stored media, whilst running about 20 Docker containers, running a parity check, and preclearing two 4TB drives simultaneously, with no problems with any of the streaming. Plus, I'm doing it over powerline networking, which is the bottleneck for my streaming.

 

Hope that helps put your mind at rest and good luck with the build.

 

Wow! Impressive! Consider my mind rested! Thanks for the info, CHBMB.

You're welcome; what UnRAID 6 is capable of has to be seen to be believed...

 

Don't forget to let us know how you get on and what you think.


Hi Chugiak,

 

Thanks for your reply.

 

Tower cases do seem to be hard to find. Whenever I come across one that I like it seems to be end-of-sale.

 

I don't think not having hot-swap would be much of an issue. I don't need to replace or add disks that often although I can see the benefit of not needing to mess with cabling.

 

I don't know, server cases or large towers and extra 5-in-3 cages are quite pricey over here (Belgium, Europe).

 

I find it hard to justify spending a lot of money on a case because it doesn't really do anything except hold all the parts that do. :-)

 

You are right about the costs. But drive cages are much more convenient and reliable. I was lucky to get a Lian Li PC-A77 cheap from Amazon Germany (open box at a reduced cost) and got some Supermicro CSE-M35 5-in-3 cages refurbished off ebay.de. I couldn't be happier.

 

You might check Ri-Vier in the Netherlands for server cases; they sometimes have server cases or drive cages on sale.


Been streaming Live TV today and watching stored media, whilst running about 20 Docker containers, running a parity check, and preclearing two 4TB drives simultaneously, with no problems with any of the streaming. Plus, I'm doing it over powerline networking, which is the bottleneck for my streaming.

 

Hope that helps put your mind at rest and good luck with the build.

 

Wow! Impressive! Consider my mind rested! Thanks for the info, CHBMB.

You're welcome; what UnRAID 6 is capable of has to be seen to be believed...

 

Don't forget to let us know how you get on and what you think.

 

I stream all my movies/TV shows/music files, basically anything related to "media", from my Dune media player. It plays full Blu-ray menus and has been exceptional. I get read speeds of 90MB/sec and above, and it is maintenance-free to operate. It plays avi, mkv, mov, m2ts, pretty much anything I've thrown at it. How does your media streaming compare to something like that? I may want to try it out once I get the super-computer up and running. I have a Dune media player in every room. How do you physically stream from your Unraid box to the TV?

 


Don't forget to let us know how you get on and what you think.

 

Right, here's what I'm thinking about getting:

 

- 3 x Supermicro CSE-M35T-1 drive cages (definitely; I already ordered 2 that were in stock) -> thanks Chugiak and bjp999 for the tip!

- Zalman MS800 Plus or Antec 1200 or any other case that I can fit the drive cages in (suggestions are welcome)

- Intel i5 or i7 CPU. Not sure which one yet. Not sure if I can make full use of the i7; it could be overkill.

- ASRock Z97 Extreme6 or Gigabyte GA-Z87MX-D3H or any other board that supports virtualization (again, suggestions are welcome)

- 16GB of DDR3-1600 RAM (2x 8GB). Don't know what brand yet.

- My RocketRaid 2740 card (that I already have)

- My Corsair HX650W power supply (that I also already have)

 

 

So, what do you guys think?

 

J.

 


Don't forget to let us know how you get on and what you think.

 

Right, here's what I'm thinking about getting:

 

- 3 x Supermicro CSE-M35T-1 drive cages (definitely; I already ordered 2 that were in stock) -> thanks Chugiak and bjp999 for the tip!

- Zalman MS800 Plus or Antec 1200 or any other case that I can fit the drive cages in (suggestions are welcome)

- Intel i5 or i7 CPU. Not sure which one yet. Not sure if I can make full use of the i7; it could be overkill.

- ASRock Z97 Extreme6 or Gigabyte GA-Z87MX-D3H or any other board that supports virtualization (again, suggestions are welcome)

- 16GB of DDR3-1600 RAM (2x 8GB). Don't know what brand yet.

- My RocketRaid 2740 card (that I already have)

- My Corsair HX650W power supply (that I also already have)

 

 

So, what do you guys think?

 

J.

 

OK, so about an hour has gone by and now I'm thinking of going with a Supermicro MB + Xeon CPU again because of IPMI.  :-\


Don't forget to let us know how you get on and what you think.

 

Right, here's what I'm thinking about getting:

 

- 3 x Supermicro CSE-M35T-1 drive cages (definitely; I already ordered 2 that were in stock) -> thanks Chugiak and bjp999 for the tip!

- Zalman MS800 Plus or Antec 1200 or any other case that I can fit the drive cages in (suggestions are welcome)

- Intel i5 or i7 CPU. Not sure which one yet. Not sure if I can make full use of the i7; it could be overkill.

- ASRock Z97 Extreme6 or Gigabyte GA-Z87MX-D3H or any other board that supports virtualization (again, suggestions are welcome)

- 16GB of DDR3-1600 RAM (2x 8GB). Don't know what brand yet.

- My RocketRaid 2740 card (that I already have)

- My Corsair HX650W power supply (that I also already have)

 

 

So, what do you guys think?

 

J.

 

OK, so about an hour has gone by and now I'm thinking of going with a Supermicro MB + Xeon CPU again because of IPMI.  :-\

 

I think I've had it with drive cages. I'm in the middle of a build right now. The "new" box is up and running, but with an open case and drive cages that aren't screwed in (because all the screw holes stripped out), a failing CPU cooler, and an Antec 1200 V3 case sitting right next to it which I'm selling. I just purchased the Antec 1200 V3 from Newegg for $100 + free shipping and sort of changed my mind. I was going to get 3 new shiny hard drive cages and a deeper case. I have a full-size ATX board and wanted more room for expansion. I'm up to 12 drives already and may want to add a couple of SSDs or some more drives, so I've basically outgrown the computer case. I've been told about and recommended the Norco 4224 server case a dozen times; it is basically 6 x 4 HD cages in one box. Buying the Norco on sale, or even at regular price, costs just as much as three 5-in-3 HD cages and a full-size computer case. I already have all the Noctua fans in the world, so I'm set for cooling.

 

I don't have any experience with any of the hardware you want, but I have to say the Antec 1200 V3 is a beautiful case. I already received 4 responses from Craigslist from people interested in buying it, but I like it so much I may keep it! And if I were to buy HD cages, I would be interested in those CSE-M35T-1s. Be careful when you mount them: make sure you install the cages with no drives in them at all. When you insert a cage into the case, it won't slide in until you bend down the metal tabs in the case; someone suggested a C-clamp for that. When you screw the cages in, the case may be just a couple of mm wider, so you may need to squeeze the case together to screw them in. Why? Because the screws to install them are very short and don't grab too well; they have about 3 or 4 turns and that's it. All three of my iStarUSA cages have stripped screw holes, and I can't use longer screws since the tray slots won't slide in. Just be careful, that's all. And have fun, and get some OCD like me.

 


So I'm looking for a motherboard and keep reading about the ASRock H77 Pro4-M; however, it appears it's been discontinued? At least Newegg is out of stock and has been for a while.

 

Can anyone recommend a similar board that they're using with Unraid 6? I'm looking to build a new Unraid 6 server (basic NAS and potentially a Windows VM or two; nothing crazy, just serving videos to my Apple TV). I have a budget of about $600-$700 (minus drives).


Archived

This topic is now archived and is closed to further replies.
