Starting out, need Australian hardware recommendations



Hey, I'm planning to switch my Mac server over to an Unraid server; unfortunately, coming from a Mac background I have little to no experience building PCs. I currently have a 5-bay RAID 5 enclosure and an 8-bay JBOD connected to a Mac mini, serving Plex, Transmission (a torrent app), music and backups over SMB to about 5 users. The system works well, but there's a heap of unused space stranded across the individual volumes, plus no more physical room to expand.

 

I figure a 16-bay enclosure would be a good place to start. Being from Australia, I've found it hard to source much of the equipment recommended here. Would something like this work?

https://www.auspcmarket.com.au/tgc-316-3u-16-bay-mini-sas-hot-swap-rack-server-chassis-no-psu/

 

Anything I should be cautious of before jumping in? Thanks in advance.

 

 

Link to comment

Thanks very much, the links in your signature are very helpful, although most of the info is over my head for now.

 

In your build you've listed:

Quote

Storage Adapters: 2x IBM ServeRAID M1015 flashed to IT mode

 

Excuse my ignorance, but do these plug into the six internal SFF-8087 mini-SAS connectors supplied in your Norco RPC-4224?

What does flashing to IT mode achieve?

 

Thanks

Link to comment

IT mode passes the disks straight through to the OS (basically JBOD), which is what unRAID requires.  The M1015s have two SFF-8087 ports on them, and each port can drive one of the 4-port backplanes (so 8 disks per card).  Some of the stuff in my signature is a bit old (I built in 2012), so be aware of that :)  The US dollar and shipping were way better back then!

 

You can get the M1015s pretty cheap on eBay, and they are super reliable.  My two have been in there for 6 years now... I actually bought 3 way back when (one as a spare/future expansion) but the third card has never left its static bag!

 

The other main points I'd offer are:

 

1.  Don't skimp on the PSU.  Get a gold-rated or better unit from Seasonic or similar, and make sure it's a single-rail 12V design (see the rough sizing sketch below this list).

2.  Consider a server-class board like the Supermicro ones with IPMI so you can run it headless.  Not strictly necessary, but once again, their reliability is second to none.
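On point 1, here's a rough back-of-the-envelope sketch for sizing the 12V rail of a drive-heavy build. It's just illustrative Python I've thrown together for this post; the per-drive spin-up current and system headroom figures are assumptions, so check your drive's datasheet rather than trusting my numbers.

# Rough 12V rail sizing sketch (assumed figures, not a definitive calculator).
# Typical 3.5" drives draw roughly 1.5-2.0 A at 12V during spin-up; staggered
# spin-up (if your controller/backplane does it) lowers the simultaneous peak.

DRIVES = 24
SPINUP_AMPS_PER_DRIVE = 2.0   # assumed worst case per drive at 12V
BASE_SYSTEM_WATTS = 150       # assumed CPU/board/fan headroom

spinup_watts = DRIVES * SPINUP_AMPS_PER_DRIVE * 12
total_watts = spinup_watts + BASE_SYSTEM_WATTS
print(f"12V spin-up load: ~{spinup_watts:.0f} W")
print(f"PSU with ~30% headroom: ~{total_watts * 1.3:.0f} W")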

Link to comment

Your thread here is exactly what I needed, thanks. It seems you're in a very similar situation to me: about to have 100/40 NBN connected and using some automated movie/TV apps with Plex. Are you still running Plex through the Ubuntu 11.10 VM? As someone who is comfortable in the terminal on a Mac but has no experience with Linux or VMs, am I digging a hole for myself if I follow your lead here?

Link to comment

Actually I no longer do that - I use Docker containers for everything (much simpler!).  The guys over at linuxserver.io maintain a bunch of great Docker images, and have one for every application that I use.  They'll be available once you get unRAID up and running, via the Community Applications plugin:

 

Link to comment
  • 2 weeks later...

Hey, I've made progress on the planning, although I've hit a sticking point on the motherboard and could do with some advice.

 

I've had two issues with my planning so far:

  1. Many of the hardware recommendations are from 3+ years ago; shouldn't there be more current tech on offer now?
  2. Many of the recommendations are for tech not readily available in Australia.

Trawling through the UCD posts, a few motherboard recommendations that pop up regularly are:

Supermicro X10SRI-F-O (ATX, DDR4, LGA 2011)
Supermicro X8DTH-iF
Supermicro X9SCM-F-O
Supermicro X9DRL-3F/iF

 

This site has my favourite-looking build so far: http://cswithjames.com/diy-20-bay-nas-build-budget-2018-edition/ - however, even a secondhand "retired" Supermicro X8DTH-iF motherboard from China will set me back US$300.

 

For the 24-bay Norco RPC-4224, is there a brainless "this will work well and isn't overkill" option that's available in AU?

Apologies for asking what must be a common question; my research over the last few weeks has only revealed how little I know about PC hardware & compatibility (I'm getting there, though).

 

Link to comment

You might get lucky and find a fellow Australian who looks in on this thread and is able to make a motherboard recommendation.  Unfortunately, those of us who live in the States really don't know what's available there :(

 

But if you can post some motherboards that *are* currently available there, we can offer comments.

Link to comment

I've not seen a more cost-effective option for a bunch of bays in Australia at this time.  The only other route is a case like an Antec 900/1200, which you can throw a bunch of 5-in-3 drive cages into if you want to get to a high drive count.

 

When I weighed it up at the time, the Norco was cheaper, with the added win of fewer cables (most of the cages need a SAS-to-SATA forward breakout cable).

 

In terms of motherboards, there are plenty of options.  We get quite a few Supermicro boards locally:  

 

http://www.staticice.com.au/cgi-bin/search.cgi?q=supermicro+motherboard&spos=3

 

Most other mainstream boards are available. 

Link to comment

So if you do go the M1015 route with SAS cables in the 4224, you'll need one M1015 for every 8 drives (4 drives per SAS cable, which mates up nicely with one of the 6 backplane rows in the 4224).  Each M1015 should go in at least a PCI-E x8 slot.  The old X9SCM has 2 of those, plus 2 physical x8 slots that run at x4.
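To make that arithmetic concrete, here's a quick Python sketch (my own naming, purely illustrative) of how many cards and cables a given drive count works out to, assuming 4 drives per backplane row and two SFF-8087 ports per M1015:

import math

# Each RPC-4224 backplane row holds 4 drives and takes one SFF-8087 cable;
# each M1015 has two SFF-8087 ports, i.e. 8 drives per card.
def m1015_shopping_list(total_drives, drives_on_motherboard=0):
    rows = math.ceil(max(0, total_drives - drives_on_motherboard) / 4)
    cards = math.ceil(rows / 2)
    return {"SFF-8087 cables": rows, "M1015 cards": cards}

print(m1015_shopping_list(24))                           # 6 cables, 3 cards
print(m1015_shopping_list(24, drives_on_motherboard=4))  # 5 cables, 3 cards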

 

It seems this one is the evolution of that board:  https://www.skycomp.com.au/ld-supermicro-up-e3-1200v5-4x-ddr4-ecc-sata-raid-2x-i210-gbe-c236-micro-atx-x11ssm-f.html   It supports more recent processors and an extra 32GB of RAM, and its modern C236 chipset supports Intel 6th and 7th generation CPUs.  Full specs here, along with the RAM compatibility list:

 

https://www.supermicro.com/products/motherboard/Xeon/C236_C232/X11SSM-F.cfm

 

All that being said, you can happily buy most mainstream boards and run unRAID on those.  Those of us who went the Supermicro route did so mainly because of long-term reliability (my build is 6 years old now and hasn't skipped a beat, for example).

Link to comment

Ok, I'm back quicker than I expected.

Can I check the following with you:

  • Each of the 3 x M1015 cards requires 1 x PCI-E slot 
  • How many PCI-E slots does the above linked Supermicro X11SSM-F contain?
  • If there are 4 x drives connected for each of the 2 x ports on each of the 3 x M1015 cards, then 6 x SFF-8087 to SFF-8087 1m SAS cables will be required (this cable has one plug on one end and 4 on the other end like this)
  • If the case holds 24 x 3.5" drives plus an OS drive bracket for 2 x 2.5" drives, can those 2 x 2.5" drives connect directly to the motherboard via SATA?

 

Link to comment
19 minutes ago, enmesh-parisian-latest said:

How many PCI-E slots does the above linked Supermicro X11SSM-F contain?

Four slots

20 minutes ago, enmesh-parisian-latest said:

If there are 4 x drives connected for each of the 2 x ports on each of the 3 x M1015 cards, then 6 x SFF-8087 to SFF-8087 1m SAS cables will be required

Correct, you need 6x SFF-8087 to SFF-8087 cables.

 

21 minutes ago, enmesh-parisian-latest said:

can the remaining 2 x 2.5" drives connect directly to the motherboard via SATA?

Yes

Link to comment

Just note that 1m SAS cables will leave you a bit of leftover cable to clean up inside the case.  I ended up switching them out for some Molex 0.6m SAS cables instead, which are much tidier (you can see a photo in my build thread.)

 

I can't seem to find any of those any more, but a few eBay sellers have 0.7m ones for $7 a pop.

Link to comment

Sorry yes, typo :)

 

Just looking further, if you went with that X11 board above, it has 8 SATA ports on board.  Assuming you use one for cache, and perhaps keep a couple spare for mounting other external drives as needed, you could use 4 of them to connect to one of the backplanes.  Depending on how many drives in total you plan on adding, this could help keep costs down initially.  (You don't need to connect all of the backplanes; in fact I still have 2 of mine not connected, i.e. only 16 total drives, as I've been upgrading to larger drives over time.)

 

1x M1015 will run 2 backplanes (4 drives in each) with two of the mentioned SAS cables.  So 8 drives per M1015, and 4 drives off the motherboard.  Can always add more capacity later.

 

If you did want to connect the motherboard ports to one of the backplanes, you need a REVERSE breakout cable, which is basically the same thing you linked above, but the data travels in the opposite direction (this is an important distinction).  It doesn't seem any of those listings on Amazon specify forward vs reverse though...

 

Anyway, just food for thought.

 

 

Link to comment

Missed your mention of Plex earlier - it will largely depend on what you're streaming to, but I've set my server to top out at 4Mbps (720p), which is more than fine for tablets, phones etc. - it even looks pretty good on a 49" TV (the only remote TV I've tested it on).

 

I'm on 50/20 though (fixed wireless, so can't get 100/40), which is why I've limited it to 4Mbps.
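If it helps, the arithmetic behind that cap is simple enough to sketch in a few lines of Python (the headroom figure is just my assumption):

UPLOAD_MBPS = 20        # e.g. the 20 Mbps upstream of a 50/20 plan
PER_STREAM_MBPS = 4     # the 720p cap mentioned above
HEADROOM = 0.75         # assume Plex only gets ~75% of upstream

streams = int(UPLOAD_MBPS * HEADROOM // PER_STREAM_MBPS)
print(f"~{streams} simultaneous 4 Mbps remote streams")   # ~3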

Link to comment

Here's another thought on case selection and number of drives. 

 

* As hard drives will eventually fail, you could buy a case that holds a smaller number of hard drives and budget for purchasing one drive per year, of a size that would hold your yearly download amount.

For example, if your storage requirements increase by 4TB per year, you would buy 2 x 4TB hard drives in your first year (1 x parity and 1 x data). In the second year you add another 4TB data drive, and so on.

 

Every year you add another 4TB drive until you have one free slot left in your case (8 years). In the 9th year, you take your 4TB parity drive, move it to a data slot, and add in an 8TB parity drive; then each subsequent year you replace a 4TB drive (whichever has the greatest number of hours on it) with an 8TB drive.

 

By the time your case is full the first time (e.g. a 10-slot Norco case), 9 years would have passed and you would have a server holding 9 x 4TB of data (i.e. 36TB) plus an 8TB parity drive. Replacing drives with 8TB units then lasts you another 9 years before you have to go to the next size up. If you start out with 8TB drives, the same logic applies, except that in 9 years' time you are hoping that 16TB drives are available. Of course it's hard to predict what drives we will be using in the future, but a 9-year horizon for the first round seems reasonable.
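If you want to sanity-check those numbers, here's a toy Python run-through of the plan for a 10-slot case (my own illustrative script; the 4TB-per-year growth and drive sizes are just the assumptions above):

SLOTS = 10

drives = [4, 4]                  # year 1: 4TB parity + 4TB data
for year in range(2, 10):
    if len(drives) < SLOTS - 1:  # years 2-8: add a 4TB data drive
        drives.append(4)
    else:                        # year 9: old parity becomes data, add 8TB parity
        drives.append(8)
    parity = max(drives)
    data_tb = sum(drives) - parity
    print(f"Year {year}: {len(drives)} drives, {data_tb} TB data + {parity} TB parity")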

 

Advantages of this (IMHO) are:

* You only buy what you need (Expecting that drives get cheaper over time)

* Drives get cycled out after approximately 8 years of use to minimize the chances of failures

* Drives are from different batches (minimizing the possibility of problems with multiple drives from the same batch).

* By the time you are replacing the smaller drives with larger ones they should be cheaper and 8 years later the next jump will be possible (16TB anyone?)

 

Of course you can replace your parity at any point in time if your storage needs increase significantly and just start using the next size of drive for additions.

 

Just my 2c worth.

Link to comment
