Looking to build an Atom 5-disk system



I am not sure if anyone has considered this, but there are a lot of very nice compact 4-disk Windows Home Server PCs out there that might work for unRAID. Here is one from Acer that comes out soon for $400:

 

http://www.engadget.com/2009/05/21/acer-launches-easystore-home-server-1tb-expandable-storage-for/

 

I am not sure whether these would boot unRAID or not. It would be hard to build something this small yourself with a full warranty, though, and it comes with 1 disk to start with!

 


This should work then:

http://www.newegg.com/Product/Product.aspx?Item=N82E16811219032

 

or this:

http://www.newegg.com/Product/Product.aspx?Item=N82E16811219023

 

Mini-ITX fits where mATX fits, right? If so, then a mini-ITX board should also mount where an ATX board fits?

 

Peter

 

 

One of these looks the same as the one I posted in this thread: http://lime-technology.com/forum/index.php?topic=3864.msg34086#msg34086

 

However, in the EU they cost double what Newegg is selling them for :(

 

There are some reports, however, of motherboard fitting problems and loud fans... but it is a cheap case, after all.

 

I am not sure if anyone has considered this, but there are a lot of very nice compact 4-disk Windows Home Server PCs out there that might work for unRAID. Here is one from Acer that comes out soon for $400:

 

http://www.engadget.com/2009/05/21/acer-launches-easystore-home-server-1tb-expandable-storage-for/

 

I am not sure whether these would boot unRAID or not. It would be hard to build something this small yourself with a full warranty, though, and it comes with 1 disk to start with!

 

 

Very interesting product. I'm checking out prices, but at $400 with a 1TB disk included, that's potentially viable (unless it becomes $800 in the EU) :(

 

I am going to take a different approach to pricing and look at per-disk cost, which would be (all hardware + license costs - cost of included disks) divided by the number of disks the system supports.

 

 

So the Acer would be:

$400 for the hardware
$119 for the unRAID license
-$70 for the included 1TB disk

which is $449 total, or $89.80 per slot.
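
A quick sketch of that calculation (Python; the five-slot divisor is an assumption read off the arithmetic above, so swap in the real bay count):

    def cost_per_slot(hardware_usd, license_usd, bundled_disk_usd, slots):
        """Per-slot cost: all non-disk costs spread over the disk slots."""
        return (hardware_usd + license_usd - bundled_disk_usd) / slots

    # Acer easyStore numbers from this thread: $400 box, $119 unRAID license,
    # $70 credit for the bundled 1TB disk, five slots assumed.
    print(cost_per_slot(400, 119, 70, 5))  # -> 89.8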

2 months later...

Don't know if you've seen this, but here is a small 1.5GHz solution with 8x SATA ports.

http://www.auspcmarket.com.au/show_product_info.php?input[product_code]=MB-NAS7800-15LST&input[category_id]=1764

 

 

Two months on, and this board is still elusive, special-order only, or massively expensive in the EU.

 

So I spent an hour last night looking here and on Google for easier options.

 

Here's another potential core build:

 

Codegen 4U RackMount Case 500mm Deep - 1.2mm Steel - No PSU

Gigabyte GA-MA74GM-S2H 740G Socket AM2 onboard VGA 8 channel audio mATX Motherboard

AMD Sempron LE-1250 Socket AM2 L2 512KB 2.2GHz Energy Efficient 45w Retail Boxed Processor

 

Readily available, cheap parts from good manufacturers, coming in at about 290 USD for a max of 6 drives, excluding the unRAID license.

 

A 4U case for 6 drives is complete overkill, but I can't find any 3U cases that can handle 6 drives for < 100 USD (in fact, 3U cases seem to be expensive, full stop).


Yeah, I have considered building a disk rail. This would be a pair of rails to fit in a standard cabinet; they would hold the HDDs in free air with a 120mm fan plate in front of them. The unRAID boxes would sit under the rail, and standard SATA cables would be run up to it.

 

Not ideal.

 

The main driver for this is that building 20-disk arrays is too troublesome. MBs and cards have issues when you start racking up the HDD count, PCIe SATA cards are as expensive as a complete 6-disk system, PSUs for 20 disks are very expensive, and calculating parity on 20 disks takes forever unless you buy uber-expensive SATA cards.
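
To put rough numbers on the parity point: a parity build or check has to stream every disk end to end, so it can never finish faster than the largest disk divided by the slowest sustained transfer rate in the array. A back-of-envelope sketch (Python; the sizes and speeds are illustrative assumptions, not measurements):

    def parity_check_hours(largest_disk_gb, slowest_mb_per_s):
        """Lower bound on a parity pass: largest disk / slowest sustained rate."""
        return largest_disk_gb * 1000 / slowest_mb_per_s / 3600

    # Illustrative: a 1TB disk at ~30 MB/s (drives sharing a PCI bus)
    # vs. ~90 MB/s (one drive per onboard SATA port).
    print(parity_check_hours(1000, 30))  # ~9.3 hours
    print(parity_check_hours(1000, 90))  # ~3.1 hours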

 

My data consumption rates are such that I need a simple way to go from "running out of space" to "3 months' worth of free space" easily. As it stands, the time it takes me to manage and deal with all the quirks of a large array costs me more in lost wages than the hardware itself.


I was thinking more of a shelf-type arrangement.

 

Mount the motherboard on a shelf with the disks and PSU above it, stick some fans into the mix to keep everything cool, put everything in a 42U rack front and back, and you should be able to achieve some pretty good density.

 

Moving from 20-disk to 5-disk systems, you're going to be spending more on parity and networking. You'll definitely need to choose a low-power CPU too, or I imagine the power costs will ramp up as well.


Actually, apart from the power bill, building multiple smaller systems is no more expensive than building bigger ones if you are looking at rack mounting.

 

Motherboards are cheap, PSUs are cheap, cases are cheap, CPUs are cheap, and you don't need expensive PCIe SATA cards. Even parity drives are cheap. Building 20-disk arrays is only cheaper if you don't value your time and/or everything goes perfectly.

 

For example, I can get an 84% efficient 350W PSU for like $25.

 

There is an economy of scale, but it's not as much as you would think. Also, I can save on power by moving older, less useful data onto boxes that only get turned on when needed.
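
To put a number on that saving, annual electricity cost is just average draw times hours times rate. A sketch (Python; the wattage, tariff, and duty cycle are illustrative assumptions):

    def annual_power_cost_usd(avg_watts, usd_per_kwh, hours_per_day=24):
        """Yearly electricity cost of a box at a given average draw."""
        return avg_watts / 1000 * hours_per_day * 365 * usd_per_kwh

    # Illustrative: a small 6-disk box averaging 80W at $0.15/kWh.
    print(annual_power_cost_usd(80, 0.15))     # always on -> ~$105/yr
    print(annual_power_cost_usd(80, 0.15, 2))  # on demand -> ~$9/yr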

 

IMHO this is the better route, and I absolutely won't ever be building a 20-disk unRAID again unless an MB comes out with 20 SATA ports and costs $60.


Actually, apart from the power bill, building multiple smaller systems is no more expensive than building bigger ones if you are looking at rack mounting.

Motherboards are cheap, PSUs are cheap, cases are cheap, CPUs are cheap, and you don't need expensive PCIe SATA cards. Even parity drives are cheap. Building 20-disk arrays is only cheaper if you don't value your time and/or everything goes perfectly.

For example, I can get an 84% efficient 350W PSU for like $25.

There is an economy of scale, but it's not as much as you would think. Also, I can save on power by moving older, less useful data onto boxes that only get turned on when needed.

IMHO this is the better route, and I absolutely won't ever be building a 20-disk unRAID again unless an MB comes out with 20 SATA ports and costs $60.

 

I agree with this up to a point; different needs for different people. Right now I could afford to build multiple systems because, as you say, they are not all that expensive if you search around for deals and the like. The added benefits of multiple smaller systems are the shorter parity check length and not having to buy more expensive add-on cards for extra SATA ports later on. Granted, there are cheap cards out there that will do the job, but not many are of the high-density sort that is economical to buy. The SuperMicro card, I believe, is really the first, and it is not quite yet supported by unRAID.

There are 2 main factors that keep me from doing the multiple smaller systems, though. One is the space required: I live in an apartment with 2 other roommates, and therefore space is a little limited. I can't feasibly put the server(s) in my room (I trust my roommates, but I still do not want them messing with the servers at all). Two would be "consolidation" of data: I like to have all my information available at my fingertips. I would love to see the ability to "merge" multiple unRAID boxes into one (I know it can be done via some command-line stuff), but it would have to be easy to do.

In the end, for me right now, multiple boxes do not make a lot of sense: it is just too much hardware to worry about, not enough space to put the hardware, and no one "central" repo for all the data.


But also cheaper licenses. I've done the calculations; it really does work out.

 

There is also another major factor. By building a fully populated array in one go, you spend more per GB than if you add disks as you need them over time... but since it's only 6 disks, it's not that much more... and you NEVER have unprotected data. In the life of a system that starts at 5 disks and ends at 20, the reality of the situation is that you have days (weeks if using PCI) of unprotected data.

 

Keep in mind I am talking about a RACK system here, with, say, the Norco.

 

The gut feeling is that there is an economy of scale... and there is... but it's not as much as you would think, and there are significant downsides to building huge arrays. Look at the current beta: for lots of people it is fine with small arrays, bug-free, but as soon as you hit 16(?) disks, many people have complete system crashes. I personally went through 2 motherboards (from the user-has-it-working list) only to find that once I hit a larger number of drives, I started hitting BIOS, IRQ, and PCI incompatibility problems.


I don't disagree; the fact that you are on the edge of buying enterprise-class hardware to support 20 disks means it's almost certainly not going to be cheaper.

 

Personally, if I were building multiple small systems, I probably wouldn't use unRAID to do it; much better just to use a normal Linux distro, IMHO. That way you could look at using distributed file systems, something I investigated before deciding to use unRAID myself (the learning curve was too steep; not enough Linux experience put me off).

 

I've personally moved to unRAID for the same reason prostuff1 mentions: I like all my data in one place. Also, it helps that I probably won't ever need more than 15 disks, since by the time I fill the 15, disks will be bigger and I can start ditching the smallest ones.


Personally, if I were building multiple small systems, I probably wouldn't use unRAID to do it; much better just to use a normal Linux distro, IMHO. That way you could look at using distributed file systems, something I investigated before deciding to use unRAID myself (the learning curve was too steep; not enough Linux experience put me off).

 

There are no other solutions worth mentioning that offer parity plus intact, standalone per-disk filesystems. You can go RAID 5 or the like, but I have personal and business experience of whole arrays being lost, and it was and is the reason I stick with unRAID regardless of array size.

 

I've personally moved to unRAID for the same reason prostuff1 mentions: I like all my data in one place.

 

That's an unRAID missing-feature thing. There are solutions to make multiple boxes appear as one on a LAN.

 

... since by the time I fill the 15, disks will be bigger and I can start ditching the smallest ones.

 

I have been in that situation for a LONG time. I have perhaps 6TB of 500GB disks alone. With smaller systems, rather than rotating disks out, you can fill an array up with stuff that you want to keep but that is old and of less day-to-day interest, and still have access to it within a few minutes by pressing a power button.

 

Space for boxes is and always will be an issue, but 12U and 42U racks take up the same floor space.

 

There are two key factors here:

1. The cost of running the systems: power, etc.

2. The per-online-disk cost (let's call it PODC). This is not the cost of the disks but rather the sum total of all the other hardware and licenses, excluding the disks, divided by the number of disks you can support. Here I can build a 6-disk array with a PODC only about 5% higher than a 20-disk array's. It will use more power, but the performance of each array will be MUCH higher than the 20-disk array's. And if we were to look at 20 disks on PCIe cards, the PODC is actually higher for the 20-disk array (see the sketch below).
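
Here is that comparison as a sketch (Python; the 6-disk figures echo the parts list earlier in the thread, while the 20-disk build cost and both license prices are hypothetical placeholders chosen to illustrate the ~5% claim):

    def podc(non_disk_hardware_usd, license_usd, supported_disks):
        """Per-online-disk cost: everything except the disks themselves."""
        return (non_disk_hardware_usd + license_usd) / supported_disks

    small = podc(290, 119, 6)   # ~$68 per supported disk
    big = podc(1180, 120, 20)   # ~$65 per supported disk
    print(f"small build pays {small / big - 1:.0%} more per disk")  # ~5%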


There is also another major factor. By building a fully populated array in one go, you spend more per GB than if you add disks as you need them over time... but since it's only 6 disks, it's not that much more... and you NEVER have unprotected data.

 

Being able to segregate your data (and knowing where it all is) helps with this, especially if you can turn the machine off and use WOL when needed.
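
For reference, WOL just means broadcasting a "magic packet": 6 bytes of 0xFF followed by the target NIC's MAC address repeated 16 times. A minimal sketch (Python; the MAC address is a placeholder for the sleeping box's NIC):

    import socket

    def wake_on_lan(mac, broadcast="255.255.255.255", port=9):
        """Broadcast a WOL magic packet: 6 x 0xFF, then the MAC 16 times."""
        mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
        packet = b"\xff" * 6 + mac_bytes * 16
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
            s.sendto(packet, (broadcast, port))

    wake_on_lan("00:11:22:33:44:55")  # placeholder MAC of the archive box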

 

In the life of a system that starts at 5 disks and ends at 20, the reality of the situation is that you have days (weeks if using PCI) of unprotected data.

 

I'm kind of on the fence about this as an issue. You bring this up many times: long periods of unprotected data because of parity checks.

I'm not as sympathetic to it. If you are using drives so old that you do not trust them enough, then maybe they should be put to rest.

If you need to exercise them for testing, then run periodic SMART tests, check the results, and do periodic parity checks.
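
smartmontools can drive exactly that routine; a minimal sketch (Python wrapping the real smartctl tool, with a placeholder device list, so treat the scheduling as an illustration rather than a recipe):

    import subprocess

    DEVICES = ["/dev/sda", "/dev/sdb"]  # placeholder: your array's drives

    for dev in DEVICES:
        # Kick off a short SMART self-test on each drive.
        subprocess.run(["smartctl", "-t", "short", dev], check=True)

    # A short test finishes within a few minutes; then read back health.
    for dev in DEVICES:
        report = subprocess.run(["smartctl", "-H", dev],
                                capture_output=True, text=True)
        print(dev, report.stdout.strip().splitlines()[-1])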

 

The gut feeling is that there is an economy of scale... and there is... but it's not as much as you would think, and there are significant downsides to building huge arrays.

 

I'm in agreement here. The downside is that if you have to do hardware maintenance of any kind, you lose access to a large portion of your data.

 

 

Look at the current beta: for lots of people it is fine with small arrays, bug-free, but as soon as you hit 16(?) disks, many people have complete system crashes.

This is an issue caused by expanding an internal array without compensating for it somewhere else.

I really believe this is a software issue. Remember, the software is still in beta.

 

 

I personally went through 2 motherboards (from the user-has-it-working list) only to find that once I hit a larger number of drives, I started hitting BIOS, IRQ, and PCI incompatibility problems.

Part of this issue is the driver support list. If unRAID supported more advanced hardware, this could be overcome (at the cost of that hardware).

Once the Supermicro SAS card has better support, you will probably see people wanting 20-24 drive support.


Yup, I agree. My parity build is slow because I'm using the old official MB with PCI cards and lots of disks. That is the reason it's so bad, but it wasn't that long ago that this was the only official system board.

 

It's just so easy to build an unRAID box when you go for an all-in-one, well-tested motherboard and one drive per onboard SATA port. No cards, no huge power supply, no BIOS or any other silly little issues. Once you start getting to 15 drives, the number of people that have tried this on your particular MB must be minuscule, and it's not unreasonable to expect unforeseen, unusual issues.

 

If I had a wish, I would like to see an unRAID that wasn't run from USB, was updatable online, and could have software added as you please. I would like the only feature limited per version to be the disk count.

 

If I am to be honest, I can't see either of them happening, but it's nice to talk.


Old disks are a matter for debate also... 6TB of 500GB disks... in 12 months' time you'll probably be able to replace those with 3 drives. So you have an age-out process, triggered either when you decide that a drive is too old to trust or when the power needed to run it is no longer worth it next to newer, higher-capacity drives.

 

Well said; this is what is happening to me. I'm starting to age out my 1TB drives and selling off my older, idle 500GB ones.


It's just so easy to build an unRAID box when you go for an all-in-one, well-tested motherboard and one drive per onboard SATA port. No cards, no huge power supply, no BIOS or any other silly little issues. Once you start getting to 15 drives, the number of people that have tried this on your particular MB must be minuscule, and it's not unreasonable to expect unforeseen, unusual issues.

 

This was the beauty of the ABIT AB9 Pro and Cooler Master 590: 9 SATA ports and 9 5.25" bays of trayless SATA. Worked great and was nice and compact.

I think one of the ASUS boards had 10 SATA ports.

 

If I had a wish, I would like to see an unRAID that wasn't run from USB, was updatable online, and could have software added as you please. I would like the only feature limited per version to be the disk count.

 

Although I could continue with this section, it really belongs somewhere else.

 

If I am to be honest, I can't see either of them happening, but it's nice to talk.

 

I can see some of this happening; it will just take time and a more positive approach to reveal the benefits.


Old disks are a matter for debate also... 6TB of 500GB disks... in 12 months' time you'll probably be able to replace those with 3 drives (cheaply). So you have an age-out process, triggered either when you decide that a drive is too old to trust or when the power needed to run it is no longer worth it next to newer, higher-capacity drives.

 

If the disks are protected by parity and are not all from the same batch, then why care how old they are? The chances of two disks breaking at once are slim. If one breaks, I replace it.

 

I won't ever sell a disk; it's shelved ad nauseam. That's just my own personal/company policy.

 

But I see the point you are making, and it is a good one. It's a tricky one to work out, though, without some good historical HDD cost data and some relatively accurate predictions of future costs. Will ponder it.


If I am to be honest, I can't see either of them happening, but it's nice to talk.

 

I can see some of this happening; it will just take time and a more positive approach to reveal the benefits.

 

I should probably have said that I can't see this happening any time soon, and that it is, IMHO, more likely to be done by someone else. I love unRAID and sing its praises all over the place. I am directly responsible for quite a few license sales, and Tom deserves it, full stop, no question. I just think that it's all a bit too slow, and that a couple of smart dudes with a startup loan could go from nothing to a viable competitive product in no time at all.

 

Anyway, I digress into an area that is best dealt with in another thread.

 

Where I am at is: I need more space. I always need more space, and I need the path of least resistance to get there. :)


 

 

I won't ever sell a disk; it's shelved ad nauseam. That's just my own personal/company policy.

 

 

I keep all my disks too, but I sure as hell don't want to be booting legacy machines to access the stuff on my stack of 10GB IDE drives or the stack of 120GB drives that replaced them :)

 

Maybe a good policy would be to dump an array when you can buy 1 drive to replace it.


 

 

I won't ever sell a disk; it's shelved ad nauseam. That's just my own personal/company policy.

 

 

I keep all my disks too, but I sure as hell don't want to be booting legacy machines to access the stuff on my stack of 10GB IDE drives or the stack of 120GB drives that replaced them :)

 

Maybe a good policy would be to dump an array when you can buy 1 drive to replace it.

 

That used to be the case, but you have to consider how much data is actually on the disk. Yes, dump 10GB disks when a 120GB is out, but a 500GB disk with 1,500 TV recordings... not quite the same thing.

 

Disks will outlive their usefulness due to low relative capacity, but that could take years... I mean, I am not going to spend $250 on a 2TB drive just for the privilege of shelving 2x500GB ones. Certainly not when a complete server to run 6 of those drives costs only marginally more than one 2TB drive.
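
One way to weigh that up (Python; the $250 2TB price is from this post, the ~$409 six-slot figure reuses the earlier parts list plus license, and all of it is rough):

    def usd_per_gb_retired(new_drive_usd, retired_gb):
        """Dollars spent per GB of old capacity you get to shelve."""
        return new_drive_usd / retired_gb

    # Consolidating 2x500GB onto a $250 2TB drive retires 1000GB:
    print(usd_per_gb_retired(250, 1000))  # $0.25 per GB shelved
    # Keeping six 500GB drives online on a ~$409 box instead:
    print(409 / (6 * 500))                # ~$0.14 per GB kept online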

 

It's a balancing act.

