
How much is "overkill" for an unRAID server?


McFly


Hi guys!

 

I am about to build myself an HTPC, with a dedicated server to store all my DVDs, HD movies, music, and so on.

UnRAID sounds very interesting indeed!

 

I like performance; I prefer a bit of overkill to bottlenecks. But of course "unnecessary overkill" is just a waste of money.

 

I read on the hardware compatibility webpage: "Processor, recommend 2.0GHz or higher."

OK, but how much higher? I mean this processor will only be involved in reading and writing data, right? Is a Core2 Duo 1.86 GHz more than enough, or would I get better performance with a 3.0 GHz dual-core?

 

Does UnRAID support Intel Core2 Quad processors? (I'm guessing that would be way overkill, but I am trying to find out where the fine line from "overkill" to "total overkill" is crossed.) :)

 

 

 

The case I have been looking at, by the way, is the CoolerMaster Stacker 830 Evolution with 4-in-3 drive cages and 120 mm fans for ventilation. I plan to fill it with up to 16 disks eventually, so it will need a lot of power when starting up! Then, however, as the drives spin down, the power requirements drop. So... I wonder how much power a hard drive needs at its peak (probably at spin-up)? I was thinking about setting up two of these Chill Innovation units: http://www.chill-innovation.com/cp520a4-specs.asp but I have no idea how to get the second PSU to start...

 

How much power is enough (or slight overkill) for up to 16 disks, and how much is "total overkill"?

By using two PSUs I could buy the second one later, when I have more disks.

Link to comment

I'm not knowledgeable enough (about unRAID) to address the specific technical questions, but I thought I'd share my philosophy for situations like this. I always buy the most that I can comfortably lay out, even if it's complete overkill for the immediate task at hand. The reason is that I've built a dozen boxes over the years and every single one of them has eventually been re-purposed for something else down the road, and who knows what that down-the-road box might require?

 

I'm not suggesting laying out three grand for an over-powered unRAID server if $800 will do it. But in my personal experience, spending a few bucks more up front, even when it doesn't seem needed at the time, always ends up being appreciated for that later project that hasn't even been considered yet.

 

For the record, I built an unRAID box with a P5B-VM DO motherboard, a Core 2 Duo 2.13 GHz CPU, and 2 GB RAM. Not a screamer or anything, but I'm sure unRAID would run just as well with less.

Link to comment
I read on the hardware compatibility webpage: "Processor, recommend 2.0GHz or higher."

OK, but how much higher? I mean this processor will only be involved in reading and writing data, right? Is a Core2 Duo 1.86 GHz more than enough, or would I get better performance with a 3.0 GHz dual-core?

 

Does UnRAID support Intel Core2 Quad processors? (I'm guessing that would be way overkill, but I am trying to find out where the fine line from "overkill" to "total overkill" is crossed.) :)

 

unRAID currently only uses one processor.  Dual core or more is wasted at this time, BUT will probably be useful in the future.  Tom has been steadily upgrading unRAID.  The Core2 Duo processors are, I believe, a faster architecture, so that 1.86 GHz should be more than enough, even on one core alone.

 

Link to comment
Dual core or more is wasted at this time, BUT will probably be useful in the future.  Tom has been steadily upgrading unRAID.  The Core2 Duo processors are, I believe, a faster architecture, so that 1.86 GHz should be more than enough, even on one core alone.

 

OK, but if, let's say, one core of a Core2 1.86 GHz or even a 2.33 GHz is more than enough (as in, that one core is never at full load, perhaps only at 50%?), why would he bother to "waste" time getting both cores working? It would indeed be "total overkill".

 

Is there any way to measure how hard the CPU has to work? What loads are you guys getting with your Core2 processors when they are stressed out?

Link to comment

I like the idea of a kick-ass unRAID server... so, as a thought experiment, I tried to come up with the fastest server I could!  Here's what I came up with:

 

1) Use a server class motherboard.  None of this wimpy PCI interface nonsense; go big or go home with PCI-X!  When you are doing a parity build, you are reading from all of your data drives and writing to your parity drive at the same time.  All that data is streaming off your hard drives, through the PCI bus, into the CPU, back out of the CPU, back down the PCI bus, and onto your parity drive.  With PCI, there is not enough bandwidth to handle the quantity of data your hard drives are capable of pushing up to the CPU... therefore, make this bottleneck go away with a larger bus architecture.

 

This'll take a bite out of those really long parity sync times (see the rough numbers at the end of this post).

 

2) Next, let's cut down on how long it takes to move files from other computers on the network.  Gigabit is nice, but let's really move the data.  How about a pair of gigabit Ethernet cards channel-bonded so that you have two gigabits' worth of bandwidth available to upload with?  Oh, and don't forget, you'll need to do the same on the machine at the other end.

 

3) Moving now into more exotic options for speed... this would need a software change in unRAID before it becomes really feasible.  String a bunch of 10K RPM hard drives together as your parity drive.  This would also reduce your parity build and parity write times significantly!  Too bad there aren't any 700 GB 10K RPM hard drives around, so you will need to string several together.  Oh, and you'd probably need to use a RAID 5 approach on the parity drive array so that you don't risk your data should you lose a single parity drive.

 

4) This one is more towards up-time than pure performance, but if your system isn't up, you can't use it, so it counts.  Get redundant on the power supplies baby!  Hot swap power supplies -- should you lose one, it fails over to the second, and you can then pull the dead one and replace it.  Ideally, you plug each power supply into totally separate circuits in your house.  Most serious server installations plug into separate circuits in the city grid, crazy, I know!!  But hey, we are talking about "overkill"!! :)

 

5) Use ECC memory!  Typically a little slower than your regular ram, but this is a server and I want reliability!  If some little bit in one of those memory chips suddenly decides  it will never be anything other than what it is at the moment forever more, I want the ECC there to save me.

 

6) Of course, ECC may fail too. :(  Prevent that with redundant, hot-swap memory.  If a whole bank of memory fails, swap out to the auxiliary bank!!  No down time if you  have this feature!

 

In case you think I'm pulling your leg here with these features, I'm not.  Serious server-class hardware has these features (and a few others I didn't mention).  You are only thinking along one axis of performance.  Don't forget that uptime/reliability counts towards performance as well.  As Tom (and others) have said a few times here in the forums, buy quality.

 

In all honesty, worrying about RAM speed and CPU speed is a waste of time.  Worry about throughput -- eliminate the bottlenecks where you will spend most of your time waiting.  If you spent $200 more on a faster CPU, you would never notice it if you've stuck it behind a 100 Mbit network card, right?

 

Link to comment
1) Use a server class motherboard. 

 

Sounds expensive...

 

2) How about a pair of gigabit Ethernet cards channel-bonded so that you have two gigabits' worth of bandwidth available to upload with?  Oh, and don't forget, you'll need to do the same on the machine at the other end.

 

Is it compatible with unRAID to have two gigabit cards like that? Sounds tempting... But I have no idea how I would do it...

 

3)String a bunch of ....

 

No, I don't want that. I like the way it works now, without any striping or stringing.

I want the drives I am not using at the moment to spin down, to save power and reduce heat.

 

4)Get redundant on the power supplies baby!

 

Cool! But as this will be my personal home network, it really doesn't matter if I get some downtime when a PSU fails. Hopefully it doesn't happen that often. :)

 

5) Use ECC memory!  Typically a little slower than your regular ram, but this is a server and I want reliability!

 

Hmmm, but if it is bad for performance it is not interesting. I have never had memory fail yet (that I know of). I currently have 5 computers running with dual RAM in all of them... :D

 

6) Of course, ECC may fail too. :(  Prevent that with redundant, hot-swap memory.

 

That too sounds unnecessary for a home network; for a corporate network I would definitely understand it, but in my home this would be "total overkill". :)

 

In all honesty, worrying about RAM speed and CPU speed is a waste of time.  Worry about throughput -- eliminate the bottlenecks where you will spend most of your time waiting.  If you spent $200 more on a faster CPU, you would never notice it if you've stuck it behind a 100 Mbit network card, right?

 

OK. I am trying to put together the ultimate HTPC server. I like the ability to add drives of different sizes, and the ability to spin down the drives that are not currently in use. These are the big "selling points" for me; otherwise I would go for RAID 5 with a hardware controller (I already have three RAID 5 cards in my other PCs).

 

I want to find the bottlenecks and eliminate them. Dual gigabit networking sounds very cool; I wonder if it is compatible with UnRAID... It also needs to be connected to a gigabit router; I plan on using D-Link's DIR-635 router. I guess the router would be the bottleneck even if UnRAID did work with dual gigabit cards.

 

Also, the reason I am asking is that it would just be a waste of money to buy a 3.0 GHz Core 2 if a 2.33 GHz one gives the exact same performance. The same goes for getting fast RAM. I don't want to buy a CPU or RAM that ends up being a bottleneck...

 

If the developer of UnRAID knows where the upper limit for performance is, perhaps it could be added to the hardware compatibility page?

It is good to know that UnRAID never uses more than 1 GB of RAM, so there is no need to buy more. It would also be good to know if there is some upper CPU limit of XXX GHz beyond which UnRAID will never use more of the CPU.

 

:)

Link to comment

Read up on the threads on the forum regarding bus bottlenecks, as they will likely be more of an issue.

What is your take on the bus issue: are you going for PCI or PCI Express?

 

Why would you pour money into an ultimate server and then skip the UPS? Get the UPS and save your gear from spikes and your data from power outages.

 

How about buying fast but underclockable processors? Then you can scale up when needed.

 

/Rene

 

 

 

 

Link to comment

Read up on the threads on the forum regarding bus bottlenecks, as they will likely be more of an issue.

What is your take on the bus issue: are you going for PCI or PCI Express?

 

Not sure what you mean by "bus bottlenecks"... so I guess I need to read more. :)

If you are talking about gigabit connectivity, it will be on the motherboard already.

 

Why would you pour money into an ultimate server and then skip the UPS? Get the UPS and save your gear from spikes and your data from power outages.

 

Oh, I was talking about the redundant PSUs being "total overkill".

 

I already have four UPSes running and they are not very heavily loaded; I will just plug the server into one of those.

Link to comment

OK, so now I have read about the bus bottleneck thingy.

It can be somewhat "cured" (or at least reduced) by spreading the drives over several PCI slots.

 

I wonder how motherboards with lots of onboard SATA connections handle the data. Are they all connected to one "bus", or are they spread over several to increase the speed?

 

Take something like the Gigabyte GA-P35-DS3R, which has 8 onboard SATA connectors. I wonder if they are all connected through one "bottleneck"? The motherboard is mentioned in this thread: http://lime-technology.com/forum/index.php?topic=788.0

Link to comment

OK, so now I have read about the bus bottleneck thingy.

It can be somewhat "cured" (or at least reduced) by spreading the drives over several PCI slots.

 

I wonder how motherboards with lots of onboard SATA connections handle the data. Are they all connected to one "bus", or are they spread over several to increase the speed?

 

Take something like the Gigabyte GA-P35-DS3R, which has 8 onboard SATA connectors. I wonder if they are all connected through one "bottleneck"? The motherboard is mentioned in this thread: http://lime-technology.com/forum/index.php?topic=788.0

 

I believe all desktop mobos use a single bus, so spreading over multiple slots may not do what you want.  Just get one with the SATA sitting on the faster bus and you'll be fine.

 

 

Bill

Link to comment

To your ultimate server, I would add:

  7.  Add support for 1 or 2 hot swap drives.  Drives fail, and when they do, performance drops tremendously until you have replaced and rebuilt.  If you are away for the weekend or on vacation and a drive fails, you want it to replace the failed drive automatically and restore your performance ASAP.

  8.  All SATA300 drives (or better).  No slower SATA150 or IDE drives.  As to the 10K drives mentioned above, they are currently too small for most of us.  In addition, there are reviews of the new WD 750 GB drive that report it faster in some tests than the Raptors!

 

I agree about dropping all use of the PCI bottleneck.  I think Bill is referring to the fact that all I/O ends up at the Intel northbridge or AMD equivalent, but there are a number of busses, with very different speeds.  All PCI connected devices have to share a single PCI bus, which is slower than each single lane of a PCI Express bus.  For comparison purposes, see the 'Appendix IV. List of Bandwidth' near the bottom of http://www.avsforum.com/avs-vb/showthread.php?t=710828.

 

I'm not an expert, so I apologize in advance if there are technical errors above.

 

Link to comment

I'm not comfortable with the idea of an ultimate server using unRAID.  I believe in using the right tool for the right job.  unRAID is superior to other RAIDs in certain uses, but is not optimal for high-performance uses.  If you want the fastest performance, then you probably want a striped RAID: RAID 0, 5, 6, or similar.  That brings up the subject of striped vs. un-striped; they both have their pros and cons.  The unRAID design is versatile, useful for many things such as a light media server, but I think it is absolutely the best for 2 uses: a backup server and storage of non-critical, gigabyte-sized recordings.

 

It is ideal as a backup server (and superior to all striped storage schemes) because of its higher resistance to data loss after drive failure.  If a RAID 0 loses 1 drive, a RAID 5 loses 2 drives, or a RAID 6 loses 3 drives, then ALL data is lost.  If an unRAID system loses 2 drives, there may be 2 data drives lost or possibly just 1 if the parity drive failed, but the rest are still intact and readable from almost all operating systems.

 

Consider an 8-drive array.  This table shows how many data drives' worth of data are still readable.

Drives lost    RAID 0    RAID 5    RAID 6    unRAID
     0            8         7         6         7
     1            0         7         6         7
     2            0         0         6       5 or 6
     3            0         0         0       4 or 5

 

(For those who are thinking that the RAID data could be recovered, 1. that's not guaranteed, 2. it's high cost and effort and long delay, and 3. if the data is that important, it should be backed up elsewhere, making it pointless to spend money and time on data recovery!  I fail to see how a data recovery service could ever be justified, if there has been sensible data management, with the appropriate data redundancy.)

 

The second use for which it is superior to other RAID or backup systems is the storage of very large media recordings of a non-critical nature.  Some things like home movies, photos, and purchased music are important enough to need redundant storage.  They should be backed up locally, and offsite.  But I consider ripped DVDs and CDs and most TV recordings to be non-critical, and too big to back up, practically speaking.  Ripped DVDs can be ripped again, and most PVR-produced TV recordings are not of sufficient importance to require the cost of additional terabytes of backup storage.  To lose a drive full of Bones, CSI, and House episodes might be a sad moment, but not a real catastrophe.  So unRAID's parity-protected, single-drive-failure-protected design is ideal for them.

 

(My apologies to the original poster for my perhaps taking this thread even farther astray.)

 

Link to comment
To your ultimate server, I would add:

  7.  Add support for 1 or 2 hot swap drives.  Drives fail, and when they do, performance drops tremendously until you have replaced and rebuilt.  If you are away for the weekend or on vacation and a drive fails, you want it to replace the failed drive automatically and restore your performance ASAP.

 

Can UnRAID have hot swap drives for automatic replacement?

A good thing about hot swap and automatic rebuild isn't just saving time, but security. If I go away for the weekend and one drive fails, the rest are left "unprotected" should more drives fail.

 

All PCI connected devices have to share a single PCI bus, which is slower than each single lane of a PCI Express bus.

 

Wow, interesting. I thought each PCI connection had its own lane.

Anyone know how/where they connect the onboard SATA connections? Do they all share a PCI Express bus or a PCI bus?

Link to comment
I'm not comfortable with the idea of an ultimate server using unRAID.

 

I'm not trying to build the ultimate server, just the ultimate UnRAID server. :)

 

The unRAID design is versatile, useful for many things such as a light media server, but I think it is absolutely the best for 2 uses: a backup server and storage of non-critical, gigabyte-sized recordings.

 

The second use for which it is superior to other RAID or backup systems is the storage of very large media recordings of a non-critical nature.  Some things like home movies, photos, and purchased music are important enough to need redundant storage.

 

This is exactly what I will be using it for: a huge non-critical store for all my DVDs, HD DVDs, Blu-ray Discs, family photos and such. It has to be able to play a 1080p stream without stuttering; that is basically all the speed I need from it. I prefer overkill in speed, but "total overkill" is not necessary.

 

All my family photos and important stuff I will back up to DVDs (several copies), keep a few in my fireproof safe at home and another copy in another house (in case a huge emergency happens and the safe is lost). The safe is never locked, so any thieves breaking in can see that all that is in there is my backup DVDs and the paperwork for my company. I don't want them blowing it up or stealing it thinking it contains jewelry or money.

Link to comment

To your ultimate server, I would add:

  7.  Add support for 1 or 2 hot swap drives.  Drives fail, and when they do, performance drops tremendously until you have replaced and rebuilt.  If you are away for the weekend or on vacation and a drive fails, you want it to replace the failed drive automatically and restore your performance ASAP.

 

Can UnRAID have hot swap drives for automatic replacement?

A good thing about hot swap and automatic rebuild isn't just saving time, but security. If I go away for the weekend and one drive fails, the rest are left "unprotected" should more drives fail.

 

All PCI connected devices have to share a single PCI bus, which is slower than each single lane of a PCI Express bus.

 

Wow, interesting. I thought each PCI connection had its own lane.

Anyone know how/where they connect the onboard SATA connections? Do they all share a PCI Express bus or a PCI bus?

 

unRAID does not currently support hot swapping, but I believe it is on the Requested Features list, although below a number of more important items.

 

You are right, the quick restoration of full parity protection is very important.

 

I hope I didn't stretch my limited knowledge a little too far regarding there being a single PCI bus.  I did a little more searching and found a note that you could have multiple PCI busses, but it may never have been implemented.  Someone with more expertise may correct me.

 

A good link (somewhat old) that simply explains busses, PCI in particular:  http://computer.howstuffworks.com/pci.htm

 

A Wikipedia article on PCI-Express:  http://en.wikipedia.org/wiki/PCI_Express

 

Both have great links at the bottom for more learning.

 

Here is a diagram of the Intel 945 chipset, as an example of how the busses interact:  http://www.intel.com/products/i/chipsets/945g/945g_diagram.gif.  The grey connecting lines could be thought of as a 'bus', although those labeled with an 'each' should be multiple grey lines.  The PCI connection does not have an 'each'.  Confusingly, they labeled the SATA connection as 3 Gb/sec (small b), which corresponds to 300 MB/sec.  Although probably a simplistic view here, a picture is worth ...

 

Link to comment