Ultra Low Power 24-Bay Server - Thoughts on Build?


Pauven


Incidentally, virtually all UnRAID users who comment on their power consumption refer to "idle power" as the power their server draws when all of the drives are spun down and the server is not in active use (i.e. it's idle).

 

I agree that, going by the disk manufacturers' specification sheets, that means the disks are in "standby" when the system is in that idle state ... but clearly that semantic distinction hasn't had any impact on folks understanding what's being discussed in this thread  8)

 

 

Link to comment

 

 

Today I tested both the standby and spinning-idle power consumption of my drives.  For the Samsung drives, I tested 4 drives of each model simultaneously; for the WD Red drives, I tested 10 drives simultaneously.  I took the total power increase and divided it by the number of drives tested to come up with the per-drive average.

  • Samsung F2 1.5TB:  1.20W Standby, 4.75W Spinning Idle
  • Samsung F3 2.0TB:  1.30W Standby, 5.00W Spinning Idle
  • WD Red NAS 3TB:    1.25W Standby, 4.50W Spinning Idle

I was very disappointed.  I'm not sure if some of the extra power consumption for the WD Reds is related to SAS backplane connectivity, or even to the 2760A maintaining connectivity, but I expected the Samsung drives to come in about 0.4 W higher than the Reds, since the Samsungs are rated at 1W standby.
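 

For reference, each per-drive figure above is just the increase on the watt meter divided by the number of drives under test; something like this (the readings below are made-up placeholders, not my actual numbers):

    # Hypothetical meter readings -- substitute your own measurements
    BASELINE=70.0       # watts with the test drives disconnected
    WITH_DRIVES=82.0    # watts with 10 drives connected and in standby
    NUM_DRIVES=10
    awk -v b="$BASELINE" -v w="$WITH_DRIVES" -v n="$NUM_DRIVES" \
        'BEGIN { printf "%.2f W per drive\n", (w - b) / n }'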

 

 

Why not run the test with power to the drives but the data cables disconnected?  That would identify whether there is any overhead from the SAS backplane connectivity, as you have suggested.

Link to comment

Why not run the test with power to the drives but the data cables disconnected?  That would identify whether there is any overhead from the SAS backplane connectivity, as you have suggested.

 

I tried this, but it gave me no control over the drives, so I couldn't be sure what state each drive was in (spinning, standby, some weird hybrid state), and I didn't feel the numbers were reportable (they were in the neighborhood of 1W).  I only have two SATA headers on this motherboard, and I don't feel two drives give me enough accuracy when measuring sub-1W draws.
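 

For whoever repeats the test over a live SATA connection, hdparm can both force a drive into standby and report what state it's actually in, which takes the guesswork out of what the meter is seeing.  A minimal sketch (the device name is a placeholder):

    # Put a drive into standby, give it a few seconds, then confirm its state
    hdparm -y /dev/sdX        # issue STANDBY IMMEDIATE
    sleep 10
    hdparm -C /dev/sdX        # should report "drive state is:  standby"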

 

Since other users on the forum, like garycase, have the same WD Red 3TB drives, hopefully one of them will be nice enough to perform this test.  Since garycase has 6 Red 3TB drives connected directly to his MB via SATA, I think his test would give us the most accurate WD Red 3TB standby readings.

Link to comment

On my 2 SAS2LP servers, upgrading to RC14 from RC13 resulted in about a 30% speed reduction in parity checks, even with the different parameters.  I know he downgraded the kernel again due to a bug (the same bug I upgraded to RC14 to fix), and that's probably why.  Hope RC15 uses a newer kernel with that bug fixed. :(

Link to comment

On my 2 SAS2LP servers, upgrading to RC14 from RC13 resulted in about a 30% speed reduction in parity checks, even with the different parameters.  I know he downgraded the kernel again due to a bug (the same bug I upgraded to RC14 to fix), and that's probably why.  Hope RC15 uses a newer kernel with that bug fixed. :(

 

This may be due to the new memory settings limiting memory to 4095MB, which Tom made the default this go-around.  Have you tried the same test under both start-up options?
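 

For anyone who wants to try it, the mem switch is just a kernel boot parameter added to syslinux.cfg on the flash drive; something along these lines (the label name and exact layout vary by release, so treat this as a sketch):

    label unRAID OS (4GB memory limit)
      kernel bzimage
      append mem=4095M initrd=bzroot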

 

I'm testing RC14 now too.  Running a parity check with the mem=4095 setting, and it does seem slower, but not 30% slower.  I'm seeing about 5%-10% slower, but won't know for sure until it is complete. 

 

I have exactly 4GB RAM installed, so I'm probably best case either way.  I see you have 8GB installed, so these memory settings are probably having a larger impact on you than they would on me.

 

I never tested RC13, outside of turning on the server and measuring power consumption - I never actually started the array, so I have no idea if RC13 would have affected performance.  So far, RC14 without the mem settings performs like RC12a to me.

Link to comment

FWIW, I ran a parity check with RC14 as soon as I installed it, using the same tuning parameters I had settled on after trying various options [1920/768/896] and without the 4GB constraint (the system has 8GB installed).  It took 7:42 (9 seconds longer than my RC13 time ... but I MAY have been using 1024 as the last parameter then).  I tried up through 2560/768/1280 when I was "playing" around with it => my system's spent a LOT of the last 2 weeks running parity checks !!
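 

In case anyone wonders where those three numbers go: they're the md tunables (md_num_stripes / md_write_limit / md_sync_window, if I recall the parameter names right), which can be changed on the fly with mdcmd before starting a check; roughly:

    # Applying the tuning values via mdcmd (parameter names from memory)
    mdcmd set md_num_stripes 1920
    mdcmd set md_write_limit 768
    mdcmd set md_sync_window 896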

 

I'm happy enough with 7:42, so I'm going to leave everything as is.    I did boot with the 4GB parameter and STARTED a parity check, but it was clearly running slower than without the 4GB switch, so I just killed it.

 

Link to comment

I also found the fan control script "unraid-fan-speed.sh" which xamindar wrote here: http://lime-technology.com/forum/index.php?topic=5548.msg52398#msg52398

...

You simply adjust a few parameters in the script, set it up as a cron job every so many minutes, and it reads the temps of all your drives, picks the highest temp, and sets the fan speed based on that.  This is exactly what I wanted!
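 

The basic idea looks something like this (just a sketch of the approach, not xamindar's actual script -- the drive list, PWM path, and thresholds are placeholders you'd adjust for your own hardware):

    #!/bin/bash
    # Sketch of the approach only -- not the actual unraid-fan-speed.sh.
    DRIVES="/dev/sda /dev/sdb /dev/sdc"
    PWM=/sys/class/hwmon/hwmon0/device/pwm1    # adjust for your sensor chip

    HIGHEST=0
    for d in $DRIVES; do
        # Skip spun-down drives so the temperature query doesn't wake them
        hdparm -C "$d" 2>/dev/null | grep -q standby && continue
        t=$(smartctl -A "$d" | awk '/Temperature_Celsius/ {print $10; exit}')
        [ -n "$t" ] && [ "$t" -gt "$HIGHEST" ] && HIGHEST=$t
    done

    # The original three-step behaviour: Off / Low / High
    if   [ "$HIGHEST" -ge 38 ]; then echo 255 > "$PWM"    # High
    elif [ "$HIGHEST" -ge 30 ]; then echo 100 > "$PWM"    # Low
    else                             echo 0   > "$PWM"    # Off
    fi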

 

I was running a parity check using 5.0 RC15 today, and I was surprised that my fan speed would jump from silent to full blast when my drive temperatures increased from 37 degrees to 38 degrees (38 being the temperature value I had set for triggering High speed).

 

So I checked out the unraid-fan-speed.sh script, and I found that it didn't have any linear speed ramping for the fans.  Basically, it gave you three speeds:  Off, Low, and High.

 

So I added some logic to the script to gradually ramp the speeds between Low and High as the drive temperatures increased.  I updated the original thread with the new script v0.5, here:

 

 

After my changes, I was able to increase the High Temp from 38 to 40, and even with the higher max temp the new speed-ramping logic kept my drive temps at 35 or below during a parity check.  So my drive temps have actually dropped 3 degrees (they were hitting 38 before), and the fans are much quieter now since they aren't running full-blast.
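 

To make the ramping concrete, the logic amounts to a simple linear interpolation between the Low and High fan speeds; here's a rough sketch of the idea (an illustration only, not the v0.5 code itself -- the thresholds and PWM values are placeholders):

    #!/bin/bash
    # Illustration of the ramping idea only -- not the actual v0.5 code.
    HIGHEST=${1:-36}          # hottest drive temp, e.g. passed in as the first argument
    PWM=/sys/class/hwmon/hwmon0/device/pwm1    # assumed PWM control file
    LOW_TEMP=35;  HIGH_TEMP=40                 # ramp between these temperatures
    PWM_LOW=100;  PWM_HIGH=255                 # fan duty cycle at each end

    if [ "$HIGHEST" -le "$LOW_TEMP" ]; then
        SPEED=$PWM_LOW
    elif [ "$HIGHEST" -ge "$HIGH_TEMP" ]; then
        SPEED=$PWM_HIGH
    else
        # Linear interpolation between Low and High instead of an abrupt jump
        SPEED=$(( PWM_LOW + (PWM_HIGH - PWM_LOW) * (HIGHEST - LOW_TEMP) / (HIGH_TEMP - LOW_TEMP) ))
    fi
    echo "$SPEED" > "$PWM"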

 

-Paul

Link to comment
I updated the original thread with the new script v0.5, here:

 

 

After my changes, I was able to increase the High Temp from 38 to 40, and even with the higher max temp the new speed-ramping logic kept my drive temps at 35 or below during a parity check.  So my drive temps have actually dropped 3 degrees (they were hitting 38 before), and the fans are much quieter now since they aren't running full-blast.

 

-Paul

 

Paul, can you elaborate on how to set up the new script?  Maybe start a new thread so that your update doesn't get lost in the shuffle? 

Link to comment

Rebuilds on the Atom are painfully slow. Mine take eleven days...

 

Sent from my Nexus 4

 

That is painful!  Sounds like something is wrong with your build, though, and is probably fixable.  Have you posted a thread somewhere on this issue?  If so, share the link.  If not, create one.  I would be interested in helping you troubleshoot, but not in this thread.

 

-Paul

Link to comment

Paul, can you elaborate on how to set up the new script?  Maybe start a new thread so that your update doesn't get lost in the shuffle?

 

In the thread I linked to above, Guzzi and aiden both replied that they are using a more sophisticated fan script (they helped write it), and from looking at it I think it is the way to go.  aiden has a link to it in his signature.

 

In the script I was using, I had to edit several variables in the script itself (instructions are inside the script), including specifying the drives and the path to the PWM fan control file.  Once all the variables are properly configured for your system, you simply run the script, or call it from a cron job (I call it every 5 minutes). 
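 

The cron entry itself is just a standard five-field line; the path below is a placeholder for wherever you keep the script:

    # Run the fan script every 5 minutes (script path is hypothetical)
    */5 * * * * /boot/custom/unraid-fan-speed.sh >/dev/null 2>&1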

 

Of course, you have to have PWM fan monitoring and control already set up on your system before you can even think about using these scripts, and that's a whole topic in itself.  aiden's script helps with a lot of that, so it's a better path.

 

I'm not completely satisfied with the script I'm running, and to be honest the other script also has a few features I don't like, so I'm contemplating writing some more features into this script and making it available in a separate thread.  Not sure if I will bother to do it, though... no promises.

 

But for now, that other script is probably best.

 

-Paul

Link to comment

Rebuilds on the Atom are painfully slow. Mine take eleven days...

 

Sent from my Nexus 4

 

Something's not right then.  They shouldn't take very long at all.  My system does parity checks in 7:42 ... and a rebuild should be in the same ballpark (haven't done any, so don't know for sure -- but the computations are essentially the same as a parity check).

 

Link to comment

Your power stats are impressive.  Are you running any addons?

 

I'm running unMENU, CacheDirs, and a few packages installed from inside unMENU (Screen, LSPCI, the UPS support package, and maybe a couple more).  I don't do any of the 'Extra Credit' stuff, like Plex, torrents, SQL, etc.  I also don't do Simple Features.

Link to comment

Do you think you could have gotten the idle power lower?

 

The easy answer is YES, but not without some sacrifices.  As a quick reminder, the very first post in this thread has been updated with my power consumption figures.

 

The biggest power consumer is the HighPoint 2760A, at 28W.  Other controllers have been reported to consume lower wattage even when not idle, so this might be the biggest opportunity.  These other controllers are more expensive, though, so at this price point it is hard to beat 28W.  Alternatively, port multipliers may be more energy efficient (not sure, haven't tested), and affordable, but sacrifice drive bandwidth, which may lead to longer parity checks.

 

In my testing, each drive consumes ~1.15W in standby, noticeably higher than the published specs.  This extra power consumption might be a result of general SAS connectivity (someone with SATA only, no SAS, could test and report), the backplanes in my X-Case chassis, or even the 2760A using a little extra power for each connected drive.  It may be that a SATA-only setup would be more energy efficient (the sacrifice here being cable management, and finding enough SATA ports for 24 drives).

 

While the motherboard + CPU + Memory was very efficient in my system, idling at 17.5W, other configurations are even more efficient.  Take, for example, garycase's setup which consumes 20W with 6 WD Red 3TB drives.  If each drive consumes 0.6W (published specs), then his MB idle wattage (without drives) is about 16.5W - a watt better than my configuration.  If each drive consumes 1.15W (my SAS test results, same drives) then his MB idle wattage is closer to 13W!  His MB sacrifices some PCIe bandwidth, though, so the utilized controller card may not actually work at full speed, and may sacrifice parity check/rebuild speed.  Also, that CPU is less powerful, so it may not be up to the demands of running 24 drives plus numerous popular plugins (I've never seen it done, so I can't say one way or the other).
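 

Spelled out, that back-of-the-envelope math is:

    20.0W measured - (6 drives x 0.60W) = 16.4W, so roughly 16.5W for the board (published specs)
    20.0W measured - (6 drives x 1.15W) = 13.1W, so roughly 13W for the board (my SAS measurements)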

 

garycase... I really wish you would do this test, simply unplugging the power to your drives and measuring your idle wattage.  This would give us insight into SATA standby wattage vs. SAS standby wattage, and also show just how low your MB/CPU combo idles.

 

Lastly, the three 120mm fans in the X-Case chassis consume 6.5W at idle (lowest PWM speed), which is still higher than other fan setups at full speed - though arguably my cooling is superior to those setups.  So a few more watts could be saved with a different chassis/fan configuration, though possibly at the expense of cooling capability.  The best solution here would be the ability to turn fans completely off when not needed, something my X-Case fans don't seem to be capable of doing.

 

So if I made 3 known changes (more efficient MB+CPU, more efficient 24-port SAS controller, turning off case fans at idle), that would save at least 21 watts at idle (maybe quite a bit more), bringing my 81W idle down to 60W.  So yes, it is possible to idle lower.  But this would be more expensive (about $400-$600 more for the controller and MB/CPU), and I might be sacrificing parity check/rebuild performance due to limited PCIe bandwidth.

 

With careful selection of components, you might be able to overcome these limitations today.  Unfortunately, my testing budget is exhausted, so I will live with my current build for a long, long time.  Without any regrets!  8)

 

-Paul

 

 

Link to comment

I would also like to build a low-power unRAID server, and from what I've read here on the forum, good choices for now seem to be:

 

1. A Supermicro Atom motherboard, preferably S1260-based (since this Atom supports ECC RAM).

2. An Asus P8H61-MX mobo with a Celeron G1610 CPU.  The advantage is that this combo is very cheap, and the G1610 even supports ECC RAM; the disadvantage is that the mobo doesn't support ECC, so maybe I'd have to combine the CPU with a Supermicro mobo (which then again raises the price).

3. An Asus M5A78L USB mobo combined with an Opteron 3350 HE: a very cheap yet very solid, expandable combo with ECC support.

 

Or I could just wait for mobos with the integrated AMD X2150 CPU (which seem to be the wet dream for a file server, according to the specs).

 

Since I'm not in a hurry, should I wait for the AMD platform, or go for one of the three already-available solutions?

 

 

Link to comment

Since I'm not in a hurry, should I wait for the AMD platform, or go for one of the three already-available solutions?

 

Only you can answer that question.  Personally, I'm VERY happy with the D525 SuperMicro board, and would use the same board if I were building another small server.  SuperMicro does make a board with the S1260, and while it does support ECC (which I'd love to have), it has (IMHO) a major flaw: it only has 4 SATA ports AND the one slot is a PCI slot !!  [ http://www.superbiiz.com/detail.php?name=MB-X9SBAAF ]  My trusty little D525 board has 6 SATA ports and a PCIe x4 slot (x16 physical, but runs at x4).  [ http://www.superbiiz.com/detail.php?name=MB-X7SPA5 ]

 

The S1260 IS an attractive processor -- TDP 4 watts lower than the D525; PassMark score 40% higher than the D525; and ECC RAM support.  I suspect the X9SBAAF may even draw slightly less power than my X7SPA ... but NOT after you add a PCI card to support additional SATA ports ... and the number of ports you could reasonably add is 2 at full bandwidth, or 4 at reduced per-drive bandwidth.  With the X7SPA you have 6 ports on board, and can add at least 8 more drives at full bandwidth with the PCIe x4 capability [you could even use a 2760A to support 24 more drives, although it would be very bandwidth-limited at x4 speeds when doing parity checks].

 

Link to comment

garycase... I really wish you would do this test, simply unplugging the power to your drives and measuring your idle wattage.  This would give us insight into SATA standby wattage vs. SAS standby wattage, and also show just how low your MB/CPU combo idles.

 

Paul =>  somehow I overlooked this post.  I will do exactly that sometime in the next few days and post the results.

 

Link to comment
