Running an Energy Efficient Unraid Server



6 hours ago, mgutt said:

Maybe you can reuse those from the old one as it is the same brand.

Don't use any modular cables that didn't come in the box with the PSU without testing them with a meter first. Brand is irrelevant; some brands use different pinouts on some models. There are only a handful of actual manufacturers, and most PSUs are custom-spec rebrands.

Link to comment
10 hours ago, pille said:

So one just needs to know that, or how does one determine it?

The correct card has 6 SATA ports. ^^

 

And you should buy the PCIe x4 version. There are some x1 cards available, which would bottleneck the controller.
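
To put rough numbers on why the x1 cards hurt, here is a back-of-the-envelope sketch (the lane throughput is the standard PCIe 3.0 figure; the per-drive speed is an assumption):

```python
# Back-of-the-envelope bandwidth check for a 6-port SATA card (assumed figures).
PCIE3_LANE_MBPS = 985   # usable throughput of one PCIe 3.0 lane, ~MB/s
HDD_SEQ_MBPS = 250      # assumed sequential throughput of a modern HDD

ports = 6
demand = ports * HDD_SEQ_MBPS   # 1500 MB/s if all six drives stream at once

for lanes in (1, 4):
    supply = lanes * PCIE3_LANE_MBPS
    verdict = "bottleneck" if supply < demand else "plenty"
    print(f"PCIe 3.0 x{lanes}: {supply} MB/s for {demand} MB/s demand -> {verdict}")
```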

 

10 hours ago, pille said:

So this is 3 cables with 4 connectors each, adding up to 12 drives being able to be powered.

Not enough? You were talking about 11 HDDs, and your board has 6 onboard SATA ports, so onboard + ASM1166 = 12. If you're thinking about adding more HDDs: buy bigger ones instead. Using a huge number of small HDDs is extremely inefficient, as every HDD consumes up to 1.5W even while spun down.

 

11 hours ago, pille said:

That would add up to 24 drives, if that works voltage-wise.

The internet says the total maximum for a Molex cable is 120W. An HDD draws up to 30W at startup, but only for a very short time, which should not add any meaningful heat to the cable. So the maximum per cable is 4 to 5 HDDs, and you should use only a single Y-adapter if you want to power an additional HDD.
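
A quick sanity check on that budget (a minimal sketch; the 120W cable limit and 30W spin-up peak are the figures above, the steady-state draw is an assumed typical value):

```python
# Power budget for one Molex cable run (figures from the post above;
# the steady-state draw is an assumed typical 3.5" HDD value).
CABLE_LIMIT_W = 120   # quoted maximum for a Molex cable
SPINUP_PEAK_W = 30    # per-HDD peak while spinning up (brief)
STEADY_W = 8          # assumed per-HDD draw once spinning

print("Drives per cable if all spin up at once:", CABLE_LIMIT_W // SPINUP_PEAK_W)  # 4
print("Drives per cable at steady state:       ", CABLE_LIMIT_W // STEADY_W)       # 15
```

The simultaneous spin-up case is the limiting one, which is where the 4-5 drives per cable figure comes from.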

 

PS: I like these because you never run into the 3.3V DevSlp problem with enterprise drives, and you have a single plug to connect your HDD:

https://www.ebay.com/itm/352210844323

 

 

Link to comment
6 hours ago, JonathanM said:

Brand is irrelevant, some brands use different pinouts on some models.

This warning is fine, but in the end you won't be able to connect a wrong one, as Corsair uses different layouts of the square and round contacts in the plugs. And after doing a little research: the RMi, RMx and SF use the same cables. So his HX cables won't fit.

Link to comment
1 minute ago, mgutt said:

This warning is fine, but in the end you won't be able to connect a wrong one, as Corsair uses different layouts of the square and round contacts in the plugs. And after doing a little research: the RMi, RMx and SF use the same cables.

I see an average of one incident every month or two where someone posts about dead drives after reusing a modular cable. The consequences of getting it wrong are so severe that it warrants a blanket statement of NEVER, unless you are qualified to verify that it's OK. I never assume someone has those qualifications unless they say so up front, and I make sure to warn whenever someone makes a statement that could be construed as blanket permission to reuse modular cables.

 

Nothing personal, just trying to save as much data as possible.

Link to comment

Yeah, but that's not valid for Corsair. Nearly every model uses the same SATA power cables (red dots):

https://www.corsair.com/de/de/psu-cable-compatibility

 

And those that are not compatible won't even fit at all.

 

There wasn't even a relevant change from Type 3 to Type 4:

https://forum.corsair.com/forums/topic/156672-corsair-cables-4-gen-vs-3-gen/?do=findComment&comment=921020

 

So I was wrong. HX SATA cables can be used with an RMx power supply.

Link to comment
  • 4 months later...

My setup is:

QNAP TVS-473e

40GB RAM

4x 8TB Toshiba N300

2x 500GB M.2 SSD

External 6TB USB HDD

Nvidia GTX 1050 Ti

Noctua 120mm

2x Noctua 80mm

8-port PoE switch

OSMC Vero 4K (running as an AdGuard server) (PoE powered)

Linksys WRT3200ACM (PoE powered)

3x Deco M9 Plus Wi-Fi routers (PoE powered)

APC UPS powering it all

 

All of it idles at 80-90W, or 130-140W during a parity check. Happy to keep things under 100W! Everything runs around 30-40°C, and the processor gets to about 60-70°C under a decent load.

 

I had to modify the case somewhat to add additional fans to keep temps down, as the QNAP BIOS doesn't control fan speed unless it's running QNAP software, and Unraid doesn't have CPU-based fan control either. So I run the curves based on drive temps as best I can.
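
For anyone wanting to script something similar, a minimal sketch of a drive-temperature fan curve (assumes smartmontools is installed; the drive list, hwmon path and curve breakpoints are placeholders to adapt, it needs root, and the matching pwmN_enable must be set to manual mode):

```python
#!/usr/bin/env python3
# Crude drive-temperature fan curve (a sketch, not the poster's actual setup).
# DRIVES and PWM_PATH are placeholders: adjust them for your own hardware.
import re
import subprocess

DRIVES = ["/dev/sda", "/dev/sdb", "/dev/sdc"]      # placeholder drive list
PWM_PATH = "/sys/class/hwmon/hwmon2/pwm1"          # placeholder PWM node

def drive_temp(dev):
    """Pull the Temperature_Celsius raw value out of 'smartctl -A'."""
    out = subprocess.run(["smartctl", "-A", dev],
                         capture_output=True, text=True).stdout
    m = re.search(r"Temperature_Celsius\s+(?:\S+\s+){7}(\d+)", out)
    return int(m.group(1)) if m else None

temps = [t for t in (drive_temp(d) for d in DRIVES) if t is not None]
hottest = max(temps, default=35)

# Simple curve: 30% duty below 35 degC, ramping linearly to 100% at 50 degC.
duty = 30 + max(0, min(hottest - 35, 15)) / 15 * 70
with open(PWM_PATH, "w") as f:
    f.write(str(int(duty * 255 / 100)))            # pwm nodes take 0-255
```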

Edited by ricostuart
Link to comment
  • 3 months later...

Hey,

 

Main Server:

 

Supermicro X10SLL-F

Intel® Xeon® CPU E3-1230 v3

16GB ECC

PicoPSU

2x WD30-EFRZ

1x ST3000DM

1x SanDisk 1TB SSD

2x CT480BX

 

Idle at 27W (drives spun down)

 

 

Backup Server:

 

TerraMaster F2-221 (with Unraid Basic)

Intel® Celeron® CPU J3355

2x ST2000DM,

1x CT240BX (SSD)

 

Idle at 7W (drives spun down)

 

 

 

Link to comment
  • 2 weeks later...

The plan this summer was to build Server2, kjell, into a fairly power-efficient server on the latest tech. Spinners were changed to SSDs, the GPU was removed, and the server is now running at 100-110W. Autotune changed lines from Bad to Good but did not change power usage by more than 2-3W. Powersave is set in the BIOS as well as in the server settings. I had hoped for 70-80W at this point, but I guess water cooling with a pump drawing 20W+ and 10 fans, plus a HighPoint 8x NVMe HBA with 5 NVMe drives, needs some power.

 

I will change to passive cooling, remove the special/logs NVMe drives from the pool, and change to a smaller, hopefully more efficient power supply. No Corsair RM550 to be found, so maybe a Corsair RM750e V2.

 


Link to comment
On 9/2/2023 at 10:51 PM, frodr said:

The plan this summer was to build Server2, kjell, into a fairly power-efficient server on the latest tech. Spinners were changed to SSDs, the GPU was removed, and the server is now running at 100-110W. Autotune changed lines from Bad to Good but did not change power usage by more than 2-3W. Powersave is set in the BIOS as well as in the server settings. I had hoped for 70-80W at this point, but I guess water cooling with a pump drawing 20W+ and 10 fans, plus a HighPoint 8x NVMe HBA with 5 NVMe drives, needs some power.

 

I will change to passive cooling, remove the special/logs NVMe drives from the pool, and change to a smaller, hopefully more efficient power supply. No Corsair RM550 to be found, so maybe a Corsair RM750e V2.

 


 

I removed the HighPoint 1508, the 4x 1TB NVMe drives, the Intel T540-2 and a USB PCIe card. At first, power usage was exactly the same as before, but once the SATA drive went to sleep, consumption dropped to 60-65W. Tomorrow I will take down the water cooling: a pump running at 20W+ and 10 fans. After that, the only hardware left to change is the power supply, which today is a 1200W Xilence.

Link to comment
  • 2 months later...
On 9/5/2023 at 4:08 PM, frodr said:

 

I removed the HighPoint 1508, the 4x 1TB NVMe drives, the Intel T540-2 and a USB PCIe card. At first, power usage was exactly the same as before, but once the SATA drive went to sleep, consumption dropped to 60-65W. Tomorrow I will take down the water cooling: a pump running at 20W+ and 10 fans. After that, the only hardware left to change is the power supply, which today is a 1200W Xilence.


It's really impressive that you managed to reduce the power consumption of your server down to 60-65W, especially considering that you're still using a 1200W power supply. 

Link to comment

Backup-Server:

 

Fractal Node 304 case - fans replaced with 2x Noctua NF-B9 Redux 92mm/3-pin at the front and 1x Noctua NF-A14 140mm/4-pin PWM at the back, connected to the PWM header on the mainboard

Mini-ITX BKHD-N5105-NAS board with 6 onboard SATA ports (ASM1166) and a modded BIOS

32GB (2x16GB) Crucial RAM kit CT2K16G4SFD8266 - DDR4-2666 CL19

200W Inter-Tech 88882190 PicoPSU + 12V/120W power brick (no-name)

 

Drive temps are down to <25°C across everything now after the NF-B9 upgrade - the NF-B9 Redux are working lovely in this case ...

 

Array - spin-down after 15 minutes, Mover enabled every 8h + Mover Tuning plugin to move files > 2 days old:

 

2x18TB Seagate X20/18TB - ST18000NM003D

1x10TB WD White-Label @ 5400RPM - WDC_WD100EZAZ

1x8TB >SMR-Hell< Seagate Archive - ST8000DM004

 

ZFS mirror - cache + system (Docker + VM):

1x500GB NVMe - Samsung_SSD_970_EVO_500GB_S466NX0M914394B

1x512GB SATA SSD - Crucial_CT512MX100SSD1_14300CB78BC7

 

~15.8-16.8W currently in spin-down over the day, semi-idle - goes up to ~20W with a single disk (18TB) spun up, and averages ~45-48W under load with multiple disks, with very short peaks to about 60W ... (there is also staggered spin-up if you enable the SATA options, though I haven't verified that lately - might be interesting for PicoPSU users, see the sketch below ...)
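
That staggered spin-up option matters most on a PicoPSU build like this one; a rough budget sketch (the 120W brick rating and ~16W floor are from this post, the per-drive figures are assumptions):

```python
# Why staggered spin-up helps on a PicoPSU build (rough sketch).
BRICK_W = 120    # 12V brick rating from the build above (the real ceiling)
SPINUP_W = 30    # assumed per-HDD peak while spinning up
IDLE_W = 8       # assumed per-HDD draw once already spinning
BASE_W = 16      # measured semi-idle floor from this post

hdds = 4
all_at_once = BASE_W + hdds * SPINUP_W                  # every drive at once
staggered = BASE_W + (hdds - 1) * IDLE_W + SPINUP_W     # one drive at a time

print(f"Simultaneous spin-up: ~{all_at_once}W vs {BRICK_W}W brick")  # ~136W, over
print(f"Staggered spin-up:    ~{staggered}W vs {BRICK_W}W brick")    # ~70W, fine
```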

 

Hourly backups are stored via restic to the SSD cache - running restic-rest-server.
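
For reference, a minimal sketch of what such an hourly job can look like against a restic-rest-server (the repository URL, password file and source path are placeholders; the restic commands and flags are standard ones):

```python
# Hourly restic backup to a restic-rest-server, plus retention (sketch).
import os
import subprocess

env = {**os.environ,
       "RESTIC_PASSWORD_FILE": "/root/.restic-pass"}   # placeholder path
repo = "rest:http://192.168.1.10:8000/backups"         # placeholder URL

subprocess.run(["restic", "-r", repo, "backup", "/mnt/user/appdata"],
               env=env, check=True)
subprocess.run(["restic", "-r", repo, "forget", "--prune",
                "--keep-hourly", "24", "--keep-daily", "7"],
               env=env, check=True)
```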

 

Keep in mind that's only possible with a modded BIOS that unlocks all the options needed to enable GEAR2, enable ASPM, ... and properly let it go down to C8.
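
To verify the deep idle states are actually reached, powertop's idle report works, or a quick look at the standard cpuidle sysfs interface (a sketch; whether a C8 state is exposed at all depends on the platform and BIOS):

```python
# Print cumulative idle-state residency for CPU0 (standard cpuidle sysfs).
from pathlib import Path

base = Path("/sys/devices/system/cpu/cpu0/cpuidle")
for state in sorted(base.glob("state*")):
    name = (state / "name").read_text().strip()
    usec = int((state / "time").read_text())    # residency in microseconds
    print(f"{name:>6}: {usec / 1e6:10.1f} s")
```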

 

Apparently there's also an ECC support option to enable (which is seemingly on by default; maybe I accidentally did that) - that makes no sense to me, but I don't have any UDIMM/SO-DIMM ECC RAM myself, so I really couldn't test whether it works ... I doubt it does.

 

Write/read speeds average around ~200MB/s.

 

I'll share the BIOS later on in a separate thread going into the specific settings ... so that others can reproduce it.

 

Edit - BIOS shared here for the time being: 

 

 

 

Edited by jit-010101
Link to comment

Hi all,

It is only my 3rd post (I believe), but I thought I would share my experience with trying to set up an energy-efficient server... sorry for the long post!
It all started a while back when I had a Synology NAS (DS918+) that became a bit short on processing capacity.
In addition, because "Synology Surveillance Station" comes with relatively expensive licenses, and because I would have had to fully reconfigure my existing NAS array to dedicate 1 of its 4 HDDs just to recording the camera video streams, I decided to run the security cameras from a dedicated BlueIris machine instead (an OptiPlex 7040 micro PC: i7-6700T, 16GB, 250GB NVMe, 2TB HDD). On top of that, my son was using an old computer for gaming (i7-3770 with an Nvidia GTX 1060) which was starting to become a bit too slow for the type of games he played...

 

This meant I had to look after 3 different pieces of equipment (1 NAS, 1 tiny CCTV server, 1 gaming desktop), with an overall combined power consumption of just below 60-ish watts at the very least.

Then I discovered Unraid 🙂... and I decided to give it a proper go, with the following goals for this new server:

  • a Linux VM for HomeAssistant OS (previously I used the Docker installation, but the OS version is just much easier to manage and maintain)
  • a Windows VM dedicated to BlueIris for the security cameras (using a dedicated "surveillance type" HDD passed through for all video recording)
  • another dedicated VM for gaming (using Parsec, a dedicated GPU passthrough and some NVMe drives passed through) that can be started remotely by my son from a web page on the LAN (using RWSOL-Server for this)
  • at least the following container apps (around 30 containers at the moment):
    • a proper authentication solution (Authentik) allowing SSO (this was actually triggered by a discovery project I had to do anyway for work)
    • NginxProxyManager for most of the services exposed outside
    • Omada Controller to manage my access points and switch
    • Nextcloud (with OnlyOffice Document Server, Redis, MariaDB)
    • all the "Arrs" applications, so that I have a fully automated solution with Overseer as the frontend...
    • Plex Media Server with Intel iGPU HW acceleration for transcoding
    • some other apps...

Well, all of this is currently running on my new server, composed of:

  • an Intel i5-13600K
  • an MSI Pro Z690-A DDR4 motherboard (I already had access to some DDR4 sticks...)
  • 48GB of RAM
  • 1 array made of 4x WD Red 3TB (WD30EFRX), 1 of which is used for parity
  • 1 cache NVMe SSD: Samsung 970 Evo+ (500GB)
  • 1 Western Digital 4TB WD Purple Surveillance HDD for the video camera recordings => passed through to the BlueIris Windows 11 VM
  • 1 Western Digital Blue SN570 1TB NVMe drive => passed through to the gaming Windows 11 VM (OS + games)
  • 1 Samsung SSD 980 1TB NVMe drive => passed through to the gaming Windows 11 VM (more games storage)
  • 1 Asus GeForce RTX 3060 Ti Dual OC 8GB graphics card => passed through to the gaming Windows 11 VM, otherwise managed by the Unraid Nvidia-Driver plugin when the VM is not running
  • 5x 120mm fans (2 on the CPU cooler, 3 in the case)

With CPU pinning configured so that Unraid, the containers, and the BlueIris and HomeAssistant VMs use only the (8) E-cores, and the (6) P-cores dedicated to the gaming VM, the following observations can be made:

  • Total power at the socket was just around 77W with the (4) array drives spun down, of which 12W come from the GPU sitting in the P8 state after applying this script from fellow member @MeisterPilaf (many thanks, it works great!). This is with the server doing its BAU stuff, including recording (4) video camera streams to the "Purple" HDD, and the gaming VM not started (see the quick check after this list).
    • Measured with a Zigbee Xiaomi plug that monitors power.
  • The CPU can run really efficiently and cool when the configuration makes use of the E-cores, while keeping enough capacity in reserve for a great gaming experience when using the gaming VM.
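
Not the script referenced above, but a minimal way to confirm the card really sits in P8 at idle (assumes the Nvidia driver's nvidia-smi is available; pstate and power.draw are standard query fields):

```python
# Query the GPU's performance state and current power draw via nvidia-smi.
import subprocess

out = subprocess.run(
    ["nvidia-smi", "--query-gpu=pstate,power.draw", "--format=csv,noheader"],
    capture_output=True, text=True, check=True).stdout.strip()
pstate, power = (field.strip() for field in out.split(","))
print(f"GPU state: {pstate}, drawing {power}")   # expect "P8, ~12 W" at idle
```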

In the end, my son stopped using the VM and moved to an Xbox Series S, mainly because of various issues with anti-cheat implementations in online games, where VMs are not welcome 😞, so the GPU was sold to finance the console!
 

As of now, the server consumes around 64W, which is in my opinion very acceptable given the reserve of power available for future projects, and not far from the initial combined power consumption (when not gaming).
I thought of changing the whole HW to something more conservative, but I am not sure I want to go through that pain... any thoughts on all this? I know this project was a great learning opportunity, and having everything centralised is great and easier to maintain, but from a cost perspective it is difficult to fully justify.

 

### Update 30 Nov 2023 ###

I have now updated the BIOS on the mobo to the latest version, and also spent time going through the BIOS settings:
- disabled the onboard audio
- disabled the LED thingy
- changed all "auto" settings in the ASPM section to L1, L0sL1 and C10 where I could
- disabled Intel® Turbo Boost
- changed the CPU cooler type to "Boxed Cooler" => this changed the power and current limits

 

I also ran "powertop --auto-tune", and I noticed that all the isolated P-cores ("Isolated CPUs") I had configured in the CPU pinning settings never went below 3.5GHz... Then I realised this was because those cores are no longer managed by the Unraid OS at all, so they never drop into a lower power state even though they are not in use (the VM that uses these P-cores is currently not started).
So I reverted that setting to remove all CPU isolation, and once that was done, the P-cores finally dropped into the C7 state at a very low frequency (the sketch below is an easy way to watch this).
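
A small sketch to spot that behaviour (standard cpufreq sysfs; isolated cores stuck at full clock stand out immediately):

```python
# Print the current scaling frequency of every core.
from pathlib import Path

cpus = sorted(Path("/sys/devices/system/cpu").glob("cpu[0-9]*"),
              key=lambda p: int(p.name[3:]))
for cpu in cpus:
    freq_file = cpu / "cpufreq" / "scaling_cur_freq"
    if freq_file.exists():
        print(f"{cpu.name}: {int(freq_file.read_text()) / 1000:6.0f} MHz")
```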

Now the server is chugging along at ~46W, the CPU is at 32°C, and the mobo at 28°C 🙂

 

Edited by LoloNZ
Link to comment
