My Customised Fractal Node 304 SFF Build



Build’s Name:  My Customised SFF Home Server Build
Full Spec:  PartPicker Link
Usage Profile:  Unraid server for containers, VMs and storage

 

Time to upgrade my home server again. I decided to downsize from my current Fractal Design XL to something smaller and more power efficient, and went with the Fractal Node 304 case. I'm moving from 12x 8TB disks with dual parity to 6x 20TB disks with single parity, so everything will fit. Using a smaller case will mean taking up a lot less space in my small home office!

 

Here's a picture of the finished build, with a rather long build log below:
xpwu9Nzl.jpeg

 

________________________________________________________________________


So then, started off by ordering the case. I managed to swap the white HDD caddies for black ones with a colleague, which looks much better.
G3k4cE7l.jpeg

 

Then swapped out the Fractal fans for black Noctua A9 and A14 Chromax fans. These look slightly better and are much quieter.
j2YrL3tl.jpeg

 

Reused one of the LSI 9207-8i HBAs from the old server. The bracket is powder coated black and there's a Fractal R3 40mm fan on the heat sink. The fan is mounted using black nylon nuts and bolts, reusing the heat sink's mounting holes to keep things neat and tidy. Updated the firmware while I was at it.
uXvoSwdl.jpeg
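
For anyone doing the same, these cards are normally flashed with the LSI/Broadcom sas2flash utility. A rough sketch of the process is below; the firmware and boot ROM file names are just placeholders, use whatever comes in the firmware package for your card.

sas2flash -listall                            # list SAS2 controllers and the firmware version they're running
sas2flash -o -f firmware.bin -b mptsas2.rom   # advanced mode: flash the IT firmware and boot ROM

If you don't boot from the card, the boot ROM part is optional.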

 

Installed a Corsair RM650 (2021) PSU and realised the motherboard ATX cables were pretty long. Time consuming, but I re-pinned and braided the cables to get the optimal length.
3QhQhgnl.jpeg

 

RLH1Rv5l.jpeg

 

The USB cables from the front IO to the motherboard were also pretty long, so I found some shorter ones on AliExpress, soldered these onto the original PCB and braided the cable.
iJHau2Sl.jpeg

 

i7jrW2vl.jpeg

 

Also shortened and braided a few other cables, like the power/reset button and HDD activity cables. I also removed the front panel audio cable as it's not needed for a server.
KxqVHcul.jpeg

 

Not a massive problem, but I noticed that the power cable orientation meant the cable at the PSU end pointed up and needed to loop round, which looked messy. So I found a left-angle IEC cable (that's a thing), braided it and re-terminated it at the case end. Now it points down and runs along the bottom of the case, much tidier.
jbQfBkDl.jpeg

 

AdQlb56l.jpeg

 

Next up was the motherboard's silver IO shield, which didn't look brilliant on the black case. I couldn't find a black one and thought about 3D printing one, but ended up just powder coating the original.
zWnACuQl.jpeg

 

Came out really well and looks much better.
9dNt2zZl.jpeg

 

Installed everything on the motherboard and made a custom length, braided cable for the CPU fan. Did the same for the two front fans and exhaust fan.
YSgEemll.jpeg

 

The case takes 6x 3.5" HDDs, which would all be filled with my 20TB disks, so I needed somewhere to install the 2x 2.5" SSDs I use as cache drives. The easy option would be to mount them on the outside of the HDD caddies, but where's the fun in that! I decided to make my own bracket to mount them both on the side of the case.
ZxEDrkDl.jpeg

 

Fabricated these out of aluminium sheet and powder coated them black.
GbnHMnGl.jpeg

 

Drilled two holes in the bottom of the case, then used some black, low profile bolts to secure the bracket. These are hidden by the plastic feet mounting covers that run round the bottom edge of the case, so can’t be seen.
bWx8nL2l.jpeg

 

Inside view of the bottom of the case, where the bracket is secured. I used black nylock nuts and black washers to keep it looking original.
YcNEkwjl.jpeg

 

Drilled two more holes at the top of the case and secured the bracket using rivnuts and some more low profile bolts with black washers. These were needed to make sure the case top fitted without snagging.
onhAi6ql.jpeg

 

Made some custom-length SATA power cables for the HDDs to keep things tidy.
2RJmsVGl.jpeg

 

Then connected the HBA's SATA data cables. I forgot to take a picture with the cables tied together, but it looks tidy.
wK32rtXl.jpeg

 

Swapped the remaining PCI cable to a black one and all done!
xpwu9Nzl.jpeg

 

All sealed up, shame no one will ever see the hard work that went into the build! A fun project though, so worth it for me!
HEW5yETl.jpeg

 

________________________________________________________________________


Well done for making it to the end of this post!

 

I'm pleased with how the build turned out. My old server used to average 190W power draw; this one uses 110W. To be honest, I was hoping for a little lower, so I need to do some troubleshooting when I have some time.

 

I think the HBA is stopping the system from reaching lower C states, so I may swap it out for an ASM1064-based card in the future and check things out with powertop.
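
For reference, the powertop checks I have in mind are just the stock ones, something like this (run as root):

powertop              # interactive view; the Idle Stats tab shows package C-state residency
powertop --auto-tune  # apply all of powertop's suggested power management tunables

The auto-tune changes don't persist across a reboot, so anything that helps would need to go into a startup script.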

________________________________________________________________________

Had a bit of spare time to troubleshoot the rather high 110W average power usage today.

 

Using powertop, I can see the ZEXMTE USB Bluetooth adapter I'm passing through to my Home Assistant VM is stopping Pkg (HW) from reaching any C state. When I removed it, I was able to get to C3, so I'll have to think about other options. Maybe I'll look at using the onboard Bluetooth, which would mean re-enabling the WiFi adapter as it's shared with that. I'm running the nice and stable 6.11.5; I should probably update to 6.12.8 too, as it's supposed to bring better driver support, which might help.

 

Also realised that when I updated the BIOS recently, all the BIOS settings I'd made had reset, such as disabling unused devices, customising the PL1 and PL2 limits, etc. With the settings applied again, plus a few more ASPM-related ones, average power draw has dropped to 105W. I expect this to drop further, as that's an average of the last few hours and it will be lower overnight.
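
For anyone else chasing ASPM, it can also be sanity checked from the OS side rather than just trusting the BIOS menus. Roughly like this (root needed for the full lspci output):

cat /sys/module/pcie_aspm/parameters/policy   # available ASPM policies, the active one shown in brackets
lspci -vv | grep -i aspm                      # per-device ASPM capability (LnkCap) and whether it's enabled (LnkCtl)

Any device that reports ASPM disabled on its link can keep the package out of the deeper C states.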

 

Expect more power savings in the future, and I'm going to buy a 6-port ASM1166-based card to replace the HBA that's blocking the low C states.

________________________________________________________________________

Thanks for this post, 1471.  I'm heading down the same path with the Asrock, but plan on keeping it simple on the storage running Unraid. 

 

I'm planning on using a single 2TB NVMe (docker, VMs, and Plex) then 2x 18TB drives to start off with no parity.  Everything is automated to offload weekly to a Synology NAS for backup, then I do offsite backup to Backblaze from the Synology.  So parity/redundancy/availability isn't so important to me, and also it's my understanding that if I'm running a 1G network, the write cache isn't that beneficial for me (using the NVMes).

 

Can you please share why you went with the dual NVMes (I'm assuming for write cache) and the SSDs in your setup?

 

I'm trying to think my build out a bit further and wondering if I'm missing anything or failing to understand such builds as yours.

________________________________________________________________________

No problem. Every use case will be different, yours sounds like a solid plan though.

 

I use one of the NVMes for appdata storage and the other for VMs.

 

The two SSDs are mirrored and used as a cache drive for the array. I also use it for some Docker container storage, which is why it's mirrored.

 

All are just XFS for now as I'm currently running 6.11.5, but I'll move over to ZFS when updating to 6.12.8. I've been holding off updating as I want to continue using macvlan, which has been problematic in the 6.12.x releases. There's a workaround for that in 6.12.4, but I've not been brave enough to try it yet as I've not had much spare time for troubleshooting.

 

Your backup plan sounds good too. I'm using Backblaze B2 with Duplicacy, which works well enough.
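
If it's any use, the Duplicacy CLI side of it is pretty minimal. Something along these lines, where the snapshot ID and bucket name are just examples, and it prompts for the B2 key ID and application key on first use:

duplicacy init server-backups b2://my-backup-bucket   # initialise the repository against a B2 bucket (run from inside the directory you want backed up)
duplicacy backup -stats                               # run a backup and print upload statistics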

________________________________________________________________________

Thanks for the response.  So to get some clarification on your build, your SSDs are in the write cache mode, correct?  What type of network are you running?  1G or 2.5G or 10G?

 

I'm just trying to see if I'm missing something in my approach - I'm considering putting in dual SSDs in mirrored mode as well, but I'm not fully sure if it's worth it.  I can always put in a second NVMe for write cache (but I'm not running a parity drive) - doing so would help local writes from the apps/VMs to the machine, but I can achieve that with the SSDs and keep some sort of redundancy.

 

But, right now with my 1G network, accessing the server remotely and such won't get me much advantage - however, if I'm using it locally doing all sorts of stuff, the dual SSDs for write cache might be worth it (but I'm not running parity - yet....).  

 

Anyway, thanks for reading. I plan on putting the hardware together this weekend, will spin up Unraid at some point after that, and start playing with it to see if I should layer in the dual SSDs and stick with the one NVMe.

________________________________________________________________________

Yea. The 2x SSDs are mirrored for redundancy and used as cache for the array. New files are written to the cache and moved over to the array on a schedule; more info on how that works HERE.

 

I wouldn't normally bother with redundancy for a cache drive, but in my use case I also use it for Docker container storage, so it's something I needed. The NVMe for appdata and the one for VMs don't need redundancy as these are backed up weekly to the array.

 

Running a 1G network, more than enough for me. Good luck with your build!

 

 

 
