1471

  1. Thanks, yea probably the best option at this point. A job for next week.
  2. Thanks, the ASRock Z790 ITX motherboard I'm using has two onboard network cards, an Intel I226V (2.5G) and an I219V. I have the I219V disabled; tried swapping them round and no change. With a bit more testing I found that with the Win Server 2022 VM disabled I was getting ~20% C2. Not brilliant, but at least Pkg(HW) is working. Any idea what could cause that? Containers are fine; I tried spinning them up one at a time with the VM disabled and it didn't affect things. I've been through the BIOS quite a few times now and can't see any obvious settings that are wrong, so I suspect it's something else.
  3. Help! This has been a really useful thread and a bit of a rabbit hole! I've been working my way through applying changes to try and save some power, but I have an issue: I can't reach any C state with my array running.

     Relevant bits of my setup:
     - ASRock Z790 ITX motherboard
     - Intel 12500 CPU
     - 6-port MZHOU ASM1166 card
     - 6x 20TB WD 'white label' disks (1x as parity)
     - 2x Samsung 780 SSD (ZFS mirror, cache)
     - 2x 970 Evo Plus NVMe (ZFS mirror, cache)
     - Lots of containers
     - 2 VMs (Win Server 2022 and Home Assistant)

     With the above setup but with an LSI 9207-8i I was getting an average power draw of 113W, which seemed high. Running powertop reduced this to 111W, but I was unable to reach any C state. This was due to the HBA and a Bluetooth dongle I was using. So I did the following:
     - Swapped the HBA for the recommended MZHOU 6-port ASM1166 card and updated the firmware to the latest available, HERE for info. I haven't had any issues running 'powertop --auto-tune &>/dev/null' with this firmware (that I know of!)
     - Following the guide HERE, using SCEWIN_64, enabled 'Low Power S0 Idle Capability' and deactivated 'Native Aspm'
     - Removed the Bluetooth dongle (moved to ESP32 Bluetooth proxies)
     - Installed the 'Tips and Tweaks' plugin and set the CPU governor to power save
     - Made some BIOS changes, relevant ones below:
       - SR-IOV support - enabled
       - PCI Express native control - enabled (tried disabled first, no difference in power usage)
       - PCI ASPM support - L0sL1
       - PCH PCIE ASPM support - L1
       - DMI ASPM support - enabled
       - PCH DMI ASPM support - enabled
       - Deep sleep - enabled in S4-S5
       - GNA device - disabled
       - Onboard audio - disabled
       - Onboard WAN device - disabled
       - RGB LED - disabled

     Left it running for a few days and the average power was 95W, a reduction of 16W, which is about right with the HBA removed, but still quite high. Running powertop, it seems like I'm not getting any Pkg(HW) C states, which is odd. With the array spun down it's mostly running in C2, but as soon as I spin up the array it doesn't reach any C state. Checking the ASPM status of all devices shows this. Seems fine, although all the red can't be a good sign...

     Things I've tried and other info:
     - Shutting down the VMs - no difference
     - All disks are encrypted, but I don't think that makes a difference
     - powertop shows all tunable options as 'Good'
     - Feels like there's a BIOS setting I've missed (don't think I have though) or some hardware stopping things?
     - I have a UPS connected via USB

     From this thread I can see others have reached C8 or C10 with ASRock Z790 motherboards, so it seems possible. I've exhausted my limited expertise, so any help would be much appreciated!
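     For anyone running the same diagnosis, the C-state and ASPM checks described above can be scripted. This is a rough sketch only, assuming a standard Linux sysfs layout; it isn't from the original post, and paths or output formats may differ on your system.

     ```shell
     # Sketch only -- assumes standard Linux sysfs paths; adjust for your box.

     # Which C-states has CPU0 actually spent time in? (non-zero time = reached)
     for s in /sys/devices/system/cpu/cpu0/cpuidle/state*; do
       printf '%s: %s us\n' "$(cat "$s/name")" "$(cat "$s/time")"
     done

     # ASPM status per PCI link (same info as checking each device by hand)
     lspci -vv 2>/dev/null | awk '/^[0-9a-f]/ {dev=$0}
                                  /LnkCtl:/ && /ASPM/ {print dev; print "  " $0}'

     # Kernel ASPM policy currently in force
     cat /sys/module/pcie_aspm/parameters/policy
     ```

     If a device reports "ASPM Disabled" on its link while the BIOS says it should be enabled, that's usually the place to start digging.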
  4. It's not something I have the skills to contribute to, but thanks for looking; Varken has been really useful over the years. Sounds like updating Varken at some point would probably be the easiest option for the Ultimate UNRAID Dashboard? If not, prometheus-plex-exporter might be another route, though it would also need some adapting. Blog post about it HERE, GitHub repository HERE for info.
  5. Looks brilliant, will check it out properly when I have some time, thanks for creating and sharing! Just a heads up: Unraid-API was replaced with Unraid-API-RE. I've been using it in my dashboards for a while without problems and it still seems to be actively developed. (Make sure you're using the repository for 6.12.xx - bokker/unraidapi-re:6.12) Unraid-API-RE GitHub link Unraid-API-RE Support page Hope that helps someone.
  6. Yea. The 2x SSDs are mirrored for redundancy and used as cache for the array. New files are written to the cache and moved over to the array on a schedule; more info on how that works HERE. I wouldn't normally bother with redundancy for a cache drive, but in my use case I also use it for Docker container storage, so it's something I needed. One of my NVMEs is used for appdata and the other for VMs; these don't need redundancy as they're backed up weekly to the array. Running a 1G network, more than enough for me. Good luck with your build!
  7. No problem. Every use case will be different; yours sounds like a solid plan though. I use one of the NVMEs for appdata storage and the other for VMs. The two SSDs are mirrored and used as a cache drive for the array. I also use it for some Docker container storage, which is why it's mirrored. All are just XFS for now as I'm currently running 6.11.5, but I'll move over to ZFS when updating to 6.12.8. I've been holding off updating as I want to continue using macvlan, which has been problematic in 6.12.xx releases. There's a workaround for this in 6.12.4, but I've not been brave enough to try it yet as I've not had much spare time for troubleshooting. Backup plan sounds good too. I'm using Backblaze B2 with Duplicacy, which works well enough.
  8. Wooooo, looks like this post made it onto the Unraid Monthly Digest
  9. Well done, looks good! The Node 304 fits perfectly in some Ikea Kalax for me, nice and out of the way too.
  10. Had a bit of spare time to troubleshoot the rather high 110W average power usage today. Using powertop I can see the ZEXMTE USB Bluetooth adapter I'm passing through to my Home Assistant VM is stopping Pkg(HW) from reaching any C state. With it removed I was able to get to C3, so I'll have to think about other options. Maybe look at using the onboard Bluetooth, though that would mean re-enabling the WiFi adapter as it's shared with that. Running the nice and stable 6.11.5; should probably update to 6.12.8 too, as it's supposed to bring better driver support, which might help. Also realised that when I updated the BIOS recently, all the BIOS settings I'd made had been reset, such as disabling unused devices, customising the PL1 and PL2 settings etc. With the settings applied again, plus a few more ASPM-related ones, average power draw has dropped to 105W. I expect this to drop more, as that's an average of the last few hours and it will be less overnight. Expecting more power savings in the future: going to buy a 6-port ASM1166-based card to replace the HBA that's stopping the low C states.
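      For reference, setting the CPU governor to power save (what I use the Tips and Tweaks plugin for) boils down to something like this. A sketch only, and an assumption about what the plugin does under the hood; the sysfs path is the standard Linux cpufreq interface.

      ```shell
      # Rough equivalent of the "Tips and Tweaks" powersave setting -- a sketch,
      # assuming the standard cpufreq sysfs interface is present.
      for g in /sys/devices/system/cpu/cpu[0-9]*/cpufreq/scaling_governor; do
        echo powersave > "$g"
      done

      # Verify the governor took effect on the first core
      cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
      ```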
  11. Thanks JorgeB, yea makes sense, appreciate the reply.
  12. This is an interesting thread. I've got an issue with my server not reaching a low C state; reading up on things, the likely culprit is the LSI 9207-8i HBA I'm using. I'm looking to replace it with a recommended 6-port 4x PCIe ASMedia ASM1166 card like THIS. However, I've stumbled across an 8-port 3.0 SFF8087 card on AliExpress and am wondering if anyone has tried it? It would be a like-for-like replacement for my current HBA, which is appealing, but I'm concerned about speeds and it's quite hard to find any info. It seems to be running an ASMedia 1166 chip, but with 8-drive support, so presumably some ports must be via a multiplier. I found THIS blog post covering a similar-looking card; scroll down a bit to the info under 'the following section was added in March 2023'. TL;DR: Ports 6, 7, and 8 are connected to the ASM1093 port multiplier, which is connected to port 6 of the ASM1166. Anyway, thoughts on disk speeds on the split ports? Anyone used one or similar?
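      If anyone does try one of those 8-port cards, a rough way to compare a drive on a direct ASM1166 port against one behind the ASM1093 multiplier is a sequential read test. A sketch only; the device names below are examples, not from the post, so substitute your own.

      ```shell
      # Sketch: per-disk sequential read speed. /dev/sdb and /dev/sdg are
      # example device names -- pick one drive on a direct port and one on
      # a multiplied port (6-8) to compare.
      for d in /dev/sdb /dev/sdg; do
        echo "== $d =="
        # direct I/O so the page cache doesn't inflate the number
        dd if="$d" of=/dev/null bs=1M count=1024 iflag=direct 2>&1 | tail -n1
      done
      ```

      Running it a few times per drive and averaging should make any bandwidth sharing on the multiplied ports obvious.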
  13. Thanks! One of my old servers using a modified Fractal Define R5 case was also a fun little project, info HERE if you're interested.
  14. Build’s Name: My Customised SFF Home Server Build
      Full Spec: PartPicker Link
      Usage Profile: Unraid Server for Containers, VMs and Storage

      Time to upgrade my home server again; I decided to downsize from my current Fractal Design XL to something smaller and more power efficient. I went with the Fractal Node 304 case. I'm moving from 12x 8TB disks with dual parity to 6x 20TB disks with single parity, so everything will fit, and using a smaller case will mean taking up a lot less space in my small home office! Here's a finished picture, with a rather long build thread below:
      ________________________________________________________________________
      So then, I started off by ordering the case. I managed to swap the white HDD caddies for black ones with a colleague, which looks much better. Then I swapped out the Fractal fans for black Noctua A9 and A14 Chromax fans. These look slightly better and are much quieter.

      Reused one of the LSI 9207-8i HBAs from the old server. The brackets are powder coated black and there's a Fractal R3 40mm fan on the heat sink. The fan is mounted using black nylon nuts and bolts, reusing the same holes as the heat sink to keep things neat and tidy. Updated the firmware while I was at it.

      Installed a Corsair RM650 (2021) PSU and realised the motherboard ATX cables were pretty long. Time consuming, but I re-pinned and braided the connectors to make them the optimal length. The USB cables from the front IO to the motherboard were also pretty long, so I found some shorter ones on AliExpress, soldered these onto the original PCB and braided the cable. Also shortened and braided a few other cables, like the on/off and restart buttons and the HDD activity cable. I removed the front panel audio cables as they're not needed for a server.

      Not a massive problem, but I noticed that the power cable orientation meant the cable at the PSU end pointed up and needed to loop round, which looked messy. So I found a left-angle IEC cable (that’s a thing), braided it and re-terminated it at the case end. Now it points down and runs along the bottom of the case, much tidier.

      Next up was the motherboard's silver IO shield, which didn’t look brilliant on the black case. I couldn’t find a black one and thought about 3D printing one, but ended up just powder coating the original. Came out really well and looks much better.

      Installed everything on the motherboard and made a custom-length, braided cable for the CPU fan. Did the same for the two front fans and the exhaust fan.

      The case takes 6x 3.5" HDDs, and these would be filled with my 20TB disks, so I needed somewhere to install the 2x 2.5" drives I use for cache. The easy option would be to mount them on the outside of the HDD caddies, but where’s the fun in that! I decided to make my own bracket to mount them both on the side of the case, fabricated out of aluminium sheet and powder coated black.

      Drilled two holes in the bottom of the case, then used some black, low-profile bolts to secure the bracket. These are hidden by the plastic feet mounting covers that run round the bottom edge of the case, so they can’t be seen. Inside view of the bottom of the case, where the bracket's secured: I used black nylock nuts and black washers to keep it looking original. Drilled two more holes at the top of the case and secured the bracket using rivnuts and some more low-profile bolts, with black washers. These were needed to make sure the case top fitted without snagging.

      Made some custom-length SATA power cables for the HDDs to keep things tidy, then connected the HBA’s SATA data cables. I forgot to take a picture with the cables tied together, but it looks tidy. Swapped the remaining PCI cable to a black one and all done! All sealed up; shame no one will ever see the hard work that went into the build! A fun project though, so worth it for me!
      ________________________________________________________________________
      Well done for making it to the end of this post! I'm pleased with how the build turned out. My old server used to average 190W power draw; this one uses 110W. To be honest, I was hoping for a little lower, so I need to do some troubleshooting when I have some time. I think the HBA is stopping the system from reaching lower C states, so I may swap it out for an ASM1064-based card in the future and check things out with powertop.
  15. Also spotted, FolderView got a mention in the Unraid Digest email sent today!