_cjd_

Members · 48 posts

_cjd_'s Achievements: Rookie (2/14) · Reputation: 4
  1. The only strong value case pointing at AMD specifically is if you need lots of PCIe lanes... Epyc can't be beat there. Intel has the edge for low power and transcoding. I've found AMD server boards better equipped as well. So many variables. Sort out what you need from the system, or just use what's lying around; you can always change again some other day. My main Unraid server is AMD (X570D4U / 5600X)... the backup is older Intel built from random parts I had sitting around.
  2. If you want drive-failure protection for an SSD cache, you need a more traditional RAID setup... with a plain mirror being the most direct solution. The cache is not protected by the array's parity and doesn't have that as a pool option (nor do you want to use an SSD in a parity slot).
  3. SFP+ DAC is lower power consumption at the switch... 0.1 W for a fiber DAC, while the best RJ45 I see is 1.8 W when the link is up (X540 on the other end) and 2+ W (X550 on the other end) - note those are different transceivers, so that's likely the difference rather than the NIC. I haven't tried pulling a copper DAC, but it's going to be better than fiber. All 8 ports up on my Ubiquiti Aggregation switch is still under 13 W measured by a Shelly plug, with just the two RJ45 transceivers. The X710 with the right firmware can, as I recall, reach the higher C-states; I forget which other cards can (a rough way to check C-state residency is sketched at the end of this list). I haven't tried pushing hard on the system with the X540; it's my backup Unraid and is rarely powered on. Older i3, idles around 28 W. Check the powertop thread - this has come up multiple times.
  4. The power reading is probably good enough - a point of comparison (via smart plug) might be interesting regardless. I've got a Shelly on mine; it's quite a bit more sensitive than the clamp reading I have for the whole circuit in my load panel. I have an LSI 9211-8i HBA, and it was a 10-12 W total drop going to an ASM1166 M.2 adapter instead (10 SATA drives in my system; 5 SSD, 5 spinners, only the latter were ever on the HBA). At the moment power costs me ~US$1/yr per watt all in, so that's still a 2-3 year payback (the arithmetic is sketched at the end of this list)... with your *current* setup I'd be hard pressed to justify the hardware cost of trying to get that many 1166s; jumping to larger hard drives seems a better investment - the x4 and x1 slots could get you 8 SATA ports comfortably, and another 12 in the M.2 slots if you aren't planning NVMe drives. But there IS a point of diminishing returns vs a good HBA with ASPM support, I suspect, and not all ASM1166 cards seem to deliver good results. Of course I'm now thinking of switching to a PCIe card to open up the M.2 for a mirrored appdata setup (solo NVMe drive for that right now) but... meh. Backups and more backups are hopefully enough to get me running again quickly in the event of a failure. 19 drives (or 22) are going to add quite a bit more idle power consumption vs fewer, higher-capacity drives too - not to mention when they're all spun up - again making larger drives the likely better investment if you are trying to reduce power consumption. There's also the power supply demand for that many drives; fewer drives opens the door to more efficient power supply options (ironically, my sim rig has the best low-power supply I own... a Seasonic 750 W Titanium - but I'm waiting to see what video card options open up before buying a replacement and swapping stuff around, and I have a multi-decade grudge against Nvidia, so who knows when that'll happen). The downside to drive upsizing is that parity has to happen first (and I hope with that many drives you've got dual parity in the mix), so the initial step up to larger capacity is $$.
  5. You are going to want the PCIe lanes for multi-GPU - unless you only ever plan to have one GPU plus onboard. Assuming your existing A380 is going to just barely get you rolling, it may end up used just for transcoding anyway... you can do 4 GPUs at 16 lanes each on Epyc vs just one on 'consumer' hardware. The second thing is you're likely going to want the room for gobs of RAM. And used Epyc is far and away the affordable way in (with the usual gotchas around used hardware...). It'll be more power hungry on the flip side, but if you're doing enough ML/AI it's homelab or cloud costs either way. Power consumption is likely THE big downside, though if you're building a multi-GPU rig it'll be hungry for that reason too. Consider the case carefully here as well for good airflow, and don't skimp on the power supply (so much garbage out there - I stick to Seasonic, though I've run Corsair and one system has a Be Quiet! right now).
For 10G networking, if it's just two machines, I believe a direct-wired SFP+ DAC does it... copper or fiber as needed, no switch necessary; avoid RJ45 unless you really, really have to (also, ConnectX-3/4 is cheap, and the X710 is also reasonable). Once you need a switch there are lots of options: MikroTik has a 4-port SFP+ switch I believe, and MikroTik, UniFi and probably others have 8-port SFP+ switches at reasonable prices... I have two Cat6 runs, and the X540 is my recommendation on that end - the SFP+ to RJ45 modules are more $ and run hot. No heat issues in my UniFi gear, but I've heard MikroTik can overheat. I also have 5 copper DAC connections and one fiber DAC connection. If we ever have to open the walls enough that I can safely run fiber where I have the Cat6, I will jump on the chance. Leave this as an upgrade path for when you find a use for it, regardless (though some Epyc boards have built-in SFP+...).
More than anything though, I'm going to reiterate - don't buy anything at all yet if you can avoid it. Get things running and learn what you need next - unless the hardware simply will not work. It is cheaper to dream and learn with what you have till you know exactly what you need, then buy the right things the first time (and of good quality). If you have to, put aside a few bucks every day into a build coffer... it adds up fast and is a healthy habit anyway. I started on an AMD 765 Black, had stability issues and swapped to goodness knows what random hardware I had in a closet (some AMD "media" CPU from ages past) till I tracked down bad RAM, went back to the 765 for a bit and sorted requirements, then built the setup I have now. I've since become more power-consumption aware and vaguely wish I'd gone Xeon, but even today it's hard to find actual consumption details (and I do want ECC memory). I swapped to one of the big 4U Rosewill cases (the 15-drive-capacity one) after learning I didn't care about hot-swap and would prefer lower drive temps at lower fan curves... modded to take 140mm fans in the middle row, with room for giant motherboards if I ever want them... I could probably passively cool the CPU at this point. Down to ~43-44 W mean consumption at idle (Home Assistant and some related bits add 5+ W of overhead). But... no GPU at all for me, even for Jellyfin. I still like having IPMI, knowing that adds another ~5 W or so (trusting data from others on this). Unraid does not need much to run quite a bit of stuff well. My backup Unraid is an older i3 and some things there are clearly CPU limited (I had the cachedirs plugin for no sensible reason, but it would peg a core for hours on end). I can't speak to *specific* hardware recommendations on Epyc.
Get a motherboard capable of a range of CPUs and start on the cheaper/older side for the CPU. That gives you an upgrade path if it becomes a bottleneck. I've definitely seen some used options with memory included on eBay well within your budget, but a good PSU isn't cheap either... especially if you're looking at multiple GPUs down the road. I'm also unsure whether you have a case which would support this well.
  6. ASM1166 M.2-to-SATA adapter for 6 more SATA ports? Unless you have a 10G network, a plain SATA SSD cache is adequate, but this setup would let you mirror that and also have room for more spinners. Unless the case is the limit... new case?
  7. I have both a ConnectX-3 and an X710-DA2, and in my (AMD) system the power draw is a wash. DAC to the switch. I have a 15m fiber DAC to a PC; draw at the switch is under half a watt when in use. I have two RJ45 SFP+ modules for long Cat6 runs - X540 on one PC, X550 on the other. The X540 is what I should have used for both. Switch-side power draw is ~2 W, with the Broadcom-chipped module running lower power (it also does multi-speed, not fixed 10G). I'm using a UniFi Aggregation switch and their SFP+ modules and have never had heat issues with either... the other 6 ports are all DAC. 98xx to 99xx Mbit/s iperf scores over Cat6 with jumbo frames and a couple of settings tuned on Windows (and MTU tuned all around - a quick sanity check for that is sketched at the end of this list). The X550 sometimes boots up missing, so no network... a reboot fixes it.
  8. A power meter is probably accurate enough - a lot of folks use UPS stats, which aren't. I don't know enough about this to say more. I'd be curious whether that changes more dramatically with attached drives (even spun down) or other add-in cards. An HBA may reach higher states but also uses more power up front. Given the improvements folks find going from C3 to C8, there must be more to it than you've captured so far. Also, Unraid may not match Ubuntu... I shaved nearly 100 W off home standby use chasing lots of low numbers like this... though Unraid is in that total, and ditching the HBA for an ASM1166 SATA adapter was worth more than a couple of watts. I'm also accepting the IPMI cost and not changing to Intel, given the value and cost vs the theoretical improvement. I am sticking with VLANs on the 10G/SFP+ network vs onboard Ethernet, saving just 2-3 W - I can't seem to disable the onboard ports, so I'm probably missing some savings, but the switch side is also lower use... I will also say I appreciate the numbers you're sharing, because that's digging in with real data where I only had theoreticals before (ITX isn't enough for my setup and most ultra-low-power recommendations seem to be ITX... also very few ECC examples). It's making me itch to spend more than I'd save in many, many years just to experiment.
  9. How are you monitoring power consumption? It matters differently to different people. What does each watt cost you annually? Every budget balances differently. And for some it's a game - how low can you go...
  10. Any reason you're not looking at Epyc? For getting the most out of multi-GPU that seems relevant... but maybe your future use case doesn't need the PCIe lanes. Interesting project. I agree with the suggestion that you start with what you have and learn what it needs before buying anything new at all. Also, I'm wary of 2.5G networking... 10G is cheap, especially if you can just run SFP+/DAC... scope creep.
  11. An X540-T2 for RJ45 to my backup Unraid, from eBay. Painless install, great throughput with jumbo frames and a proper MTU. I also have an X710-DA2 and a Mellanox ConnectX-3; both have been in my primary server over SFP+ copper DAC (1m), as it sits in the same spot as the network equipment. Both from eBay - I seem to have lucked out on the X710 price... but the Mellanox ConnectX-3 and -4 are both in the ~$30 range as well.
  12. Not quite what you're after, but I decided to go UniFi after not liking the other camera options I could find... so much was WiFi-only and I wanted PoE. That was a few years back, when you could still self-host. I don't mind having to use their NVR - the features keep improving and the camera selection is good. As it stands, a UDM Pro keeps up fine and does other useful stuff too. I would want an isolated drive array anyway since it's constantly running, and that's the bigger expense.
  13. I noted it briefly in passing: AMD Ryzen 5600X, ASRock Rack X570D4U motherboard. Memory is ECC UDIMM at 3200. I should also note I have 11 fans in total; though all are running at pretty low speed, they still add a few watts. I wish I could drop them to zero at idle, but the system won't let me tune below 20%. Normal fan control doesn't work and I can't get IPMI control from a plugin working, so... I've tried both a ConnectX-3 and an Intel X710-DA2 (both dual SFP+) without seeing a consistent difference (I seem to have to re-tune fans every reboot, which makes comparing hard), but the Intel is reportedly better on an Intel motherboard. Figured it was worth a try. I think you're going to need to start stopping containers and VMs one at a time and let the system idle for a bit each time - see what's actually sucking power (a rough way to automate that is sketched at the end of this list). Same with pulling the various cards and even the M.2 devices. *Something* is preventing the system from idling efficiently. If all the BIOS settings and such are in the right place, this is the next logical step.
  14. Start here: do your drives spin down? What 10GbE card, and RJ45 or SFP+/DAC? Swap the SAS card for a simple ASM1166 SATA adapter if your drives are SATA; that's good for ~10 W. Switch Home Assistant from a VM to Docker and you might see some improvement. For the rest, you've got to start with the BIOS and powertop stuff. For reference, my system idles around 45 W mean (absolute minimum logged ~38 W) - 12 Docker containers (including Home Assistant), 5 HDD, 1 NVMe, 5 SSD, M.2 ASM1166-to-SATA adapter, dual 10G (SFP+/copper DAC), dual 1GbE, IPMI, an external USB gadget connected to my power meter, 32 GB memory / AMD 5600X / X570D4U. I dropped ~12 W net moving to the SATA adapter in place of the HBA card. Stopping Home Assistant is good for 4-5 W.
  15. Set up a Firefox account. Passwords, tabs, bookmarks... at least, if the only thing stopping you is having stuff sync.
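
A minimal sketch of the C-state residency check mentioned in post 3 - powertop reports the same data in more detail. It assumes a Linux box with the standard cpuidle sysfs interface exposed (it may be absent if C-states are disabled in the BIOS or kernel) and only looks at CPU0:

```python
#!/usr/bin/env python3
"""Rough C-state residency check via the Linux cpuidle sysfs interface."""
from pathlib import Path

CPUIDLE = Path("/sys/devices/system/cpu/cpu0/cpuidle")

def residency() -> None:
    states = []
    for state in sorted(CPUIDLE.glob("state*")):
        name = (state / "name").read_text().strip()
        usec = int((state / "time").read_text())  # total residency since boot, microseconds
        states.append((name, usec))
    total = sum(t for _, t in states) or 1
    for name, usec in states:
        # Share of accumulated idle-state time, not wall-clock time.
        print(f"{name:>10}: {100 * usec / total:5.1f}%")

if __name__ == "__main__":
    residency()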
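
The payback arithmetic from post 4, as a worked example. The ~US$1 per watt-year figure and the 10-12 W saved by swapping the HBA for an ASM1166 come from the post; the US$0.115/kWh rate and the US$30 adapter price are placeholder assumptions that make the numbers line up:

```python
"""Back-of-the-envelope payback math for a power-saving hardware swap."""

HOURS_PER_YEAR = 24 * 365.25  # ~8766 h

def dollars_per_watt_year(kwh_price: float) -> float:
    """Annual cost of one watt drawn continuously."""
    return kwh_price * HOURS_PER_YEAR / 1000

def payback_years(part_cost: float, watts_saved: float, kwh_price: float) -> float:
    return part_cost / (watts_saved * dollars_per_watt_year(kwh_price))

# ~$0.115/kWh works out to roughly $1 per watt-year, matching the post.
print(f"$/W/yr:  {dollars_per_watt_year(0.115):.2f}")
# Hypothetical $30 adapter saving the 10-12 W quoted above:
print(f"payback: {payback_years(30, 11, 0.115):.1f} years")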
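
A quick sanity check for the end-to-end MTU tuning mentioned in post 7, assuming a Linux host with iputils ping; the interface name and peer address are placeholders. With a 9000-byte MTU there are 8972 bytes left for ICMP payload after the 28-byte IP+ICMP headers, and -M do forbids fragmentation, so the ping only succeeds if every device on the path actually carries jumbo frames:

```python
"""Check local MTU and probe the path with an unfragmentable jumbo ping."""
import subprocess
from pathlib import Path

IFACE = "eth0"       # placeholder interface name
TARGET = "10.0.0.2"  # placeholder peer on the 10G segment

mtu = int(Path(f"/sys/class/net/{IFACE}/mtu").read_text())
print(f"{IFACE} MTU: {mtu}")

# Fails fast if any device on the path is still at 1500.
result = subprocess.run(
    ["ping", "-c", "3", "-M", "do", "-s", "8972", TARGET],
    capture_output=True, text=True)
print("jumbo path OK" if result.returncode == 0 else "fragmentation needed somewhere")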
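
A rough way to automate the "stop things one at a time and watch the meter" approach from post 13. It assumes the docker CLI is on the box and that power is read from a Gen1 Shelly plug's /meter/0 endpoint (a Gen2 plug uses /rpc/Switch.GetStatus instead); the plug address and settle time are placeholders, and VMs would need similar handling:

```python
"""Stop each running container, let the system settle, and log the power delta."""
import json
import subprocess
import time
import urllib.request

SHELLY = "http://192.168.1.50"  # placeholder: the plug feeding the server
SETTLE_SECONDS = 300            # placeholder: time to let the system reach idle

def watts() -> float:
    # Gen1 Shelly HTTP API; /meter/0 reports instantaneous power in watts.
    with urllib.request.urlopen(f"{SHELLY}/meter/0") as resp:
        return json.load(resp)["power"]

def running_containers() -> list[str]:
    out = subprocess.run(["docker", "ps", "--format", "{{.Names}}"],
                         capture_output=True, text=True, check=True)
    return out.stdout.split()

baseline = watts()
print(f"baseline: {baseline:.1f} W")

for name in running_containers():
    subprocess.run(["docker", "stop", name], check=True)
    time.sleep(SETTLE_SECONDS)
    w = watts()
    print(f"without {name}: {w:.1f} W ({w - baseline:+.1f} W vs baseline)")
    subprocess.run(["docker", "start", name], check=True)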