-
Ok, the consensus here and on level1techs seems to be the following:
• Losing Intel's superior transcoding isn't worth the overall performance gain from AMD.
• The 14500 should be plenty good for my needs.
• A 1000W PSU is overkill.
• I probably won't need all 6 case fans.
I went ahead and changed the CPU back to the i5-14500, and the mobo back to the ASUS Pro WS W680-ACE (I want ECC). I also changed the PSU to a Seasonic Vertex PX-750, since I can't find any good Platinum/Titanium options around 600W. I'm willing to splurge a bit on the PSU for top-notch build quality and efficiency. The Seasonic Vertex PX-750 has 4 SATA power outputs, so I imagine with 4 of these CableMod cables I'll be all good:
-
It was recommended to me over on the level1techs forums. I've never heard of FSP either, but it gets really good reviews. It's expensive (Titanium PSUs perform 3%-5% better in the 20W-80W range, which works out to a 0.6W-1.6W difference; at best that would save about $5 a year vs any other 80+ Platinum), but I don't mind paying a premium for good stuff. I'm hoping to get at least a decade of use out of this server, and I'm willing to splurge a bit for better build quality, warranty, and efficiency.
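For what it's worth, here's the back-of-the-envelope math behind that quoted savings estimate, as a quick sketch. The 0.6W-1.6W delta comes from the quoted 3%-5% figure; the $0.35/kWh rate is just an assumption I picked because it roughly reproduces the ~$5/year number, and I've included my own $0.13/kWh peak rate for comparison.

```python
# Rough annual-savings estimate for a Titanium vs. Platinum PSU at low loads.
# All inputs are assumptions for illustration, not measured values.

HOURS_PER_YEAR = 24 * 365  # ~8760 h, assumes the server runs 24/7

def annual_savings(watts_saved: float, rate_per_kwh: float) -> float:
    """Dollars saved per year from drawing `watts_saved` fewer watts at the wall."""
    kwh_per_year = watts_saved * HOURS_PER_YEAR / 1000
    return kwh_per_year * rate_per_kwh

# 3-5% better efficiency on a 20-80 W load works out to roughly 0.6-1.6 W at the wall.
for delta_w in (0.6, 1.6):
    # $0.35/kWh is an assumed rate that reproduces the quoted "about $5 a year";
    # at my $0.13/kWh peak rate the savings are proportionally smaller.
    print(f"{delta_w} W saved -> ${annual_savings(delta_w, 0.35):.2f}/yr at $0.35/kWh, "
          f"${annual_savings(delta_w, 0.13):.2f}/yr at $0.13/kWh")
```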
-
My peak rate is $0.13 per kWh, so the difference between an idle 50W system and a 100W system is only about $5/mo for me. I guess the question then becomes 'is the extra performance of the AMD worth $60/year to me?' That assumes the system is idling 100% of the time (which it won't be), and the R9 7900 does seem very efficient under load compared to Intel. Depending on how hard I'm using it, there's theoretically some point where it becomes more power efficient than an i5-14500 (although I doubt I'll be using it hard enough to hit that point).

I also just noticed that the ASUS ProArt X670E-CREATOR has onboard 10Gb Ethernet, which means I could potentially go without the SFP+ NIC - except my switch only has SFP+ 10G ports, and those 10GBase-T transceivers run HOT and presumably aren't power efficient. So maybe I'd want to get the SFP+ NIC regardless, so I can use a DAC.
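Here's the quick math behind that ~$5/mo figure (a minimal sketch, assuming the box idles 24/7 at my $0.13/kWh peak rate):

```python
# Monthly/annual cost of a constant idle load at my peak electricity rate.
RATE_PER_KWH = 0.13        # my peak rate, $/kWh
HOURS_PER_MONTH = 24 * 30  # assumes the box idles around the clock

def monthly_cost(idle_watts: float) -> float:
    return idle_watts * HOURS_PER_MONTH / 1000 * RATE_PER_KWH

for watts in (50, 100):
    print(f"{watts} W idle: ${monthly_cost(watts):.2f}/mo, ${monthly_cost(watts) * 12:.2f}/yr")

# The difference works out to ~$4.70/mo, i.e. roughly $5/mo or ~$56/yr.
print(f"Delta: ${monthly_cost(100) - monthly_cost(50):.2f}/mo")
```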
-
Yeah, I've been agonizing over that, but a few things are swaying me:
1) I've been warned I could run into "fun and interesting" performance characteristics with Intel's P and E-cores, which aren't always handled well by schedulers other than Windows (for now at least).
2) The R9 7900 looks pretty power efficient:
3) Adding a GPU doesn't seem like it will add that much to my idle power draw. According to SpaceInvaderOne it's possible to get an RTX 3080 down to ~8 watts at idle:
4) The R9 7900 is a much faster CPU.
My biggest hesitation is that I assume the R9 7900 will have significantly higher idle power draw than an i5-14500 would. I can't find solid numbers on that yet, but they will either solidify my decision or pull me back onto the fence.
-----------------------------------------------
EDIT: Found this thread, which concludes: They also mention: 🙁 Ugh, I just don't know what to do. I wish Intel didn't have the microcode issues with their 13th/14th gen, which limit me to the older/lower-end stuff if I want to avoid them. I wish the new Arrow Lake were more of a slam dunk on idle power efficiency. I wish AMD had better video transcoding. I wish ECC-capable motherboards were easier to find. Sorry to vent - I've spent too much time obsessing over this and need sleep.
-
I posted a thread over on level1techs and got some great feedback. Made the following changes per their suggestions:
• i5-14500 changed to Ryzen 9 7900 (they convinced me that a Ryzen 7900 + GPU is better bang for my buck and has more ECC mobo options)
• Noctua NH-D12L changed to BeQuiet! Dark Rock Elite (better cooling and quieter)
• ASUS Pro WS W680-ACE changed to an ASUS ProArt X670E-CREATOR (AM5 socket, ECC support)
• Samsung SSDs changed to WD Black SSDs (apparently Samsung has had some firmware issues with recent SSDs)
• Seasonic TX 750 changed to FSP Hydro Ti Pro 1000W Titanium (better efficiency in general, but particularly in the 20-80W range)
• TRENDnet 10G SFP+ changed to Intel X710-DA2 dual 10Gbps SFP+ (better/newer NIC)
This means I'll have to add a GPU for Plex transcoding. It looks like the RTX 3060 (w/ 12GB VRAM) can handle ten 4K-to-1080p transcodes, so that's currently my top contender.
-
Wow, you aren't kidding - the Seasonic Prime 750 Titanium appears to have amazing efficiency! Looks like I can still get them new from eBay or refurbished from Newegg. I've changed my PSU out to that on the build list.

The smallest I can find in that family is this Intel X710-DA2 with dual ports. Do you think that would still be more power efficient than something like this (Intel 82599EN) with a single port?

I'll look into the motherboard specs and consider that. Thanks for the tip!
-
I did some reading and I understand what you mean here now. I've only ever built gaming PCs where I didn't give a shit about idle power use, so this is all sort of new territory for me.

If I'm hoping to have the server idling around 50 watts or so, then it would be down at around 78% efficiency most of the time on a 1000W PSU: Whereas if I went with a 750W PSU, it would be around 83% efficiency at 50W: Although that only goes so far - the low-load efficiency of the RM550x looks equal to, if not worse than, the HX750i above at 50W:

Anyway, now I see the reasoning behind going with a lower-powered PSU and just upgrading it as needed if I ever do decide to add a GPU. I don't want to be running at lower efficiency for potentially years and years on the off chance I might add a GPU someday. I ran another calculator and it looks like I could get away with a 550W PSU: But since the HX750i looks more efficient at low loads, I think I'll go with that.
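For my own reference, here's the rough arithmetic behind those percent-load numbers. The efficiency percentages are just read off the review charts, so treat them as approximate:

```python
# Compare how a ~50 W idle load sits on different PSU capacities, and what the
# efficiency difference means in wasted watts at the wall. Efficiency values are
# rough numbers read off review charts, not spec-sheet guarantees.

def wall_draw(dc_load_w: float, efficiency: float) -> float:
    """AC watts pulled from the wall to deliver `dc_load_w` at the given efficiency."""
    return dc_load_w / efficiency

IDLE_LOAD_W = 50
candidates = {
    "1000 W unit (~78% efficient at 50 W)": (1000, 0.78),
    "750 W unit (~83% efficient at 50 W)":  (750, 0.83),
}

for name, (capacity, eff) in candidates.items():
    pct_load = IDLE_LOAD_W / capacity * 100
    ac = wall_draw(IDLE_LOAD_W, eff)
    print(f"{name}: {pct_load:.1f}% load, ~{ac:.1f} W from the wall "
          f"({ac - IDLE_LOAD_W:.1f} W lost as heat)")
```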
-
That PSU calculator is very helpful: Looks like 800W would be sufficient (assuming I add a GPU someday). I doubt there would ever be a scenario where all my HDDs are spinning AND I'm maxing out the CPU & GPU at the same time, so realistically I should never hit that max load. So I think 800W would be OK, especially if I don't add a GPU - although I may want to go with 1000W just to be safe and to give me headroom to add something like an RTX 3080 if I want to self-host an LLM or something someday. Either way, you're right that 1200W seems to be overkill. Thank you again for taking the time to help!
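Roughly the kind of math the calculator is doing, sketched out below. Every per-component wattage here is a rough assumption of mine for illustration, not a measurement or a spec:

```python
# Very rough peak-draw estimate for sizing the PSU. Every figure is an
# assumed placeholder; the real calculator uses its own component database.
components_w = {
    "CPU (i5-14500, peak)":        150,
    "GPU (RTX 3060 class, peak)":  170,
    "16x HDD (active, ~8 W each)": 128,
    "2x NVMe + motherboard + RAM":  60,
    "Fans, HBA, NIC, misc":         40,
}

peak = sum(components_w.values())
headroom = 1.3  # ~30% margin for spin-up surges and PSU aging
print(f"Estimated peak: {peak} W -> suggested PSU capacity: ~{peak * headroom:.0f} W")
# With these numbers: ~548 W peak, ~712 W suggested - so 750-800 W looks fine,
# and 1000 W only makes sense if a bigger GPU (e.g. an RTX 3080) gets added later.
```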
-
Thanks for the tip! I swapped that part onto my build list.

Good to know! Apparently I need to stop relying so heavily on PCPartPicker; their database is missing a lot of this server-grade stuff. Changed to this RAM on my plan.

I'm planning to fill the case out with 16 HDDs + 2 NVMe SSDs, so I feel like I'm definitely going to want the 3 front fans to maximize airflow over the disks, and obviously I'll need at least 1 exhaust. I guess I could hold off on the 2 top fans until I see the temps, but I figured the power draw from 2 more fans would be negligible, and I'd rather err on the side of too much cooling rather than not enough.

Interesting - I'm loosely basing this off builds like this, and they all have at least 1000W PSUs (because of all the HDDs, I assumed). I'm sure in Unraid it will be very rare for all the HDDs to be spun up at once, but I want to make sure that if that happens it can handle everything. I also wanted some overhead so that if I add a GPU or something in the future I don't need to swap out the whole PSU. What PSU size would you recommend?

You lost me here. What is C3?

Going from my current system's 200W idle down to 50W, while also having a more powerful machine, would be amazing (rough math on that below). Thank you for your input!
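A quick sketch of what that 200W-to-50W drop would mean on the power bill, using my $0.13/kWh peak rate and assuming the box sits mostly idle 24/7:

```python
# Yearly cost of idle draw at my $0.13/kWh peak rate, old server vs. planned build.
RATE = 0.13
HOURS_PER_YEAR = 24 * 365

def yearly_cost(watts: float) -> float:
    return watts * HOURS_PER_YEAR / 1000 * RATE

old, new = 200, 50
print(f"Old server at {old} W idle: ~${yearly_cost(old):.0f}/yr")
print(f"New build at {new} W idle:  ~${yearly_cost(new):.0f}/yr")
print(f"Savings: ~${yearly_cost(old) - yearly_cost(new):.0f}/yr (assuming mostly-idle, 24/7 uptime)")
```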
-
Hard to tell how good the currently available Arrow Lake power data is, but it looks like its idle draw is ~3x that of Alder Lake, Raptor Lake, and AMD's APUs. Perf/watt is OK, but for my server what I care about is idle power consumption. In light of that, I'm not going to wait around for Arrow Lake. I'm going to build something around an i5-14500 (partly because it's a re-badged Alder Lake part and doesn't suffer from the recent microcode issues). Just posted my build plan here. Really appreciate everyone's input.
-
I'm currently running Unraid on an old Supermicro X9DRi-LN4+ with dual Intel Xeon E5-2670 v2 CPUs. It's served me well for years, but this old Supermicro is loud, hot, and power-hungry. I'm ready for an upgrade, and I'm looking for something that sips power at idle but can unleash the beast when it needs to. The heaviest load on my server is usually Plex transcodes (up to 6 simultaneous), but I also run 20+ other Docker containers including Immich, Nextcloud, Matrix, Home Assistant, Mealie, Vaultwarden, etc.

Here is my current tentative plan:
CPU: Intel Core i5-14500 2.6 GHz 14-Core Processor
CPU Cooler: Noctua NH-D15 chromax.black 82.52 CFM CPU Cooler
CPU Cooler: BeQuiet! Dark Rock Elite
Motherboard: ASUS Pro WS W680-ACE (ECC memory support)
Memory: 2 x Kingston KSM48E40BD8KM-32HM 32 GB (2 x 32 GB) DDR5-4800 CL40 ECC Memory
Memory: 2 x Kingston KSM48E40BD8KI-32HA 32 GB (2 x 32 GB) DDR5-4800 CL40 ECC Memory
Cache Storage: 2 x Samsung 990 Pro 4 TB M.2-2280 PCIe 4.0 X4 NVME Solid State Drive
Cache Storage: 2 x WD Black SN850X 4 TB M.2-2280 PCIe 4.0 X4 NVME Solid State Drive
Case: Fractal Design Meshify 2 XL ATX Full Tower Case + 5 x HDD Tray Kits
Power Supply: Corsair HX1200 Platinum 1200 W 80+ Platinum Certified Fully Modular ATX Power Supply
Power Supply: FSP Hydro Ti Pro 1000W Titanium
Power Supply: Seasonic Vertex PX-750 80+ Platinum Power Supply
Case Fan: 6 x Noctua A14 PWM chromax.black.swap 82.52 CFM 140 mm Fan
HBA: LSI SAS 9400-16i x8 lane PCI Express 3.1 SAS Tri-Mode Storage Adapter
Network Card: TRENDnet 10 Gigabit PCIe SFP+ Network Adapter
Network Card: Intel X710-DA2 Dual 10Gbps SFP+

Main array storage is 10 x 10TB Seagate Exos drives, which would be moved over from my current build. I picked the 14500 because it's a re-badged Alder Lake part and doesn't suffer from the recent microcode issues, and that expensive W680-ACE motherboard for its ECC RAM support. My current Supermicro server idles at 200W, so I imagine this would be quite a reduction on my power bill without compromising anything. I'd love to hear thoughts & suggestions.
-
Ah, I see - I thought the "K" just meant it could be overclocked; I didn't realize it also meant a higher TDP than the non-K parts. In that case I'll definitely wait for the non-K SKUs and go with whatever the best 65W option is.

I understand that, and I guess that's part of what I'm asking: is it worth waiting a couple of months for Arrow Lake to roll out, or should I just go with an i5-14500? Apparently the i5-14500 (non-K) isn't affected by the Raptor Lake issues, but I'm leaning towards waiting because it looks like 15th gen is a big bump in efficiency, and presumably I'd be getting more bang for my buck.
-
I'm currently running an old Supermicro X9DRi-LN4+ with dual Intel Xeon E5-2670 v2 CPUs. It's served me well for years, but this old Supermicro is loud, hot, and power-hungry. I've been planning a new build with an i5-14500, but I'm wary of the issues with Intel's 13th/14th gen. I just saw the news about the new 15th gen, and although it doesn't look like a huge performance boost over 13th/14th gen, it does look like a big leap in power efficiency, and presumably they'll have ironed out the issues Raptor Lake recently had.

What are your thoughts on Arrow Lake? Is the loss of hyperthreading a big negative? They say it's more power efficient than the previous gens, but the "base power" of the Ultra 5 245K is 125W compared to the 65W of the i5-14500... so does that mean the 245K would use more power for the same load as an i5-14500, or am I misunderstanding how that works?

If you were rebuilding your server soon (and assuming long-term power efficiency & performance are more important to you than upfront cost), would you be looking at the 200S (Arrow Lake) series, or would you get something different?
-
So far it's been staying up. I'll try to narrow down exactly which Docker it is. Is there some sort of Docker syslog that I can look at after a crash to see what's going wrong and report to the developer? Normally I'd just use the logs pop-up on the Docker tab, but that's not much good in this situation since everything goes down when the issue occurs.
-
Yeah, the system seems to stay up just fine when the array isn't started. I also ran Memtest for 30 minutes or so and didn't see any errors. Now I have the array started with all the Dockers stopped, and I've been starting them one at a time with a few minutes in between to see if one in particular is crashing everything. Is it possible for an individual Docker container to crash the entire OS like this? I was running scrapers in the Stash docker at the time of the crashes (something I've done many times before without issue). If I get all my Dockers running and they stay stable for a few hours, then I'll try running the scene scraper in Stash again to see if it triggers another crash.