Wintersdark

Everything posted by Wintersdark

  1. Is there a way to disable checking for new docker versions? My dockers all auto-update on a set schedule, but this means I'll often have 20+ notifications for docker updates that have already been applied automatically. It's not useful and it badly spams the notification list.
  2. Yeah; IIRC there were some issues with E-cores, but they were resolved pretty early. I didn't experience that myself, however; the 12400 doesn't have E-cores.
  3. Interesting. Maybe I'll just grab a 13400 instead of an arc GPU to get AV1 support. Hmmm.
  4. I'm not in a huge hurry. It's hard to get downtime (my server is used by a LOT of people) and I'm really not fond of hard crashes. Not to mention what an utter PITA it is to have my whole lan go down and require power cycling my router. The PCIe adapter I'm using now is a very good quad port job, and while bonds aren't as good as a fatter single pipe, it does do the job.
  5. Yes, narrowed it down to my 2.5G LAN port. Tossed in an old PCIe Intel network card and the problem stopped; removed it a week ago and the system crashed in about 12 hours (again taking my whole LAN with it). Back in went the network adapter, and no more problems. Frustrating, as I'd rather just use the 2.5G port, but it's not the end of the world.
  6. It'll be a lot more than that. My old 8th gen Celeron could handle 4 such HEVC streams, and literally dozens of 1080p H.264 transcodes. Thus far, I haven't been able to get my GPU usage over 30% while transcoding, but I haven't had time to stress test since everything started working correctly. As it stands, my 12400 has done 6 simultaneous transcodes, 2 of which were 4K HEVC > 1080p and 4 were random H.264 transcodes. Honestly, though, there's very little appreciable difference in load between H.265 and H.264. When Quick Sync works, it's really spectacular.
  7. What? Take that elsewhere. While I don't know him at all, I'm willing to bet he personally had as much to do with Germany's actions as you did. Really though, please, *please*, let's not go on political rants.
  8. It gets WAY more complicated than that though, because when the discrete GPU is transcoding it'll draw FAR more power than an Apollo Lake iGPU. But we'd also need to consider pricing. An Intel Xeon W-1290 costs twice what a (comparably performant) i5-12700 costs, though it offers ECC support and the like - and then you need to add a discrete GPU. Saving 20W (about £5 a month at £0.35/kWh - rough math in the sketch after this list) when you're several hundred over in hardware cost is false economy. Not to say it's a bad system at all, just that it's really not a fair comparison.
  9. 12400, 16 HDDs and 3 NVMe SSDs, as well as my Asus XT4 wifi router and XB8 cable modem; I get around 120W at idle with disks spun down but services running. Edit: the above is also running two PCIe HBAs and a PCIe quad-port Intel NIC.
  10. TDP doesn't really matter except when you're running the system flat out. Would it be worth a full platform change? Probably not, if you're looking to stay with a high end system running a lot of services. The reality is that you'd be better off just cutting back on services. Have a UPS, or even a Kill A Watt meter? Just try different configurations, and maybe limit total system power (such as undervolting the CPU) with what you've got. But it's REALLY hard to predict what actual power draw at the wall looks like with all the changing factors (workload, hardware, etc). Basically, you either pick mid to high end parts and live with their system power, or pick smaller, mobile-class parts for power efficiency. You just accept that power draw for high end systems under high utilization... is high.
  11. For sure. That's why I strongly encourage everyone to run one, as it prevents a host of sometimes baffling problems that can arise out of nowhere, even if it is working fine today. Just better to avoid the potential problems.
  12. "Should" being operative here. There's a few layers of should in there. It depends on the KVM switch, really, if it edid information when off/switched to another input, or he may have an issue where sometimes it fails. It *seems* like Alder Lake doesn't care as much as previous iGPU's about having that connection to work (mine has worked without my dummy plug installed since my Alder Lake upgrade) but... Eh. It should be fine. If one has an unused video out on their motherboard, spending the $3 on Amazon for a dummy plug is great piece of mind.
  13. No one can answer that, but obviously Plex is going to prioritize getting their software fully functional on current hardware, so probably. Soon (TM)
  14. No tone mapping, but the transcoding is fine and stable.
  15. It may even be counterproductive, because the 4690 part is for a different GPU IIRC; the Alder Lake GPUs have a different address.
  16. With the plexinc/pms-docker:plexpass container, if you open the logs while you start it you'll see it actually downloads the binaries every time the container starts up. In fact, the container image doesn't actually contain Plex Media Server at all. (There's a small log-watching sketch after this list.)
  17. Obviously it's only desired if clients can't handle HDR themselves. But note that it doesn't disable HDR system wide: direct played or streamed media keeps its HDR. Tone mapping only happens on transcoded media. If you need to transcode to a device that isn't HDR capable and there's no tone mapping, the end result looks *very* bad. But if you DO have tone mapping, it just looks like any other non-HDR content (fine, if not ideal). You've already chosen to accept a decrease in visual fidelity for some reason (probably bandwidth or data caps), so just getting non-HDR isn't going to be a big deal.
  18. Mine wasn't that bad, but it never made it 24 hours without a hard crash of the whole system. My uptime is showing 3d6h now, with 12 active users.
  19. Mine's been running for a few days now without a hiccup. No tone mapping is an irritant, but a minor one; no instability with transcodes, however.
  20. I can't grab a screenshot right now, but there'll be a field "Repository" that has "plexinc/pms-docker" (or "plexinc/pms-docker:latest"). You just add the tag to the end of that, so change it to "plexinc/pms-docker:plexpass".
  21. Just add the :plexpass tag to the container for the official one
  22. I've gone down this exact road so many times. "Oh look, it's working!" ... 6 hours later, once I'm at work, texts from the wife that the server is down. *Sighs*
  23. I'm definitely interested in results with the RC builds and hardware transcoding. It should all Just Work, but I don't have time to monkey with stuff if it's not working fully yet.
  24. The only discussion to the contrary is from people who set it up and it initially seems to work (it will work, after all) right up until the crashing starts. To be absolutely clear, *with the current Unraid 6.10 build*, Plex (and Jellyfin, and Emby) work fine, *so long as you don't pass them the iGPU and/or enable hardware transcoding.* This isn't debated at all. (There's a sketch of what passing the iGPU looks like after this list.)
  25. Emby or Jellyfin; generally Jellyfin is what people like. It doesn't matter though: neither Jellyfin nor Emby will hardware transcode under Unraid either until the kernel is updated. Personally, I've got a Jellyfin instance running as well on a separate PC (also running Plex), and really? I think Plex is just better. Much more feature rich. But to each their own.
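
A rough worked version of the power-cost math from post 8, as a quick Python sketch. It only uses the 20W and £0.35/kWh figures quoted in that post; everything else is plain arithmetic.

```python
# Back-of-the-envelope cost of an extra 20 W of continuous draw at the
# £0.35/kWh tariff quoted in post 8. Purely illustrative numbers.
extra_watts = 20
price_per_kwh_gbp = 0.35
hours_per_month = 24 * 30          # roughly 720 hours

kwh_per_month = extra_watts / 1000 * hours_per_month
cost_per_month = kwh_per_month * price_per_kwh_gbp
print(f"{kwh_per_month:.1f} kWh/month -> about £{cost_per_month:.2f}/month")
# ~14.4 kWh/month, about £5/month - the figure quoted in the post.
```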
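On post 16 (the plexpass container fetching its binaries at startup): one way to watch that happen is to restart the container and read its startup log. A minimal sketch using the docker-py SDK; the container name "plex" and the 30-second wait are assumptions, not something from the post.

```python
# Sketch: restart the Plex container and dump its startup log, where the
# plexinc/pms-docker:plexpass image reports downloading the PMS binaries.
# Assumes the docker-py package; the container name "plex" is a placeholder.
import time
import docker

client = docker.from_env()
plex = client.containers.get("plex")    # adjust to your container's name

plex.restart()
time.sleep(30)                          # give the startup script time to run
print(plex.logs(tail=100).decode(errors="replace"))
```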
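And for post 24, a hedged sketch of what "passing the iGPU" to the container typically means: mapping /dev/dri into it. The image tag matches the earlier posts; the container name, volume paths, and environment values are placeholders for illustration only, not the posts' actual setup (on Unraid this is normally done through the Docker template rather than a script).

```python
# Sketch of running the Plex container with the Intel iGPU's render node
# (/dev/dri) passed through, which is what enables Quick Sync transcoding
# inside the container. Name, paths and env values are placeholders.
import docker

client = docker.from_env()
client.containers.run(
    "plexinc/pms-docker:plexpass",
    name="plex",                                   # placeholder name
    detach=True,
    network_mode="host",
    devices=["/dev/dri:/dev/dri:rwm"],             # expose the iGPU to the container
    volumes={
        "/mnt/user/appdata/plex": {"bind": "/config", "mode": "rw"},  # placeholder paths
        "/mnt/user/media": {"bind": "/media", "mode": "ro"},
    },
    environment={"TZ": "Etc/UTC"},
)
```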