Tybio

Everything posted by Tybio

  1. Just remember, the video card in a gaming box is going to draw more power than the rest of the box combined (if it is a high-end card!). It will also drop down to very marginal usage when you are just on the desktop or doing simple things vs. playing a game. I forgot to mention that above.
  2. Check out the Xeon E series; they are basically current-generation Core processors with lower TDP. Also, don't confuse the TDP rating with actual power usage. TDP is the heat dissipation required; the processor can use far more (generally in bursts when boost kicks in) or far less if not much is happening. For instance, I have an 80W TDP rated processor but I normally sit around 70W for the full system (with idle disks, 2 VMs running and about 10 dockers) and have only spiked up to ~110W during a parity check...if I were to fire up par repair during a parity check (which may have happened! I don't watch that close) then I would have the disks munching power and the processor kicking into boost levels...so usage would /spike/, but as soon as those finished it would bottom out at next to nothing.

     Power use is more a function of use than of processor. Sure, you can move the "average" around a little, but a slower processor with a lower TDP will end up using more power if it is boosting all the time than a faster processor operating in its nominal state. Basically, what I'm saying is don't think of it as a straight line; it is about finding the tipping point for your use. There is no hard and fast rule...other than that your choice of disks and any add-on cards will likely have a much larger impact on usage than the CPU. You can move a few watts up or down with a CPU...but put in an enterprise disk with no spin-down and those few watts you saved mean next to nothing :). Just my thoughts mind you, I'm not an expert, I just play one on forums.
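     To make that concrete with some purely made-up duty-cycle numbers (hypothetical, not measurements from my box):

         # CPU A: low-TDP part that has to boost 50% of the time to keep up
         # CPU B: faster part that finishes the same work in 15% of the time
         awk 'BEGIN {
           a = 0.50*45 + 0.50*10;   # CPU A: 45W while boosting, 10W idle
           b = 0.15*60 + 0.85*8;    # CPU B: 60W while working, 8W idle
           printf "CPU A avg: %.1fW, CPU B avg: %.1fW\n", a, b
         }'
         # CPU A avg: 27.5W, CPU B avg: 15.8W

     Same work, and the "hotter" chip averages lower because it gets back to idle sooner.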
  3. It isn't going to make it into Leia, but it looks like it will be in the release after. A very simple and effective way to have a 4k and an HD version, or to deal with extended/director's/final cuts! https://forum.kodi.tv/showthread.php?tid=337992 This is really cool, and a feature that Plex doesn't have as far as I know.
  4. On the SM board used in this build, there is an impact. The SATA controller can only support 8 drives, so if you put a SATA M.2 in then you lose a MB SATA port (#3 I believe). It isn't about speed in this case, it is about the chipset SATA controller. The Gigabyte has 8x SATA on board and 2x SATA "extra" ports that use an additional SATA controller. That is why it has 10 ports (and you can see the 2 additional ports are different and not in the MB-supported block; they are gold/yellow and not black). Likely what they did is add a 4x SATA controller to one lane from the chipset; that means they can add the two ports on the MB and use the other two SATA connections for SATA SSDs without impacting the on-board SATA ports at all. A really nice design!
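     If you want to check which controller each drive actually hangs off, a rough sketch (device names will differ on your box):

         lspci | grep -i -E 'sata|ahci'        # list the SATA controllers
         for d in /sys/block/sd?; do           # PCI path for each disk
           echo "$d -> $(readlink -f $d)"
         done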
  5. It seems to be platform independent. For instance, my Nvidia Shield TV (one of the best players on the market) direct plays them fine, but the transcoded versions playing on the same device look washed out. Same thing for browser viewing. It doesn't seem to be a "client" issue, but more a transcoding issue...the transcoder is not tone mapping between HDR and SDR...so when the client gets it they don't get the HDR metadata /and/ the default tone mapping in the transcoder is putting out crap colors. For reference: https://forums.plex.tv/t/hevc-4k-playback-washed-out-colors/220912/22 So right now, any HDR content being transcoded will look washed out...see the screenshots in that thread. 4k that isn't HDR will be fine. So I can transcode 4k HDR/SDR content at 8x+ (so always throttled in PMS), but with HDR content the playback is crap. As a note, I'm unsure if the HDR issue would be resolved on a transcoding session where the client supports HDR...as that is likely the most limited of corner cases in this day and age, I'm not sure it matters :).
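     For what it's worth, offline tone mapping is doable today with ffmpeg, assuming your build includes the zscale (zimg) filter; this is just a sketch of an HDR10-to-SDR conversion, not what the Plex transcoder does internally:

         ffmpeg -i input_hdr10.mkv \
           -vf "zscale=t=linear:npl=100,format=gbrpf32le,zscale=p=bt709,tonemap=tonemap=hable:desat=0,zscale=t=bt709:m=bt709:r=tv,format=yuv420p" \
           -c:v libx264 -c:a copy output_sdr.mkv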
  6. Odd, I'm not sure how they are doing that, I assume that the 4x ports are coming from the chipset and not the CPU...but I'm still not sure how they get 2 more SATA ports and 1 more 4x port out of it.
  7. Recommendation to all: before you upgrade for 4k transcoding, please be sure to look into tone mapping issues. Though the E-2176G I have can transcode 4k, with Plex (and as far as I know all others, as Plex has the best on-the-fly transcoding around) 4k HDR video content will look horrid after being transcoded. It is washed out and dark. The reason for this is the HDR translation; right now there isn't any...so unless the client can understand HDR, the picture is going to look bad. I've even seen this on my Nvidia Shield. I can direct play and it is perfectly handled by my Shield TV (I have a 1080p TV right now, so everything is being converted until the Super Bowl sales!). When I select another quality profile, like 720p in Plex, to force transcoding...the video plays fine, but is totally washed out and just dull looking. I've not found a solution as yet, so even if the processor "can" do 4k transcoding in real time...the resulting video may not be worth the effort.
  8. No luck on the IPMI console over HTML5 or Java. I'm going to open a case with SM on Monday. I got 32G of ECC RAM from Supermicro; they were $200 a stick. If you are just doing dockers and not VMs then consider 2x8G sticks; you can always add 2 more later to get to 32G...but with just dockers running there is really no need for 32G...even 16G is swimming in headroom :).
  9. Ok, been running a few days and it is working flawlessly. Thanks again for the update!
  10. Let me see if I can answer! Also, what board did you go with? My investigation concluded that there was only so much they could do with the C246 chipset and the PCI-E lanes from the processor; I'd love to see an option with more SATA/PCI-E.

      I'm going to upgrade to 2x2TB at some point. I'm doing a lot of UHD downloads at the moment and 1TB was getting over 50% used at times...I'd rather my cache drive never run into size contention than worry about data security, as everything ON the cache can be re-downloaded/restored. My VMs are on the M.2 and backed up to the array, so they are also protected.

      Yes, I use it unassigned as I have multiple VMs on it. If I were to set up a desktop-replacement VM I'd pass it through to get a little more isolation...but it wouldn't be a function of speed...they are stupid fast to start with.

      Linuxserver docker, with just the standard --device options and the go file edits to discover the iGPU. Totally generic configuration (sketch below).

      Metadata is on cache, mostly so the CA Backup script can capture it without having to do anything special. Plex transcoding isn't ever disk limited on an SSD...so I've left it on the cache. The only reason to move it is to prolong the life of the SSD as far as I can tell. You don't need much for the transcoding location; I've not looked into it lately but IIRC the only thing in there is the 1-minute "look ahead" when it is throttled. That said, I haven't bothered to go that far so am not an expert :).

      I assume it is, as M.2 is just flash in a different form factor, so it will suffer from the same limitations...I'm in the "buy a cheap SSD for cache and replace it when needed" camp, but I go high quality on my M.2 as that is where the VMs are and in the future it might be a desktop replacement. The unraid cache is never going to be really speed stressed, so even a degraded SATA SSD is going to outperform the rest of the setup for a good while. When it doesn't...I'll just buy the cheapest option to replace it. That said, I've had SSDs for 2+ years and never noticed degradation with both nzbget post-processing and plex transcoding going on.

      I use SMB as my clients all support it out of the box. Mostly using Nvidia Shields as my STBs and it is just easy. With 10G networks coming about, IMHO it is better to plan that path than to nit-pick about the protocol used...the overhead is more and more meaningless...so I just went with the one that was most likely to be supported...and turned the others off to prevent spamming my network with useless shares.

      These are all just my opinions mind you, so take them with a grain of salt!
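      For reference, "totally generic" for the iGPU means roughly this (a sketch of my setup; paths are the stock ones):

          # appended to /boot/config/go so the iGPU device node exists at boot:
          modprobe i915
          chmod -R 777 /dev/dri

          # extra parameter added to the linuxserver plex container:
          --device=/dev/dri:/dev/dri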
  11. Thanks to the IPMI plugin I now have solid fan control; the update makes this board 100% functional for a quiet unraid server! The only outstanding issue left is with the KVM setup. I can't get any console redirection to work via HTML5 or the Java applet. Think I'm going to open a case with Supermicro.
  12. You need to set the lower thresholds for the fans with the sensor editor; otherwise the BMC will keep kicking them up to full speed. Set them to 200 or 300 RPM. I had them fixed for all the headers (100/100/100)...it looks like the BMC reset is what was needed. I forgot I could do that from the UI!
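      If you'd rather do it from the command line than the sensor editor, ipmitool can set the lower thresholds directly (sensor names vary by board, so check the list first):

          ipmitool sensor list | grep -i fan    # confirm the sensor names
          # lower non-recoverable / critical / non-critical, in RPM:
          ipmitool sensor thresh FAN1 lower 100 200 300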
  13. Odd, I did a full reset...meaning not just a reboot but powering down the system and unplugging it to force the BMC to reset...and it seems to work now! Sorry for the false alarm...testing to make sure each side responds properly ATM.
  14. As a note, I set both minimum PWMs to 50%...as far as I know that should be fine for the fans in my case...it was able to go lower with CPU control, so I don't think I'm hitting the error state...it just feels like the IPMI full-speed setting is being re-applied or something.
  15. Ok, I re-wired my fan headers this morning; the system side fans are all in the "CPU" ports, and the drive side is in a "SYS" header. The result is all fans on full. It sounds like they are spinning down a bit when the timer hits for the IPMI plugin, but then they spool back up to full. Is there a special switch I need to dig out to stop the MB from resetting them to 100%?

      2019-01-02 06:10:05 Starting Fan Control
      2019-01-02 06:10:05 Setting fans to full speed
      2019-01-02 06:10:15 Fan:Temp, FAN1234(40%):HDD Temp(28°C), FANA(40%):HDD Temp(28°C)
      2019-01-02 06:11:26 fan control config file updated, reloading settings
      2019-01-02 06:11:26 Fan:Temp, FAN1234(43%):CPU Temp(30°C), FANA(40%):HDD Temp(28°C)
      2019-01-02 06:11:46 fan control config file updated, reloading settings
      2019-01-02 06:11:46 Fan:Temp, FAN1234(46%):CPU Temp(33°C), FANA(48%):HDD Temp(28°C)
      2019-01-02 06:12:46 Fan:Temp, FAN1234(47%):CPU Temp(34°C), FANA(48%):HDD Temp(28°C)
      2019-01-02 06:13:06 fan control config file updated, reloading settings
      2019-01-02 06:13:56 fan control config file updated, reloading settings
      2019-01-02 06:13:56 Fan:Temp, FAN1234(61%):CPU Temp(33°C), FANA(48%):HDD Temp(28°C)
      2019-01-02 06:14:16 Fan:Temp, FAN1234(59%):CPU Temp(31°C), FANA(48%):HDD Temp(28°C)
      2019-01-02 06:15:57 Fan:Temp, FAN1234(60%):CPU Temp(32°C), FANA(48%):HDD Temp(28°C)
      2019-01-02 06:16:17 Fan:Temp, FAN1234(60%):CPU Temp(32°C), FANA(45%):HDD Temp(27°C)
      2019-01-02 06:16:47 Fan:Temp, FAN1234(59%):CPU Temp(31°C), FANA(45%):HDD Temp(27°C)
      2019-01-02 06:17:27 Fan:Temp, FAN1234(62%):CPU Temp(34°C), FANA(45%):HDD Temp(27°C)
      2019-01-02 06:17:37 Fan:Temp, FAN1234(59%):CPU Temp(31°C), FANA(45%):HDD Temp(27°C)
  16. They are separate elements. The use of one is not directly tied to the other, just like if you had an add-on video card. My E-2176G transcodes 4k with ~5-10% CPU usage, but the temperature of the processor goes up as the iGPU is doing the de/encoding. However, it doesn't directly impact the CPU's ability to perform tasks. I've not tried a stress test yet; I only have limited devices to stream on...but I may see if I can push it up to 4 streams to check things out. I've had 2 streams going at once with no issue; obviously audio is being transcoded if needed (which it normally is) by the CPU. Also, check some reviews: the E-2186G gets nearly the exact same score as the E-2176G due to thermal limiting...for transcoding use there is literally no difference between them. The E-2186G just ups the power needs and may not actually give any testable...much less noticeable...improvement.
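      If you want to watch the two independently, intel-gpu-tools has a top-style view for the iGPU (not shipped with unraid itself, so assume you'd run it from a container or add it via a plugin):

          intel_gpu_top    # video engine load climbs while transcoding
          htop             # ...while CPU usage stays low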
  17. I just updated, looks good! I'll open the system and re-position the fans today or tomorrow and turn on control to test. Thanks for the quick update!
  18. I added some chromax parts from Noctua to the cooler and added a 250G NVMe drive; updating the main post with a new picture.
  19. Confirmed! I've got a 250G NVMe drive in the top slot by the CPU and a 4x card installed. There is also a drive on every on-board SATA port, though that shouldn't matter as it is an NVMe. As a note, I tried it in the other M.2 port and it DID knock out the 4x slot (so I had no network).
  20. It seems this is no longer the case.
  21. Even if there were, I'm not sure Plex would support hardware transcoding with an Nvidia card under Linux. That may be an old limitation, but I know it existed at one point!
  22. You could look at the Define R6; just make sure you can buy the additional 5 caddies, as it only comes with 6 but can support up to 11 spinners in the front stack. I use one for my gaming box and I've NEVER seen a better-built case...the airflow is insane and the panels are easy to remove whenever needed...you don't need the thumbscrews to hold them in place unless the box is being moved. The drive mounts are on the "back" side where the drive cables are, so you can open one side, pull the cables, and then unscrew and remove the drive without even opening or really having any exposure to the other side with the MB...less chance of unintended consequences ;). The two 2.5" holders on the back of the motherboard tray are nice, but IMHO...just use M.2 (via MB or via PCI-E cards). Using 2.5s is really pointless unless you already have them (as is the case for you, I believe). For now you could use the 2 mounts on the back and get a 2x2.5" to 3.5" adapter and mount the other two in the top bay. Just a thought!
  23. The drivers are built into the Linux kernel, so they are tied to the unraid release. Which kernel supports AGESA 1.0.0.6 is a quick google away. I believe that support entered in the 4.1x tree, and unraid 6.6.5 was on 4.18...so it is likely supported, but you would have to dig a bit deeper to find out...I'd hit google more but work is calling!
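      The quick check, from the unraid console:

          uname -r    # e.g. 4.18.x on unraid 6.6.5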
  24. That's a great summary @Iciclebar; the key takeaway is that the TDP for a processor is the /minimum/ your cooler needs to handle. Most boards/CPUs /can/ go over that if the cooling is good enough, so don't skimp and you can get more performance out of a CPU. Server boards tend to be a lot more strict about this, but IMHO you will never go wrong with good cooling. For reference, my E-2176G, which is basically an i7-8700K repackaged for the workstation market, sips power when it is idle...my full system runs at ~70W (and is never really idle as I have 2 VMs and like 10 dockers running!). Even with a parity rebuild I was only pushing 100W...so as long as you avoid the major pitfalls (disks spinning all the time, add-on cards you can avoid needing by using on-board functions, etc.) you can get crazy low power usage.