Tybio

Everything posted by Tybio

  1. Confirmed, Supermicro has stated this is the expected behavior...you can't have remote console and the iGPU at the same time. I think I'll likely be in the market for a P2000 at some point later this year, looking toward the next release of Ryzen or TR. If I'm going to have all these limitations, I might as well go for PCI-E lanes so I can work around them....sad to say.
  2. I'll keep you updated on the IPMI...and I also agree on the VM solution. It has some downsides, but honestly...it is looking like a VM with a P2000 would have been a much better solution for this moment in time...then I could have had remote console and gone with any CPU I liked down the road.
  3. You nailed it ;). Nothing to add to the video or the stream limitation...those are the reasons!
  4. So I've been working with Supermicro on the IPMI problem, and bad news. Right now it seems you have a choice: use the BMC's video adapter and get remote console /OR/ use the iGPU. If you use the IPMI video adapter, you don't seem to have access to the iGPU once the OS is booted (there's a rough way to check this from the OS sketched after this post list)... This is a REAL issue for me, one I'm working with their support team to sort out...but at this point, it looks like remote console OR hardware transcoding...not both. I hope there is a way around that or this board was freaking pointless :).
  5. It's been rock solid on 6.6.0....erg. Perhaps this weekend I'll re-seat and try 6.7.0-rc1 again.
  6. Interesting, let me stress it out on 6.6.0. It has been working flawlessly in 6.5.0 since I finished the build over a month ago.
  7. Net dropped again, without using Plex. I'm at a loss here, so I'm going to revert until a bright idea shows up.
  8. Enabled docker, let it sit for a bit and all seemed fine, then started playing something from Plex and network dropped again. I'm not sure what sort of causal relationship I can draw here, but I'm trying to help isolate things. storage-diagnostics-Docker-Enabled-Net-Failed.zip
  9. Ok, I rebooted...grabbed diags before doing anything. Disabled docker and grabbed another diag. I'm now leaving it in this state for a bit to see if the network drops...then I'll re-enable docker to confirm my working theory. storage-diagnostics-Docker-Disabled.zip storage-diagnostics-Docker-Enabled.zip
  10. Right, that was a red herring. I enabled docker and lost net again. Getting some diags before I revert.
  11. Just rebooted, and eth2 is once more "up" and the option to "Port down" is there. Is it possible I've got something misconfigured somewhere?
  12. Moving over here from the release thread. My system boots clean and comes up, then ~10 minutes into operation it loses network. I have 2 connections: 1> eth0: 10Gb card with the Unraid IP address 2> eth2: 1Gb MB port shared with IPMI. Under 6.6.0 eth2 was configured "down" in Unraid. When I upgraded to 6.7.0-rc1 I had to shut down dockers/VMs and "port down" it again. I think that's the end of this, but let me reboot with it down and connected and make sure it sticks.
  13. Just lost net again, nothing in the syslog. Might have to revert to 6.6.0; anyone have any ideas for information to gather from the CLI before I do so? (There's a small interface-state check sketched after this post list.) Edit: Odd, I have eth0 (10Gb card) as my main interface, but I also have eth2 connected (IPMI is shared on this) but "down" in the network config. It came up that way, but now I'm seeing eth2 as "UP". That might explain things, as I now have 2 ethernet ports active on the same segment and (perhaps?) with the bridge setup I'm creating a loop that is getting blocked.
  14. Ok, I got the upgrade done, but after about 10-20 minutes the system lost network. I've got a console hooked up now, so if it happens again I can do some digging...but it looks like something went wonky in the network setup (the server could not ping anything, and nothing could ping the server).
  15. Shouldn't be, just a switch between the server and myself. Other popups work fine...like the Log window. Let me play a bit I guess, perhaps a reboot....so sad.
  16. Anyone know why I get "Connection Refused" when trying to run the update to this via the update OS Tool?
  17. Just to give you guys an option, you could drop a P2000 in your current systems (if you have an 8x slot free) and put PMS in a Windows VM. That would give you super transcode ability that is portable as you upgrade...Some people like that path as it lets them keep the MB/CPU optimized for their normal needs and offload the transcoding to dedicated hardware. Yes, but in transcoding you could do PERHAPS two 4K HEVC transcodes by fully loading both of those CPUs. With the iGPU in the i7-8700/E-2176G series you keep the majority of the CPU's power even while transcoding, thanks to the offload. I sit at about 10% CPU when transcoding on mine; I haven't done concurrent stream tests, but I have gotten up to 2 transcodes (4K HEVC -> 1080p) with only ~20% CPU usage. So I have 80% of my CPU left for more transcodes or other tasks. It is all about which component is doing which task. With the system you quoted, EVERYTHING happens on the CPU...the i7/E processors gain a huge advantage IF and only IF a big percentage of the processing is transcoding. Please remember, even when the CPU is at 20%, the iGPU is still working hard...so temp/power consumption does go up....it's not "free" ;).
  18. I don't believe so, but I didn't look into the i7 chipsets. One thing, again, to be careful of: just because there are "slots" doesn't mean you can use them. The CPU only has 16 lanes, so anything you try to run off the chipset is going to be constrained. For example, even on the Asus board only the following configurations are available (per its lane diagram): a single x16 OR dual x8 from the CPU. All other PCI-E is routed via the chipset and comes with trade-offs. Firstly, there are NO slots that will function at x8 from the chipset....even if they are physically x8 slots on the MB. Secondly, you have decisions to make...you get one x4 and two x1 slots all the time, but using the other x4 slot will disable 4 SATA ports, and if you use a SATA M.2 you lose another one. So there does not seem to be a way to get 3 functional PCI-E x8 slots on a C246 board; the limitation is the processor, which only has 16 lanes (this is the same as the i7-8700). (You can confirm what width a slot actually negotiates with the check sketched after this post list.) To get more PCI-E lanes you would have to go with a Xeon or TR, and then you can do anything you want, but you lose the iGPU. Using a workstation processor has limitations, and PCI-E lanes are the main way they limit the growth of these systems. If I were building today, I would likely go with a cheap Ryzen 2600 and a Quadro for transcoding...then I can do whatever I want on the MB/CPU front in the future and know I'm good for transcoding....I'd at least separate the upgrade cycle a bit that way! I hope this is slightly helpful :).
  19. I really hope they will get fixed. I've also had issues transcoding for a Firestick (it wouldn't hardware transcode for some reason), but for all other clients it seems bulletproof ATM...if you are ok with washed-out colors! The only benefits (as I see it) to an E series over an i7 are the (slightly) lower TDP and the slightly newer cut of the iGPU (which can be a downside at the moment, as Linux needs an update to kernel 4.20 to get it fully functional). From the platform perspective, IPMI will work with both on the C246, so not really a differentiator IMHO. I'm using the SM board and it is very nice; the developer of the IPMI plugin already helped add support for it, so I'm using that to control the fans (check that thread in the plugin support forum for details).
  20. That's not a bad deal, but if you want to compare, the new E-2176 (and the i7-8700 it is based on) has about the same Passmark score as that dual-proc system, plus an internal GPU, and if you are smart with the MB you can get the proc and MB for about the same cost. Granted, the only real benefits of that are the iGPU and lower power consumption....with power being hugely lower, 2x 130W TDP vs 1x 80W. That said, if power and the iGPU aren't a concern for you, having the extra PCI-E lanes and RAM slots would be nice....but only if you will use them. For reference: https://www.cpubenchmark.net/cpu.php?cpu=Intel+Xeon+E-2176G+%40+3.70GHz&id=3336 P.S. If you consider this route, ignore the E-2186G; there is no real benefit to that processor as it gets thermally throttled anyway.
  21. Sorry about my saga in the other thread, but it is finally over! Honestly, if I were to do this again I'd separate transcoding from the processor entirely. I'd get a P2000 or P4000, run PMS in a VM with it, and then I could focus my processor on what I need for other things. Finding a single solution with today's hardware is just not going to happen, and the landscape is changing so quickly (like with more content in Dolby Vision and HDR+ etc). That would have also opened up the AMD side of the house, and there is a lot to like about their processors from a PCI-E lane perspective, which is the biggest trade-off I'm suffering with at the moment. Anyway, as I don't think there is a good answer right now...why not see if brute power will work? I think it will quickly break down, but what do I know?!
  22. Also, be wary of 4k transcoding, even if hardware is supported the tone mapping of HDR/DV content is questionable at best.
  23. Be careful with C246 boards: the processors themselves only support 16 lanes of PCI-E, so anything above that goes via the chipset and often means you have to make a choice (like one of the M.2 slots and an x4 PCI-E slot sharing lanes, so only one works at a time). Check out the thread on the E-2186G, especially the last few pages...lots of conversation on this topic in there. The E-2186/76G are basically i7-8700s that support ECC and have a lower TDP. In benchmarks there is little to no value in the highest-end part, as thermal throttling holds it back anyway.
  24. Man, I had no idea Plex supported that! This means that when Kodi does, I can finally merge my 4K library with my SD & HD library.
  25. I believe it will draw minimal power; when the system is on, I don't think a PCI-E card ever drops to "0" as it has to stay on the bus to be usable. I'm not sure how low it goes in that case, and it likely depends on the card (so no generic answer).
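
A note on the BMC-video vs. iGPU issue from post 4: a rough way to confirm whether the iGPU is actually exposed to Linux once the OS is booted is to look for DRM render nodes and see which driver is bound to each card. This is only a minimal sketch assuming a standard Linux sysfs/dev layout (as on Unraid); it isn't Supermicro-specific, and the device names may differ on your system.

```python
#!/usr/bin/env python3
"""Quick check: is a GPU (e.g. the Intel iGPU) exposed to the OS? Sketch, standard sysfs/dev layout assumed."""
import glob
import os

# DRM nodes appear under /dev/dri once a GPU driver (e.g. i915) has bound to the device.
nodes = sorted(glob.glob("/dev/dri/*"))
print("DRM nodes:", nodes if nodes else "none found (iGPU may be disabled or hidden behind the BMC video)")

# Each card entry in sysfs links to the kernel driver that claimed it.
for card in sorted(glob.glob("/sys/class/drm/card[0-9]")):
    driver_link = os.path.join(card, "device", "driver")
    driver = os.path.basename(os.readlink(driver_link)) if os.path.islink(driver_link) else "unknown"
    print(f"{os.path.basename(card)}: driver={driver}")
```

If only the BMC's adapter shows up (typically an ASPEED device) and there is no i915-driven card or render node, the iGPU isn't usable for transcoding in that boot.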
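For the eth2 "came back up" mystery in posts 11-13, this is the kind of information worth grabbing from the CLI before reverting: the link state of each interface, read straight from sysfs. A minimal sketch assuming standard Linux /sys/class/net paths; the interface names (eth0, eth2) are just from my box.

```python
#!/usr/bin/env python3
"""Dump link state for each network interface from sysfs. Sketch, standard Linux paths assumed."""
import os

SYS_NET = "/sys/class/net"

def read(path):
    try:
        with open(path) as f:
            return f.read().strip()
    except OSError:
        return "n/a"   # e.g. carrier can't be read while the link is administratively down

for iface in sorted(os.listdir(SYS_NET)):
    base = os.path.join(SYS_NET, iface)
    operstate = read(os.path.join(base, "operstate"))  # up / down / unknown
    carrier = read(os.path.join(base, "carrier"))      # 1 = link detected on the wire
    print(f"{iface:10s} operstate={operstate:8s} carrier={carrier}")
```

If both eth0 and eth2 report operstate=up while sitting on the same segment and the same bridge, that would back up the loop theory.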
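And for the PCI-E lane discussion in posts 18 and 23: the width a slot actually negotiated can be read from sysfs, which is a quick way to see whether a card is really running at x8/x16 or got cut down by chipset lane sharing. Again just a sketch assuming the standard /sys/bus/pci layout; not every device exposes these attributes.

```python
#!/usr/bin/env python3
"""Report negotiated vs. maximum PCI-E link width per device. Sketch, standard sysfs layout assumed."""
import glob
import os

def read(path):
    try:
        with open(path) as f:
            return f.read().strip()
    except OSError:
        return None

for dev in sorted(glob.glob("/sys/bus/pci/devices/*")):
    cur = read(os.path.join(dev, "current_link_width"))
    mx = read(os.path.join(dev, "max_link_width"))
    if not cur or not mx or mx == "0":
        continue  # device doesn't report a PCI-E link width
    note = "  <-- running below its maximum" if cur != mx else ""
    print(f"{os.path.basename(dev)}: x{cur} of x{mx}{note}")
```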