saarg

Community Developer
Report Comments posted by saarg

  1. 15 minutes ago, jbartlett said:

    This is largely a case-by-case basis and depends on the motherboard, its physical PCI slot to CPU/NUMA/device connections, and even BIOS version.

     

    If you populate a video card in a given PCIe slot and it shows up in its own IOMMU group, you should be able to pass it to different VM/Dockers. If that is true for multiple PCIe slots, you should be able to populate all of those PCIe slots with a different graphics card and pass each one to a different VM/Docker.

     

    In theory.

     

    You can have many VM/Dockers set up to use a given video card but only one of those can be active at any given time.

    Not 100% correct. You can have multiple containers share the GPU, but you can't have a VM and containers use the GPU at the same time.

    To be able to use the GPU in a container, you have to unbind it from vfio so the host drivers can be loaded.
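The unbind itself is just a couple of sysfs writes. A minimal sketch of that step, with the caveat that the PCI address 0000:00:02.0 is an example (find yours with `lspci -nn`), and SYS defaults to a scratch directory so the commands can be rehearsed safely; set SYS=/sys/bus/pci and run as root to do it for real:

```shell
# Rehearsable sketch of releasing a GPU from vfio-pci.
SYS="${SYS:-$(mktemp -d)}"          # scratch dir by default; /sys/bus/pci for real
GPU="0000:00:02.0"                  # example PCI address; check lspci -nn

# These two lines only matter for the rehearsal: they mirror the sysfs layout.
mkdir -p "$SYS/drivers/vfio-pci"
: > "$SYS/drivers_probe"

echo "$GPU" > "$SYS/drivers/vfio-pci/unbind"   # detach the device from vfio-pci
echo "$GPU" > "$SYS/drivers_probe"             # ask the kernel to rebind a host driver
```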

  2. 5 hours ago, mbuboltz said:

    I really going to miss being able to connect to my Unraid shares from work and where ever I go. SMB does not work over the internet and VPN's are too slow. Really sad to see Unraid going this direction and dropping support for something that still works great.

    A VPN is not inherently slow; it's your setup that is slow. You might get slow speeds if your VPN server is running on a slow device.

    • Like 1
  3. 1 hour ago, limetech said:

    That's a little unfair.  Individual USB assignment has always been problematic - on again/off again.  It stems from interaction between qemu/kvm, both of which we keep up with latest releases, both of which we have no control over.  I told you the most reliable method to get USB devices to work is to pass through the entire controller.  It's exactly what I do with the VM running win10 that I'm typing this with, because I too have seen cases where a logitech wireless receiver simply didn't work reliably, but passing through the controller itself is very reliable.

    Wait a moment... You use win10?! I always imagined you running command-line-only Slackware 😛

  4. 7 hours ago, joshbgosh10592 said:

    No worries, thank you!

    So, just to be sure, I'd add the kvm-intel.nested=1  in the Unraid OS section, so it's exactly as below?

    
    kernel /bzimage
    append initrd=/bzroot kvm-intel.nested=1
    
    

    But then where do you go to edit the VM's CPU section? Like, where are the config files for them? I'm assuming /etc/libvirt/qemu/VMName.xml, correct?

    There is a toggle in the top right corner of the VM template that switches to XML view.
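Once kvm-intel.nested=1 is in place and the server has rebooted, it's worth confirming nesting is actually on before touching the VM's XML. A small sketch; the parameter path shown is for Intel (AMD uses /sys/module/kvm_amd/parameters/nested):

```shell
# Succeeds when the kvm_intel "nested" parameter reads Y or 1.
nested_enabled() {
    f="${1:-/sys/module/kvm_intel/parameters/nested}"
    [ -f "$f" ] && grep -qx -e Y -e 1 "$f"
}

nested_enabled && echo "nested virtualization is on" \
               || echo "nested is off or kvm_intel is not loaded"
```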

  5. 2 hours ago, bonienl said:

    Eh, no. When no static IP address is set, the container will get an IP address assigned by DHCP.

    In this case it is necessary to set a correct DHCP pool, which does not conflict with the DHCP of your router.

    Port exposure doesn't change

    I probably didn't explain it well enough.

    I meant when you use a custom bridge; then Docker assigns the container an IP.

    If one creates the custom network with the macvlan switch enabled, it works as you say.

  6. I believe that is only true if you set an IP for the container. If you do not set an IP on a custom network, the port mapping must be set or no ports are published.

    The container is then still available through Unraid's IP and the port in the port mapping.

     

    1 hour ago, bonienl said:

    When using custom networks (macvlan) all ports of the container are always exposed.

    You may want to list the ports in use by the container for clarification, but it won't have any effect on the operation whether this list is present or not.

     

     

  7. I think there are very few cases where one needs to edit the container port; we might have one container where it's needed. So in my opinion it would be nice to be able to do so when you are not using the default bridge.

    You can work around it by adding a new port mapping. Then you will have access to set the container port. At least on 6.7.2.

     

     

    The port mappings do work even on custom bridges. It seems to me that you are both misunderstanding each other.

    Marshalleq is talking about a custom bridge and bonienl is talking about setting a dedicated IP on a custom network?

     

     

     

    • Like 1
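The distinction being discussed above is easy to see with plain docker commands. A dry-run sketch, where the network name mynet, the image, and the ports are all examples; DOCKER defaults to echoing the commands so nothing is created, and dropping the echo runs them for real:

```shell
# Dry-run by default: commands are printed, not executed.
DOCKER="${DOCKER:-echo docker}"

$DOCKER network create mynet                       # user-defined bridge, no static IPs
$DOCKER run -d --network mynet -p 8080:80 nginx    # published: <unraid-ip>:8080 -> 80
$DOCKER run -d --network mynet nginx               # no -p and no static IP: nothing published
```

With a dedicated (macvlan) IP instead, all container ports are reachable on that IP and the mappings become purely informational.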
  8. 10 hours ago, markinsutton said:

    I did try this, all containers reported they were updated, but when I opened Home Assistant it wasn't the latest version. I will try again, maybe I missed a step.

    This bug doesn't affect the ability to update the containers. It just always shows an update as available, even when there isn't one. If there had been an update, the container would have been updated.

    So if your container isn't running the latest version, it's not caused by this bug.

  9. 18 minutes ago, dalben said:

    I don't use Mover. The stalls come when one of the media download dockers pulls down a file into cache, then moves it into the library, which is on the array.

     

    Or if I am just working on other stuff and accidentally start a file copy or move while media is streaming. 

    Stop stealing movies then until 6.8 is out.

    • Haha 4
  10. I don't know how others build their containers, but we build everything in Jenkins (there might be an exception or two), and I'm not sure everyone else does that. So that might be the difference. Something might have happened with the API regarding pushed images.

     

    However, I don't see how this is something we could fix, as we don't host any images ourselves; everything is on Docker Hub, so any issue with checking for updates is out of our control. It's either Docker that has an issue, or Unraid's update check (a change in the API in newer versions?).

     

     

  11. 5 hours ago, switch said:

    I know that ARK does not specifically mention QuickSync support, BUT... J4105 has a UHD600 graphics built in and UHD600 does indeed support QuickSync. This is further evidenced by HW transcoding working perfectly fine on unRAID 6.6.7

     

    https://en.wikichip.org/wiki/intel/celeron/j4105#Graphics

    https://en.wikichip.org/wiki/intel/uhd_graphics/600#Hardware_Accelerated_Video

     

    Then it's probably kernel related. Do you get the entry in the Plex log saying that hybrid_drv_video.so is missing?

     

    Have you confirmed in 6.6.7 that CPU usage is lower and that it's not just Plex claiming to use hardware transcoding?

    I'm asking in case the chip doesn't support Quick Sync and there was an error in the earlier kernel.

  12. 15 hours ago, nraygun said:

    If I'm on 6.7.1 with a cache drive and am experiencing no issues, is it advisable to go to 6.7.2? Or should I leave well enough alone given this investigation will probably yield yet another update?

    If you have a cache disk and use /mnt/cache for the appdata, you will have no problem updating. Those that have issues have had them since 6.7.

    • Like 1
  13. If you get a "Failed to wrapper hybrid_drv_video.so" error, it's because Plex didn't include that library in their binary package.

    We also got this error from users of our Plex container who are not running Unraid, so it's a Plex issue and should be reported there.

     

    @Maxrad

    Your post doesn't really belong in this bug report; it would be better to open your own thread. But you have probably not added modprobe i915 to your go file.

     

    As for you guys using the J4105: from a quick Google search, it doesn't look like it supports Quick Sync.
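A quick way to check whether the modprobe worked is to look for the render node the i915 driver creates under /dev/dri. A sketch; the directory argument only exists so the function can be pointed at a test path:

```shell
# Succeeds when an Intel render node (renderD*) exists under the given directory.
has_render_node() {
    dir="${1:-/dev/dri}"
    [ -d "$dir" ] && ls "$dir" | grep -q '^renderD'
}

has_render_node && echo "i915 render node present" \
                || echo "no render node: check that 'modprobe i915' is in your go file"
```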
