TheGrownUpGeek

Everything posted by TheGrownUpGeek

  1. Hi all, I am having an issue with the Nginx-Proxy-Manager-Official container. I have the container running on a custom network (br0) with a static private IP set. When launching the container I change the values for the HTTP and HTTPS ports to 8080 and 80443 respectively; however, when the container runs it still opens the ports on their default values of 80 and 443. Has anyone seen this issue before, and if so, what was the fix?
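     For anyone wondering why this happens: on a custom Docker network such as br0 the container gets its own IP, so published port mappings (-p host:container) are silently ignored and the application answers on whatever ports it binds internally. A minimal sketch of the behaviour; the network name, IP, ports, and image tag here are placeholder assumptions:

         # On a macvlan-style custom network (br0), -p mappings do nothing:
         # the container is reached directly on its own IP.
         docker run -d --name npm \
           --network br0 --ip 192.168.1.50 \
           -p 8080:80 -p 8443:443 \
           jc21/nginx-proxy-manager:latest

         # NPM still answers on 192.168.1.50:80 and :443; moving the ports
         # means changing what the app itself listens on, not the -p mapping.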
  2. Thanks rich and econaut, that's solved it. Transcoding is seamless now and CPU usage has dropped right down, so it appears to be working. Thanks very much for the help.
  3. I get the following error when starting the container. Did you need to do anything in advance to isolate the APU or add drivers, or did it just work?
     docker: Error response from daemon: error gathering device information while adding custom device "/dev/dri": no such file or directory
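     For anyone else hitting this: the error means the host itself has no /dev/dri node to hand to the container, usually because no GPU kernel driver is loaded. A quick sanity check on the unraid host (assuming an AMD APU, hence the amdgpu module):

         ls -l /dev/dri        # expect card0 and renderD128 entries
         lsmod | grep amdgpu   # the driver must be loaded for /dev/dri to exist
         modprobe amdgpu       # try loading it manually if it is missing

         # Only once /dev/dri exists on the host will this container flag work:
         #   --device /dev/dri:/dev/dri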
  4. @econaut, could you share the full steps required to get this working in Emby for HW transcoding on a 5700?
  5. Hi @Asian23, this is the use case I am looking at. Would you mind sharing the steps you took to pass the APU through to the Plex docker for transcoding, and confirming that this was on the X570? What is the transcoding performance like in Plex compared to before?
  6. Hi, I have noticed an issue when using this container in custom bridged mode. The container starts and the VPN connection is established, however I am unable to access the Deluge web UI. If I set my browser to use privoxy as a proxy I can access the web UI fine, but as soon as I remove the proxy config from my browser I am again unable to reach it. All other services continue to function as normal, and a curl of ifconfig.io shows that all traffic is routed through the VPN service as expected. How can this be resolved? Happy to provide any further info as needed; it seems crazy to have to enable the proxy in my browser just to get to the web UI. Checking the supervisord log, it appears (below) that the web UI is being bound to the internal VPN tunnel address rather than 192.168.1.1 as it should be:
     [debug] VPN IP is 10.7.0.6
     [debug] Deluge IP is 10.7.0.6
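     One check that narrows this down is to look at which address the Deluge web UI is actually bound to inside the container; the container name below is a placeholder, and netstat can be swapped for ss depending on what the image ships:

         docker exec binhex-delugevpn sh -c 'netstat -tlnp 2>/dev/null | grep 8112'

         # 10.7.0.6:8112 (the tunnel IP) would mean the web UI is only
         # reachable via the VPN/proxy path, matching the symptom above;
         # 0.0.0.0:8112 would mean it should answer on the LAN address.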
  7. Hi all, I have been searching for a while and have found conflicting answers, so I am trying to get a definitive response. My unraid server is headless and I only connect to the admin GUI remotely or via SSH, so my understanding is that it should be possible to pass the integrated video controller through to a VM. (I do have a discrete graphics card, but that is already in use in a gaming VM; I would like to pass through the integrated GPU for some video rendering and conversion tasks in a separate VM.) I have a Ryzen 3400G CPU with built-in Vega 11 graphics and wondered if it would be possible to pass this through to a VM. Having watched the awesome @SpaceInvaderOne video on breaking up IOMMU groups, I think this may be possible, but I thought I would ask first as I want to avoid breaking anything in the array. I have tried the simple option of enabling "PCIe ACS override" in the VM Manager Settings window, however that does not split the adapter out into its own group, which prevents me from starting the VM.
     Below is the IOMMU group that contains the Vega VGA adapter:
     IOMMU group 1:
       [1022:1452] 00:08.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-1fh) PCIe Dummy Host Bridge
       [1022:15db] 00:08.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Raven/Raven2 Internal PCIe GPP Bridge 0 to Bus A
       [1022:15dc] 00:08.2 PCI bridge: Advanced Micro Devices, Inc. [AMD] Raven/Raven2 Internal PCIe GPP Bridge 0 to Bus B
       [1002:15d8] 06:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Picasso (rev c8)
       [1002:15de] 06:00.1 Audio device: Advanced Micro Devices, Inc. [AMD/ATI] Raven/Raven2/Fenghuang HDMI/DP Audio Controller
       [1022:15df] 06:00.2 Encryption controller: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 10h-1fh) Platform Security Processor
       [1022:15e0] 06:00.3 USB controller: Advanced Micro Devices, Inc. [AMD] Raven USB 3.1
       [1022:15e1] 06:00.4 USB controller: Advanced Micro Devices, Inc. [AMD] Raven USB 3.1
       [1022:15e3] 06:00.6 Audio device: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 10h-1fh) HD Audio Controller
       [1022:7901] 07:00.0 SATA controller: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] (rev 61)
     I have 2 questions:
     1. If I were to append the syslinux config with pcie_acs_override=id:1002:15d8 (the VGA device's ID from the listing above; a sketch of the full append line follows this post), would that break out the device so that it can be passed through to a VM, and what, if any, are the downsides of overriding the IOMMU groups assigned by the system?
     2. Is there a different method that will allow the integrated GPU to be used, and if so, how can this be achieved?
     Thanks in advance, and if you require any further info let me know.
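     For reference, a minimal sketch of what that append line would look like in /boot/syslinux/syslinux.cfg, based on the stock unraid boot entry; treat this as an illustration rather than a tested config:

         label unRAID OS
           menu default
           kernel /bzimage
           append pcie_acs_override=id:1002:15d8 initrd=/bzroot

         # The blanket form splits every downstream port instead of one device:
         #   append pcie_acs_override=downstream,multifunction initrd=/bzroot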
  8. I know this is a slightly old thread, but I am in the same situation while I wait for a replacement discrete GPU to arrive. Is it still not possible to pass through a Ryzen integrated GPU to a VM/container?
  9. OK cool, I will do it that way, thanks for the reply.
  10. Hi, I have an issue when deploying this container. When deployed using the bridge interface everything works and the web UI is accessible. If I use a custom bridge interface and specify a manual IP, the container runs and mostly functions as expected: privoxy works, downloads added via other sources work, and in the console curl ifconfig.io returns the address of the VPN server I am connecting to. However, the web UI cannot be accessed, either from within unraid or directly via http://[containerIP]:8112. Should this container support the use of a custom IP address on the container for accessing the web UI?
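      A quick way to demonstrate the symptom from another machine on the LAN; the IP and ports below are placeholders for whatever the container is assigned (8118 assumed for privoxy):

          curl -m 5 -s -o /dev/null -w '%{http_code}\n' http://192.168.1.50:8112   # times out
          curl -s -x http://192.168.1.50:8118 ifconfig.io                          # works via privoxy

          # The proxy answering while the web UI port does not suggests the
          # container is reachable, but deluge-web is not bound to that address.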
  11. Hi, I have an issue when deploying this container. When deployed using the bridge interface everything works and the web UI is accessible. If I use a custom bridge interface and specify a manual IP, the container runs and mostly functions as expected: privoxy works, downloads added via other sources work, and in the console curl ifconfig.io returns the address of the VPN server I am connecting to. However, the web UI cannot be accessed, either from within unraid or directly via http://[containerIP]:9080. Should this container support the use of a custom IP address on the container for accessing the web UI?
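      Two more checks, this time from the unraid host itself: confirm the address the container actually received on the custom bridge, then probe the web UI port directly (the container name and port here are placeholder assumptions):

          docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <container-name>
          curl -m 5 -s -o /dev/null -w '%{http_code}\n' http://<containerIP>:9080

          # A timeout here, with the container IP correct, points at the web UI
          # binding inside the container rather than at the custom bridge itself.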