ZooMass

Members
  • Content Count

    23
  • Joined

Community Reputation

3 Neutral

About ZooMass

  • Rank
    Member


  1. My question is very similar to this one. I have an arch-rtorrentvpn container (VPN disabled) using the network stack of a dedicated arch-privoxyvpn container via the --net=container:vpn parameter, and I am trying to set up port forwarding on the vpn container for rtorrent. The arch-rtorrentvpn container automatically acquires the forwarded port whenever the PIA endpoint in use supports it. I am aware of PIA's next-gen upgrade disabling port forwarding, and I am primarily using their Israel, Romania, and CA Montreal servers. The arch-privoxyvpn container connects to those endpoints successfully, but it doesn't do the same automatic port forwarding that the arch-rtorrentvpn and arch-delugevpn containers do. Is there a setting to force this? I assume the container supports it, since the binhex containers share the same startup procedure. Manually creating a STRICT_PORT_FORWARD variable in arch-privoxyvpn (like in the other two containers) has no effect. Even though I am using PIA, there is a log line that says:

         2020-09-16 15:23:54,195 DEBG 'start-script' stdout output:
         [info] Application does not require port forwarding or VPN provider is != pia, skipping incoming port assignment

     Is using the ADDITIONAL_PORTS variable equivalent to just adding a new port to the template? Is the VPN_OPTIONS variable just extra parameters for the /usr/bin/openvpn command?
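     A minimal sketch of the setup described above, assuming binhex's usual VPN_ENABLED, VPN_PROV, and STRICT_PORT_FORWARD variables; the container names here are placeholders:

         # Dedicated VPN container; STRICT_PORT_FORWARD is passed exactly as
         # the deluge/rtorrent templates pass it. Whether arch-privoxyvpn
         # actually acts on it is the open question in this post.
         docker run -d --name=vpn \
           --cap-add=NET_ADMIN \
           -e VPN_ENABLED=yes \
           -e VPN_PROV=pia \
           -e STRICT_PORT_FORWARD=yes \
           binhex/arch-privoxyvpn

         # Client container with its own VPN disabled, sharing the vpn
         # container's network stack.
         docker run -d --name=rtorrent \
           --net=container:vpn \
           -e VPN_ENABLED=no \
           binhex/arch-rtorrentvpn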
  2. Jitsi?

     +1 demand unit. Help us SpaceInvaderOne, you are our only hope (not tagging you because I'm sure you're already annoyed with the last three @'s in this topic).
  3. Having the same problem accessing the web UI. I am joining the VPN container's network by setting the container's network to container:vpn directly rather than via the "--net=container:vpn" extra parameter, on unRAID 6.8.3 with Docker version 19.03.5.
  4. I have been experiencing the same issue with Jackett and LazyLibrarian. There has been some discussion of this web UI issue over in the binhex-privoxyvpn thread (I lay out my details there). For the record, a lot of people are using that container instead of binhex-delugevpn as a dedicated VPN container. Any ideas or advice would be welcome!
  5. Thank you for the quick response! My setup looks essentially the same as yours, with the VPN container named simply vpn, and unfortunately I still cannot access the web UI, just a 404. One thing I tried was changing the network from a custom public Docker network I use (to isolate public-facing containers from the rest) to the plain bridge network like yours; the client container still receives the VPN IP, but I still can't reach the web UI. I also tried disabling my adblocker, and as expected it made no difference. The client container is named jackettvpn because I modified my existing container, but that container's own VPN is disabled.
  6. Thank you for these very clear instructions! I was just looking for something like this after hitting my VPN device license limit, and SpaceInvader One released this timely video. Like a lot of you I wanted to use a dedicated container instead of binhex-delugevpn, and binhex-privoxyvpn is perfect for the job.

     However, I'm unable to access the client containers' web UIs. I've tested with linuxserver/lazylibrarian (to hide libgen direct downloads) and linuxserver/jackett (migrating from dyonr/jackettvpn, but I also tried a clean image). I'm on unRAID 6.8.3, and I've tried both setting the container's network to container:vpn directly and using the "--net=container:vpn" extra parameter. (Also, for the record, "docker run" complains when you select a custom container: network in the dropdown and still have translated ports, so be sure to remove the port mappings at the same time you change the network.)

     I've added the client containers' ports (5299 and 9117 respectively for my two test containers) to the binhex-privoxyvpn container, named vpn, restarted vpn, and rebuilt and restarted the client containers; a sketch of the equivalent docker commands follows this post. I still can't reach either web UI on [host IP]:5299 or [host IP]:9117. Inside the client containers I can curl ifconfig.io and I receive my VPN IP, so the container networking itself seems to work; the client web UI is the only issue. I've seen a couple of people in the comments on SpaceInvader One's video report the same problem. Has anyone else experienced or fixed this? Would love to have this setup work out!
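     A condensed sketch of the setup described above, assuming the client ports are published on the VPN container and the clients share its stack (the names and ports 5299/9117 are from this post; paths and other variables are omitted):

         # The client UIs' ports must be published on the vpn container,
         # since the clients have no network stack of their own.
         docker run -d --name=vpn \
           --cap-add=NET_ADMIN \
           -p 5299:5299 -p 9117:9117 \
           -e VPN_ENABLED=yes \
           binhex/arch-privoxyvpn

         # Clients join the vpn container's stack; they must carry no -p
         # mappings of their own.
         docker run -d --name=lazylibrarian --net=container:vpn linuxserver/lazylibrarian
         docker run -d --name=jackett --net=container:vpn linuxserver/jackett

         # Sanity checks from inside a client: the egress IP should be the
         # VPN's, and the UI should answer on localhost even when
         # [host IP]:9117 does not.
         docker exec jackett curl -s ifconfig.io
         docker exec jackett curl -sI http://localhost:9117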
  7. I'm having trouble using the "Tiered FFMPEG NVENC settings depending on resolution" plugin with ID "Tdarr_Plugin_d5d3_iiDrakeii_FFMPEG_NVENC_Tiered_MKV". It says it can't find my GPU.

     Command:

         /home/Tdarr/Tdarr/bundle/programs/server/assets/app/ffmpeg/ffmpeg42/ffmpeg -c:v h264_cuvid -i '/home/Tdarr/Media/Television/Stranger Things/Season 03/Stranger Things - S03E01 - Chapter One- Suzie, Do You Copy [HDTV-1080p].mkv' -map 0 -dn -c:v hevc_nvenc -pix_fmt p010le -rc:v vbr_hq -qmin 0 -cq:V 31 -b:v 2500k -maxrate:v 5000k -preset slow -rc-lookahead 32 -spatial_aq:v 1 -aq-strength:v 8 -a53cc 0 -c:a copy -c:s copy '/home/Tdarr/cache/Stranger Things - S03E01 - Chapter One- Suzie, Do You Copy [HDTV-1080p]-TdarrCacheFile-p1cwX-Dg.mkv'

     Output:

         ffmpeg version N-95955-g12bbfc4 Copyright (c) 2000-2019 the FFmpeg developers
           built with gcc 7 (Ubuntu 7.4.0-1ubuntu1~18.04.1)
           configuration: --prefix=/home/z/ffmpeg_build --pkg-config-flags=--static --extra-cflags=-I/home/z/ffmpeg_build/include --extra-ldflags=-L/home/z/ffmpeg_build/lib --extra-libs='-lpthread -lm' --bindir=/home/z/bin --enable-gpl --enable-libass --enable-libfdk-aac --enable-libfreetype --enable-libmp3lame --enable-libopus --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libx265 --enable-nonfree
           libavutil      56. 36.101 / 56. 36.101
           libavcodec     58. 64.101 / 58. 64.101
           libavformat    58. 35.101 / 58. 35.101
           libavdevice    58.  9.101 / 58.  9.101
           libavfilter     7. 67.100 /  7. 67.100
           libswscale      5.  6.100 /  5.  6.100
           libswresample   3.  6.100 /  3.  6.100
           libpostproc    55.  6.100 / 55.  6.100
         Guessed Channel Layout for Input Stream #0.1 : 5.1
         Input #0, matroska,webm, from '/home/Tdarr/Media/Television/Stranger Things/Season 03/Stranger Things - S03E01 - Chapter One- Suzie, Do You Copy [HDTV-1080p].mkv':
           Metadata:
             encoder         : libebml v1.3.5 + libmatroska v1.4.8
             creation_time   : 2019-07-04T07:03:27.000000Z
           Duration: 00:50:33.63, start: 0.000000, bitrate: 7850 kb/s
           Chapter #0:0: start 306.015000, end 354.521000
             Metadata:
               title           : Intro start
           Chapter #0:1: start 354.521000, end 3033.632000
             Metadata:
               title           : Intro end
           Stream #0:0: Video: h264 (Main), yuv420p(progressive), 1920x1080 [SAR 1:1 DAR 16:9], 23.98 fps, 23.98 tbr, 1k tbn, 47.95 tbc (default)
             Metadata:
               BPS-eng         : 7205368
               DURATION-eng    : 00:50:33.573000000
               NUMBER_OF_FRAMES-eng: 72733
               NUMBER_OF_BYTES-eng: 2732251549
               _STATISTICS_WRITING_APP-eng: mkvmerge v21.0.0 ('Tardigrades Will Inherit The Earth') 64-bit
               _STATISTICS_WRITING_DATE_UTC-eng: 2019-07-04 07:03:27
               _STATISTICS_TAGS-eng: BPS DURATION NUMBER_OF_FRAMES NUMBER_OF_BYTES
           Stream #0:1(eng): Audio: eac3, 48000 Hz, 5.1, fltp (default)
         ...
         Stream #0:29 -> #0:29 (copy)
         Stream #0:30 -> #0:30 (copy)
         Stream #0:31 -> #0:31 (copy)
         Stream #0:32 -> #0:32 (copy)
         Press [q] to stop, [?] for help
         [hevc_nvenc @ 0x55aaaad84e40] Codec not supported
         [hevc_nvenc @ 0x55aaaad84e40] No capable devices found
         Error initializing output stream 0:0 -- Error while opening encoder for output stream #0:0 - maybe incorrect parameters such as bit_rate, rate, width or height
         Conversion failed!

     I have an EVGA GeForce GTX 760, obviously an older card; nvidia-smi doesn't fully support it:

         Tue Mar 10 13:54:11 2020
         +-----------------------------------------------------------------------------+
         | NVIDIA-SMI 440.59       Driver Version: 440.59       CUDA Version: 10.2     |
         |-------------------------------+----------------------+----------------------+
         | GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
         | Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
         |===============================+======================+======================|
         |   0  GeForce GTX 760     Off  | 00000000:08:00.0 N/A |                  N/A |
         |  0%   35C    P0    N/A /  N/A |      0MiB /  1997MiB |     N/A      Default |
         +-------------------------------+----------------------+----------------------+
         +-----------------------------------------------------------------------------+
         | Processes:                                                       GPU Memory |
         |  GPU       PID   Type   Process name                             Usage      |
         |=============================================================================|
         |    0                    Not Supported                                       |
         +-----------------------------------------------------------------------------+

     However, my linuxserver/plex and linuxserver/emby containers do manage to use it for hardware transcoding. I made sure to set all the correct Docker template variables, including --runtime=nvidia, NVIDIA_DRIVER_CAPABILITIES=all, and NVIDIA_VISIBLE_DEVICES=<GPU ID>, and I have Linuxserver Unraid Nvidia 6.8.3 installed. Any tips? I would really like to be able to transcode on the GPU; I've been brutally punishing my CPU for days slowly transcoding on Unmanic.
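     The GTX 760 is a Kepler-generation card, and NVENC HEVC encoding only arrived with later generations (second-generation Maxwell and newer), which would explain hevc_nvenc failing while Plex/Emby H.264 transcoding works. A quick way to test, assuming the same ffmpeg binary as above:

         # List the NVENC encoders this build was compiled with.
         ffmpeg -hide_banner -encoders | grep nvenc

         # Throwaway one-second test encodes: if h264_nvenc succeeds where
         # hevc_nvenc reports "No capable devices found", the GPU simply
         # lacks HEVC encode support.
         ffmpeg -hide_banner -f lavfi -i testsrc=duration=1:size=1280x720:rate=24 \
           -c:v hevc_nvenc -f null -
         ffmpeg -hide_banner -f lavfi -i testsrc=duration=1:size=1280x720:rate=24 \
           -c:v h264_nvenc -f null -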
  8. Unmanic is another good container. It's dead simple: you just point it at a directory and it converts H.264 video files to HEVC.
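     A minimal run sketch, assuming the josh5/unmanic image with its web UI on port 8888 and its /config and /library mounts; the host paths here are hypothetical examples, not from the post:

         docker run -d --name=unmanic \
           -p 8888:8888 \
           -v /mnt/user/appdata/unmanic:/config \
           -v /mnt/user/Media:/library \
           josh5/unmanic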
  9. Super happy to see that we have resolved the issue! Thank you @Rich Minear @limetech and everyone else! I look forward to finally confidently upgrading from 6.6.7!
  10. That was it! Yes, I am running 6.6.7 because my machine hit the SQLite data corruption bug under investigation on 6.7+, and it did not occur on 6.6.7. Thanks for your help!
  11. Just ran the container and tried to start the VM, but got this error:

          internal error: process exited while connecting to monitor: 2019-10-27T19:31:11.597434Z qemu-system-x86_64: -machine pc-q35-3.1,accel=kvm,usb=off,dump-guest-core=off,mem-merge=off: unsupported machine type
          Use -machine help to list supported machines

      I am using the --catalina OS flag. The only change I made to the container is that I changed the VM images location to /mnt/cache/domains, but that should be the same as /mnt/user/domains anyway. Anybody seen anything like this? My XML:

          <?xml version='1.0' encoding='UTF-8'?>
          <domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
            <name>MacinaboxCatalina</name>
            <uuid>51e88f1c-a2ab-43af-981f-80483d6600c8</uuid>
            <description>MacOS Catalina</description>
            <metadata>
              <vmtemplate xmlns="unraid" name="MacOS" icon="/mnt/user/domains/MacinaboxCatalina/icon/catalina.png" os="Catalina"/>
            </metadata>
            <memory unit='KiB'>4194304</memory>
            <currentMemory unit='KiB'>4194304</currentMemory>
            <memoryBacking>
              <nosharepages/>
            </memoryBacking>
            <vcpu placement='static'>2</vcpu>
            <cputune>
              <vcpupin vcpu='0' cpuset='0'/>
              <vcpupin vcpu='1' cpuset='1'/>
            </cputune>
            <os>
              <type arch='x86_64' machine='pc-q35-3.1'>hvm</type>
              <loader readonly='yes' type='pflash'>/mnt/user/domains/MacinaboxCatalina/ovmf/OVMF_CODE.fd</loader>
              <nvram>/mnt/user/domains/MacinaboxCatalina/ovmf/OVMF_VARS.fd</nvram>
            </os>
            <features>
              <acpi/>
              <apic/>
            </features>
            <cpu mode='host-passthrough' check='none'>
              <topology sockets='1' cores='2' threads='1'/>
            </cpu>
            <clock offset='utc'>
              <timer name='rtc' tickpolicy='catchup'/>
              <timer name='pit' tickpolicy='delay'/>
              <timer name='hpet' present='no'/>
            </clock>
            <on_poweroff>destroy</on_poweroff>
            <on_reboot>restart</on_reboot>
            <on_crash>restart</on_crash>
            <devices>
              <emulator>/usr/local/sbin/qemu</emulator>
              <disk type='file' device='disk'>
                <driver name='qemu' type='qcow2' cache='writeback'/>
                <source file='/mnt/user/domains/MacinaboxCatalina/Clover.qcow2'/>
                <target dev='hdc' bus='sata'/>
                <boot order='1'/>
                <address type='drive' controller='0' bus='0' target='0' unit='2'/>
              </disk>
              <disk type='file' device='disk'>
                <driver name='qemu' type='raw' cache='writeback'/>
                <source file='/mnt/user/domains/MacinaboxCatalina/Catalina-install.img'/>
                <target dev='hdd' bus='sata'/>
                <address type='drive' controller='0' bus='0' target='0' unit='3'/>
              </disk>
              <disk type='file' device='disk'>
                <driver name='qemu' type='raw' cache='writeback'/>
                <source file='/mnt/user/domains/MacinaboxCatalina/macos_disk.img'/>
                <target dev='hde' bus='sata'/>
                <address type='drive' controller='0' bus='0' target='0' unit='4'/>
              </disk>
              <controller type='usb' index='0' model='ich9-ehci1'>
                <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>
              </controller>
              <controller type='usb' index='0' model='ich9-uhci1'>
                <master startport='0'/>
                <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
              </controller>
              <controller type='usb' index='0' model='ich9-uhci2'>
                <master startport='2'/>
                <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
              </controller>
              <controller type='usb' index='0' model='ich9-uhci3'>
                <master startport='4'/>
                <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/>
              </controller>
              <controller type='sata' index='0'>
                <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
              </controller>
              <controller type='pci' index='0' model='pcie-root'/>
              <controller type='pci' index='1' model='pcie-root-port'>
                <model name='pcie-root-port'/>
                <target chassis='1' port='0x10'/>
                <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
              </controller>
              <controller type='pci' index='2' model='pcie-root-port'>
                <model name='pcie-root-port'/>
                <target chassis='2' port='0x11'/>
                <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
              </controller>
              <controller type='pci' index='3' model='pcie-root-port'>
                <model name='pcie-root-port'/>
                <target chassis='3' port='0x12'/>
                <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
              </controller>
              <controller type='pci' index='4' model='pcie-root-port'>
                <model name='pcie-root-port'/>
                <target chassis='4' port='0x13'/>
                <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
              </controller>
              <controller type='virtio-serial' index='0'>
                <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
              </controller>
              <interface type='bridge'>
                <mac address='52:54:00:71:f5:95'/>
                <source bridge='br0'/>
                <model type='vmxnet3'/>
                <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
              </interface>
              <serial type='pty'>
                <target type='isa-serial' port='0'>
                  <model name='isa-serial'/>
                </target>
              </serial>
              <console type='pty'>
                <target type='serial' port='0'/>
              </console>
              <channel type='unix'>
                <target type='virtio' name='org.qemu.guest_agent.0'/>
                <address type='virtio-serial' controller='0' bus='0' port='1'/>
              </channel>
              <input type='tablet' bus='usb'>
                <address type='usb' bus='0' port='1'/>
              </input>
              <input type='mouse' bus='ps2'/>
              <input type='keyboard' bus='ps2'/>
              <graphics type='vnc' port='-1' autoport='yes' websocket='-1' listen='0.0.0.0' keymap='en-us'>
                <listen type='address' address='0.0.0.0'/>
              </graphics>
              <video>
                <model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1' primary='yes'/>
                <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
              </video>
              <memballoon model='virtio'>
                <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
              </memballoon>
            </devices>
            <qemu:commandline>
              <qemu:arg value='-usb'/>
              <qemu:arg value='-device'/>
              <qemu:arg value='usb-kbd,bus=usb-bus.0'/>
              <qemu:arg value='-device'/>
              <qemu:arg value='isa-applesmc,osk=IAMZOOMASSANDIREMOVEDTHEMAGICWORDS'/>
              <qemu:arg value='-smbios'/>
              <qemu:arg value='type=2'/>
              <qemu:arg value='-cpu'/>
              <qemu:arg value='Penryn,kvm=on,vendor=GenuineIntel,+invtsc,vmware-cpuid-freq=on,+pcid,+ssse3,+sse4.2,+popcnt,+avx,+aes,+xsave,+xsaveopt,check'/>
            </qemu:commandline>
          </domain>
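      The error above names the machine type, so one first check, assuming unRAID's bundled QEMU at the emulator path from the XML, is to list what that build supports, exactly as the message suggests:

          # List the q35 machine types this QEMU build knows about.
          /usr/local/sbin/qemu -machine help | grep q35

          # If pc-q35-3.1 is missing, point the XML at a version that is
          # listed (3.0 here is only an example; pick one from the output):
          #   <type arch='x86_64' machine='pc-q35-3.0'>hvm</type>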
  12. Just updated my Asus Prime B450-A to the 7/29/2019 BIOS version 1607, which says it updates AGESA to 1.0.0.3 AB. I then tried to pass through a GeForce GTX 760 to a Windows 10 VM, but I still get the same D3 error. EDIT: My CPU is an AMD Ryzen 7 2700.
  13. I'm trying to access the web UI through a reverse proxy, but when I go to https://mydomain.com/thelounge it just responds with a page that says "Cannot GET /thelounge/". I'm using update 15.05.19, the latest image as of posting, and I made no changes to the default template except setting the network to my public Docker network (where containers can resolve each other's hostnames). I made no changes to /config/config.js except setting "reverseProxy: true,". I use the jselage/nginxproxymanager container; here is my location block:

          location /thelounge {
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-Scheme $scheme;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header X-Forwarded-For $remote_addr;
            proxy_pass http://thelounge:9000;
            auth_request /auth-0;
          }

      It's a bit different from the block in https://thelounge.chat/docs/guides/reverse-proxies#nginx, but I also tried adding the suggested lines and they made no difference. I also tried this block from the linuxserver config, even though I don't use the linuxserver/letsencrypt container; same result:

          # thelounge does not require a base url setting
          location /thelounge {
              return 301 $scheme://$host/thelounge/;
          }

          location ^~ /thelounge/ {
              # enable the next two lines for http auth
              #auth_basic "Restricted";
              #auth_basic_user_file /config/nginx/.htpasswd;

              # enable the next two lines for ldap auth, also customize and enable ldap.conf in the default conf
              #auth_request /auth;
              #error_page 401 =200 /login;

              include /config/nginx/proxy.conf;
              resolver 127.0.0.11 valid=30s;
              set $upstream_thelounge thelounge;
              rewrite /thelounge(.*) $1 break;
              proxy_pass http://$upstream_thelounge:9000;
          }

      Any other ideas?
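      One difference between the first block above and The Lounge's own nginx guide is the WebSocket upgrade headers; a minimal sketch combining the guide's headers with the linuxserver-style prefix rewrite (the /thelounge path and container name are from this post):

          location ^~ /thelounge/ {
              # Strip the /thelounge prefix, since the app serves from /.
              rewrite /thelounge(.*) $1 break;
              proxy_pass http://thelounge:9000;

              # WebSocket upgrade headers from the official guide; The
              # Lounge's client talks over a WebSocket, so a proxy without
              # these can load the page yet still fail to connect.
              proxy_http_version 1.1;
              proxy_set_header Upgrade $http_upgrade;
              proxy_set_header Connection "upgrade";
              proxy_set_header Host $host;
              proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
              proxy_set_header X-Forwarded-Proto $scheme;
          }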
  14. Also can't pass through a GPU to a Windows 10 VM. My setup:
      • Ryzen 7 2700
      • Asus PRIME B450M-A/CSM, BIOS version 1201 (can't downgrade; there was an update 1607 on 6/25/2019 that I haven't tried yet)
      • unRAID 6.7.0 and 6.6.7 (tried both)
      • EVGA GeForce GTX 760
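      A generic first check for passthrough reports like the two above, assuming a stock unRAID shell; this doesn't address the D3 error specifically, it only confirms how the GPU is grouped:

          # List IOMMU groups and the devices in each; the GPU and its
          # audio function should sit in a group free of other devices.
          for g in /sys/kernel/iommu_groups/*; do
            echo "IOMMU group ${g##*/}:"
            for d in "$g"/devices/*; do
              lspci -nns "${d##*/}"
            done
          done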
  15. This may be specific to Code Server itself rather than this container, but how do we tell Code Server the subdirectory path it's served from behind a reverse proxy?