
Posts posted by ZooMass

  1. I am also suddenly experiencing complete system lockups as soon as the Docker service starts. I'm running Unraid 6.12.6 with Docker networks set to ipvlan. When I manually edit /config/docker.cfg on the Unraid USB to ENABLE_DOCKER="no" and reboot, the system is fine. As soon as I enable Docker, the web UI hangs. The behavior is the same whether VMs are enabled or disabled, and whether or not I boot in safe mode. After a reboot I immediately open SSH sessions tailing the syslog and running zpool iostat and htop; all are fine and then freeze the moment Docker starts.
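
    Concretely, the three SSH sessions run roughly the following (the five-second iostat interval is just what I happen to use):

    tail -f /var/log/syslog
    zpool iostat -v 5
    htop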

    Here is my syslog until it freezes:

     

    Jan 30 00:00:37 Tower root: Starting Samba:  /usr/sbin/smbd -D
    Jan 30 00:00:37 Tower root:                  /usr/sbin/nmbd -D
    Jan 30 00:00:37 Tower root:                  /usr/sbin/wsdd2 -d -4
    Jan 30 00:00:37 Tower root:                  /usr/sbin/winbindd -D
    Jan 30 00:00:37 Tower wsdd2[8123]: starting.
    Jan 30 00:00:37 Tower emhttpd: shcmd (162): /etc/rc.d/rc.avahidaemon restart
    Jan 30 00:00:37 Tower root: Stopping Avahi mDNS/DNS-SD Daemon: stopped
    Jan 30 00:00:37 Tower avahi-daemon[7498]: Got SIGTERM, quitting.
    Jan 30 00:00:37 Tower avahi-dnsconfd[7507]: read(): EOF
    Jan 30 00:00:37 Tower avahi-daemon[7498]: Leaving mDNS multicast group on interface br0.IPv4 with address 192.168.1.100.
    Jan 30 00:00:37 Tower avahi-daemon[7498]: avahi-daemon 0.8 exiting.
    Jan 30 00:00:37 Tower root: Starting Avahi mDNS/DNS-SD Daemon: /usr/sbin/avahi-daemon -D
    Jan 30 00:00:37 Tower avahi-daemon[8170]: Found user 'avahi' (UID 61) and group 'avahi' (GID 214).
    Jan 30 00:00:37 Tower avahi-daemon[8170]: Successfully dropped root privileges.
    Jan 30 00:00:37 Tower avahi-daemon[8170]: avahi-daemon 0.8 starting up.
    Jan 30 00:00:37 Tower avahi-daemon[8170]: Successfully called chroot().
    Jan 30 00:00:37 Tower avahi-daemon[8170]: Successfully dropped remaining capabilities.
    Jan 30 00:00:37 Tower avahi-daemon[8170]: Loading service file /services/sftp-ssh.service.
    Jan 30 00:00:37 Tower avahi-daemon[8170]: Loading service file /services/smb.service.
    Jan 30 00:00:37 Tower avahi-daemon[8170]: Loading service file /services/ssh.service.
    Jan 30 00:00:37 Tower avahi-daemon[8170]: Joining mDNS multicast group on interface br0.IPv4 with address 192.168.1.100.
    Jan 30 00:00:37 Tower avahi-daemon[8170]: New relevant interface br0.IPv4 for mDNS.
    Jan 30 00:00:37 Tower avahi-daemon[8170]: Network interface enumeration completed.
    Jan 30 00:00:37 Tower avahi-daemon[8170]: Registering new address record for 192.168.1.100 on br0.IPv4.
    Jan 30 00:00:37 Tower emhttpd: shcmd (163): /etc/rc.d/rc.avahidnsconfd restart
    Jan 30 00:00:37 Tower root: Stopping Avahi mDNS/DNS-SD DNS Server Configuration Daemon: stopped
    Jan 30 00:00:37 Tower root: Starting Avahi mDNS/DNS-SD DNS Server Configuration Daemon:  /usr/sbin/avahi-dnsconfd -D
    Jan 30 00:00:37 Tower avahi-dnsconfd[8179]: Successfully connected to Avahi daemon.
    Jan 30 00:00:37 Tower emhttpd: shcmd (171): /usr/local/sbin/mount_image '/mnt/cache/system/docker/docker-xfs.img' /var/lib/docker 128
    Jan 30 00:00:37 Tower kernel: loop3: detected capacity change from 0 to 268435456
    Jan 30 00:00:37 Tower kernel: XFS (loop3): Mounting V5 Filesystem
    Jan 30 00:00:37 Tower kernel: XFS (loop3): Ending clean mount
    Jan 30 00:00:37 Tower root: meta-data=/dev/loop3             isize=512    agcount=4, agsize=8388608 blks
    Jan 30 00:00:37 Tower root:          =                       sectsz=512   attr=2, projid32bit=1
    Jan 30 00:00:37 Tower root:          =                       crc=1        finobt=1, sparse=1, rmapbt=0
    Jan 30 00:00:37 Tower root:          =                       reflink=1    bigtime=1 inobtcount=1 nrext64=0
    Jan 30 00:00:37 Tower root: data     =                       bsize=4096   blocks=33554432, imaxpct=25
    Jan 30 00:00:37 Tower root:          =                       sunit=0      swidth=0 blks
    Jan 30 00:00:37 Tower root: naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
    Jan 30 00:00:37 Tower root: log      =internal log           bsize=4096   blocks=16384, version=2
    Jan 30 00:00:37 Tower root:          =                       sectsz=512   sunit=0 blks, lazy-count=1
    Jan 30 00:00:37 Tower root: realtime =none                   extsz=4096   blocks=0, rtextents=0
    Jan 30 00:00:37 Tower emhttpd: shcmd (173): /etc/rc.d/rc.docker start
    Jan 30 00:00:37 Tower root: starting dockerd ...
    Jan 30 00:00:38 Tower avahi-daemon[8170]: Server startup complete. Host name is Tower.local. Local service cookie is 2005410125.
    Jan 30 00:00:39 Tower avahi-daemon[8170]: Service "Tower" (/services/ssh.service) successfully established.
    Jan 30 00:00:39 Tower avahi-daemon[8170]: Service "Tower" (/services/smb.service) successfully established.
    Jan 30 00:00:39 Tower avahi-daemon[8170]: Service "Tower" (/services/sftp-ssh.service) successfully established.
    Jan 30 00:00:41 Tower kernel: Bridge firewalling registered
    Jan 30 00:00:41 Tower kernel: Initializing XFRM netlink socket

     

    Some text anonymized. Diagnostics attached.

    This began after setting up Immich with the Docker Compose Manager plugin and putting a heavy import workload on a 3-way mirror.

    I'm aware of ZFS-specific file corruption on disk1 that has been benign for months and that I am fairly certain is irrelevant. I have yet to run memtest, but the RAM sticks are only a couple of months old, straight out of the manufacturer's box.

    Tower-diagnostics-20240130-0021.zip

     

    EDIT: My Docker image was most likely full. I resolved this by replacing my system/docker.img with a new vdisk at double the size and fresh-pulling my template images.

     

    2ND EDIT: As is the law of the universe, after an entire evening of re-pulling images, just as I'm declaring victory and about to go to bed, the instant after I publish my edit, the full system halt returns at the exact same log line as above. My docker.img is absolutely not full. I manually isolate container CPUs to everything except the first core (and its second thread), and this started after running a container with that cpuset change. There is something poisonous about Unraid 6.12.6 ZFS with an XFS vdisk docker.img. It infuriates and terrifies me. Pray for me.
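
    For the record, this is roughly how I'm checking that the image isn't full (run on the host while Docker is up; /var/lib/docker is where Unraid mounts the docker.img loop device):

    df -h /var/lib/docker
    docker system df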

  2. I am also experiencing this issue on Unraid 6.12.4 with Docker image data in an individual share on a ZFS disk. I cannot remove the container through the CLI or Force Update it through the GUI, so the container is stuck.

     

    $ zfs version
    zfs-2.1.12-1
    zfs-kmod-2.1.12-1
    $ docker rm -f my-app
    Error response from daemon: container 3ed55f07dde27c39b475b232e8a06f248c19fc09f6464fbaf0276b8c81cab4ff: driver "zfs" failed to remove root filesystem: exit status 1: "/usr/sbin/zfs fs destroy -r cache/docker/503e6d29ad94faaa061257e4ab1c13c30cac283b17ad29d4edc2c5f283428888" => cannot open 'cache/docker/503e6d29ad94faaa061257e4ab1c13c30cac283b17ad29d4edc2c5f283428888': dataset does not exist
    $ zfs list | grep 503e6d29ad94faaa061257e4ab1c13c30cac283b17ad29d4edc2c5f283428888
    cache/docker/503e6d29ad94faaa061257e4ab1c13c30cac283b17ad29d4edc2c5f283428888-init   136K   863G     91.4M  legacy
    $ zfs unmount cache/docker/503e6d29ad94faaa061257e4ab1c13c30cac283b17ad29d4edc2c5f283428888
    cannot open 'cache/docker/503e6d29ad94faaa061257e4ab1c13c30cac283b17ad29d4edc2c5f283428888': dataset does not exist
    $ zfs destroy cache/docker/503e6d29ad94faaa061257e4ab1c13c30cac283b17ad29d4edc2c5f283428888
    cannot open 'cache/docker/503e6d29ad94faaa061257e4ab1c13c30cac283b17ad29d4edc2c5f283428888': dataset does not exist
    

     

    Some relevant GitHub issue discussions:

    2015-09-07 moby/moby not exactly the same error but relevant, and I had the same one previously (nuked all Docker image data to solve)

    2017-02-13 moby/moby

    2019-10-24 moby/moby

    2020-06-02 moby/moby

    2021-12-13 moby/moby references above issue

     

    Based on the 2017-02-13 issue, I tried stopping the Docker service, running `rm /var/lib/docker`, and restarting the service; no change.
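
    For anyone following along, a sketch of what that looked like on Unraid (rc.docker is the same script emhttpd calls in the syslog; the recursive flag is my addition, since the data-root is a directory):

    /etc/rc.d/rc.docker stop
    rm -r /var/lib/docker
    /etc/rc.d/rc.docker start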

    The 2019-10-24 issue says that ZFS 2.2 may introduce a fix.

    The 2020-06-02 issue and Unraid user BVD recommend creating a zvol virtual disk with a non-ZFS filesystem inside.

    That may have a minor performance impact (another filesystem abstraction layer) and it also reintroduces a size limit on the Docker image (I switched to directory-based image data in the first place because I wanted no limit besides bare-metal disk space).
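
    If I end up going that route, I imagine it looks roughly like this (pool/dataset name and size are placeholders, not a tested recipe; Unraid's Docker settings would normally handle the mount, shown manually here only to illustrate):

    zfs create -s -V 60G cache/docker-vdisk
    mkfs.xfs /dev/zvol/cache/docker-vdisk
    mount /dev/zvol/cache/docker-vdisk /var/lib/docker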

    Hope that Unraid promptly upgrades to ZFS 2.2 when it is released.

     

    Attached diagnostics.

    tower-diagnostics-20230906-2033.zip

  3. I have the same issue. If I don't type anything for 30 seconds, or if I change the focused window, the Unraid web terminal (web path https://unraid.example.com/webterminal/ttyd/) flashes "Reconnected" for a split second and the whole terminal session resets as if opened in a new window.

     

    I use Unraid 6.11.5, pfSense 2.7.0, and HAProxy package for pfSense 0.61_11.

    I use the Unbound DNS Resolver to resolve unraid.example.com to my pfSense IP.
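
    For reference, in raw Unbound terms the override is just a local-data record pointing the hostname at the pfSense LAN IP (I set it through the pfSense Host Overrides UI, but it amounts to this):

    server:
        local-data: "unraid.example.com. IN A 192.168.1.1"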

    The HAProxy frontend listens on pfSense ports 80/443 on my LAN and serves a wildcard certificate for *.example.com (substituting the actual domain I own) that the pfSense ACME package obtains from the Let's Encrypt CA.

     

    Here are relevant parts of my HAProxy config file on pfSense at /var/etc/haproxy/haproxy.cfg

     

    frontend COM_EXAMPLE_LOCAL-merged
            bind                    192.168.1.1:443 name 192.168.1.1:443   ssl crt-list /var/etc/haproxy/CO_BEREZIN_LOCAL.crt_list
            mode                    http
            log                     global
            option                  socket-stats
            option                  http-keep-alive
            timeout client          30000
            acl                     aclcrt_COM_EXAMPLE_LOCAL var(txn.txnhost) -m reg -i ^([^\.]*)\.example\.com(:([0-9]){1,5})?$
            acl                     unraid.example.com       var(txn.txnhost) -m beg -i unraid.example.com
            use_backend Unraid_Web_UI_ipv4  if  unraid.example.com
            
    backend Unraid_Web_UI_ipv4
            mode                    http
            id                      12345
            log                     global
            timeout connect         30000
            timeout server          30000
            retries                 3
            source ipv4@ usesrc clientip
            server                  Unraid_UI 192.168.1.100:80 id 10110 check inter 1000

     

    It seems to me that the web terminal is simply hitting an HAProxy timeout. Does any HAProxy guru have a handy ACL or other config to make a timeout exception just for the web terminal route?
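
    Something along these lines is what I'm imagining, added to my existing frontend/backend config (an untested sketch: the backend name and the one-hour values are my own placeholders, `timeout tunnel` is the HAProxy timeout that governs long-lived WebSocket connections like ttyd's, and the rest would mirror my existing Unraid_Web_UI_ipv4 backend):

    frontend COM_EXAMPLE_LOCAL-merged
            acl                     is_webterminal path_beg -i /webterminal
            use_backend Unraid_Web_Terminal_ipv4  if  is_webterminal unraid.example.com
            use_backend Unraid_Web_UI_ipv4  if  unraid.example.com

    backend Unraid_Web_Terminal_ipv4
            mode                    http
            timeout connect         30000
            timeout server          1h
            timeout tunnel          1h
            server                  Unraid_UI 192.168.1.100:80 check inter 1000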

     

  4. 5 hours ago, PTRFRLL said:

    Ok. Latest image reverts T-Rex to 0.24.2, the last known working version without the NVML warnings. I also added support for the API Key. By default the password for the webUI is Password1

     

    You can (and should) override this by adding the API_PASSWORD environment variable to the container in the Unraid UI:

    [screenshot: adding the API_PASSWORD variable to the container template]

    Just pulled the latest image as of 2021-10-27 (T-Rex miner version 0.24.2) and it works again with no Nvidia warnings! Thank you for the quick rollback.

  5. 12 hours ago, PTRFRLL said:

     

    The only thing I can think is the extra params on the container didn't get copied over. Make sure your extra params (enable advanced view) on the container contains:

    --runtime=nvidia

     

    I do have --runtime=nvidia in my extra parameters, and I have the latest ptrfrll/nv-docker-trex:cuda11 as of 2021-10-25. Here is my full docker run command:

    /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker create --name='trex' --net='container:vpn' --privileged=true -e TZ="xxxxxxxxxx/xxxxxxxxxx" -e HOST_OS="Unraid" -e 'WALLET'='xxxxxxxxxx' -e 'SERVER'='stratum2+tcp://xxxxxxxxxx.ethash.xxxxxxxxxx.xxx:xxxxx' -e 'WORKER'='1080ti' -e 'ALGO'='ethash' -e 'NVIDIA_VISIBLE_DEVICES'='GPU-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxx' -e 'PASS'='xxxxxxxxxx' -e 'NVIDIA_DRIVER_CAPABILITIES'='all' -v '/mnt/user/appdata/trex':'/config':'rw' --runtime=nvidia 'ptrfrll/nv-docker-trex:cuda11'

    And here is my config.json

    {
            "ab-indexing" : false,
            "algo" : "ethash",
            "api-bind-http" : "0.0.0.0:4067",
            "api-bind-telnet" : "127.0.0.1:4068",
            "api-read-only" : false,
            "autoupdate" : false,
            "back-to-main-pool-sec" : 600,
            "coin" : "",
            "cpu-priority" : 2,
            "dag-build-mode" : "0",
            "devices" : "0",
            "exit-on-connection-lost" : false,
            "exit-on-cuda-error" : true,
            "exit-on-high-power" : 0,
            "extra-dag-epoch" : "-1",
            "fan" : "t:xx",
            "gpu-init-mode" : 0,
            "gpu-report-interval" : 30,
            "gpu-report-interval-s" : 0,
            "hashrate-avr" : 60,
            "hide-date" : false,
            "intensity" : "0",
            "keep-gpu-busy" : false,
            "kernel" : "0",
            "lhr-low-power" : false,
            "lhr-tune" : "0",
            "lock-cclock" : "0",
            "log-path" : "",
            "low-load" : "0",
            "monitoring-page" :
            {
                    "graph_interval_sec" : 3600,
                    "update_timeout_sec" : 10
            },
            "mt" : "0",
            "no-color" : false,
            "no-hashrate-report" : false,
            "no-nvml" : false,
            "no-strict-ssl" : false,
            "no-watchdog" : false,
            "pci-indexing" : false,
            "pl" : "xxxW",
            "pools" :
            [
                    {
                            "pass" : "xxxxxxxxxx",
                            "url" : "stratum2+tcp://xxxxxxxxxx.ethash.xxxxxxxxxx.xxx:xxxxx",
                            "user" : "xxxxxxxxxx",
                            "worker" : "1080ti"
                    }
            ],
            "protocol-dump" : false,
            "reconnect-on-fail-shares" : 10,
            "retries" : 3,
            "retry-pause" : 10,
            "script-crash" : "",
            "script-epoch-change" : "",
            "script-exit" : "",
            "script-low-hash" : "",
            "script-start" : "",
            "send-stales" : false,
            "sharerate-avr" : 600,
            "temperature-color" : "67,77",
            "temperature-limit" : 0,
            "temperature-start" : 0,
            "time-limit" : 0,
            "timeout" : 300,
            "validate-shares" : false,
            "watchdog-exit-mode" : "",
            "worker" : "1080ti"
    }

     

  6. Hi, I'm running ptrfrll/nv-docker-trex:cuda11 on Unraid 6.9.2 with Unraid Nvidia driver 495.29.05 and only a 1080 Ti, not stubbed. The container used to work fine, until I reinstalled it from CA using the same template I had before, with the same GPU ID. Now T-Rex repeatedly fails with this warning:

     

    20211025 04:07:46 WARN: Can't load NVML library, dlopen(25): failed to load libnvidia-ml.so, libnvidia-ml.so: cannot open shared object file: No such file or directory
    20211025 04:07:46 WARN: NVML error, code 12
    20211025 04:07:46 WARN: Can't initialize NVML. GPU monitoring will be disabled.
    20211025 04:07:47
    20211025 04:07:47 NVIDIA Driver version N/A

    Any idea what might be causing this missing Nvidia shared library? I can run nvidia-smi just fine on the host. I've tried rebooting.
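
    For what it's worth, this is how I'm poking at it from the host (the library path is a guess at where an Ubuntu-based image would keep libnvidia-ml; my container is named trex):

    docker exec trex nvidia-smi
    docker exec trex sh -c 'ls /usr/lib/x86_64-linux-gnu | grep -i nvidia-ml'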

  7. Hi, my syslog gets spammed and is 99% full within minutes of booting up, with millions of lines like this:

    Jan 18 23:54:53 server kernel: vfio-pci 0000:09:00.0: BAR 3: can't reserve [mem 0xf0000000-0xf1ffffff 64bit pref]
    Jan 18 23:54:53 server kernel: vfio-pci 0000:09:00.0: BAR 3: can't reserve [mem 0xf0000000-0xf1ffffff 64bit pref]
    Jan 18 23:54:53 server kernel: vfio-pci 0000:09:00.0: BAR 3: can't reserve [mem 0xf0000000-0xf1ffffff 64bit pref]

    I am stubbing my graphics card with this plugin on unRAID 6.8.3. The address 09:00.0 is the device "VGA compatible controller: NVIDIA Corporation GP104 [GeForce GTX 1070] (rev a1)".

    HVM and IOMMU are enabled. PCIe ACS override is disabled.

     

    The graphics card passthrough (with a dumped vBIOS ROM) works in a VM, but at a fixed 800x600 resolution (Nvidia drivers are installed, and the Windows VM reports driver error code 43), and the VM logs say:

    2021-01-19T21:57:24.002296Z qemu-system-x86_64: -device vfio-pci,host=0000:09:00.0,id=hostdev0,bus=pci.0,addr=0x5,romfile=/mnt/disk5/isos/vbios/EVGA_GeForce_GTX_1070.vbios: Failed to mmap 0000:09:00.0 BAR 3. Performance may be slow

    Anybody seen this before? Can't find anything like it on the forum.

     

    EDIT: Found some more info. According to a related forum post, booting the server without the HDMI cable plugged in stopped the log spam. However, after plugging the HDMI back in and booting the VM, the VM logs repeat lines like:

    2021-01-19T22:17:27.637837Z qemu-system-x86_64: vfio_region_write(0000:09:00.0:region3+0x101afe, 0x0,1) failed: Device or resource busy
    2021-01-19T22:17:27.637849Z qemu-system-x86_64: vfio_region_write(0000:09:00.0:region3+0x101aff, 0x0,1) failed: Device or resource busy
    2021-01-19T22:17:27.648663Z qemu-system-x86_64: vfio_region_write(0000:09:00.0:region3+0x4810, 0x1fef8c01,8) failed: Device or resource busy
    2021-01-19T22:17:27.648690Z qemu-system-x86_64: vfio_region_write(0000:09:00.0:region3+0x4810, 0x1fef8c01,8) failed: Device or resource busy
    2021-01-19T22:17:27.648784Z qemu-system-x86_64: vfio_region_write(0000:09:00.0:region3+0x102000, 0xabcdabcd,4) failed: Device or resource busy
    2021-01-19T22:17:27.648798Z qemu-system-x86_64: vfio_region_write(0000:09:00.0:region3+0x102004, 0xabcdabcd,4) failed: Device or resource busy

    Windows Device Manager still reports driver errors, and there are console-like artifacts horizontally across the screen, including a blinking cursor, on top of Windows. It seems like the Unraid console and the Windows VM (or is it the VFIO stub?) are fighting over the GPU. I have yet to try the recommendation in the above linked post to unbind the console at boot with the go script.
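
    As I understand that recommendation, it amounts to adding something like this to /boot/config/go so the host console releases the framebuffer before the VM claims the card (the efi-framebuffer line only applies to UEFI boot, which I'd still need to confirm for my setup):

    # release the host console so the VM can take the GPU (untested sketch)
    echo 0 > /sys/class/vtconsole/vtcon0/bind
    echo 0 > /sys/class/vtconsole/vtcon1/bind
    echo efi-framebuffer.0 > /sys/bus/platform/drivers/efi-framebuffer/unbind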

  8. On 8/19/2020 at 10:02 PM, marlouiegene18 said:

    Hello, I just installed the Jitsi server and I'm getting this warning on CA Fix Problems. I tried to click Apply Fix and it gave me an Error: Missing Template. Any thoughts? I tried installing the Jitsi server with encryption but it failed, so I had to install it without encryption, if that information helps. If you do know how to install it with working encryption, please let me know as well. I followed SpaceInvader One's video to a tee, minus Let's Encrypt, as I used Nginx Proxy Manager instead.

     

    Thank you in advance for any help!

    Screen Shot 2020-08-19 at 7.51.31 PM.png

    Screen Shot 2020-08-19 at 7.58.02 PM.png

    Bumping this post because I am dealing with the same issue. I have the same four containers with these missing-template warnings, pointing to the same A75G templates, and Apply Fix shows the same error for each.

    I have the following templates on my USB:

    $ ls -lhAF /boot/config/plugins/dockerMan/templates/templates/ | grep jitsi
    -rw------- 1 root root 4.3K Apr 25  2020 jitsi-jicofo.xml
    -rw------- 1 root root 4.0K Apr 25  2020 jitsi-jvb.xml
    -rw------- 1 root root  13K Apr 25  2020 jitsi-prosody.xml
    -rw------- 1 root root 7.2K Apr 25  2020 jitsi-web.xml
    $ ls -lhAF /boot/config/plugins/dockerMan/templates-user/ | grep jitsi
    -rw------- 1 root root  4066 Nov 10 10:36 my-jitsi_bridge.xml
    -rw------- 1 root root  4336 Nov 10 10:37 my-jitsi_focus.xml
    -rw------- 1 root root  7276 Nov 10 10:09 my-jitsi_web.xml
    -rw------- 1 root root 12837 Nov 10 10:35 my-jitsi_xmpp.xml

    I renamed my containers according to the filenames in the templates-user folder.

  9. On 5/8/2020 at 2:26 AM, splerman said:

    Whereas binhex containers for delugevpn, qbittorrentvpn, etc have STRICT mode option parameters (as mentioned in Q6/A6 of binhex’s VPN FAQ). I don’t see it in the standalone privoxyvpn container. I prefer to separate the OpenVPN/Privoxy from the client app so I can interchange client apps without reconfiguring any other containers that route through the container for access to the VPN tunnel. I’m using one of the PIA servers that provide port forwarding. My current lsio qbittorrent container routes through privoxyvpn (I.e., Network Type None, Extra Parameter --net=container:privoxyvpn, Added port mappings for 6881/udp, 6881/tcp, and 8080/tcp to privoxyvpn for qbittorrent).

     

    Do I need to enable strict mode for optimal downloads? If so, how with the privoxyvpn container? Can I just add a new variable to the template to set STRICT_MODE to yes?

    What is the Additional_Ports variable used for?

    What VPN_Options, if any, are useful?

    Is my current method of routing the qbittorrent traffic to privoxyvpn recommended over using the microsocks socks5 proxy or is microsocks recommended?

     

    Thanks for any/all input!

    My question is very similar to this one. I have an arch-rtorrentvpn container (VPN disabled) using the network stack of a dedicated arch-privoxyvpn container using --net=container:vpn parameter. I am trying to set up port forwarding on the vpn container for rtorrent. On the arch-rtorrentvpn container, it just automatically acquires the forwarded port when the PIA endpoint being used supports it. I am aware of PIA's next-gen upgrades disabling port forwarding and I am primarily using their Israel, Romania, and CA Montreal servers. The arch-privoxyvpn container connects to those endpoints successfully, but it doesn't do the same automatic port forwarding that the arch-rtorrentvpn and arch-delugevpn containers do. Is there a setting to force this? I assume that the container supports it due to sharing the same container startup procedure across the binhex containers. Manually creating a STRICT_PORT_FORWARD variable in arch-privoxyvpn (like in the other two containers) has no effect. Even though I am using PIA, there is a log line that says:

    2020-09-16 15:23:54,195 DEBG 'start-script' stdout output:
    [info] Application does not require port forwarding or VPN provider is != pia, skipping incoming port assignment

    Is using the ADDITIONAL_PORTS variable equivalent to just adding a new port to the template?

    Is the vpn_options variable just extra parameters for the /usr/bin/openvpn command?

  10. +1 demand unit

     

    Help us SpaceInvaderOne, you are our only hope (not tagging you because I'm sure you're already annoyed with the last three @'s in this topic).

  11. 1 minute ago, IceNine451 said:

    Can you also check your Docker settings? The Docker version changed at 6.8, your screenshots look like you might still be running an older version of Docker, which can be changed in the settings. Under Settings -> Docker it will show you the version you are using, which can only be changed when Docker is disabled.

     


    Having the same problem accessing the web UI. I am using the manually created network (via "docker network create container:vpn"), not the "--net=container:vpn" extra parameter, on unRAID 6.8.3 with Docker version 19.03.5.

  12. 40 minutes ago, IceNine451 said:

    I feel like I ran into this same issue when I first was getting this running, but I can't remember for sure. First note, "--net=container:vpn" definitely doesn't work on 6.8.3.

     

    It does sound like you have the custom network set up properly if you can curl the VPN IP on the console for your client containers, one thing I wanted to make sure was that the VPN container was still set to Bridge for the networking mode, not the custom network you created. Only the "client" containers need to be set to the custom VPN network. On the main Docker tab for UnRAID the client containers should have nothing show up in the Port Mappings column.

     

    Here is a screenshot of my setup with the Binhex VPN container and three client containers, yours should look similar if you are set up correctly. Hopefully this helps!

     

    [screenshot of IceNine451's Docker tab showing the VPN container and three client containers]

    Thank you for the quick response! My setup looks essentially the same as yours, with the VPN container named simply vpn, and unfortunately I still cannot access the web UI, just a 404. One thing I tried was changing the network from a custom public Docker network I have (to isolate from non-public-facing containers) to the plain bridge network like yours. The client container still receives the VPN IP, but I still can't access the web UI. I also tried disabling my ad blocker, which should have no effect, and indeed it does not.

     

    [screenshots of my setup]

    The container is named jackettvpn because I modified my existing container, but that container's VPN is disabled.

  13. On 3/23/2020 at 9:48 AM, IceNine451 said:

    I don't know of a way to use a proxy with Plex either, but you can do what I have done with some of my containers and run *all* of the Plex traffic through a VPN container. Since you won't be doing remote access I don't see any issues with this myself, but keep in mind I haven't actually tried Plex specifically.

     

    The method for doing this is a bit different between UnRAID 6.7.x and 6.8.x, it works best on the latest version of UnRAID (6.8.3 as of this post) because they have added some quality-of-life fixes to the Docker code.

     

    I figured out how to do this through these two posts (https://jordanelver.co.uk/blog/2019/06/03/routing-docker-traffic-through-a-vpn-connection/ and https://hub.docker.com/r/eafxx/rebuild-dndc) but I will summarize here since neither completely cover what you need to do.

     

    1) Have a VPN container like Binhex's up and running.

     

    2) Create a dedicated "VPN network" for Plex and anything else you want to run through the VPN on.

       - Open the UnRAID terminal or connect via SSH, then run the command 

    
    docker network create container:master_container_name

    where "master_container_name" is the name of your VPN container, so "binhex-privoxyvpn" in my case. This name should be all lowercase, if it isn't than change it before creating the new network.

     

    3) Edit your Plex container and change the network type to "Custom: container:binhex-privoxyvpn" if you are on UnRAID 6.8.x. If you are on 6.7.x then change the network type to "Custom" and add "--net=container:binhex-privoxyvpn" to the Extra Parameters box.

     

    4) Remove all the Host Port settings from the Plex container; by default on my setup those are TCP ports 3005, 32400 and 32469 and UDP ports 1900, 32410, 32412, 32413 and 32414.

     

    5) Edit your VPN container and add the Plex required ports to it. You can probably get away with just TCP ports 3005 and 32400 and UDP port 1900 and have it work, but it is probably safer to add them all again. Leave the VPN container's network type as it is now, probably Bridge.

     

    6) Do a forced upgrade (seen with Advanced View turned on) on the VPN container first and then the Plex container. You should still be able to reach your Plex container's web UI now, with the VPN container acting as a "gateway". Now all external traffic will go through the VPN.

     

    There are some things to remember with this kind of setup, like if the VPN container goes down you will be unable to reach Plex at all even if it is running. Also, if the VPN container is updated the Plex container will lose connectivity until it is also updated. There is code in UnRAID 6.8.3 to do this update automatically when you load the Docker tab in the UnRAID UI.

     

    Hopefully all that is clear, let me know if you have any questions!

     

    Thank you for these very clear instructions! I was just looking for something like this after hitting my VPN device license limit, and SpaceInvader One released this timely video. Like a lot of you guys, I wanted to use a dedicated container instead of binhex-delugevpn, and binhex-privoxyvpn is perfect for the job.

     

    However, I'm unable to access the client container web UI. I've now tested with linuxserver/lazylibrarian (to hide libgen direct downloads) and linuxserver/jackett (migrating from dyonr/jackettvpn, but I also tried a clean image). I'm on unRAID 6.8.3 and I've tried both the "docker network create container:vpn" method and the "--net=container:vpn" extra parameter. (Also, for the record, "docker run" complains when you select a custom container: network in the dropdown and still have translated ports, so be sure to remove the ports at the same time you change the network.) I've added the client containers' ports (5299 and 9117 respectively for my two test containers) to the binhex-privoxyvpn container named vpn, restarted vpn, and rebuilt and restarted the client containers. I still can't reach the container web UIs on [host IP]:5299 or [host IP]:9117.

    In the client containers, I can curl ifconfig.io and I receive my VPN IP, so the container networking itself seems to work fine. The client web UI seems to be the only issue. I've seen a couple of people in the comments on SpaceInvader One's video report the same issue.
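
    For reference, this is how I'm checking the tunnel from the host (assuming curl exists inside the client images; the container names are from my templates):

    docker exec lazylibrarian curl -s ifconfig.io
    docker exec jackett curl -s ifconfig.io

    Both return the VPN endpoint's public IP rather than my WAN IP.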

     

    Has anyone else experienced this or fixed it? Would love to have this setup work out!

  14. I'm having trouble using the "Tiered FFMPEG NVENC settings depending on resolution" plugin with ID "Tdarr_Plugin_d5d3_iiDrakeii_FFMPEG_NVENC_Tiered_MKV". It says it can't find my GPU.

     

    Command:

    /home/Tdarr/Tdarr/bundle/programs/server/assets/app/ffmpeg/ffmpeg42/ffmpeg -c:v h264_cuvid -i '/home/Tdarr/Media/Television/Stranger Things/Season 03/Stranger Things - S03E01 - Chapter One- Suzie, Do You Copy [HDTV-1080p].mkv' -map 0 -dn -c:v hevc_nvenc -pix_fmt p010le -rc:v vbr_hq -qmin 0 -cq:V 31 -b:v 2500k -maxrate:v 5000k -preset slow -rc-lookahead 32 -spatial_aq:v 1 -aq-strength:v 8 -a53cc 0 -c:a copy -c:s copy '/home/Tdarr/cache/Stranger Things - S03E01 - Chapter One- Suzie, Do You Copy [HDTV-1080p]-TdarrCacheFile-p1cwX-Dg.mkv'
    ffmpeg version N-95955-g12bbfc4 Copyright (c) 2000-2019 the FFmpeg developers
    built with gcc 7 (Ubuntu 7.4.0-1ubuntu1~18.04.1)
    configuration: --prefix=/home/z/ffmpeg_build --pkg-config-flags=--static --extra-cflags=-I/home/z/ffmpeg_build/include --extra-ldflags=-L/home/z/ffmpeg_build/lib --extra-libs='-lpthread -lm' --bindir=/home/z/bin --enable-gpl --enable-libass --enable-libfdk-aac --enable-libfreetype --enable-libmp3lame --enable-libopus --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libx265 --enable-nonfree
    libavutil 56. 36.101 / 56. 36.101
    libavcodec 58. 64.101 / 58. 64.101
    libavformat 58. 35.101 / 58. 35.101
    libavdevice 58. 9.101 / 58. 9.101
    libavfilter 7. 67.100 / 7. 67.100
    libswscale 5. 6.100 / 5. 6.100
    libswresample 3. 6.100 / 3. 6.100
    libpostproc 55. 6.100 / 55. 6.100
    Guessed Channel Layout for Input Stream #0.1 : 5.1
    Input #0, matroska,webm, from '/home/Tdarr/Media/Television/Stranger Things/Season 03/Stranger Things - S03E01 - Chapter One- Suzie, Do You Copy [HDTV-1080p].mkv':
    Metadata:
    encoder :
    libebml v1.3.5 + libmatroska v1.4.8
    creation_time : 2019-07-04T07:03:27.000000Z
    Duration: 00:50:33.63, start: 0.000000, bitrate: 7850 kb/s
    Chapter #0:0: start 306.015000, end 354.521000
    Metadata:
    title : Intro start
    Chapter #0:1: start 354.521000, end 3033.632000
    Metadata:
    title : Intro end
    Stream #0:0: Video: h264 (Main), yuv420p(progressive), 1920x1080 [SAR 1:1 DAR 16:9], 23.98 fps, 23.98 tbr, 1k tbn, 47.95 tbc (default)
    Metadata:
    BPS-eng : 7205368
    DURATION-eng : 00:50:33.573000000
    NUMBER_OF_FRAMES-eng: 72733
    NUMBER_OF_BYTES-eng: 2732251549
    _STATISTICS_WRITING_APP-eng: mkvmerge v21.0.0 ('Tardigrades Will Inherit The Earth') 64-bit
    _STATISTICS_WRITING_DATE_UTC-eng: 2019-07-04 07:03:27
    _STATISTICS_TAGS-eng: BPS DURATION NUMBER_OF_FRAMES NUMBER_OF_BYTES
    Stream #0:1(eng): Audio: eac3, 48000 Hz, 5.1, fltp (default)
    ...
    Stream #0:29 -> #0:29 (copy)
    Stream #0:30 -> #0:30 (copy)
    Stream #0:31 -> #0:31 (copy)
    Stream #0:32 -> #0:32 (copy)
    Press [q] to stop, [?] for help
    [hevc_nvenc @ 0x55aaaad84e40] Codec not supported
    [hevc_nvenc @ 0x55aaaad84e40] No capable devices found
    Error initializing output stream 0:0 -- Error while opening encoder for output stream #0:0 - maybe incorrect parameters such as bit_rate, rate, width or height
    Conversion failed!

     

    I have an EVGA GeForce GTX 760, obviously an older card, and nvidia-smi doesn't fully support it:

    Tue Mar 10 13:54:11 2020
    +-----------------------------------------------------------------------------+
    | NVIDIA-SMI 440.59       Driver Version: 440.59       CUDA Version: 10.2     |
    |-------------------------------+----------------------+----------------------+
    | GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
    | Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
    |===============================+======================+======================|
    |   0  GeForce GTX 760     Off  | 00000000:08:00.0 N/A |                  N/A |
    |  0%   35C    P0    N/A /  N/A |      0MiB /  1997MiB |     N/A      Default |
    +-------------------------------+----------------------+----------------------+
    
    +-----------------------------------------------------------------------------+
    | Processes:                                                       GPU Memory |
    |  GPU       PID   Type   Process name                             Usage      |
    |=============================================================================|
    |    0                    Not Supported                                       |
    +-----------------------------------------------------------------------------+

    However, my linuxserver/plex and linuxserver/emby containers do manage to use it for hardware transcoding. I made sure to set all the correct Docker template variables, including --runtime=nvidia, NVIDIA_DRIVER_CAPABILITIES=all, and NVIDIA_VISIBLE_DEVICES=<GPU ID>, and I have Linuxserver Unraid Nvidia 6.8.3 installed. Any tips? I would really like to be able to transcode on the GPU; I've been brutally punishing my CPU for days slowly transcoding on Unmanic :(
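
    For what it's worth, this is the kind of smoke test I'd run to confirm the Nvidia runtime itself is wired into Docker, independent of the Tdarr container (the nvidia/cuda:10.2-base tag is my guess; any CUDA base image should do):

    docker run --rm --runtime=nvidia \
      -e NVIDIA_VISIBLE_DEVICES=all \
      -e NVIDIA_DRIVER_CAPABILITIES=all \
      nvidia/cuda:10.2-base nvidia-smi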

  15. 4 hours ago, zerolim1t said:

    What do you all use to convert files with this plugin? I tried handbrake but it doesn't support it. Could someone point me to the right direction i looked everywhere and can't find anything

    Unmanic is another good container. It's dead simple, you just point it at a directory and it converts x264 video files to HEVC.

  16. On 10/27/2019 at 4:17 PM, Squid said:

    @ZooMass, I can probably say that you're running unRaid 6.6.x, in which case machine type 3.1 doesn't exist. You can probably edit the xml to state

     

    
    <type arch='x86_64' machine='pc-q35-3.0'>hvm</type>

     

    That was it! Yes, I am running 6.6.7 because on 6.7+ my machine experienced the SQLite data corruption bug being investigated and 6.6.7 did not. Thanks for your help!

  17. Just ran the container, tried to start the VM, got this error:

    internal error: process exited while connecting to monitor: 2019-10-27T19:31:11.597434Z qemu-system-x86_64: -machine pc-q35-3.1,accel=kvm,usb=off,dump-guest-core=off,mem-merge=off: unsupported machine type Use -machine help to list supported machines

    I am using the --catalina OS flag. The only change I made to the container is the VM images location, which I changed to /mnt/cache/domains, but that should be the same as /mnt/user/domains anyway. Has anybody seen anything like this?
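
    The error itself suggests listing the supported machine types; on the Unraid host that should be something like this (using the emulator path from the XML below):

    /usr/local/sbin/qemu -machine help | grep q35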

    My XML:

    <?xml version='1.0' encoding='UTF-8'?>
    <domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
      <name>MacinaboxCatalina</name>
      <uuid>51e88f1c-a2ab-43af-981f-80483d6600c8</uuid>
      <description>MacOS Catalina</description>
      <metadata>
        <vmtemplate xmlns="unraid" name="MacOS" icon="/mnt/user/domains/MacinaboxCatalina/icon/catalina.png" os="Catalina"/>
      </metadata>
      <memory unit='KiB'>4194304</memory>
      <currentMemory unit='KiB'>4194304</currentMemory>
      <memoryBacking>
        <nosharepages/>
      </memoryBacking>
      <vcpu placement='static'>2</vcpu>
      <cputune>
        <vcpupin vcpu='0' cpuset='0'/>
        <vcpupin vcpu='1' cpuset='1'/>
      </cputune>
      <os>
        <type arch='x86_64' machine='pc-q35-3.1'>hvm</type>
        <loader readonly='yes' type='pflash'>/mnt/user/domains/MacinaboxCatalina/ovmf/OVMF_CODE.fd</loader>
        <nvram>/mnt/user/domains/MacinaboxCatalina/ovmf/OVMF_VARS.fd</nvram>
      </os>
      <features>
        <acpi/>
        <apic/>
      </features>
      <cpu mode='host-passthrough' check='none'>
        <topology sockets='1' cores='2' threads='1'/>
      </cpu>
      <clock offset='utc'>
        <timer name='rtc' tickpolicy='catchup'/>
        <timer name='pit' tickpolicy='delay'/>
        <timer name='hpet' present='no'/>
      </clock>
      <on_poweroff>destroy</on_poweroff>
      <on_reboot>restart</on_reboot>
      <on_crash>restart</on_crash>
      <devices>
        <emulator>/usr/local/sbin/qemu</emulator>
        <disk type='file' device='disk'>
          <driver name='qemu' type='qcow2' cache='writeback'/>
          <source file='/mnt/user/domains/MacinaboxCatalina/Clover.qcow2'/>
          <target dev='hdc' bus='sata'/>
          <boot order='1'/>
          <address type='drive' controller='0' bus='0' target='0' unit='2'/>
        </disk>
        <disk type='file' device='disk'>
          <driver name='qemu' type='raw' cache='writeback'/>
          <source file='/mnt/user/domains/MacinaboxCatalina/Catalina-install.img'/>
          <target dev='hdd' bus='sata'/>
          <address type='drive' controller='0' bus='0' target='0' unit='3'/>
        </disk>
        <disk type='file' device='disk'>
          <driver name='qemu' type='raw' cache='writeback'/>
          <source file='/mnt/user/domains/MacinaboxCatalina/macos_disk.img'/>
          <target dev='hde' bus='sata'/>
          <address type='drive' controller='0' bus='0' target='0' unit='4'/>
        </disk>
        <controller type='usb' index='0' model='ich9-ehci1'>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>
        </controller>
        <controller type='usb' index='0' model='ich9-uhci1'>
          <master startport='0'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
        </controller>
        <controller type='usb' index='0' model='ich9-uhci2'>
          <master startport='2'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
        </controller>
        <controller type='usb' index='0' model='ich9-uhci3'>
          <master startport='4'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/>
        </controller>
        <controller type='sata' index='0'>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
        </controller>
        <controller type='pci' index='0' model='pcie-root'/>
        <controller type='pci' index='1' model='pcie-root-port'>
          <model name='pcie-root-port'/>
          <target chassis='1' port='0x10'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
        </controller>
        <controller type='pci' index='2' model='pcie-root-port'>
          <model name='pcie-root-port'/>
          <target chassis='2' port='0x11'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
        </controller>
        <controller type='pci' index='3' model='pcie-root-port'>
          <model name='pcie-root-port'/>
          <target chassis='3' port='0x12'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
        </controller>
        <controller type='pci' index='4' model='pcie-root-port'>
          <model name='pcie-root-port'/>
          <target chassis='4' port='0x13'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
        </controller>
        <controller type='virtio-serial' index='0'>
          <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
        </controller>
        <interface type='bridge'>
          <mac address='52:54:00:71:f5:95'/>
          <source bridge='br0'/>
          <model type='vmxnet3'/>
          <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
        </interface>
        <serial type='pty'>
          <target type='isa-serial' port='0'>
            <model name='isa-serial'/>
          </target>
        </serial>
        <console type='pty'>
          <target type='serial' port='0'/>
        </console>
        <channel type='unix'>
          <target type='virtio' name='org.qemu.guest_agent.0'/>
          <address type='virtio-serial' controller='0' bus='0' port='1'/>
        </channel>
        <input type='tablet' bus='usb'>
          <address type='usb' bus='0' port='1'/>
        </input>
        <input type='mouse' bus='ps2'/>
        <input type='keyboard' bus='ps2'/>
        <graphics type='vnc' port='-1' autoport='yes' websocket='-1' listen='0.0.0.0' keymap='en-us'>
          <listen type='address' address='0.0.0.0'/>
        </graphics>
        <video>
          <model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1' primary='yes'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
        </video>
        <memballoon model='virtio'>
          <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
        </memballoon>
      </devices>
      <qemu:commandline>
        <qemu:arg value='-usb'/>
        <qemu:arg value='-device'/>
        <qemu:arg value='usb-kbd,bus=usb-bus.0'/>
        <qemu:arg value='-device'/>
        <qemu:arg value='isa-applesmc,osk=IAMZOOMASSANDIREMOVEDTHEMAGICWORDS'/>
        <qemu:arg value='-smbios'/>
        <qemu:arg value='type=2'/>
        <qemu:arg value='-cpu'/>
        <qemu:arg value='Penryn,kvm=on,vendor=GenuineIntel,+invtsc,vmware-cpuid-freq=on,+pcid,+ssse3,+sse4.2,+popcnt,+avx,+aes,+xsave,+xsaveopt,check'/>
      </qemu:commandline>
    </domain>

     

  18. I'm trying to access the web UI through a reverse proxy, but when I go to https://mydomain.com/thelounge it just responds with a page that says "Cannot GET /thelounge/".

     

    I'm using update 15.05.19, the latest image as of posting this, and I made no changes to the default template except setting the network to my public Docker network (where containers can resolve each other's hostnames).

     

    I made no changes to /config/config.js except setting "reverseProxy: true".

     

    I use the jselage/nginxproxymanager container; here is my location block:

      location /thelounge {
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Scheme $scheme;
        proxy_set_header X-Forwarded-Proto  $scheme;
        proxy_set_header X-Forwarded-For    $remote_addr;
        proxy_pass       http://thelounge:9000;
        auth_request /auth-0;
      }

    It's a bit different from the block in https://thelounge.chat/docs/guides/reverse-proxies#nginx but I also tried adding the suggested lines and they made no difference.

     

    I also tried using this block from the linuxserver config even though I don't use the linuxserver/letsencrypt container, same thing.

     

    # thelounge does not require a base url setting
    
    location /thelounge {
        return 301 $scheme://$host/thelounge/;
    }
    location ^~ /thelounge/ {
        # enable the next two lines for http auth
        #auth_basic "Restricted";
        #auth_basic_user_file /config/nginx/.htpasswd;
    
        # enable the next two lines for ldap auth, also customize and enable ldap.conf in the default conf
        #auth_request /auth;
        #error_page 401 =200 /login;
    
        include /config/nginx/proxy.conf;
        resolver 127.0.0.11 valid=30s;
        set $upstream_thelounge thelounge;
        rewrite /thelounge(.*) $1 break;
        proxy_pass http://$upstream_thelounge:9000;
    }

    Any other ideas?

  19. I'm with rm and runraid on this one. I'm still experiencing database corruption in Plex and restoring from backups daily, with /config on /mnt/disk1/appdata/plex as has been suggested, on unRAID 6.7.0. This is a difficult problem to isolate; some people on the Plex forums have been discussing it with little progress, but I agree with runraid that it feels like the devs aren't giving this issue the attention it deserves. It's still happening to many people. I only started using unRAID in the past month and I'm still on the trial key. I've been meaning to finally purchase a key, but I can't justify it until we figure out what's causing these database corruptions.

  20. 20 hours ago, CHBMB said:

    @ZooMass

    
    use=web, web=dynamicdns.park-your-domain.com/getip
    protocol=namecheap
    server=dynamicdns.park-your-domain.com
    login=yourdomain.com
    password=your dynamic dns password
    @,www

     

    Hey, that worked, thanks so much! Silly of me not to try the line from the article itself.

  21. I keep getting this warning no matter how I set my ddclient.conf:

    Setting up watches.
    Watches established.
    /config/ddclient.conf MODIFY
    ddclient has been restarted
    Setting up watches.
    Watches established.
    WARNING: found neither ipv4 nor ipv6 address

    I'm using Namecheap. I've tried configuring it the following ways by uncommenting the Namecheap section (I made no other changes to the default ddclient.conf):

    ##
    ## NameCheap (namecheap.com)
    ##
    protocol=namecheap                               
    server=dynamicdns.park-your-domain.com   
    login=**********.***                         
    password=**********                
    @
    ##
    ## NameCheap (namecheap.com)
    ##
    protocol=namecheap,                             \
    server=dynamicdns.park-your-domain.com, \
    login=**********.***,                       \
    password=**********               \
    @

    I made no changes to the default linuxserver/ddclient Docker template.

     

    I read this article for reference on how to configure ddclient for Namecheap: https://www.namecheap.com/support/knowledgebase/article.aspx/583/11/how-do-i-configure-ddclient
