SeeGee

Members · 84 posts


SeeGee's Achievements: Rookie (2/14) · 11 Reputation

  1. I use a ramdisk for transcoding. It prevents unnecessary wear and tear on the drives, and it also responds quicker. One thing you may want to check is whether Plex is scanning your media and creating video preview thumbnails... those can occupy a lot of space on a large library. https://support.plex.tv/articles/202197528-video-preview-thumbnails/
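If you want to try the ramdisk approach on Unraid, one way (a sketch — the tmpfs size and mount point here are assumptions; size it to your RAM and number of simultaneous streams) is to add a tmpfs mount to the Plex container via the template's Extra Parameters field:

```shell
# Unraid Docker template > Extra Parameters (hypothetical values):
# back /transcode with RAM so transcode writes never touch the array or cache SSD
--mount type=tmpfs,destination=/transcode,tmpfs-size=8G
```

Then point Plex at it under Settings > Transcoder > "Transcoder temporary directory" = /transcode.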
  2. Personally, I'm not comfortable exposing my *arrs to the internet. I instead created a split-tunnel VPN into my LAN using WireGuard, and I'm able to access anything behind my firewall as though I were local. If you're concerned about internet security, this is a much safer option: expose as little attack surface as possible. My router has one port (non-default) exposed for Plex, and all the rest of my services are behind the firewall.
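To illustrate the split-tunnel idea, the client-side WireGuard config only routes the LAN subnets through the tunnel (a sketch — the keys, addresses, and endpoint are placeholders; only the LAN subnets match my setup):

```ini
# Client (road-warrior) config -- split tunnel: only LAN traffic uses the VPN
[Interface]
PrivateKey = <client_private_key>
Address = 10.200.0.2/32

[Peer]
PublicKey = <server_public_key>
Endpoint = vpn.example.com:51820
# LAN-only AllowedIPs = split tunnel; 0.0.0.0/0 here would tunnel everything
AllowedIPs = 10.5.5.0/24, 10.3.3.0/24
PersistentKeepalive = 25
```

The AllowedIPs line is what makes it "split": the client installs routes only for those subnets, so general browsing never touches your home connection.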
  3. I have a pair of Xeon E5-2690 v3s. They're old Haswell chips and actually suck at transcoding. If you can use a GPU, I'd recommend it. I use an old GTX 970 and it can handle quite a few simultaneous transcodes without issue.
  4. I tried that with no success. Out of frustration, I decided to install a fresh template and start from scratch. I dumped the .conf from VPN Unlimited into the wireguard folder and it fired up nice and clean. The original container was set up using OpenVPN, and I was trying to migrate to WireGuard. I suspect that something dirty from the old OpenVPN setup was interfering with the WireGuard tunnel. I never found out what it was, but we're up and running now. Thank you @binhex
  5. I am still having the issue above. It has been over a month and no posts in this thread. @binhex is this a dead project?
  6. I have a couple of these on a Supermicro X10+ system. I had issues seeing the SSDs until I realized that there was a logic to the madness. In my case:
     1x NVMe: BIOS bifurcation = 16x, NVMe in slot 1
     2x NVMe: bifurcation = 8x/8x, NVMe in slots 1 and 3
     3x NVMe: bifurcation = 8x/4x/4x, NVMe in slots 1, 3, and 4
     4x NVMe: bifurcation = 4x/4x/4x/4x, NVMe in slots 1, 2, 3, and 4
     Any configuration other than this usually gave unhappy results. You would think that simply setting it to 4x/4x/4x/4x and adding any number of NVMe drives would just work, but it did not.
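For quick reference, the mapping above can be written as a tiny shell helper (purely illustrative — this encodes what worked on my board; other boards may behave differently):

```shell
# Map the number of installed NVMe drives to the BIOS bifurcation setting
# that worked on my Supermicro X10+ board (slots 1 / 1,3 / 1,3,4 / 1-4).
bifurcation_for() {
    case "$1" in
        1) echo "16x" ;;
        2) echo "8x/8x" ;;
        3) echo "8x/4x/4x" ;;
        4) echo "4x/4x/4x/4x" ;;
        *) echo "unknown" ;;
    esac
}
```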
  7. I'm having some DNS resolution issues using WireGuard. I've struggled with this for days now, and it's really frustrating. I do not know where to go from here... From the container console, I am able to ping devices on the local LAN, but I cannot ping or resolve any external IP or domain. With debug enabled, this is what I am getting from supervisord.log:
     2023-04-15 23:08:50.319465 [info] Host is running unRAID
     2023-04-15 23:08:50.349622 [info] System information Linux 3414a6b7f1c3 5.19.17-Unraid #2 SMP PREEMPT_DYNAMIC Wed Nov 2 11:54:15 PDT 2022 x86_64 GNU/Linux
     2023-04-15 23:08:50.385046 [info] OS_ARCH defined as 'x86-64'
     2023-04-15 23:08:50.420122 [info] PUID defined as '99'
     2023-04-15 23:08:50.466915 [info] PGID defined as '100'
     2023-04-15 23:08:50.520705 [info] UMASK defined as '000'
     2023-04-15 23:08:50.552389 [info] Permissions already set for '/config'
     2023-04-15 23:08:50.589095 [info] Deleting files in /tmp (non recursive)...
     2023-04-15 23:08:50.637891 [info] VPN_ENABLED defined as 'yes'
     2023-04-15 23:08:50.673953 [info] VPN_CLIENT defined as 'wireguard'
     2023-04-15 23:08:50.705869 [info] VPN_PROV defined as 'custom'
     2023-04-15 23:08:50.747555 [info] WireGuard config file (conf extension) is located at /config/wireguard/wg0.conf
     2023-04-15 23:08:50.801073 [info] VPN_REMOTE_SERVER defined as '162.221.202.17'
     2023-04-15 23:08:50.836787 [info] VPN_REMOTE_PORT defined as '51820'
     2023-04-15 23:08:50.864197 [info] VPN_DEVICE_TYPE defined as 'wg0'
     2023-04-15 23:08:50.892954 [info] VPN_REMOTE_PROTOCOL defined as 'udp'
     162.221.202.17
     2023-04-15 23:08:50.931615 [debug] iptables kernel module 'ip_tables' available, setting policy to drop...
     2023-04-15 23:08:50.967869 [debug] ip6tables kernel module 'ip6_tables' available, setting policy to drop...
     2023-04-15 23:08:51.009647 [debug] Docker interface defined as eth0
     2023-04-15 23:08:51.057866 [info] LAN_NETWORK defined as '10.5.5.0/24,10.3.3.0/24'
     2023-04-15 23:08:51.089893 [info] NAME_SERVERS defined as '10.5.5.1,1.1.1.1,1.0.0.1,8.8.8.8'
     2023-04-15 23:08:51.122678 [info] VPN_USER defined as 'private'
     2023-04-15 23:08:51.158368 [info] VPN_PASS defined as 'also_private'
     2023-04-15 23:08:51.193736 [info] ENABLE_PRIVOXY defined as 'yes'
     2023-04-15 23:08:51.232310 [warn] ADDITIONAL_PORTS DEPRECATED, please rename env var to 'VPN_INPUT_PORTS'
     2023-04-15 23:08:51.264660 [info] ADDITIONAL_PORTS defined as '7878,8090,8686,8989,9117'
     2023-04-15 23:08:51.299035 [info] VPN_OUTPUT_PORTS not defined (via -e VPN_OUTPUT_PORTS), skipping allow for custom outgoing ports
     2023-04-15 23:08:51.335565 [info] ENABLE_SOCKS defined as 'no'
     2023-04-15 23:08:51.370607 [info] ENABLE_PRIVOXY defined as 'yes'
     2023-04-15 23:08:51.406926 [info] Starting Supervisor...
     2023-04-15 23:08:51,695 INFO Included extra file "/etc/supervisor/conf.d/privoxy.conf" during parsing
     2023-04-15 23:08:51,695 INFO Set uid to user 0 succeeded
     2023-04-15 23:08:51,697 INFO supervisord started with pid 7
     2023-04-15 23:08:52,701 INFO spawned: 'start-script' with pid 192
     2023-04-15 23:08:52,704 INFO spawned: 'watchdog-script' with pid 193
     2023-04-15 23:08:52,705 INFO reaped unknown pid 8 (exit status 0)
     2023-04-15 23:08:52,713 DEBG 'start-script' stdout output:
     [info] VPN is enabled, beginning configuration of VPN
     [debug] Environment variables defined as follows
     ADDITIONAL_PORTS=7878,8090,8686,8989,9117
     APPLICATION=privoxy
     BASH=/bin/bash
     BASHOPTS=checkwinsize:cmdhist:complete_fullquote:extquote:force_fignore:globasciiranges:hostcomplete:interactive_comments:progcomp:promptvars:sourcepath
     BASH_ALIASES=()
     BASH_ARGC=()
     BASH_ARGV=()
     BASH_CMDS=()
     BASH_LINENO=([0]="0")
     BASH_SOURCE=([0]="/root/start.sh")
     BASH_VERSINFO=([0]="5" [1]="1" [2]="16" [3]="1" [4]="release" [5]="x86_64-pc-linux-gnu")
     BASH_VERSION='5.1.16(1)-release'
     DEBUG=true
     DIRSTACK=()
     ENABLE_PRIVOXY=yes
     ENABLE_SOCKS=no
     EUID=0
     GROUPS=()
     HOME=/home/nobody
     HOSTNAME=3414a6b7f1c3
     HOSTTYPE=x86_64
     HOST_CONTAINERNAME=binhex-privoxyvpn
     HOST_HOSTNAME=Saturn
     HOST_OS=Unraid
     IFS=$' \t\n'
     LANG=en_GB.UTF-8
     LAN_NETWORK=10.5.5.0/24,10.3.3.0/24
     MACHTYPE=x86_64-pc-linux-gnu
     NAME_SERVERS=10.5.5.1,1.1.1.1,1.0.0.1,8.8.8.8
     OPTERR=1
     OPTIND=1
     OSTYPE=linux-gnu
     OS_ARCH=x86-64
     PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
     PGID=100
     PIPESTATUS=([0]="0")
     PPID=7
     PS4='+ '
     PUID=99
     PWD=/
     SHELL=/bin/bash
     SHELLOPTS=braceexpand:hashall:interactive-comments
     SHLVL=1
     SOCKS_PASS=socks
     SOCKS_USER=admin
     SUPERVISOR_ENABLED=1
     SUPERVISOR_GROUP_NAME=start-script
     SUPERVISOR_PROCESS_NAME=start-script
     TERM=xterm
     TZ=America/Los_Angeles
     UID=0
     UMASK=000
     VPN_CLIENT=wireguard
     VPN_CONFIG=/config/wireguard/wg0.conf
     VPN_DEVICE_TYPE=wg0
     VPN_ENABLED=yes
     VPN_INPUT_PORTS=7878,8090,8686,8989,9117
     VPN_OPTIONS=
     VPN_OUTPUT_PORTS=
     VPN_PASS=
     VPN_PROV=custom
     VPN_REMOTE_PORT=51820
     VPN_REMOTE_PROTOCOL=udp
     VPN_REMOTE_SERVER=162.221.202.17
     VPN_USER=
     _='[debug] Environment variables defined as follows'
     [debug] Directory listing of files in /config/wireguard/ as follows
     2023-04-15 23:08:52,713 INFO success: start-script entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)
     2023-04-15 23:08:52,713 INFO success: watchdog-script entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)
     2023-04-15 23:08:52,719 DEBG 'start-script' stdout output:
     total 4
     drwxrwxr-x 1 nobody users  16 Apr 15 22:43 .
     drwxrwxr-x 1 nobody users 218 Apr 15 23:08 ..
     -rwxrwxr-x 1 nobody users 364 Apr 15 22:43 wg0.conf
     2023-04-15 23:08:52,719 DEBG 'start-script' stdout output: [debug] Contents of WireGuard config file '/config/wireguard/wg0.conf' as follows...
     2023-04-15 23:08:52,721 DEBG 'start-script' stdout output:
     [Interface]
     PostUp = '/root/wireguardup.sh'
     PostDown = '/root/wireguarddown.sh'
     PrivateKey = <my_private_key>
     ListenPort = 51820
     Address = 10.101.95.63/32
     [Peer]
     PublicKey = <my_public_key>
     PresharedKey = <my_preshared_key>
     AllowedIPs = 0.0.0.0/0
     Endpoint = 162.221.202.17:51820
     2023-04-15 23:08:52,727 DEBG 'start-script' stdout output: [info] Adding 10.5.5.1 to /etc/resolv.conf
     2023-04-15 23:08:52,732 DEBG 'start-script' stdout output: [info] Adding 1.1.1.1 to /etc/resolv.conf
     2023-04-15 23:08:52,738 DEBG 'start-script' stdout output: [info] Adding 1.0.0.1 to /etc/resolv.conf
     2023-04-15 23:08:52,745 DEBG 'start-script' stdout output: [info] Adding 8.8.8.8 to /etc/resolv.conf
     2023-04-15 23:08:52,755 DEBG 'start-script' stdout output: [debug] Show name servers defined for container
     2023-04-15 23:08:52,756 DEBG 'start-script' stdout output:
     nameserver 10.5.5.1
     nameserver 1.1.1.1
     nameserver 1.0.0.1
     nameserver 8.8.8.8
     2023-04-15 23:08:52,757 DEBG 'start-script' stdout output: [debug] Show contents of hosts file
     2023-04-15 23:08:52,758 DEBG 'start-script' stdout output:
     127.0.0.1 localhost
     ::1 localhost ip6-localhost ip6-loopback
     fe00::0 ip6-localnet
     ff00::0 ip6-mcastprefix
     ff02::1 ip6-allnodes
     ff02::2 ip6-allrouters
     172.17.0.2 3414a6b7f1c3
     2023-04-15 23:08:52,788 DEBG 'start-script' stdout output: [debug] Docker interface defined as eth0
     2023-04-15 23:08:52,795 DEBG 'start-script' stdout output: [info] Default route for container is 172.17.0.1
     2023-04-15 23:08:52,801 DEBG 'start-script' stdout output: [debug] Docker IP defined as 172.17.0.2
     2023-04-15 23:08:52,807 DEBG 'start-script' stdout output: [debug] Docker netmask defined as 255.255.0.0
     2023-04-15 23:08:52,969 DEBG 'start-script' stdout output: [info] Docker network defined as 172.17.0.0/16
     2023-04-15 23:08:52,975 DEBG 'start-script' stdout output: [info] Adding 10.5.5.0/24 as route via docker eth0
     2023-04-15 23:08:52,985 DEBG 'start-script' stdout output: [info] Adding 10.3.3.0/24 as route via docker eth0
     2023-04-15 23:08:52,987 DEBG 'start-script' stdout output: [info] ip route defined as follows... --------------------
     2023-04-15 23:08:52,989 DEBG 'start-script' stdout output:
     default via 172.17.0.1 dev eth0
     10.3.3.0/24 via 172.17.0.1 dev eth0
     10.5.5.0/24 via 172.17.0.1 dev eth0
     172.17.0.0/16 dev eth0 proto kernel scope link src 172.17.0.2
     local 127.0.0.0/8 dev lo table local proto kernel scope host src 127.0.0.1
     local 127.0.0.1 dev lo table local proto kernel scope host src 127.0.0.1
     broadcast 127.255.255.255 dev lo table local proto kernel scope link src 127.0.0.1
     local 172.17.0.2 dev eth0 table local proto kernel scope host src 172.17.0.2
     2023-04-15 23:08:52,989 DEBG 'start-script' stdout output: broadcast 172.17.255.255 dev eth0 table local proto kernel scope link src 172.17.0.2
     2023-04-15 23:08:52,990 DEBG 'start-script' stdout output: -------------------- [debug] Modules currently loaded for kernel
     2023-04-15 23:08:52,995 DEBG 'start-script' stdout output: Module Size Used by xt_connmark 16384 0 xt_comment 16384 0 iptable_raw 16384 0 wireguard 73728 0 curve25519_x86_64 32768 1 wireguard libcurve25519_generic 49152 2 curve25519_x86_64,wireguard libchacha20poly1305 16384 1 wireguard chacha_x86_64 28672 1 libchacha20poly1305 poly1305_x86_64 28672 1 libchacha20poly1305 ip6_udp_tunnel 16384 1 wireguard udp_tunnel 20480 1 wireguard libchacha 16384 1 chacha_x86_64 tun 53248 2 ipvlan 36864 0 xt_mark 16384 1 veth 32768 0 xt_CHECKSUM 16384 0 ipt_REJECT 16384 0 nf_reject_ipv4 16384 1 ipt_REJECT ip6table_mangle 16384 1 ip6table_nat 16384 1 nvidia_uvm 1355776 2 vhost_iotlb 16384 0 macvlan 28672 0 xt_nat 16384 10 xt_tcpudp 16384 35 xt_conntrack 16384 8 xt_MASQUERADE 16384 18 nf_conntrack_netlink 49152 0 nfnetlink 16384 2 nf_conntrack_netlink xfrm_user 36864 1 xt_addrtype 16384 2 iptable_nat 16384 1 nf_nat 49152 4 ip6table_nat,xt_nat,iptable_nat,xt_MASQUERADE nf_conntrack 139264 6 xt_conntrack,nf_nat,xt_nat,nf_conntrack_netlink,xt_connmark,xt_MASQUERADE nf_defrag_ipv6 16384 1 nf_conntrack nf_defrag_ipv4 16384 1 nf_conntrack br_netfilter 32768 0 xfs 1654784 11 dm_crypt 45056 15 dm_mod 126976 31 dm_crypt dax 36864 1 dm_mod md_mod 53248 10 iptable_mangle 16384 2 ipmi_devintf 16384 0 efivarfs 16384 1 ip6table_filter 16384 1 ip6_tables 28672 3 ip6table_filter,ip6table_nat,ip6table_mangle iptable_filter 16384 3 ip_tables 28672 6 iptable_filter,iptable_raw,iptable_nat,iptable_mangle x_tables 45056 19 ip6table_filter,xt_conntrack,iptable_filter,ip6table_nat,xt_tcpudp,xt_addrtype,xt_CHECKSUM,xt_nat,xt_comment,ip6_tables,ipt_REJECT,xt_connmark,iptable_raw,ip_tables,iptable_nat,ip6table_mangle,xt_MASQUERADE,iptable_mangle,xt_mark af_packet 49152 0 8021q 32768 0 garp 16384 1 8021q mrp 16384 1 8021q bridge 262144 1 br_netfilter stp 16384 2 bridge,garp llc 16384 3 bridge,stp,garp bonding 151552 0 tls 106496 1 bonding ixgbe 286720 0 xfrm_algo 16384 2 xfrm_user,ixgbe mdio 16384 1 ixgbe nvidia_drm 65536 0 nvidia_modeset 1204224 1 nvidia_drm ast 57344 0 drm_vram_helper 20480 1 ast i2c_algo_bit 16384 1 ast drm_ttm_helper 16384 2 drm_vram_helper,ast x86_pkg_temp_thermal 16384 0 nvidia 56016896 14 nvidia_uvm,nvidia_modeset intel_powerclamp 16384 0 ttm 73728 2 drm_vram_helper,drm_ttm_helper coretemp 16384 0 drm_kms_helper 159744 5 drm_vram_helper,ast,nvidia_drm mpt3sas 282624 11 drm 475136 8 drm_kms_helper,drm_vram_helper,ast,nvidia,drm_ttm_helper,nvidia_drm,ttm i2c_i801 24576 0 raid_class 16384 1 mpt3sas i2c_smbus 16384 1 i2c_i801 crct10dif_pclmul 16384 1 crc32_pclmul 16384 0 crc32c_intel 24576 3 ghash_clmulni_intel 16384 0 aesni_intel 380928 30 crypto_simd 16384 1 aesni_intel cryptd 24576 17 crypto_simd,ghash_clmulni_intel rapl 16384 0 nvme 49152 3 ipmi_ssif 32768 0 agpgart 40960 1 ttm backlight 20480 2 drm,nvidia_modeset intel_cstate 20480 0 intel_uncore 200704 0 nvme_core 1
     2023-04-15 23:08:52,996 DEBG 'start-script' stdout output: 06496 4 nvme scsi_transport_sas 40960 1 mpt3sas i2c_core 86016 8 drm_kms_helper,i2c_algo_bit,ast,nvidia,i2c_smbus,i2c_i801,ipmi_ssif,drm syscopyarea 16384 1 drm_kms_helper tpm_tis 16384 0 ahci 45056 2 acpi_ipmi 16384 0 sysfillrect 16384 1 drm_kms_helper tpm_tis_core 20480 1 tpm_tis input_leds 16384 0 joydev 24576 0 sysimgblt 16384 1 drm_kms_helper libahci 40960 1 ahci led_class 16384 1 input_leds fb_sys_fops 16384 1 drm_kms_helper tpm 73728 2 tpm_tis,tpm_tis_core wmi 28672 0 ipmi_si 57344 1 acpi_power_meter 20480 0 acpi_pad 24576 0 button 20480 0 unix 53248 451
     2023-04-15 23:08:53,003 DEBG 'start-script' stdout output: iptable_mangle 16384 2 ip_tables 28672 6 iptable_filter,iptable_raw,iptable_nat,iptable_mangle x_tables 45056 19 ip6table_filter,xt_conntrack,iptable_filter,ip6table_nat,xt_tcpudp,xt_addrtype,xt_CHECKSUM,xt_nat,xt_comment,ip6_tables,ipt_REJECT,xt_connmark,iptable_raw,ip_tables,iptable_nat,ip6table_mangle,xt_MASQUERADE,iptable_mangle,xt_mark
     2023-04-15 23:08:53,003 DEBG 'start-script' stdout output: [info] iptable_mangle support detected, adding fwmark for tables
     2023-04-15 23:08:53,127 DEBG 'start-script' stdout output: [info] iptables defined as follows... --------------------
     2023-04-15 23:08:53,130 DEBG 'start-script' stdout output:
     -P INPUT DROP
     -P FORWARD DROP
     -P OUTPUT DROP
     -A INPUT -s 162.221.202.17/32 -i eth0 -j ACCEPT
     -A INPUT -s 172.17.0.0/16 -d 172.17.0.0/16 -j ACCEPT
     -A INPUT -i eth0 -p tcp -m tcp --dport 7878 -j ACCEPT
     -A INPUT -i eth0 -p udp -m udp --dport 7878 -j ACCEPT
     -A INPUT -i eth0 -p tcp -m tcp --dport 8090 -j ACCEPT
     -A INPUT -i eth0 -p udp -m udp --dport 8090 -j ACCEPT
     -A INPUT -i eth0 -p tcp -m tcp --dport 8686 -j ACCEPT
     -A INPUT -i eth0 -p udp -m udp --dport 8686 -j ACCEPT
     -A INPUT -i eth0 -p tcp -m tcp --dport 8989 -j ACCEPT
     -A INPUT -i eth0 -p udp -m udp --dport 8989 -j ACCEPT
     -A INPUT -i eth0 -p tcp -m tcp --dport 9117 -j ACCEPT
     -A INPUT -i eth0 -p udp -m udp --dport 9117 -j ACCEPT
     -A INPUT -s 10.5.5.0/24 -d 172.17.0.0/16 -i eth0 -p tcp -m tcp --dport 8118 -j ACCEPT
     -A INPUT -s 10.3.3.0/24 -d 172.17.0.0/16 -i eth0 -p tcp -m tcp --dport 8118 -j ACCEPT
     -A INPUT -p icmp -m icmp --icmp-type 0 -j ACCEPT
     -A INPUT -i lo -j ACCEPT
     -A INPUT -i wg0 -j ACCEPT
     -A OUTPUT -d 162.221.202.17/32 -o eth0 -j ACCEPT
     -A OUTPUT -s 172.17.0.0/16 -d 172.17.0.0/16 -j ACCEPT
     -A OUTPUT -o eth0 -p tcp -m tcp --sport 7878 -j ACCEPT
     -A OUTPUT -o eth0 -p udp -m udp --sport 7878 -j ACCEPT
     -A OUTPUT -o eth0 -p tcp -m tcp --sport 8090 -j ACCEPT
     -A OUTPUT -o eth0 -p udp -m udp --sport 8090 -j ACCEPT
     -A OUTPUT -o eth0 -p tcp -m tcp --sport 8686 -j ACCEPT
     -A OUTPUT -o eth0 -p udp -m udp --sport 8686 -j ACCEPT
     -A OUTPUT -o eth0 -p tcp -m tcp --sport 8989 -j ACCEPT
     -A OUTPUT -o eth0 -p udp -m udp --sport 8989 -j ACCEPT
     -A OUTPUT -o eth0 -p tcp -m tcp --sport 9117 -j ACCEPT
     -A OUTPUT -o eth0 -p udp -m udp --sport 9117 -j ACCEPT
     -A OUTPUT -s 172.17.0.0/16 -d 10.5.5.0/24 -o eth0 -p tcp -m tcp --sport 8118 -j ACCEPT
     -A OUTPUT -s 172.17.0.0/16 -d 10.3.3.0/24 -o eth0 -p tcp -m tcp --sport 8118 -j ACCEPT
     -A OUTPUT -p icmp -m icmp --icmp-type 8 -j ACCEPT
     -A OUTPUT -o lo -j ACCEPT
     -A OUTPUT -o wg0 -j ACCEPT
     2023-04-15 23:08:53,131 DEBG 'start-script' stdout output: --------------------
     2023-04-15 23:08:53,135 DEBG 'start-script' stdout output: [info] Attempting to bring WireGuard interface 'up'...
     2023-04-15 23:08:53,147 DEBG 'start-script' stderr output: Warning: `/config/wireguard/wg0.conf' is world accessible
     2023-04-15 23:08:53,159 DEBG 'start-script' stderr output: [#] ip link add wg0 type wireguard
     2023-04-15 23:08:53,162 DEBG 'start-script' stderr output: [#] wg setconf wg0 /dev/fd/63
     2023-04-15 23:08:53,164 DEBG 'start-script' stderr output: [#] ip -4 address add 10.101.95.63/32 dev wg0
     2023-04-15 23:08:53,176 DEBG 'start-script' stderr output: [#] ip link set mtu 1420 up dev wg0
     2023-04-15 23:08:53,189 DEBG 'start-script' stderr output: [#] wg set wg0 fwmark 51820
     2023-04-15 23:08:53,190 DEBG 'start-script' stderr output: [#] ip -4 route add 0.0.0.0/0 dev wg0 table 51820
     2023-04-15 23:08:53,193 DEBG 'start-script' stderr output: [#] ip -4 rule add not fwmark 51820 table 51820
     2023-04-15 23:08:53,194 DEBG 'start-script' stderr output: [#] ip -4 rule add table main suppress_prefixlength 0
     2023-04-15 23:08:53,200 DEBG 'start-script' stderr output: [#] sysctl -q net.ipv4.conf.all.src_valid_mark=1
     2023-04-15 23:08:53,203 DEBG 'start-script' stderr output: [#] iptables-restore -n
     2023-04-15 23:08:53,206 DEBG 'start-script' stderr output: [#] '/root/wireguardup.sh'
     2023-04-15 23:08:53,212 DEBG 'start-script' stdout output: [debug] Waiting for valid local and gateway IP addresses from tunnel...
     2023-04-15 23:08:54,232 DEBG 'start-script' stdout output: [debug] Valid local IP address from tunnel acquired '10.101.95.63'
     2023-04-15 23:08:54,232 DEBG 'start-script' stdout output: [debug] Checking we can resolve name 'www.google.com' to address...
     2023-04-15 23:08:54,251 DEBG 'watchdog-script' stdout output: [debug] Checking we can resolve name 'www.google.com' to address...
     2023-04-15 23:09:39,290 DEBG 'start-script' stdout output: [debug] Having issues resolving name 'www.google.com' [debug] Retrying in 5 secs... [debug] 11 retries left
     2023-04-15 23:09:39,307 DEBG 'watchdog-script' stdout output: [debug] Having issues resolving name 'www.google.com' [debug] Retrying in 5 secs... [debug] 11 retries left
     2023-04-15 23:10:29,351 DEBG 'start-script' stdout output: [debug] Having issues resolving name 'www.google.com' [debug] Retrying in 5 secs... [debug] 10 retries left
     2023-04-15 23:10:29,365 DEBG 'watchdog-script' stdout output: [debug] Having issues resolving name 'www.google.com' [debug] Retrying in 5 secs... [debug] 10 retries left
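For anyone hitting the same wall: the way I've been trying to isolate it is to separate "tunnel is broken" from "DNS is broken". A small helper to list the resolvers in play (hypothetical, not part of the binhex image), plus the direct checks shown as comments:

```shell
# List the nameservers a resolv.conf-style file actually contains, in order.
nameservers() {
    awk '/^nameserver/ {print $2}' "$1"
}

# Manual checks from the container console:
#   nameservers /etc/resolv.conf       # should list 10.5.5.1, 1.1.1.1, 1.0.0.1, 8.8.8.8
#   ping -c1 1.1.1.1                   # raw IP out the tunnel -- no DNS involved
#   nslookup www.google.com 1.1.1.1    # query one resolver directly, bypassing the list
# If the raw ping to 1.1.1.1 also fails, the problem is routing/firewall, not DNS itself.
```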
  8. This all started because the Fix Common Problems plugin gave me errors about using the macvlan driver with static IPs on containers. I have been having random unexplained crashes and I suspect macvlan is the cause. This got me thinking about the overall setup of my server, and I'm seeking some advice on the best way to run my containers. I have 4x 10GbE network interfaces (eth0-eth3) on my server, and I would like to dedicate separate NICs to Unraid, Docker, and VMs respectively. I'm not worried about bonding network interfaces for redundancy/failover; I just want each component to have its own NIC. I would also like each of these to be able to talk to the others internally without going through the switch.
     eth0 = 10.5.5.5 (Unraid WebUI, file shares, SSH)
     eth1 = 10.5.5.6 (Docker containers, using the ipvlan driver)
     eth2 = 10.5.5.7 (VMs)
     I'm not sure of the optimal way to set this up; I don't even know where to begin. I want to make sure that I use the ipvlan driver in Docker to avoid these crashes while I'm doing this. Any assistance or advice to point me in the right direction would be greatly appreciated.
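For context on what I'm aiming for on the Docker side, the plain-CLI equivalent of an Unraid ipvlan custom network looks roughly like this (a sketch — on Unraid this is actually configured under Settings > Docker rather than by hand, and the network name is made up):

```shell
# ipvlan network parented to eth1: containers get their own 10.5.5.x addresses
# without using the macvlan driver that has been implicated in the crashes
docker network create -d ipvlan \
  --subnet=10.5.5.0/24 \
  --gateway=10.5.5.1 \
  -o parent=eth1 \
  lan_ipvlan
```

One ipvlan caveat worth knowing: the host can't talk to ipvlan containers through the same parent interface, which is another reason to put Docker on its own NIC (eth1) separate from the Unraid management NIC (eth0).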
  9. I spoke to VPNUnlimited about this issue (the deprecated AES-256-CBC cipher), and they said they are aware of the problem and are issuing new certificates to the servers. In the meantime, add this to your openvpn.conf: tls-cipher DEFAULT:@SECLEVEL=0. It's not optimal, but it works for now.
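For clarity, the workaround is a single directive in the client config:

```text
# openvpn.conf -- temporary workaround while VPNUnlimited reissues certificates;
# lowers the OpenSSL security level so the older cipher/certificate settings
# are still accepted during the TLS handshake
tls-cipher DEFAULT:@SECLEVEL=0
```

Remove it once your provider's new certificates are in place, since SECLEVEL=0 re-enables weak algorithms across the whole TLS negotiation.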
  10. I am also having this issue with VPNUnlimited using OpenVPN. I have attempted to migrate to WireGuard, but now I am having DNS failure issues instead. Keep me posted on anything you find, and I will do the same! (I would be very happy if I can get WireGuard to work; it's a much more efficient protocol.)
  11. Fantastic! Installed and working. I do get these errors when I run it, but they do not seem to affect functionality:
     nvtop: /lib64/libtinfo.so.6: no version information available (required by nvtop)
     nvtop: /lib64/libncursesw.so.6: no version information available (required by nvtop)
     Thank you so much!
  12. Oh yes, I understand why you don't want to include 3rd-party code. I just thought I would ask if it was possible. I would still be grateful for a separate plugin. I appreciate your work here, my friend.
  13. Well, I'd hate to have a whole plugin just for one command. I can check with the nerdpack author and see if they are interested?
  14. I have a suggestion/request: is it possible to include the nvtop command with this plugin? I thought about asking in the nerdpack plugin thread, but seeing how nvtop requires the Nvidia plugin to be useful, I figured this was a good place to start.
  15. Since the last update, all of my existing torrents have disappeared. The files are still there, but the qBittorrent client shows no torrents. All categories are gone, and all previous incomplete and completed torrents are gone from the client. It looks like a clean install. I have made no changes to the template, and everything is set how it should be. Neither Sonarr nor Radarr is adding torrents. This is a major inconvenience to say the least. Any suggestions on where to go from here? Edit: It seems that the torrent client IS actually running properly, with Sonarr/Radarr adding and managing torrents correctly, but it's the WebUI that's not functioning. I found a bug report for qBittorrent 4.3.8 which describes my issue exactly.