ati


Posts posted by ati

  1. 3 hours ago, Frank1940 said:

    On the Main tab (with things like Docker containers and VMs running), push the 'STOP' array button. With a stopwatch, determine the interval required for the array to stop. Increase that time by 25% (a somewhat arbitrary percentage) and use that number to set the time in this variable under SETTINGS >>> Disk Settings.


     

    This is a great start. 

     

    Thank you!
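For anyone scripting this, the padding step is just arithmetic; the 95-second reading below is a made-up example measurement, not mine:

```shell
# Pad a measured array-stop time by 25% and round up to whole seconds.
measured=95                               # example stopwatch reading, in seconds
padded=$(( (measured * 125 + 99) / 100 )) # integer ceil(measured * 1.25)
echo "Set the Disk Settings shutdown time-out to at least ${padded}s"
```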

  2. I generally don't shut down my unRAID server or stop the array. I was doing some electrical work in the house this past week, and it was the first time I had shut down my server in quite a while. I learned I was susceptible to the "Retry unmounting disk share(s)" issue even though I am running 6.11.5. 

     

    I run my server on a UPS and have it set to shut down automatically after a 3-minute outage. I always assumed it would shut down properly, but I guess not (crazy me, assuming things). So for the past 6+ months I have essentially been exposed to an 'unclean' shutdown. 

     

    Is there any way to force a clean shutdown? Frankly, I have very little interest in updating to 6.12.3. I am slowly losing trust in LimeTech to QA/QC their updates. The last time I updated I had to rebuild disks due to the Seagate error. While I understand that wasn't all on LimeTech, issuing a 'stable' release that fails to shut down is pretty crazy. Furthermore, there are still reports of it not being fixed in 6.12.3. I have no interest in beta testing for development; that is why I generally stay a revision behind. 

  3. I am hoping someone can give me some guidance on getting Paperless-NGX to work with Nginx Proxy Manager. It works fine when I assign both Redis and Paperless-NGX a br0 address on my LAN. However, when I use a bridge address it does not work; I get a 502 error. 

     

    This is for local access only. I run an instance of NPM for all my services, where the names are *.local (unraid.local, plex.local, etc.), with PiHole handling DNS resolution to point them at the NPM server.

     

    I generally prefer to set up custom Docker networks for services like this for the Redis link, but I cannot seem to get the proxy to work without both containers having their own LAN IPs. 
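To be concrete, this is the kind of setup I mean; the container names and the Paperless port below are assumptions about a typical install, not something I have verified here:

```shell
# Hypothetical sketch: put NPM and Paperless-NGX on one user-defined bridge so
# NPM can reach Paperless by container name instead of a LAN IP.
docker network create proxy-net
docker network connect proxy-net nginx-proxy-manager   # assumed container name
docker network connect proxy-net paperless-ngx         # assumed container name
# Then point the NPM proxy host at http://paperless-ngx:8000
# (8000 is commonly the Paperless-NGX web port; adjust to the template in use).
```

On a user-defined bridge, Docker's embedded DNS resolves container names, which is what makes the proxy target work without LAN IPs.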

     

     

     

  4. I have a user set up that uses SSH to copy files to a public share on my server. I was using this to copy pictures from my phone to the server automatically every night. After the update this now fails with 'permission denied' on copy. 

     

    I also tried the same credentials on my Windows machine with WinSCP and got the same error. It seems to work fine when I log in as root. 

     

    The owner and group of the folder are nobody:nobody as well. 
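For reference, when this has turned out to be a plain ownership/permissions problem, resetting them looks roughly like this; the share path is a made-up example, and whether this is actually the cause after the update is an assumption:

```shell
# Hypothetical sketch: reset ownership and permissions on an unRAID share.
# /mnt/user/Public is an example path; substitute the real share.
chown -R nobody:users /mnt/user/Public
chmod -R u+rwX,g+rwX,o+rX /mnt/user/Public
```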

     

     

     

  5. I would like to move my UniFi AP controller into a Docker container on my unRAID server. My unRAID server is already configured for VLANs and has the appropriate VLANs accessible on its br0 interface. What I am not sure about is how to get a trunk port to the UniFi controller's Docker container. 

     

    Current network setup:

    VLAN 1: Network management

    VLAN 10: Servers

    VLAN 20: Wireless clients

    VLAN 40: IoT clients

     

    While everything is routable, I would prefer not to route my AP management traffic (VLAN 1) to my UniFi server (VLAN 10). It makes everything more complicated when adopting new equipment. 

     

    TL;DR

     

    How can I pass multiple VLANs to a docker container? 
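The closest thing I have seen to this is not a true trunk but one macvlan network per VLAN sub-interface, with the container attached to each; the subnets, gateways, parent interface names, and image below are assumed examples for my layout:

```shell
# Hypothetical sketch: one macvlan network per VLAN sub-interface, then attach
# the container to each. All subnets/gateways/parents are assumed examples.
docker network create -d macvlan --subnet=192.168.1.0/24  --gateway=192.168.1.1  -o parent=br0.1  vlan1
docker network create -d macvlan --subnet=192.168.20.0/24 --gateway=192.168.20.1 -o parent=br0.20 vlan20
docker run -d --name unifi-controller --network vlan1 jacobalberty/unifi  # assumed image
docker network connect vlan20 unifi-controller
```

One known macvlan caveat: the host itself cannot reach containers on a macvlan network directly, which matters if unRAID needs to talk to the controller.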

     

     

     

  6. I recently went the route of adding several VLANs to segment my docker containers into DMZ areas. In the process I noticed that you have to have an IP address set in the network settings for the network (VLAN) to be available to Docker. I guess that makes sense, as that is how they learn a default gateway. 

     

    However, that poses a security risk in my mind. If one of those containers is compromised, it can reach the unRAID server, which now has an IP on that network. Is there any way I can disable GUI and SSH access from a specific interface? Because it is on the same network segment, a router firewall rule is useless. 
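The closest workaround I can think of is host firewall rules on that interface; br0.40 and the port list are assumptions for my setup, and I have not confirmed whether such rules survive an unRAID reboot without being re-applied from a startup script:

```shell
# Hypothetical sketch: drop management traffic (SSH, HTTP, HTTPS) arriving on
# the DMZ VLAN interface. br0.40 is an assumed interface name.
iptables -A INPUT -i br0.40 -p tcp -m multiport --dports 22,80,443 -j DROP
```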

     

     

     

     

  7. 4 hours ago, JorgeB said:

    You are using a valid config, but to simplify things disable the bond and set eth1 as eth0 (or swap the cables)

    Tried that many times. I even tried each port individually. When I deleted /config/network.cfg I set it back up as a single interface. 

     

    After lots of reboots (to remove the USB and get the diagnostic files off it), it just worked with no changes. Any idea why there are lots of references in the syslog to virbr0 using an IP address that isn't anywhere in my network or unRAID configuration? 

     

    Regardless, I am afraid to reboot for fear of having this ordeal happen again. I spent 3 hours rebooting my server in circles for it to miraculously work again using the same configuration I started with 3 hours prior. 

  8. I am even more lost now.

     

    I shut down to get the diagnostics off the USB, and upon reboot it booted normally and I was able to access it via the network. I logged into the web GUI to start up the array. Hell, I even got an email alert that my UPS was unplugged (to get the USB out). About 30 seconds into starting the array I lost the network again. I connected back to the server to check out the syslog and saw this: 

    Mar  7 15:14:38 unRAID root: Starting diskload
    Mar  7 15:14:38 unRAID emhttpd: Mounting disks...
    Mar  7 15:14:38 unRAID emhttpd: shcmd (197): /sbin/btrfs device scan
    Mar  7 15:14:38 unRAID root: Scanning for Btrfs filesystems
    Mar  7 15:14:38 unRAID emhttpd: shcmd (198): mkdir -p /mnt/disk1
    Mar  7 15:14:38 unRAID emhttpd: shcmd (199): mount -t xfs -o noatime,nodiratime /dev/md1 /mnt/disk1
    Mar  7 15:14:38 unRAID kernel: SGI XFS with ACLs, security attributes, no debug enabled
    Mar  7 15:14:38 unRAID kernel: XFS (md1): Mounting V5 Filesystem
    Mar  7 15:14:38 unRAID kernel: XFS (md1): Ending clean mount
    Mar  7 15:14:38 unRAID emhttpd: shcmd (200): xfs_growfs /mnt/disk1
    Mar  7 15:14:38 unRAID root: meta-data=/dev/md1               isize=512    agcount=8, agsize=244188659 blks
    Mar  7 15:14:38 unRAID root:          =                       sectsz=512   attr=2, projid32bit=1
    Mar  7 15:14:38 unRAID root:          =                       crc=1        finobt=1, sparse=1, rmapbt=0
    Mar  7 15:14:38 unRAID root:          =                       reflink=1
    Mar  7 15:14:38 unRAID root: data     =                       bsize=4096   blocks=1953506633, imaxpct=5
    Mar  7 15:14:38 unRAID root:          =                       sunit=0      swidth=0 blks
    Mar  7 15:14:38 unRAID root: naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
    Mar  7 15:14:38 unRAID root: log      =internal log           bsize=4096   blocks=476930, version=2
    Mar  7 15:14:38 unRAID root:          =                       sectsz=512   sunit=0 blks, lazy-count=1
    Mar  7 15:14:38 unRAID root: realtime =none                   extsz=4096   blocks=0, rtextents=0
    Mar  7 15:14:38 unRAID emhttpd: shcmd (201): mkdir -p /mnt/disk2
    Mar  7 15:14:38 unRAID emhttpd: shcmd (202): mount -t xfs -o noatime,nodiratime /dev/md2 /mnt/disk2
    Mar  7 15:14:38 unRAID kernel: XFS (md2): Mounting V5 Filesystem
    Mar  7 15:14:38 unRAID kernel: XFS (md2): Ending clean mount
    Mar  7 15:14:38 unRAID emhttpd: shcmd (203): xfs_growfs /mnt/disk2
    Mar  7 15:14:38 unRAID root: meta-data=/dev/md2               isize=512    agcount=8, agsize=244188659 blks
    Mar  7 15:14:38 unRAID root:          =                       sectsz=512   attr=2, projid32bit=1
    Mar  7 15:14:38 unRAID root:          =                       crc=1        finobt=1, sparse=1, rmapbt=0
    Mar  7 15:14:38 unRAID root:          =                       reflink=1
    Mar  7 15:14:38 unRAID root: data     =                       bsize=4096   blocks=1953506633, imaxpct=5
    Mar  7 15:14:38 unRAID root:          =                       sunit=0      swidth=0 blks
    Mar  7 15:14:38 unRAID root: naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
    Mar  7 15:14:38 unRAID root: log      =internal log           bsize=4096   blocks=476930, version=2
    Mar  7 15:14:38 unRAID root:          =                       sectsz=512   sunit=0 blks, lazy-count=1
    Mar  7 15:14:38 unRAID root: realtime =none                   extsz=4096   blocks=0, rtextents=0
    Mar  7 15:14:38 unRAID emhttpd: shcmd (204): mkdir -p /mnt/disk3
    Mar  7 15:14:38 unRAID emhttpd: shcmd (205): mount -t xfs -o noatime,nodiratime /dev/md3 /mnt/disk3
    Mar  7 15:14:38 unRAID kernel: XFS (md3): Mounting V5 Filesystem
    Mar  7 15:14:39 unRAID kernel: XFS (md3): Ending clean mount
    Mar  7 15:14:39 unRAID emhttpd: shcmd (206): xfs_growfs /mnt/disk3
    Mar  7 15:14:39 unRAID root: meta-data=/dev/md3               isize=512    agcount=6, agsize=268435455 blks
    Mar  7 15:14:39 unRAID root:          =                       sectsz=512   attr=2, projid32bit=1
    Mar  7 15:14:39 unRAID root:          =                       crc=1        finobt=1, sparse=1, rmapbt=0
    Mar  7 15:14:39 unRAID root:          =                       reflink=1
    Mar  7 15:14:39 unRAID root: data     =                       bsize=4096   blocks=1465122442, imaxpct=5
    Mar  7 15:14:39 unRAID root:          =                       sunit=0      swidth=0 blks
    Mar  7 15:14:39 unRAID root: naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
    Mar  7 15:14:39 unRAID root: log      =internal log           bsize=4096   blocks=521728, version=2
    Mar  7 15:14:39 unRAID root:          =                       sectsz=512   sunit=0 blks, lazy-count=1
    Mar  7 15:14:39 unRAID root: realtime =none                   extsz=4096   blocks=0, rtextents=0
    Mar  7 15:14:39 unRAID emhttpd: shcmd (207): mkdir -p /mnt/cache
    Mar  7 15:14:39 unRAID emhttpd: cache uuid: 7cb7e73a-56e0-4226-b453-ed9ec86cc38f
    Mar  7 15:14:39 unRAID emhttpd: cache TotDevices: 2
    Mar  7 15:14:39 unRAID emhttpd: cache NumDevices: 2
    Mar  7 15:14:39 unRAID emhttpd: cache NumFound: 2
    Mar  7 15:14:39 unRAID emhttpd: cache NumMissing: 0
    Mar  7 15:14:39 unRAID emhttpd: cache NumMisplaced: 0
    Mar  7 15:14:39 unRAID emhttpd: cache NumExtra: 0
    Mar  7 15:14:39 unRAID emhttpd: cache LuksState: 0
    Mar  7 15:14:39 unRAID emhttpd: shcmd (208): mount -t btrfs -o noatime,nodiratime -U 7cb7e73a-56e0-4226-b453-ed9ec86cc38f /mnt/cache
    Mar  7 15:14:39 unRAID kernel: BTRFS info (device sdh1): using free space tree
    Mar  7 15:14:39 unRAID kernel: BTRFS info (device sdh1): has skinny extents
    Mar  7 15:14:56 unRAID emhttpd: shcmd (209): /sbin/btrfs filesystem resize 1:max /mnt/cache
    Mar  7 15:14:56 unRAID root: Resize '/mnt/cache' of '1:max'
    Mar  7 15:14:56 unRAID kernel: BTRFS info (device sdh1): resizing devid 1
    Mar  7 15:14:56 unRAID kernel: BTRFS info (device sdh1): new size for /dev/sdh1 is 3000592928768
    Mar  7 15:14:56 unRAID emhttpd: shcmd (210): /sbin/btrfs filesystem resize 2:max /mnt/cache
    Mar  7 15:14:56 unRAID root: Resize '/mnt/cache' of '2:max'
    Mar  7 15:14:56 unRAID kernel: BTRFS info (device sdh1): resizing devid 2
    Mar  7 15:14:56 unRAID kernel: BTRFS info (device sdh1): new size for /dev/sde1 is 3000592928768
    Mar  7 15:14:56 unRAID emhttpd: shcmd (211): sync
    Mar  7 15:14:56 unRAID emhttpd: shcmd (212): mkdir /mnt/user0
    Mar  7 15:14:56 unRAID emhttpd: shcmd (213): /usr/local/sbin/shfs /mnt/user0 -disks 14 -o noatime,allow_other  |& logger
    Mar  7 15:14:56 unRAID shfs: use_ino: 1
    Mar  7 15:14:56 unRAID shfs: direct_io: 0
    Mar  7 15:14:56 unRAID emhttpd: shcmd (214): mkdir /mnt/user
    Mar  7 15:14:56 unRAID emhttpd: shcmd (215): /usr/local/sbin/shfs /mnt/user -disks 15 2048000000 -o noatime,allow_other -o remember=0  |& logger
    Mar  7 15:14:56 unRAID shfs: use_ino: 1
    Mar  7 15:14:56 unRAID shfs: direct_io: 0
    Mar  7 15:14:57 unRAID emhttpd: shcmd (217): /usr/local/sbin/update_cron
    Mar  7 15:14:57 unRAID root: Delaying execution of fix common problems scan for 10 minutes
    Mar  7 15:14:57 unRAID unassigned.devices: Mounting 'Auto Mount' Devices...
    Mar  7 15:14:57 unRAID emhttpd: Starting services...
    Mar  7 15:14:57 unRAID emhttpd: shcmd (220): /etc/rc.d/rc.samba restart
    Mar  7 15:14:59 unRAID root: Starting Samba:  /usr/sbin/smbd -D
    Mar  7 15:15:00 unRAID root:                  /usr/sbin/nmbd -D
    Mar  7 15:15:00 unRAID root:                  /usr/sbin/wsdd 
    Mar  7 15:15:00 unRAID root:                  /usr/sbin/winbindd -D
    Mar  7 15:15:00 unRAID emhttpd: shcmd (234): /usr/local/sbin/mount_image '/mnt/cache/system/docker/docker.img' /var/lib/docker 40
    Mar  7 15:15:00 unRAID kernel: BTRFS: device fsid 1e042287-9021-48d2-8ad1-772169654339 devid 1 transid 1747553 /dev/loop2
    Mar  7 15:15:00 unRAID kernel: BTRFS info (device loop2): using free space tree
    Mar  7 15:15:00 unRAID kernel: BTRFS info (device loop2): has skinny extents
    Mar  7 15:15:00 unRAID root: Resize '/var/lib/docker' of 'max'
    Mar  7 15:15:00 unRAID kernel: BTRFS info (device loop2): new size for /dev/loop2 is 42949672960
    Mar  7 15:15:00 unRAID emhttpd: shcmd (236): /etc/rc.d/rc.docker start
    Mar  7 15:15:00 unRAID root: starting dockerd ...
    Mar  7 15:15:04 unRAID avahi-daemon[10987]: Joining mDNS multicast group on interface br-4ead1d45ff54.IPv4 with address 172.18.0.1.
    Mar  7 15:15:04 unRAID avahi-daemon[10987]: New relevant interface br-4ead1d45ff54.IPv4 for mDNS.
    Mar  7 15:15:04 unRAID avahi-daemon[10987]: Registering new address record for 172.18.0.1 on br-4ead1d45ff54.IPv4.
    Mar  7 15:15:04 unRAID kernel: IPv6: ADDRCONF(NETDEV_UP): br-4ead1d45ff54: link is not ready
    Mar  7 15:15:04 unRAID avahi-daemon[10987]: Joining mDNS multicast group on interface br-66b05052a9b2.IPv4 with address 192.168.240.1.
    Mar  7 15:15:04 unRAID avahi-daemon[10987]: New relevant interface br-66b05052a9b2.IPv4 for mDNS.
    Mar  7 15:15:04 unRAID avahi-daemon[10987]: Registering new address record for 192.168.240.1 on br-66b05052a9b2.IPv4.
    Mar  7 15:15:04 unRAID kernel: IPv6: ADDRCONF(NETDEV_UP): br-66b05052a9b2: link is not ready
    Mar  7 15:15:04 unRAID avahi-daemon[10987]: Joining mDNS multicast group on interface docker0.IPv4 with address 172.17.0.1.
    Mar  7 15:15:04 unRAID avahi-daemon[10987]: New relevant interface docker0.IPv4 for mDNS.
    Mar  7 15:15:04 unRAID avahi-daemon[10987]: Registering new address record for 172.17.0.1 on docker0.IPv4.
    Mar  7 15:15:04 unRAID kernel: IPv6: ADDRCONF(NETDEV_UP): docker0: link is not ready
    Mar  7 15:15:09 unRAID rc.docker: b1f70fe360063b35c8f89ccb8a5befd5f30feb5c090801074f6f4ddcbd41b28f
    Mar  7 15:15:11 unRAID emhttpd: shcmd (250): /usr/local/sbin/mount_image '/mnt/cache/system/libvirt/libvirt.img' /etc/libvirt 1
    Mar  7 15:15:11 unRAID kernel: BTRFS: device fsid bc304288-554a-4fd8-a63c-b4a0a37f5e68 devid 1 transid 262 /dev/loop3
    Mar  7 15:15:11 unRAID kernel: BTRFS info (device loop3): using free space tree
    Mar  7 15:15:11 unRAID kernel: BTRFS info (device loop3): has skinny extents
    Mar  7 15:15:11 unRAID root: Resize '/etc/libvirt' of 'max'
    Mar  7 15:15:11 unRAID kernel: BTRFS info (device loop3): new size for /dev/loop3 is 1073741824
    Mar  7 15:15:11 unRAID emhttpd: shcmd (252): /etc/rc.d/rc.libvirt start
    Mar  7 15:15:11 unRAID root: Starting virtlockd...
    Mar  7 15:15:11 unRAID root: Starting virtlogd...
    Mar  7 15:15:11 unRAID root: Starting libvirtd...
    Mar  7 15:15:11 unRAID kernel: tun: Universal TUN/TAP device driver, 1.6
    Mar  7 15:15:11 unRAID kernel: virbr0: port 1(virbr0-nic) entered blocking state
    Mar  7 15:15:11 unRAID kernel: virbr0: port 1(virbr0-nic) entered disabled state
    Mar  7 15:15:11 unRAID kernel: device virbr0-nic entered promiscuous mode
    Mar  7 15:15:11 unRAID emhttpd: nothing to sync
    Mar  7 15:15:11 unRAID kernel: bond0: link status definitely down for interface eth0, disabling it
    Mar  7 15:15:11 unRAID kernel: bond0: making interface eth1 the new active one
    Mar  7 15:15:11 unRAID kernel: device eth0 left promiscuous mode
    Mar  7 15:15:11 unRAID kernel: device eth1 entered promiscuous mode
    Mar  7 15:15:11 unRAID avahi-daemon[10987]: Joining mDNS multicast group on interface virbr0.IPv4 with address 192.168.122.1.
    Mar  7 15:15:11 unRAID avahi-daemon[10987]: New relevant interface virbr0.IPv4 for mDNS.
    Mar  7 15:15:11 unRAID avahi-daemon[10987]: Registering new address record for 192.168.122.1 on virbr0.IPv4.
    Mar  7 15:15:11 unRAID kernel: virbr0: port 1(virbr0-nic) entered blocking state
    Mar  7 15:15:11 unRAID kernel: virbr0: port 1(virbr0-nic) entered listening state
    Mar  7 15:15:11 unRAID dnsmasq[14786]: started, version 2.80 cachesize 150
    Mar  7 15:15:11 unRAID dnsmasq[14786]: compile time options: IPv6 GNU-getopt no-DBus i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth no-DNSSEC loop-detect inotify dumpfile
    Mar  7 15:15:11 unRAID dnsmasq-dhcp[14786]: DHCP, IP range 192.168.122.2 -- 192.168.122.254, lease time 1h
    Mar  7 15:15:11 unRAID dnsmasq-dhcp[14786]: DHCP, sockets bound exclusively to interface virbr0
    Mar  7 15:15:11 unRAID dnsmasq[14786]: reading /etc/resolv.conf
    Mar  7 15:15:11 unRAID dnsmasq[14786]: using nameserver 8.8.8.8#53
    Mar  7 15:15:11 unRAID dnsmasq[14786]: using nameserver 8.8.4.4#53
    Mar  7 15:15:11 unRAID dnsmasq[14786]: read /etc/hosts - 2 addresses
    Mar  7 15:15:11 unRAID dnsmasq[14786]: read /var/lib/libvirt/dnsmasq/default.addnhosts - 0 addresses
    Mar  7 15:15:11 unRAID dnsmasq-dhcp[14786]: read /var/lib/libvirt/dnsmasq/default.hostsfile
    Mar  7 15:15:11 unRAID kernel: virbr0: port 1(virbr0-nic) entered disabled state
    Mar  7 15:15:11 unRAID kernel: L1TF CPU bug present and SMT on, data leak possible. See CVE-2018-3646 and https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/l1tf.html for details.
    Mar  7 15:15:12 unRAID tips.and.tweaks: Tweaks Applied
    Mar  7 15:15:12 unRAID unassigned.devices: Mounting 'Auto Mount' Remote Shares...
    Mar  7 15:15:12 unRAID sudo:     root : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/bash -c /usr/local/emhttp/plugins/unbalance/unbalance -port 6237
    Mar  7 15:15:13 unRAID kernel: br0: port 2(vnet0) entered blocking state
    Mar  7 15:15:13 unRAID kernel: br0: port 2(vnet0) entered disabled state
    Mar  7 15:15:13 unRAID kernel: device vnet0 entered promiscuous mode
    Mar  7 15:15:13 unRAID kernel: br0: port 2(vnet0) entered blocking state
    Mar  7 15:15:13 unRAID kernel: br0: port 2(vnet0) entered forwarding state
    Mar  7 15:15:14 unRAID kernel: e1000e: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None
    Mar  7 15:15:14 unRAID kernel: bond0: link status definitely up for interface eth0, 1000 Mbps full duplex
    Mar  7 15:15:15 unRAID avahi-daemon[10987]: Joining mDNS multicast group on interface vnet0.IPv6 with address fe80::fc54:ff:fe00:f65e.
    Mar  7 15:15:15 unRAID avahi-daemon[10987]: New relevant interface vnet0.IPv6 for mDNS.
    Mar  7 15:15:15 unRAID avahi-daemon[10987]: Registering new address record for fe80::fc54:ff:fe00:f65e on vnet0.*.
    Mar  7 15:16:20 unRAID ntpd[1875]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
    Mar  7 15:17:13 unRAID unassigned.devices: Cannot 'Auto Mount' Remote Shares.  Network not available!
    Mar  7 15:18:16 unRAID login[10849]: ROOT LOGIN  on '/dev/tty1'
    

    What the heck is happening? 

  9. I am working my way towards upgrading from 6.8 to 6.10. As part of that I needed to update a few settings on my Seagate drives, which required a reboot. 

     

    I made the changes and rebooted, but the server didn't come back right away. I later learned it was because I had left a second USB drive plugged in and it messed up the boot order. I removed that USB, rebooted, and the server came up as expected. I then verified the changes using the SeaChest tools and rebooted again. 

     

    This is when it all went sideways. Everything comes up fine, but I have no network access.

    • The /boot/config/network.cfg matches my previously backed-up config (the CLI shows the correct address)
    • I checked the syslog and didn't see anything glaring. When I pulled the link, the syslog showed it going down, so the NIC isn't dead.
    • I deleted /boot/config/network.cfg and rebooted; it did not re-create the file.
    • I rebooted again into GUI mode, entered my static IP in the GUI, and rebooted once more for the settings to apply. No luck.
    • These are today's diagnostics from the USB, from the last reboot. 

     

    I am going crazy. This is so simple. It really makes me nervous that a simple reboot caused this much of an issue. 

     

    What am I missing? I am beyond frustrated with something so simple. 

    unraid-diagnostics-20230307-1115.zip unraid-diagnostics-20230307-1406.zip unraid-diagnostics-20230307-1451.zip
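For anyone in the same spot, these are the console checks I mean when I say the CLI shows the correct address (nothing unRAID-specific, just iproute2):

```shell
# Quick console checks when the GUI is unreachable: which interfaces actually
# hold an IP, and whether a default route exists.
ip -br addr show
ip route show
```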

  10. 19 minutes ago, trurl said:

    Did you check that log?

    Yep. 

     

    It doesn't show anything more than the verbose output. Actually less: it was only a list of files transferred, and it just stopped at the file before the one rsync hung on. 

     

    2023/02/12 22:28:04 [13522] >f..tpog... Phone Pictures/IMG_20200220_055619.jpg
    2023/02/12 22:28:04 [13522] >f..tpog... Phone Pictures/IMG_20200221_073231.jpg
    2023/02/12 22:28:04 [13522] >f..tpog... Phone Pictures/IMG_20200221_122321.jpg
    2023/02/12 22:28:04 [13522] >f..tpog... Phone Pictures/IMG_20200221_124635.jpg
    2023/02/12 22:28:04 [13522] >f..tpog... Phone Pictures/IMG_20200222_120952.jpg
    2023/02/13 07:38:57 [13522] rsync error: received SIGINT, SIGTERM, or SIGHUP (code 20) at rsync.c(642) [sender=3.1.3]
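Pulling the last logged file out of a log like that can be done in one line; the sample log below just mirrors the format above (date, time, [pid], itemized flags, then the path):

```shell
# Print the last file rsync logged before it stopped.
cat > /tmp/rsynclog.sample <<'EOF'
2023/02/12 22:28:04 [13522] >f..tpog... Phone Pictures/IMG_20200221_124635.jpg
2023/02/12 22:28:04 [13522] >f..tpog... Phone Pictures/IMG_20200222_120952.jpg
EOF
# Fields 1-4 are date, time, [pid], itemized flags; the path is field 5 onward.
last_file=$(grep '>f' /tmp/rsynclog.sample | tail -n 1 | cut -d' ' -f5-)
echo "$last_file"
```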

     

  11. I am backing up one of the folders on my unRAID server to a directly attached drive via the Unassigned Devices plugin. The folder contains a bunch of family documents (PDFs, pictures, family videos, etc.), about 1.2TB in total. 

     

    In the past I have used rsync and it has worked well. This is what I ran this time:

    rsync -avh --info=progress2 --partial --log-file=rsynclog.log /mnt/user/Documents/ /mnt/disks/external/documents_backup_2023-02-12/

     

    rsync was copying just as I expected until it reached a subfolder where I back up my cellphone pictures every few weeks. It hung on a single picture (roughly 5MB). I manually moved that picture, re-ran rsync, and it hung again on the next picture. At this point I wanted to finish backing up everything else, so I ran this:

    rsync -avh --info=progress2 --partial --exclude '/Phone Pictures' /mnt/user/Documents/ /mnt/disks/external/documents_backup_2023-02-12/

     

    That command ran just fine and finished everything else. 

     

    I then went back and tried to rsync just the 'Phone Pictures' directory, and it kept hanging. I used logging and verbose mode and didn't get anything useful; it just stopped mid-transfer for over an hour. If I re-ran it, it would stop on the exact same file too, until I manually moved that file, and then it would hang on the next one. 

     

    I eventually gave up and ran this:

    cp -R /mnt/user/Documents/Phone\ Pictures /mnt/disks/external/documents_backup_2023-02-12/

    That ran just fine and I verified the files were all transferred. 

     

     

    I have never had issues like this with rsync in the past, and I am not experienced enough with it to know how to troubleshoot properly. Is there anything I can do to get more logging? Or any ideas what the cause might be? I was hoping to automate this backup to run once every few months.
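In case it helps, this is the extra-diagnostics variant I would try next; --timeout, --stats, and stacked -v are standard rsync flags, though whether they would surface this particular hang is a guess:

```shell
# Hypothetical sketch: re-run the stalling copy with maximum verbosity, I/O
# stats, and an I/O timeout so a hang aborts instead of sitting forever.
rsync -avh -vvv --stats --timeout=300 \
  --log-file=/tmp/rsync-debug.log \
  "/mnt/user/Documents/Phone Pictures/" \
  "/mnt/disks/external/documents_backup_2023-02-12/Phone Pictures/"
```

With --timeout, rsync exits with an error if no data moves for 300 seconds, which at least turns a silent hang into a logged failure.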

     

    Thanks.

  12. On 2/6/2023 at 5:06 AM, binhex said:

     

    i would assume you both are running older docker engine, if you are running unraid then update to the latest stable release, if you are running anything else then update docker engine to the latest version.

     

     

     

    Is there any way to update Docker without updating unRAID? After a few attempts at upgrading that produced undesirable issues, I rolled back. 

     

    If not, what is the latest version of the container I can run? 

     

  13. On 7/19/2022 at 12:07 PM, deltaexray said:

    Would like to know if this is still needed or has been, sort of, fixed in the latest 6.10.xx version? By now it's the only reason I'm not updating

    I am waiting as well, and from what I can tell this issue will not be fixed by LimeTech. I think if we want to move from 6.8.x to anything newer, we'll have to run through the steps outlined above.

     

    I am still holding out for something, but again, I'm not holding my breath. What frustrates me most is that this worked in 6.8.xx and not in 6.9 onwards, so it is something they could potentially address.

  14. On 8/5/2022 at 1:56 PM, ati said:

    I am struggling to figure out what happened to my container. Yesterday I had a momentary power loss which took down my internet, but my UPS kept my unRAID server online. I restored the internet and found the container GUI inaccessible a day or so later. I figured it was related to the internet loss breaking the VPN connection. No biggie. I restarted the container to re-establish the connection, but had no luck getting back into the GUI. Nothing has changed. No configuration change, nothing. But now it won't work. 

     

    I dug through the startup log and cannot seem to find a glaring error either. Any guidance would be appreciated. 

     

    Container command (passwords removed):

    # /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name='binhex-delugevpn' --net='bridge' --privileged=true -e TZ="America/Chicago" -e HOST_OS="Unraid" -e 'VPN_ENABLED'='yes' -e 'VPN_USER'='' -e 'VPN_PASS'='' -e 'VPN_PROV'='custom' -e 'VPN_CLIENT'='openvpn' -e 'VPN_OPTIONS'='' -e 'STRICT_PORT_FORWARD'='yes' -e 'ENABLE_PRIVOXY'='no' -e 'LAN_NETWORK'='192.168.10.0/24' -e 'NAME_SERVERS'='85.203.37.1,85.203.37.2' -e 'DELUGE_DAEMON_LOG_LEVEL'='info' -e 'DELUGE_WEB_LOG_LEVEL'='info' -e 'VPN_INPUT_PORTS'='7878,5800,5900,9117,8989,9897' -e 'VPN_OUTPUT_PORTS'='' -e 'DEBUG'='false' -e 'UMASK'='000' -e 'PUID'='99' -e 'PGID'='100' -p '8112:8112/tcp' -p '58846:58846/tcp' -p '58946:58946/tcp' -p '58946:58946/udp' -p '8118:8118/tcp' -p '7878:7878/tcp' -p '9117:9117/tcp' -p '8989:8989/tcp' -p '9897:9897/tcp' -p '8686:8686/tcp' -v '/mnt/user/Downloads/Downloads/':'/data':'rw' -v '/mnt/cache/appdata/binhex-delugevpn':'/config':'rw' --sysctl="net.ipv4.conf.all.src_valid_mark=1" 'binhex/arch-delugevpn'
    

     

    Log from startup (passwords removed and IPs changed):

    Created by...
    ___. .__ .__
    \_ |__ |__| ____ | |__ ____ ___ ___
    | __ \| |/ \| | \_/ __ \\ \/ /
    | \_\ \ | | \ Y \ ___/ > <
    |___ /__|___| /___| /\___ >__/\_ \
    \/ \/ \/ \/ \/
    https://hub.docker.com/u/binhex/
    
    2022-08-05 13:41:36.554341 [info] Host is running unRAID
    2022-08-05 13:41:36.606632 [info] System information Linux 8290c25a0b63 4.19.107-Unraid #1 SMP Thu Mar 5 13:55:57 PST 2020 x86_64 GNU/Linux
    2022-08-05 13:41:36.664016 [info] OS_ARCH defined as 'x86-64'
    2022-08-05 13:41:36.721235 [info] PUID defined as '99'
    2022-08-05 13:41:38.346542 [info] PGID defined as '100'
    2022-08-05 13:41:39.759059 [info] UMASK defined as '000'
    2022-08-05 13:41:39.814989 [info] Permissions already set for '/config'
    2022-08-05 13:41:39.877230 [info] Deleting files in /tmp (non recursive)...
    2022-08-05 13:41:39.942277 [info] VPN_ENABLED defined as 'yes'
    2022-08-05 13:41:39.999234 [info] VPN_CLIENT defined as 'openvpn'
    2022-08-05 13:41:40.052263 [info] VPN_PROV defined as 'custom'
    2022-08-05 13:41:40.115871 [info] OpenVPN config file (ovpn extension) is located at /config/openvpn/my_expressvpn_usa_-_chicago_udp.ovpn
    2022-08-05 13:41:40.218392 [warn] VPN configuration file /config/openvpn/my_expressvpn_usa_-_chicago_udp.ovpn remote protocol is missing or malformed, assuming protocol 'udp'
    2022-08-05 13:41:40.266851 [info] VPN remote server(s) defined as 'usa-chicago-ca-version-2.expressnetw.com,'
    
    2022-08-05 13:41:40.313238 [info] VPN remote port(s) defined as '1195,'
    2022-08-05 13:41:40.362507 [info] VPN remote protcol(s) defined as 'udp,'
    2022-08-05 13:41:40.416098 [info] VPN_DEVICE_TYPE defined as 'tun0'
    2022-08-05 13:41:40.469772 [info] VPN_OPTIONS not defined (via -e VPN_OPTIONS)
    2022-08-05 13:41:40.524077 [info] LAN_NETWORK defined as '192.168.10.0/24'
    2022-08-05 13:41:40.578576 [info] NAME_SERVERS defined as '85.203.37.1,85.203.37.2'
    2022-08-05 13:41:40.632005 [info] VPN_USER defined as ''
    2022-08-05 13:41:40.686763 [info] VPN_PASS defined as ''
    2022-08-05 13:41:40.741505 [info] ENABLE_PRIVOXY defined as 'no'
    2022-08-05 13:41:40.801958 [info] VPN_INPUT_PORTS defined as '7878,5800,5900,9117,8989,9897'
    2022-08-05 13:41:40.857511 [info] VPN_OUTPUT_PORTS not defined (via -e VPN_OUTPUT_PORTS), skipping allow for custom outgoing ports
    2022-08-05 13:41:40.913177 [info] DELUGE_DAEMON_LOG_LEVEL defined as 'info'
    2022-08-05 13:41:40.968820 [info] DELUGE_WEB_LOG_LEVEL defined as 'info'
    2022-08-05 13:41:41.026254 [info] Starting Supervisor...
    2022-08-05 13:41:41,528 INFO Included extra file "/etc/supervisor/conf.d/delugevpn.conf" during parsing
    2022-08-05 13:41:41,528 INFO Set uid to user 0 succeeded
    2022-08-05 13:41:41,533 INFO supervisord started with pid 6
    2022-08-05 13:41:42,535 INFO spawned: 'shutdown-script' with pid 187
    2022-08-05 13:41:42,537 INFO spawned: 'start-script' with pid 188
    Well, it is working again. I waited a day and then I was able to connect. No clue why. :/

    Regardless, I changed no settings; I was just a little more patient. Maybe it was a cache issue with my browser or something.

  15. I am struggling to figure out what happened to my container. Yesterday a momentary power loss knocked out my internet, but my UPS kept my unRAID server online. I restored the internet connection, yet a day or so later I found the container GUI inaccessible. I figured it was related to the outage breaking the VPN connection. No biggie. I restarted the container to re-establish the connection but had no luck getting back into the GUI. Nothing had changed - no configuration edits, nothing - yet now it won't work.

     

    I dug through the startup log and cannot seem to find a glaring error either. Any guidance would be appreciated. 

     

    Container command (passwords removed):

    # /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name='binhex-delugevpn' --net='bridge' --privileged=true -e TZ="America/Chicago" -e HOST_OS="Unraid" -e 'VPN_ENABLED'='yes' -e 'VPN_USER'='' -e 'VPN_PASS'='' -e 'VPN_PROV'='custom' -e 'VPN_CLIENT'='openvpn' -e 'VPN_OPTIONS'='' -e 'STRICT_PORT_FORWARD'='yes' -e 'ENABLE_PRIVOXY'='no' -e 'LAN_NETWORK'='192.168.10.0/24' -e 'NAME_SERVERS'='85.203.37.1,85.203.37.2' -e 'DELUGE_DAEMON_LOG_LEVEL'='info' -e 'DELUGE_WEB_LOG_LEVEL'='info' -e 'VPN_INPUT_PORTS'='7878,5800,5900,9117,8989,9897' -e 'VPN_OUTPUT_PORTS'='' -e 'DEBUG'='false' -e 'UMASK'='000' -e 'PUID'='99' -e 'PGID'='100' -p '8112:8112/tcp' -p '58846:58846/tcp' -p '58946:58946/tcp' -p '58946:58946/udp' -p '8118:8118/tcp' -p '7878:7878/tcp' -p '9117:9117/tcp' -p '8989:8989/tcp' -p '9897:9897/tcp' -p '8686:8686/tcp' -v '/mnt/user/Downloads/Downloads/':'/data':'rw' -v '/mnt/cache/appdata/binhex-delugevpn':'/config':'rw' --sysctl="net.ipv4.conf.all.src_valid_mark=1" 'binhex/arch-delugevpn'
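    For what it's worth, a few sanity checks against the VPN container can narrow down whether the tunnel or the Web UI is the problem. This is a sketch only, assuming the container name 'binhex-delugevpn' from the command above; the commands are echoed for review rather than executed, since they need a live Docker host.

```shell
# Sketch: tunnel sanity checks for the VPN container. The container name
# 'binhex-delugevpn' is taken from the docker run command above (an assumption
# if your name differs). Commands are printed, not run.
vpn_checks() {
  CONTAINER=binhex-delugevpn
  # external IP as seen from inside the container - should be the VPN exit
  # IP, not your ISP address
  echo "docker exec $CONTAINER curl -s http://checkip.amazonaws.com"
  # confirm tun0 is up and carrying the default route
  echo "docker exec $CONTAINER ip route show"
  # tail the supervisor log for OpenVPN errors
  echo "docker logs --tail 50 $CONTAINER"
}
vpn_checks
```

    If the first check returns your ISP address instead of the VPN exit IP, the tunnel is the problem rather than Deluge itself.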
    

     

    Log from startup (passwords removed and IPs changed)

    Created by...
    ___. .__ .__
    \_ |__ |__| ____ | |__ ____ ___ ___
    | __ \| |/ \| | \_/ __ \\ \/ /
    | \_\ \ | | \ Y \ ___/ > <
    |___ /__|___| /___| /\___ >__/\_ \
    \/ \/ \/ \/ \/
    https://hub.docker.com/u/binhex/
    
    2022-08-05 13:41:36.554341 [info] Host is running unRAID
    2022-08-05 13:41:36.606632 [info] System information Linux 8290c25a0b63 4.19.107-Unraid #1 SMP Thu Mar 5 13:55:57 PST 2020 x86_64 GNU/Linux
    2022-08-05 13:41:36.664016 [info] OS_ARCH defined as 'x86-64'
    2022-08-05 13:41:36.721235 [info] PUID defined as '99'
    2022-08-05 13:41:38.346542 [info] PGID defined as '100'
    2022-08-05 13:41:39.759059 [info] UMASK defined as '000'
    2022-08-05 13:41:39.814989 [info] Permissions already set for '/config'
    2022-08-05 13:41:39.877230 [info] Deleting files in /tmp (non recursive)...
    2022-08-05 13:41:39.942277 [info] VPN_ENABLED defined as 'yes'
    2022-08-05 13:41:39.999234 [info] VPN_CLIENT defined as 'openvpn'
    2022-08-05 13:41:40.052263 [info] VPN_PROV defined as 'custom'
    2022-08-05 13:41:40.115871 [info] OpenVPN config file (ovpn extension) is located at /config/openvpn/my_expressvpn_usa_-_chicago_udp.ovpn
    2022-08-05 13:41:40.218392 [warn] VPN configuration file /config/openvpn/my_expressvpn_usa_-_chicago_udp.ovpn remote protocol is missing or malformed, assuming protocol 'udp'
    2022-08-05 13:41:40.266851 [info] VPN remote server(s) defined as 'usa-chicago-ca-version-2.expressnetw.com,'
    
    2022-08-05 13:41:40.313238 [info] VPN remote port(s) defined as '1195,'
    2022-08-05 13:41:40.362507 [info] VPN remote protcol(s) defined as 'udp,'
    2022-08-05 13:41:40.416098 [info] VPN_DEVICE_TYPE defined as 'tun0'
    2022-08-05 13:41:40.469772 [info] VPN_OPTIONS not defined (via -e VPN_OPTIONS)
    2022-08-05 13:41:40.524077 [info] LAN_NETWORK defined as '192.168.10.0/24'
    2022-08-05 13:41:40.578576 [info] NAME_SERVERS defined as '85.203.37.1,85.203.37.2'
    2022-08-05 13:41:40.632005 [info] VPN_USER defined as ''
    2022-08-05 13:41:40.686763 [info] VPN_PASS defined as ''
    2022-08-05 13:41:40.741505 [info] ENABLE_PRIVOXY defined as 'no'
    2022-08-05 13:41:40.801958 [info] VPN_INPUT_PORTS defined as '7878,5800,5900,9117,8989,9897'
    2022-08-05 13:41:40.857511 [info] VPN_OUTPUT_PORTS not defined (via -e VPN_OUTPUT_PORTS), skipping allow for custom outgoing ports
    2022-08-05 13:41:40.913177 [info] DELUGE_DAEMON_LOG_LEVEL defined as 'info'
    2022-08-05 13:41:40.968820 [info] DELUGE_WEB_LOG_LEVEL defined as 'info'
    2022-08-05 13:41:41.026254 [info] Starting Supervisor...
    2022-08-05 13:41:41,528 INFO Included extra file "/etc/supervisor/conf.d/delugevpn.conf" during parsing
    2022-08-05 13:41:41,528 INFO Set uid to user 0 succeeded
    2022-08-05 13:41:41,533 INFO supervisord started with pid 6
    2022-08-05 13:41:42,535 INFO spawned: 'shutdown-script' with pid 187
    2022-08-05 13:41:42,537 INFO spawned: 'start-script' with pid 188
    2022-08-05 13:41:42,539 INFO spawned: 'watchdog-script' with pid 189
    2022-08-05 13:41:42,539 INFO reaped unknown pid 7 (exit status 0)
    2022-08-05 13:41:42,578 DEBG 'start-script' stdout output:
    [info] VPN is enabled, beginning configuration of VPN
    
    2022-08-05 13:41:42,579 INFO success: shutdown-script entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)
    2022-08-05 13:41:42,579 INFO success: start-script entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)
    2022-08-05 13:41:42,579 INFO success: watchdog-script entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)
    2022-08-05 13:41:42,666 DEBG 'start-script' stdout output:
    [info] Adding 85.203.37.1 to /etc/resolv.conf
    
    2022-08-05 13:41:42,671 DEBG 'start-script' stdout output:
    [info] Adding 85.203.37.2 to /etc/resolv.conf
    
    2022-08-05 13:41:43,045 DEBG 'start-script' stdout output:
    [info] Default route for container is 172.17.0.1
    
    2022-08-05 13:41:43,069 DEBG 'start-script' stdout output:
    [info] Docker network defined as 172.17.0.0/16
    
    2022-08-05 13:41:43,075 DEBG 'start-script' stdout output:
    [info] Adding 192.168.10.0/24 as route via docker eth0
    
    2022-08-05 13:41:43,077 DEBG 'start-script' stdout output:
    [info] ip route defined as follows...
    --------------------
    
    2022-08-05 13:41:43,079 DEBG 'start-script' stdout output:
    default via 172.17.0.1 dev eth0
    172.17.0.0/16 dev eth0 proto kernel scope link src 172.17.0.5
    192.168.10.0/24 via 172.17.0.1 dev eth0
    
    2022-08-05 13:41:43,079 DEBG 'start-script' stdout output:
    broadcast 127.0.0.0 dev lo table local proto kernel scope link src 127.0.0.1
    local 127.0.0.0/8 dev lo table local proto kernel scope host src 127.0.0.1
    local 127.0.0.1 dev lo table local proto kernel scope host src 127.0.0.1
    broadcast 127.255.255.255 dev lo table local proto kernel scope link src 127.0.0.1
    broadcast 172.17.0.0 dev eth0 table local proto kernel scope link src 172.17.0.5
    local 172.17.0.5 dev eth0 table local proto kernel scope host src 172.17.0.5
    broadcast 172.17.255.255 dev eth0 table local proto kernel scope link src 172.17.0.5
    
    2022-08-05 13:41:43,079 DEBG 'start-script' stdout output:
    --------------------
    
    2022-08-05 13:41:43,084 DEBG 'start-script' stdout output:
    iptable_mangle 16384 2
    ip_tables 24576 5 iptable_filter,iptable_nat,iptable_mangle
    
    2022-08-05 13:41:43,085 DEBG 'start-script' stdout output:
    [info] iptable_mangle support detected, adding fwmark for tables
    
    2022-08-05 13:41:43,282 DEBG 'start-script' stdout output:
    [info] iptables defined as follows...
    --------------------
    
    2022-08-05 13:41:43,284 DEBG 'start-script' stdout output:
    -P INPUT DROP
    -P FORWARD DROP
    -P OUTPUT DROP
    -A INPUT -s 172.17.0.0/16 -d 172.17.0.0/16 -j ACCEPT
    -A INPUT -s 149.19.196.239/32 -i eth0 -j ACCEPT
    -A INPUT -s 45.39.44.2/32 -i eth0 -j ACCEPT
    -A INPUT -s 45.39.44.105/32 -i eth0 -j ACCEPT
    -A INPUT -s 149.19.196.116/32 -i eth0 -j ACCEPT
    -A INPUT -i eth0 -p tcp -m tcp --dport 8112 -j ACCEPT
    -A INPUT -i eth0 -p udp -m udp --dport 8112 -j ACCEPT
    -A INPUT -i eth0 -p tcp -m tcp --dport 7878 -j ACCEPT
    -A INPUT -i eth0 -p udp -m udp --dport 7878 -j ACCEPT
    -A INPUT -i eth0 -p tcp -m tcp --dport 5800 -j ACCEPT
    -A INPUT -i eth0 -p udp -m udp --dport 5800 -j ACCEPT
    -A INPUT -i eth0 -p tcp -m tcp --dport 5900 -j ACCEPT
    -A INPUT -i eth0 -p udp -m udp --dport 5900 -j ACCEPT
    -A INPUT -i eth0 -p tcp -m tcp --dport 9117 -j ACCEPT
    -A INPUT -i eth0 -p udp -m udp --dport 9117 -j ACCEPT
    -A INPUT -i eth0 -p tcp -m tcp --dport 8989 -j ACCEPT
    -A INPUT -i eth0 -p udp -m udp --dport 8989 -j ACCEPT
    -A INPUT -i eth0 -p tcp -m tcp --dport 9897 -j ACCEPT
    -A INPUT -i eth0 -p udp -m udp --dport 9897 -j ACCEPT
    -A INPUT -s 192.168.10.0/24 -d 172.17.0.0/16 -i eth0 -p tcp -m tcp --dport 58846 -j ACCEPT
    -A INPUT -p icmp -m icmp --icmp-type 0 -j ACCEPT
    -A INPUT -i lo -j ACCEPT
    -A INPUT -i tun0 -j ACCEPT
    -A OUTPUT -s 172.17.0.0/16 -d 172.17.0.0/16 -j ACCEPT
    -A OUTPUT -d 149.19.196.239/32 -o eth0 -j ACCEPT
    -A OUTPUT -d 45.39.44.2/32 -o eth0 -j ACCEPT
    -A OUTPUT -d 45.39.44.105/32 -o eth0 -j ACCEPT
    -A OUTPUT -d 149.19.196.116/32 -o eth0 -j ACCEPT
    -A OUTPUT -o eth0 -p tcp -m tcp --sport 8112 -j ACCEPT
    -A OUTPUT -o eth0 -p udp -m udp --sport 8112 -j ACCEPT
    -A OUTPUT -o eth0 -p tcp -m tcp --sport 7878 -j ACCEPT
    -A OUTPUT -o eth0 -p udp -m udp --sport 7878 -j ACCEPT
    -A OUTPUT -o eth0 -p tcp -m tcp --sport 5800 -j ACCEPT
    -A OUTPUT -o eth0 -p udp -m udp --sport 5800 -j ACCEPT
    -A OUTPUT -o eth0 -p tcp -m tcp --sport 5900 -j ACCEPT
    -A OUTPUT -o eth0 -p udp -m udp --sport 5900 -j ACCEPT
    -A OUTPUT -o eth0 -p tcp -m tcp --sport 9117 -j ACCEPT
    -A OUTPUT -o eth0 -p udp -m udp --sport 9117 -j ACCEPT
    -A OUTPUT -o eth0 -p tcp -m tcp --sport 8989 -j ACCEPT
    -A OUTPUT -o eth0 -p udp -m udp --sport 8989 -j ACCEPT
    -A OUTPUT -o eth0 -p tcp -m tcp --sport 9897 -j ACCEPT
    -A OUTPUT -o eth0 -p udp -m udp --sport 9897 -j ACCEPT
    -A OUTPUT -s 172.17.0.0/16 -d 192.168.10.0/24 -o eth0 -p tcp -m tcp --sport 58846 -j ACCEPT
    -A OUTPUT -p icmp -m icmp --icmp-type 8 -j ACCEPT
    -A OUTPUT -o lo -j ACCEPT
    -A OUTPUT -o tun0 -j ACCEPT
    
    2022-08-05 13:41:43,285 DEBG 'start-script' stdout output:
    --------------------
    
    2022-08-05 13:41:43,286 DEBG 'start-script' stdout output:
    [info] Starting OpenVPN (non daemonised)...
    
    2022-08-05 13:41:43,335 DEBG 'start-script' stdout output:
    2022-08-05 13:41:43 DEPRECATED OPTION: --cipher set to 'AES-256-CBC' but missing in --data-ciphers (AES-256-GCM:AES-128-GCM). Future OpenVPN version will ignore --cipher for cipher negotiations. Add 'AES-256-CBC' to --data-ciphers or change --cipher 'AES-256-CBC' to --data-ciphers-fallback 'AES-256-CBC' to silence this warning.
    
    2022-08-05 13:41:43 WARNING: --keysize is DEPRECATED and will be removed in OpenVPN 2.6
    
    2022-08-05 13:41:43 WARNING: --keysize is DEPRECATED and will be removed in OpenVPN 2.6
    
    2022-08-05 13:41:43 WARNING: --keysize is DEPRECATED and will be removed in OpenVPN 2.6
    
    2022-08-05 13:41:43 WARNING: --keysize is DEPRECATED and will be removed in OpenVPN 2.6
    
    2022-08-05 13:41:43 WARNING: --keysize is DEPRECATED and will be removed in OpenVPN 2.6
    
    2022-08-05 13:41:43 WARNING: file 'credentials.conf' is group or others accessible
    
    
    2022-08-05 13:41:43,335 DEBG 'start-script' stdout output:
    2022-08-05 13:41:43 OpenVPN 2.5.7 [git:makepkg/a0f9a3e9404c8321+] x86_64-pc-linux-gnu [SSL (OpenSSL)] [LZO] [LZ4] [EPOLL] [PKCS11] [MH/PKTINFO] [AEAD] built on May 31 2022
    2022-08-05 13:41:43 library versions: OpenSSL 1.1.1q 5 Jul 2022, LZO 2.10
    2022-08-05 13:41:43 WARNING: --ns-cert-type is DEPRECATED. Use --remote-cert-tls instead.
    
    2022-08-05 13:41:43 NOTE: the current --script-security setting may allow this configuration to call user-defined scripts
    
    2022-08-05 13:41:43,337 DEBG 'start-script' stdout output:
    2022-08-05 13:41:43 Outgoing Control Channel Authentication: Using 512 bit message hash 'SHA512' for HMAC authentication
    2022-08-05 13:41:43 Incoming Control Channel Authentication: Using 512 bit message hash 'SHA512' for HMAC authentication
    
    2022-08-05 13:41:43,337 DEBG 'start-script' stdout output:
    2022-08-05 13:41:43 TCP/UDP: Preserving recently used remote address: [AF_INET]149.19.196.239:1195
    2022-08-05 13:41:43 Socket Buffers: R=[212992->1048576] S=[212992->1048576]
    2022-08-05 13:41:43 UDP link local: (not bound)
    2022-08-05 13:41:43 UDP link remote: [AF_INET]149.19.196.239:1195
    
    2022-08-05 13:41:43,356 DEBG 'start-script' stdout output:
    2022-08-05 13:41:43 TLS: Initial packet from [AF_INET]149.19.196.239:1195, sid=3e2c64b4 29850e4d
    
    2022-08-05 13:41:43,379 DEBG 'start-script' stdout output:
    2022-08-05 13:41:43 VERIFY OK: depth=1, C=VG, ST=BVI, O=ExpressVPN, OU=ExpressVPN, CN=ExpressVPN CA, [email protected]
    
    2022-08-05 13:41:43,380 DEBG 'start-script' stdout output:
    2022-08-05 13:41:43 VERIFY OK: nsCertType=SERVER
    2022-08-05 13:41:43 VERIFY X509NAME OK: C=VG, ST=BVI, O=ExpressVPN, OU=ExpressVPN, CN=Server-11070-0a, [email protected]
    2022-08-05 13:41:43 VERIFY OK: depth=0, C=VG, ST=BVI, O=ExpressVPN, OU=ExpressVPN, CN=Server-11070-0a, [email protected]
    
    2022-08-05 13:41:43,406 DEBG 'start-script' stdout output:
    2022-08-05 13:41:43 Control Channel: TLSv1.3, cipher TLSv1.3 TLS_AES_256_GCM_SHA384, peer certificate: 2048 bit RSA, signature: RSA-SHA256
    2022-08-05 13:41:43 [Server-11070-0a] Peer Connection Initiated with [AF_INET]149.19.196.239:1195
    
    2022-08-05 13:41:44,648 DEBG 'start-script' stdout output:
    2022-08-05 13:41:44 SENT CONTROL [Server-11070-0a]: 'PUSH_REQUEST' (status=1)
    
    2022-08-05 13:41:44,667 DEBG 'start-script' stdout output:
    2022-08-05 13:41:44 PUSH: Received control message: 'PUSH_REPLY,redirect-gateway def1,dhcp-option DNS 10.122.0.1,comp-lzo no,route 10.122.0.1,topology net30,ping 10,ping-restart 60,ifconfig 10.122.1.218 10.122.1.217,peer-id 122,cipher AES-256-GCM'
    2022-08-05 13:41:44 OPTIONS IMPORT: timers and/or timeouts modified
    2022-08-05 13:41:44 OPTIONS IMPORT: compression parms modified
    2022-08-05 13:41:44 OPTIONS IMPORT: --ifconfig/up options modified
    
    2022-08-05 13:41:44,667 DEBG 'start-script' stdout output:
    2022-08-05 13:41:44 OPTIONS IMPORT: route options modified
    2022-08-05 13:41:44 OPTIONS IMPORT: --ip-win32 and/or --dhcp-option options modified
    2022-08-05 13:41:44 OPTIONS IMPORT: peer-id set
    2022-08-05 13:41:44 OPTIONS IMPORT: adjusting link_mtu to 1629
    2022-08-05 13:41:44 OPTIONS IMPORT: data channel crypto options modified
    2022-08-05 13:41:44 Data Channel: using negotiated cipher 'AES-256-GCM'
    2022-08-05 13:41:44 NCP: overriding user-set keysize with default
    2022-08-05 13:41:44 Outgoing Data Channel: Cipher 'AES-256-GCM' initialized with 256 bit key
    2022-08-05 13:41:44 Incoming Data Channel: Cipher 'AES-256-GCM' initialized with 256 bit key
    2022-08-05 13:41:44 net_route_v4_best_gw query: dst 0.0.0.0
    2022-08-05 13:41:44 net_route_v4_best_gw result: via 172.17.0.1 dev eth0
    
    2022-08-05 13:41:44,667 DEBG 'start-script' stdout output:
    2022-08-05 13:41:44 ROUTE_GATEWAY 172.17.0.1/255.255.0.0 IFACE=eth0 HWADDR=02:42:ac:11:00:05
    
    2022-08-05 13:41:44,668 DEBG 'start-script' stdout output:
    2022-08-05 13:41:44 TUN/TAP device tun0 opened
    2022-08-05 13:41:44 net_iface_mtu_set: mtu 1500 for tun0
    
    2022-08-05 13:41:44,668 DEBG 'start-script' stdout output:
    2022-08-05 13:41:44 net_iface_up: set tun0 up
    2022-08-05 13:41:44 net_addr_ptp_v4_add: 10.122.1.218 peer 10.122.1.217 dev tun0
    2022-08-05 13:41:44 /root/openvpnup.sh tun0 1500 1557 10.122.1.218 10.122.1.217 init
    
    2022-08-05 13:41:46,914 DEBG 'start-script' stdout output:
    2022-08-05 13:41:46 net_route_v4_add: 149.19.196.239/32 via 172.17.0.1 dev [NULL] table 0 metric -1
    2022-08-05 13:41:46 net_route_v4_add: 0.0.0.0/1 via 10.122.1.217 dev [NULL] table 0 metric -1
    2022-08-05 13:41:46 net_route_v4_add: 128.0.0.0/1 via 10.122.1.217 dev [NULL] table 0 metric -1
    
    2022-08-05 13:41:46,914 DEBG 'start-script' stdout output:
    2022-08-05 13:41:46 net_route_v4_add: 10.122.0.1/32 via 10.122.1.217 dev [NULL] table 0 metric -1
    2022-08-05 13:41:46 Initialization Sequence Completed
    
    2022-08-05 13:41:50,730 DEBG 'start-script' stdout output:
    [info] Attempting to get external IP using 'http://checkip.amazonaws.com'...
    
    2022-08-05 13:41:50,870 DEBG 'start-script' stdout output:
    [info] Successfully retrieved external IP address 85.237.194.94
    
    2022-08-05 13:41:50,872 DEBG 'start-script' stdout output:
    [info] Application does not require port forwarding or VPN provider is != pia, skipping incoming port assignment
    
    2022-08-05 13:41:50,959 DEBG 'watchdog-script' stdout output:
    [info] Deluge listening interface IP 0.0.0.0 and VPN provider IP 10.122.1.218 different, marking for reconfigure
    
    2022-08-05 13:41:50,967 DEBG 'watchdog-script' stdout output:
    [info] Deluge not running
    
    2022-08-05 13:41:50,973 DEBG 'watchdog-script' stdout output:
    [info] Deluge Web UI not running
    
    2022-08-05 13:41:50,974 DEBG 'watchdog-script' stdout output:
    [info] Attempting to start Deluge...
    [info] Removing deluge pid file (if it exists)...
    
    2022-08-05 13:41:51,973 DEBG 'watchdog-script' stdout output:
    [info] Deluge key 'listen_interface' currently has a value of '10.122.0.138'
    [info] Deluge key 'listen_interface' will have a new value '10.122.1.218'
    [info] Writing changes to Deluge config file '/config/core.conf'...
    
    2022-08-05 13:41:52,494 DEBG 'watchdog-script' stdout output:
    [info] Deluge key 'outgoing_interface' currently has a value of 'tun0'
    [info] Deluge key 'outgoing_interface' will have a new value 'tun0'
    [info] Writing changes to Deluge config file '/config/core.conf'...
    
    2022-08-05 13:41:52,984 DEBG 'watchdog-script' stdout output:
    [info] Deluge key 'default_daemon' currently has a value of 'e2015e2ba35049b9aea47ad89d31b6a5'
    [info] Deluge key 'default_daemon' will have a new value 'e2015e2ba35049b9aea47ad89d31b6a5'
    [info] Writing changes to Deluge config file '/config/web.conf'...
    
    2022-08-05 13:41:54,409 DEBG 'watchdog-script' stdout output:
    [info] Deluge process started
    [info] Waiting for Deluge process to start listening on port 58846...
    
    2022-08-05 13:41:54,741 DEBG 'watchdog-script' stdout output:
    [info] Deluge process listening on port 58846
    
    2022-08-05 13:42:01,909 DEBG 'watchdog-script' stderr output:
    <Deferred at 0x14bb4ee22e30 current result: None>
    
    2022-08-05 13:42:02,044 DEBG 'watchdog-script' stdout output:
    [info] No torrents with state 'Error' found
    
    
    2022-08-05 13:42:02,044 DEBG 'watchdog-script' stdout output:
    [info] Starting Deluge Web UI...
    
    2022-08-05 13:42:02,045 DEBG 'watchdog-script' stdout output:
    [info] Deluge Web UI started

     

  16. Well I am not sure what I did, but I had 2 versions of the same container - one went through binhex-delugevpn and the other didn't. Somehow they got mixed up. As soon as I deleted both of them, everything was peachy.

     

    Strange. I probably screwed something up a while back when setting them up and didn't keep the appdata folders unique. Once a rebuild was required it broke everything, since I hadn't rebuilt the containers in months.

  17. I use binhex-delugevpn as a proxy container for many services. Today I went to add binhex-lidarr to binhex-delugevpn.

     

    Steps:

     

    1. Downloaded binhex-lidarr and configured its network to None, with the extra parameter '--net=container:binhex-delugevpn'

    2. Started the binhex-lidarr container

    3. Realized I forgot to add the port mapping in the binhex-delugevpn container and edited it

    4. Added an 8686:8686 TCP port mapping to binhex-delugevpn and rebuilt it

    5. This is where everything went sideways. Normally an update to binhex-delugevpn would cause all the containers that route through it to rebuild. This time they didn't - they all said 'rebuild ready', then 'rebuilding', and did nothing. My GUI was freaking out - the auto-start icons were flashing, the resource usage counters were flashing, and the unRAID refresh logo was popping in and out every 2-3 seconds.

     

    At this point I couldn't select anything on the screen because the page would reload/refresh before I could click. So I did the following:

    1. From the dashboard screen stopped all the containers - didn't fix anything

    2. Slowly disabled all the auto-starts by timing my clicks

    3. Stopped and restarted the Docker service from settings - no change

    4. No obvious errors/issues in the logs

     

    Even with all my containers stopped, the Docker GUI is not working. I deleted the binhex-lidarr container and removed the port mapping from binhex-delugevpn. Still nothing. I have somehow managed to screw everything up. What is interesting is that if I start the containers from the Dashboard page they work fine, but the Docker page in the GUI is unusable.
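For reference, the end state those steps were aiming for boils down to something like this (an untested sketch with most flags trimmed for brevity; the binhex images take the routed app's ports via VPN_INPUT_PORTS, and 8686 is Lidarr's default web UI port):

```shell
# The VPN container owns the shared network stack, so Lidarr's web UI port
# (8686) must be published here, alongside Deluge's own 8112.
docker run -d --name=binhex-delugevpn \
  -p 8112:8112 -p 8686:8686 \
  -e VPN_INPUT_PORTS=8686 \
  binhex/arch-delugevpn

# The routed container publishes no ports itself; it joins the VPN
# container's network namespace instead.
docker run -d --name=binhex-lidarr \
  --net=container:binhex-delugevpn \
  binhex/arch-lidarr
```

Because the second container shares the first one's network namespace, any change that recreates binhex-delugevpn (such as adding the port mapping in step 4) invalidates binhex-lidarr's network reference, which is why it normally has to be rebuilt too.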

  18. I tried searching around, but couldn't find anything that really matched what I was looking for. If this has already been answered, please just point me there. 

     

    I have created a custom Docker network for my SWAG container: proxynetwork (172.18.0.0/24)

     

    I have 3 containers on the proxynetwork:

    1. SWAG

    2. Service 1 

    3. Service 2

     

    Service 1 and Service 2 are reverse proxied through SWAG, which is mapped to port 1443 on my unRAID server's LAN IP address; port 443 is forwarded to SWAG via that same LAN IP.


    What bothers me is that if I SSH into any of the 3 containers on my proxynetwork, I can access any other LAN resource. I'd like to firewall those containers off from accessing any LAN resource - basically make a DMZ of sorts. 

     

    Due to how unRAID NATs the container network (proxynetwork) to the LAN subnet unRAID sits on (bridge mode), I am unsure I can make firewall rules at my router. Not to mention I'd prefer to lock it down inside unRAID if possible. I am looking in the unRAID network settings and see the routing table, but no place to add firewall rules/iptables entries.

     

    My only other thought is to create a DMZ VLAN, make unRAID VLAN aware and then put those containers in that VLAN somehow. I am not exactly sure of the process or if that will even achieve my goal.
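On the locking-it-down-inside-unRAID idea: since unRAID runs stock Docker, the standard DOCKER-USER iptables chain should be usable for this. A rough, untested sketch (the 192.168.0.0/16 LAN range is an assumption, and the rules would need re-adding after each reboot, e.g. from a user script):

```shell
# Block new connections from the proxy network toward the LAN.
iptables -I DOCKER-USER -s 172.18.0.0/24 -d 192.168.0.0/16 -j DROP

# Inserted second, so it sits above the DROP: still allow reply traffic
# for connections the LAN side initiated (e.g. clients hitting SWAG).
iptables -I DOCKER-USER -s 172.18.0.0/24 -d 192.168.0.0/16 \
  -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
```

With something like this in place, the three containers could still reach the internet and each other, but a new connection from any of them toward a LAN address would be dropped.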

     

    Thanks.

  19. One thing to note: in your guide you never said to add the Photonix container to the photonix_net Docker network.

     

    I set a username and password in the Docker settings, but it just stays at 'Loading' every time I log in. I've tried restarting the container with no change. It is pointed at a directory with 15 photos for testing, so I'd assume loading them shouldn't take long. I cannot even log in when I run the container in demo mode.

  20. I have an 8TB IronWolf from June 2020 in my unRAID with no issues since the initial pre-clear. Today I took the plunge and upgraded from 6.8 to 6.9, and within a few hours of the update I got a notification from my system that I had a drive in a disabled state. 

     

    I am not very strong in this department, so I am hoping people with more HDD knowledge than me can help shed some light on my SMART results and recommend a course of action. 

     

    The array is currently stopped pending what I learn here. Plus I am not really sure what the warranty process is like with Seagate. How do I prove a drive failure and get a replacement? 

     

    Thank you.

     

     

    Disk error log:

     

    Jun 8 11:14:03 unRAID kernel: sd 5:0:6:0: [sdh] 15628053168 512-byte logical blocks: (8.00 TB/7.28 TiB)
    Jun 8 11:14:03 unRAID kernel: sd 5:0:6:0: [sdh] 4096-byte physical blocks
    Jun 8 11:14:03 unRAID kernel: sd 5:0:6:0: [sdh] Write Protect is off
    Jun 8 11:14:03 unRAID kernel: sd 5:0:6:0: [sdh] Mode Sense: 7f 00 10 08
    Jun 8 11:14:03 unRAID kernel: sd 5:0:6:0: [sdh] Write cache: enabled, read cache: enabled, supports DPO and FUA
    Jun 8 11:14:03 unRAID kernel: sdh: sdh1
    Jun 8 11:14:03 unRAID kernel: sd 5:0:6:0: [sdh] Attached SCSI disk
    Jun 8 11:14:36 unRAID emhttpd: ST8000VN004-2M2101_WKD1WM01 (sdh) 512 15628053168
    Jun 8 11:14:36 unRAID kernel: mdcmd (2): import 1 sdh 64 7814026532 0 ST8000VN004-2M2101_WKD1WM01
    Jun 8 11:14:36 unRAID kernel: md: import disk1: (sdh) ST8000VN004-2M2101_WKD1WM01 size: 7814026532
    Jun 8 11:14:36 unRAID emhttpd: read SMART /dev/sdh
    Jun 8 11:56:57 unRAID emhttpd: spinning down /dev/sdh
    Jun 8 14:16:40 unRAID kernel: sd 5:0:6:0: [sdh] tag#2816 CDB: opcode=0x85 85 06 20 00 00 00 00 00 00 00 00 00 00 40 e5 00
    Jun 8 14:16:43 unRAID kernel: sd 5:0:6:0: [sdh] Synchronizing SCSI cache
    Jun 8 14:16:43 unRAID kernel: sd 5:0:6:0: [sdh] Synchronize Cache(10) failed: Result: hostbyte=0x01 driverbyte=0x00
    Jun 8 14:16:43 unRAID kernel: scsi 5:0:6:0: [sdh] tag#3201 UNKNOWN(0x2003) Result: hostbyte=0x01 driverbyte=0x00 cmd_age=19s
    Jun 8 14:16:43 unRAID kernel: scsi 5:0:6:0: [sdh] tag#3201 CDB: opcode=0x88 88 00 00 00 00 02 49 3c b2 f8 00 00 00 80 00 00
    Jun 8 14:16:43 unRAID kernel: blk_update_request: I/O error, dev sdh, sector 9818649336 op 0x0:(READ) flags 0x0 phys_seg 16 prio class 0
    Jun 8 14:17:07 unRAID emhttpd: read SMART /dev/sdh
    

     

    ST8000VN004-2M2101_WKD1WM01-20210608-1429.txt

  21. 19 hours ago, strike said:

    You didn't mention that in your OP, then you must add that subnet to your LAN_NETWORK as well. The point of the LAN_NETWORK variable is to define your network, and all the subnets you listed are different networks. So if you need access from all of them they need to be listed. There is no magical "catch all type networks" you can add instead, AFAIK anyway. 

    Well, that is how subnet masking works - you can summarize (catch all) if you set it up correctly. But whether this container supports that or not, I don't know. 

     

    I changed my LAN_NETWORK to "192.168.10.0/24,192.168.130.0/24" and still no luck.

     

    The reason I didn't mention the other networks is that I don't feel it matters. I need to get one working first - no use troubleshooting 7 things at once. 

     

    I logged into the container and verified my networks were in the iptables rules:

     

    sh-5.1# iptables --list-rules
    -P INPUT DROP
    -P FORWARD DROP
    -P OUTPUT DROP
    -A INPUT -s 172.17.0.0/16 -d 172.17.0.0/16 -j ACCEPT
    -A INPUT -s ***.***.***.***/32 -i eth0 -j ACCEPT
    -A INPUT -s ***.***.***.***/32 -i eth0 -j ACCEPT
    -A INPUT -s ***.***.***.***/32 -i eth0 -j ACCEPT
    -A INPUT -s ***.***.***.***/32 -i eth0 -j ACCEPT
    -A INPUT -i eth0 -p tcp -m tcp --dport 8112 -j ACCEPT
    -A INPUT -i eth0 -p udp -m udp --dport 8112 -j ACCEPT
    -A INPUT -s 192.168.10.0/24 -d 172.17.0.0/16 -i eth0 -p tcp -m tcp --dport 58846 -j ACCEPT
    -A INPUT -s 192.168.130.0/24 -d 172.17.0.0/16 -i eth0 -p tcp -m tcp --dport 58846 -j ACCEPT
    -A INPUT -p icmp -m icmp --icmp-type 0 -j ACCEPT
    -A INPUT -i lo -j ACCEPT
    -A INPUT -i tun0 -j ACCEPT
    -A OUTPUT -s 172.17.0.0/16 -d 172.17.0.0/16 -j ACCEPT
    -A OUTPUT -d ***.***.***.***0/32 -o eth0 -j ACCEPT
    -A OUTPUT -d ***.***.***.***/32 -o eth0 -j ACCEPT
    -A OUTPUT -d ***.***.***.***/32 -o eth0 -j ACCEPT
    -A OUTPUT -d ***.***.***.***/32 -o eth0 -j ACCEPT
    -A OUTPUT -o eth0 -p tcp -m tcp --sport 8112 -j ACCEPT
    -A OUTPUT -o eth0 -p udp -m udp --sport 8112 -j ACCEPT
    -A OUTPUT -s 172.17.0.0/16 -d 192.168.10.0/24 -o eth0 -p tcp -m tcp --sport 58846 -j ACCEPT
    -A OUTPUT -s 172.17.0.0/16 -d 192.168.130.0/24 -o eth0 -p tcp -m tcp --sport 58846 -j ACCEPT
    -A OUTPUT -p icmp -m icmp --icmp-type 8 -j ACCEPT
    -A OUTPUT -o lo -j ACCEPT
    -A OUTPUT -o tun0 -j ACCEPT

     

     

    I am not sure what else to try. This is rather frustrating, as I don't have this issue with any other container, so it must be something in the iptables rules. I am just not overly familiar with how they're implemented here.

    I am a little lost as to why port 58846 is specifically called out as allowed from the remote subnets, while 8112, the port the Deluge web GUI runs on, isn't.
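One thing worth noting from the rules above: 8112 is already accepted from any source on eth0 by the generic rules, so the filter itself may not be what's blocking the remote subnet. A diagnostic sketch (assuming the container's start script adds a return route per LAN_NETWORK entry, which is how I understand the binhex images to work; 172.17.0.1 as the bridge gateway is also an assumption):

```shell
# Run inside the container: docker exec -it binhex-delugevpn bash
# If there is no route back to 192.168.130.0/24 via the docker bridge
# gateway, replies to web UI requests get sent down tun0 and are lost.
ip route show
# A working config should list each LAN_NETWORK subnet via the bridge
# gateway, something like:
#   192.168.10.0/24 via 172.17.0.1 dev eth0
#   192.168.130.0/24 via 172.17.0.1 dev eth0
```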

     

  22. 2 hours ago, strike said:

     

    This is wrong. You need to set your LAN_NETWORK to 192.168.10.0,192.168.130.0

    You can have multiple subnets defined, just separate them with comma.

    I'll give that a go, though I don't see the difference. I have other devices on other subnets within that /16 network that I would like to be able to access the container as well. 

    What is the difference between 192.168.0.0/16 and 192.168.10.0/24, 192.168.20.0/24, 192.168.30.0/24, 192.168.110.0/24, 192.168.120.0/24, 192.168.130.0/24?
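To illustrate the summarization point, here's a quick check with a hypothetical pure-bash in_subnet helper (IPv4 only): every one of those /24s falls inside 192.168.0.0/16, so a single /16 entry covers them all, provided the container's LAN_NETWORK parsing accepts a summary route.

```shell
# Hypothetical helper: succeeds if an IPv4 address falls inside a CIDR block.
in_subnet() {
  local ip=$1 net=${2%/*} bits=${2#*/} IFS=.
  # Split the dotted quads on '.' and pack each into a 32-bit integer.
  set -- $ip;  local ip_n=$(( ($1<<24) + ($2<<16) + ($3<<8) + $4 ))
  set -- $net; local net_n=$(( ($1<<24) + ($2<<16) + ($3<<8) + $4 ))
  local mask=$(( bits == 0 ? 0 : (0xFFFFFFFF << (32 - bits)) & 0xFFFFFFFF ))
  [ $(( ip_n & mask )) -eq $(( net_n & mask )) ]
}

# A host on the 130 subnet is inside the /16 summary...
in_subnet 192.168.130.50 192.168.0.0/16  && echo "inside 192.168.0.0/16"
# ...but outside the single /24 the container was originally given.
in_subnet 192.168.130.50 192.168.10.0/24 || echo "outside 192.168.10.0/24"
```

So in pure routing terms the /16 is a superset of the comma-separated /24 list; whether the container treats the two forms identically is a separate question.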

     

    br0 isn't my goal; it was just for testing to gather more data. I need it to run in bridge mode anyway in order to route other containers through this one.

     

  23. I am running into some trouble with what I believe is the LAN_NETWORK parameter and the iptables rules blocking me from accessing the Deluge web GUI from a different subnet, depending on the Docker network configuration. I tried the binhex FAQ and some other searches, but I just couldn't find anything that applied to my particular situation.

     

    Here is the setup ONE:

    unRAID is at 192.168.10.40/24

    PC I am attempting to access the binhex-delugevpn web GUI from is at 192.168.130.50/24

    LAN_NETWORK is set to 192.168.0.0/16 to encompass everything

    Docker is configured as a bridge network.

    Here is the docker run command:

    user@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name='binhex-delugevpn' --net='bridge' --privileged=true -e TZ="America/Chicago" -e HOST_OS="Unraid" -e 'VPN_ENABLED'='yes' -e 'VPN_USER'='user' -e 'VPN_PASS'='pass' -e 'VPN_PROV'='custom' -e 'VPN_CLIENT'='openvpn' -e 'VPN_OPTIONS'='' -e 'STRICT_PORT_FORWARD'='yes' -e 'ENABLE_PRIVOXY'='no' -e 'LAN_NETWORK'='192.168.0.0/16' -e 'NAME_SERVERS'='nameserver' -e 'DELUGE_DAEMON_LOG_LEVEL'='info' -e 'DELUGE_WEB_LOG_LEVEL'='info' -e 'VPN_INPUT_PORTS'='' -e 'VPN_OUTPUT_PORTS'='' -e 'DEBUG'='false' -e 'UMASK'='000' -e 'PUID'='99' -e 'PGID'='100' -p '8112:8112/tcp' -p '58846:58846/tcp' -p '58946:58946/tcp' -p '58946:58946/udp' -p '8118:8118/tcp' -v '/mnt/user/appdata/data':'/data':'rw' -v '/mnt/cache/appdata/binhex-delugevpn':'/config':'rw' --sysctl="net.ipv4.conf.all.src_valid_mark=1" 'binhex/arch-delugevpn'

    When I run this container, the VPN connects (verified via the docker log), but I cannot access the web GUI from my PC on a different subnet. However, if I fire up a Firefox container, also in bridge mode, I can access the Deluge web GUI from it. 

     

     

    Here is the setup TWO:

    unRAID is at 192.168.10.40/24

    PC I am attempting to access the binhex-delugevpn web GUI from is at 192.168.130.50/24

    LAN_NETWORK is set to 192.168.10.0/24

    Docker is configured as br0 network with IP of 192.168.10.210/24

    Here is the docker run command:

    user@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name='binhex-delugevpn' --net='br0' --ip='192.168.10.210' --privileged=true -e TZ="America/Chicago" -e HOST_OS="Unraid" -e 'TCP_PORT_8112'='8112' -e 'TCP_PORT_58846'='58846' -e 'TCP_PORT_58946'='58946' -e 'UDP_PORT_58946'='58946' -e 'TCP_PORT_8118'='8118' -e 'VPN_ENABLED'='yes' -e 'VPN_USER'='user' -e 'VPN_PASS'='pass' -e 'VPN_PROV'='custom' -e 'VPN_CLIENT'='openvpn' -e 'VPN_OPTIONS'='' -e 'STRICT_PORT_FORWARD'='yes' -e 'ENABLE_PRIVOXY'='no' -e 'LAN_NETWORK'='192.168.10.0/24' -e 'NAME_SERVERS'='nameserver' -e 'DELUGE_DAEMON_LOG_LEVEL'='info' -e 'DELUGE_WEB_LOG_LEVEL'='info' -e 'VPN_INPUT_PORTS'='' -e 'VPN_OUTPUT_PORTS'='' -e 'DEBUG'='false' -e 'UMASK'='000' -e 'PUID'='99' -e 'PGID'='100' -v '/mnt/cache/appdata/data':'/data':'rw' -v '/mnt/cache/appdata/binhex-delugevpn/':'/config':'rw' --sysctl="net.ipv4.conf.all.src_valid_mark=1" 'binhex/arch-delugevpn'

    When I run this container, the VPN connects (verified via the docker log) and I can access it from my PC on a different subnet. What is even more confusing is that if I fire up the same Firefox container and try to access the Deluge web GUI, I cannot, even though they are both on the very same layer 2 network. 

     

    The main reason I am trying to get this figured out is that I have 2-3 other containers I'd like to route through this binhex-delugevpn container to get the VPN benefits. However, for that to work the containers must be configured in bridge mode. As referenced above, when I am in bridge mode I am unable to access the web GUIs of the various services contained within those containers.