Thomas K

Everything posted by Thomas K

  1. Did an upgrade from 6.11.4 to 6.12.6 and ... it worked flawlessly. Docker, VMs ... everything fine.
  2. Hi, it seems the Template Repo link should be https://github.com/JakeShirley/unraid-templates/blob/main/archiveteam-warrior.xml ?
  3. Thanks for the clarification, it works fine from the Unraid Terminal via:
        docker run --rm -ti --name=ctop \
          --volume /var/run/docker.sock:/var/run/docker.sock:ro \
          quay.io/vektorlab/ctop:latest
  4. Hi, I'm trying to get the ctop Docker image to run, but it always errors out and stops immediately:
        panic: open /dev/tty: no such device or address [recovered]
        panic: open /dev/tty: no such device or address
     Running as privileged didn't change the issue. Thanks
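     A sketch of the likely cause (my reading, not stated in the thread): the panic means ctop cannot open /dev/tty, i.e. no terminal is attached to the container. The working command above passes -ti, which allocates one:
        # fails: no pseudo-TTY, so ctop cannot open /dev/tty
        docker run --rm --volume /var/run/docker.sock:/var/run/docker.sock:ro quay.io/vektorlab/ctop:latest
        # works: -t allocates a pseudo-TTY, -i keeps stdin open for the interactive UI
        docker run --rm -ti --volume /var/run/docker.sock:/var/run/docker.sock:ro quay.io/vektorlab/ctop:latest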
  5. Solved it, it's a permission issue on the host path. For me, /mnt/cachepool/archiveteam-warrior-data needed the right permissions.
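     A minimal sketch of a typical fix, assuming Unraid's usual nobody:users share ownership (the post doesn't state the exact permissions that were set):
        # assumption: Unraid's default share owner nobody:users (UID 99, GID 100)
        chown -R nobody:users /mnt/cachepool/archiveteam-warrior-data
        chmod -R u+rwX,g+rwX /mnt/cachepool/archiveteam-warrior-data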
  6. Same error for me. Did you succeed in solving it?
  7. Did some test runs. It works fine, and you don't have to worry about "where" the disk is mounted in the script.
  8. MOUNTPOINT: where the partition is mounted. In my case, for my backup USB disk, it would be /mnt/disks/WCJ65BZT, and I won't have to bother with mountpoints for backup at all.
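     A sketch of how to look the mountpoint up from a shell, using standard tools (the device name below is an assumption for illustration):
        # list block devices with labels and mountpoints
        lsblk -o NAME,LABEL,MOUNTPOINT
        # or resolve the mountpoint of one partition; prints e.g. /mnt/disks/WCJ65BZT
        findmnt -n -o TARGET /dev/sdb1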
  9. Would it work out to use $MOUNTPOINT as the destination? E.g.:
        backup_jobs=(
          # source      # destination
          "/mnt/user"   "$MOUNTPOINT/Backups"
        )
     Any reason why it's not used by default? Thanks
  10. Had the same issue. The password worked fine for the Windows client. Set a new password shorter than 30 characters, only numbers and letters, 10 characters long ==> issue solved
  11. Hm, two hours before, the mover ran, and afterwards the cold boot is logged. Let's see if next time more info or the same is logged. Thanks in the meantime.
  12. Thanks, I was finally able to catch it via syslog. Hopefully someone can interpret it.
  13. Hi, on my new HP MicroServer Gen10 Plus, Unraid crashes sporadically, about once a week. It happened with 6.9.2 and also with the brand-new 6.10.2. Diagnostics were run, but I don't see anything to diagnose there, as I was only able to run them after a cold reset. Thanks for any support, thomas tower-diagnostics-20220608-1452.zip
  14. Upgrade from 6.9.2 to 6.10.2 worked flawlessly on a ProLiant MicroServer Gen10 Plus. Thanks for the great work.
  15. That would be great for a future update. Streamlined version building on the existing WIREGUARD_DROP_WG0 chain:
        iptables -N WIREGUARD_INPUT
        iptables -A INPUT -j WIREGUARD_INPUT
        iptables -A WIREGUARD_INPUT -i wg0 -j WIREGUARD_DROP_WG0
  16. Worked it out: you have to filter the INPUT chain for traffic coming in on the wg0 device. My example, if someone else needs it:
        iptables -N WIREGUARD_INPUT
        iptables -N WIREGUARD_DROP_WG0_INPUT
        iptables -A INPUT -j WIREGUARD_INPUT
        iptables -A WIREGUARD_INPUT -i wg0 -j WIREGUARD_DROP_WG0_INPUT
        iptables -A WIREGUARD_DROP_WG0_INPUT -s 10.253.0.0/24 -d 10.0.0.11/32 -j ACCEPT
        iptables -A WIREGUARD_DROP_WG0_INPUT -s 10.253.0.0/24 -j DROP
        iptables -A WIREGUARD_DROP_WG0_INPUT -j RETURN
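      A quick check that the new chains actually match traffic (a sketch using standard iptables options, not part of the original post):
        # show rules with packet/byte counters; the counters should increase
        # while the peer pings allowed and blocked destinations
        iptables -nvL WIREGUARD_INPUT
        iptables -nvL WIREGUARD_DROP_WG0_INPUT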
  17. -A WIREGUARD -o br0 -j WIREGUARD_DROP_WG0
      -A WIREGUARD_DROP_WG0 -s 10.253.0.0/24 -d 10.0.0.11/32 -j ACCEPT
      -A WIREGUARD_DROP_WG0 -s 10.253.0.0/24 -j DROP
      -A WIREGUARD_DROP_WG0 -j RETURN
      Why are the iptables rules created on br0 and not wg0? A tcpdump shows that the traffic from the peer to the WireGuard host does not cross br0, only wg0, so the rule does not match. Traffic from the peer to other local LAN destinations crosses br0, so the rule matches.
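      A sketch of the capture described above (interface names and the peer subnet are taken from this thread):
        # peer -> Unraid host traffic shows up here ...
        tcpdump -ni wg0 src net 10.253.0.0/24
        # ... but not here, so a rule matching -o br0 never sees it
        tcpdump -ni br0 src net 10.253.0.0/24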
  18. Hi, the "Remote access to LAN" setup works fine; the client is connected and can ping the IPs in the remote LAN. But in the config I set "Local tunnel firewall" to Allow and only listed 10.0.0.11 as allowed. Nevertheless, I am able to ping 10.0.0.10 (the Unraid server itself) - no other hosts. Is that by design, and can it not be removed? Attached is the generated iptables config:
        # Generated by iptables-save v1.8.5 on Fri Mar 4 21:31:04 2022
        *mangle
        :PREROUTING ACCEPT [585916432:1133041336885]
        :INPUT ACCEPT [40469455:499819706678]
        :FORWARD ACCEPT [546394462:633615039025]
        :OUTPUT ACCEPT [32114760:4849559837]
        :POSTROUTING ACCEPT [578543223:638470079442]
        :LIBVIRT_PRT - [0:0]
        -A POSTROUTING -j LIBVIRT_PRT
        COMMIT
        # Completed on Fri Mar 4 21:31:04 2022
        # Generated by iptables-save v1.8.5 on Fri Mar 4 21:31:04 2022
        *nat
        :PREROUTING ACCEPT [98:29053]
        :INPUT ACCEPT [67:21594]
        :OUTPUT ACCEPT [32:2057]
        :POSTROUTING ACCEPT [60:9200]
        :DOCKER - [0:0]
        :LIBVIRT_PRT - [0:0]
        -A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
        -A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
        -A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
        -A POSTROUTING -j LIBVIRT_PRT
        -A POSTROUTING -s 172.17.0.2/32 -d 172.17.0.2/32 -p tcp -m tcp --dport 3875 -j MASQUERADE
        -A POSTROUTING -s 172.17.0.4/32 -d 172.17.0.4/32 -p tcp -m tcp --dport 8181 -j MASQUERADE
        -A POSTROUTING -s 172.17.0.4/32 -d 172.17.0.4/32 -p tcp -m tcp --dport 8080 -j MASQUERADE
        -A POSTROUTING -s 172.17.0.4/32 -d 172.17.0.4/32 -p tcp -m tcp --dport 4443 -j MASQUERADE
        -A POSTROUTING -s 10.253.0.0/24 -o br0 -j MASQUERADE
        -A DOCKER -i docker0 -j RETURN
        -A DOCKER ! -i docker0 -p tcp -m tcp --dport 3875 -j DNAT --to-destination 172.17.0.2:3875
        -A DOCKER ! -i docker0 -p tcp -m tcp --dport 7818 -j DNAT --to-destination 172.17.0.4:8181
        -A DOCKER ! -i docker0 -p tcp -m tcp --dport 1880 -j DNAT --to-destination 172.17.0.4:8080
        -A DOCKER ! -i docker0 -p tcp -m tcp --dport 18443 -j DNAT --to-destination 172.17.0.4:4443
        COMMIT
        # Completed on Fri Mar 4 21:31:04 2022
        # Generated by iptables-save v1.8.5 on Fri Mar 4 21:31:04 2022
        *filter
        :INPUT ACCEPT [2045:465504]
        :FORWARD ACCEPT [188:71769]
        :OUTPUT ACCEPT [1269:1510752]
        :DOCKER - [0:0]
        :DOCKER-ISOLATION-STAGE-1 - [0:0]
        :DOCKER-ISOLATION-STAGE-2 - [0:0]
        :DOCKER-USER - [0:0]
        :LIBVIRT_FWI - [0:0]
        :LIBVIRT_FWO - [0:0]
        :LIBVIRT_FWX - [0:0]
        :LIBVIRT_INP - [0:0]
        :LIBVIRT_OUT - [0:0]
        :WIREGUARD - [0:0]
        :WIREGUARD_DROP_WG0 - [0:0]
        -A INPUT -j LIBVIRT_INP
        -A FORWARD -j DOCKER-USER
        -A FORWARD -j DOCKER-ISOLATION-STAGE-1
        -A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
        -A FORWARD -o docker0 -j DOCKER
        -A FORWARD -i docker0 ! -o docker0 -j ACCEPT
        -A FORWARD -i docker0 -o docker0 -j ACCEPT
        -A FORWARD -j LIBVIRT_FWX
        -A FORWARD -j LIBVIRT_FWI
        -A FORWARD -j LIBVIRT_FWO
        -A FORWARD -j WIREGUARD
        -A OUTPUT -j LIBVIRT_OUT
        -A DOCKER -d 172.17.0.2/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 3875 -j ACCEPT
        -A DOCKER -d 172.17.0.4/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 8181 -j ACCEPT
        -A DOCKER -d 172.17.0.4/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 8080 -j ACCEPT
        -A DOCKER -d 172.17.0.4/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 4443 -j ACCEPT
        -A DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2
        -A DOCKER-ISOLATION-STAGE-1 -j RETURN
        -A DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP
        -A DOCKER-ISOLATION-STAGE-2 -j RETURN
        -A DOCKER-USER -j RETURN
        -A WIREGUARD -o br0 -j WIREGUARD_DROP_WG0
        -A WIREGUARD_DROP_WG0 -s 10.253.0.0/24 -d 10.0.0.11/32 -j ACCEPT
        -A WIREGUARD_DROP_WG0 -s 10.253.0.0/24 -j DROP
        -A WIREGUARD_DROP_WG0 -j RETURN
        COMMIT
        # Completed on Fri Mar 4 21:31:04 2022
  19. I think it would make sense to hide this at the beginning if there are no snapshots, so as not to confuse users.
  20. Great plugin. I started on a fresh Unraid installation and the plugin shows: Is that by design? Shouldn't there be no snapshots by default, since there is no subvolume after a fresh format? Thanks
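      A shell-level way to confirm that a freshly formatted btrfs pool really has no subvolumes or snapshots (a sketch; the pool path is an assumption):
        # empty output on a fresh format - no subvolumes, hence no snapshots
        btrfs subvolume list /mnt/cachepool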