FayeInMay

Members
  • Posts: 16

  1. Hi @bonienl, another odd addition to the issue description: restarting the container with "docker restart <br0_container>" does not fix the WireGuard route. Only the restart button in the Unraid web UI fixes the routes.
  2. Hi @bonienl, as requested in the support ticket, I'm following up in this thread. The issue does not seem to be fixed in 6.12.8: after rebooting Unraid and waiting for all containers to start up, I still have to restart containers manually to fix their routes (a quick way to check the routes is sketched after this list). This can be seen in this video: https://www.youtube.com/watch?v=G8H5YaxdO8c

     I also included diagnostics from right after rebooting Unraid (1153) and from after manually restarting the containers following the reboot (1155). For further information see also https://forums.unraid.net/bug-reports/stable-releases/6123-issue-with-wireguard-integration-and-docker-routing-r2594/

     nass-diagnostics-20240219-1153.zip
     nass-diagnostics-20240219-1155.zip
  3. I'm not missing the file, and I have the same problem with "Move Now button follows plug-in filters". Any ideas on that?
  4. Hello, I've been struggling with my other issue below, but I may have found a specific bug (and fix) for it. Hence this new issue, because it might not even fix the other one. It seems like line 166 in rc.d/rc.docker is not entirely correct:

     [[ -n $NETWORK ]] && nsenter -n -t $PID ip -4 route add $NETWORK via $THISIP dev br0 2>/dev/null

     which in my case translates to:

     nsenter -n -t 9439 ip -4 route add 10.253.0.0 via 192.168.0.231 dev br0

     If I understand it correctly, it enters the container's network namespace and adds a static route for WireGuard, in my case 10.253.0.0/24 via 192.168.0.231. But it uses the br0 interface, and that interface is not available inside the Docker container; the available interface is eth0. Therefore the command fails and the static route does not get created (a sketch of one possible adjustment follows after this list). The only thing I don't know is why this only happens on Unraid/Docker startup: the static route is added correctly when the container is restarted. I believe the scripts handling the two cases (initial startup vs. container restart) are not the same, which might explain it.
  5. Hi there, it seems like there might be a bug in the WireGuard integration in Unraid. I read below that the routes for br0 access are created automatically, so I should be able to connect from WireGuard to an Unraid br0 Docker container, and I am able to do that. Until I restart my machine. After a restart all Docker containers, including my Nginx Proxy Manager, boot up again, but once the NPM container has started it has the wrong routing towards WireGuard. It is fixed automatically when the container is restarted once more. Maybe some sort of race condition between Docker and the WireGuard integration? Although I did set a wait of 60 seconds on NPM to test this, and I still get the same result.

     Output in the container after an Unraid restart:

     # traceroute 10.253.0.1
     traceroute to 10.253.0.1 (10.253.0.1), 30 hops max, 60 byte packets
      1  192.168.0.1 (192.168.0.1)  0.730 ms  0.745 ms  0.769 ms
      2  192.168.2.1 (192.168.2.1)  1.256 ms  1.694 ms  1.703 ms
      3  p3e9bf104.dip0.t-ipconnect.de (62.155.241.4)  21.984 ms !N  22.011 ms !N  22.006 ms !N

     # route
     Kernel IP routing table
     Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
     default         192.168.0.1     0.0.0.0         UG    0      0        0 eth0
     192.168.0.0     0.0.0.0         255.255.255.0   U     0      0        0 eth0

     # ping 10.253.0.1
     PING 10.253.0.1 (10.253.0.1): 56 data bytes
     36 bytes from p3e9bf104.dip0.t-ipconnect.de (62.155.241.4): Destination Net Unreachable
     36 bytes from p3e9bf104.dip0.t-ipconnect.de (62.155.241.4): Destination Net Unreachable
     36 bytes from p3e9bf104.dip0.t-ipconnect.de (62.155.241.4): Destination Net Unreachable
     ^C--- 10.253.0.1 ping statistics ---
     3 packets transmitted, 0 packets received, 100% packet loss

     Output in the container after restarting the NPM container:

     # traceroute 10.253.0.1
     traceroute to 10.253.0.1 (10.253.0.1), 30 hops max, 60 byte packets
      1  10.253.0.1 (10.253.0.1)  0.117 ms  0.028 ms  0.043 ms

     # route
     Kernel IP routing table
     Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
     default         192.168.0.1     0.0.0.0         UG    0      0        0 eth0
     10.253.0.0      192.168.0.231   255.255.255.0   UG    0      0        0 eth0
     192.168.0.0     0.0.0.0         255.255.255.0   U     0      0        0 eth0

     # ping 10.253.0.1
     PING 10.253.0.1 (10.253.0.1): 56 data bytes
     64 bytes from 10.253.0.1: icmp_seq=0 ttl=64 time=0.157 ms
     64 bytes from 10.253.0.1: icmp_seq=1 ttl=64 time=0.133 ms
     ^C--- 10.253.0.1 ping statistics ---
     2 packets transmitted, 2 packets received, 0% packet loss
     round-trip min/avg/max/stddev = 0.133/0.145/0.157/0.000 ms
  6. My last question / post was answered with this.
  7. I'm using: "Use NAT" = Yes and "Host access to custom networks" = enabled (static route optional; I did NOT set a static route). According to that, the expected result is:

     server and dockers on bridge/host - accessible!
     VMs and other systems on LAN - NOT accessible
     dockers with custom IP - NOT accessible (avoid this config)

     But my actual result with Unraid 6.12.1 is:

     server and dockers on bridge/host - accessible!
     VMs and other systems on LAN - NOT accessible
     dockers with custom IP (on br0) - accessible! (avoid this config)

     Does anyone know why exactly that could be?
  8. Reviving this because it's the first Google result, so it may be interesting for other people. The problem seems to be the same on 6.12.0?
  9. Okay, UFS Explorer does at least recognize all the folders I had. Maybe only 30 GB (the parity rebuild progress) of 1.5 TB is gone. That would be really good. I wonder: does ZFS write to the drive in a specific order? Meaning, do I only need to search blocks 0 to 2 TB, or all blocks? Maybe that's a stupid question, but I have no clue how exactly that works.
  10. Doesn't updating parity for 2 TB of files take a long time? Couldn't parity still be valid?
  11. I think I just deleted all my data. My original plan was to change the filesystem of one of my data drives. So I stopped the array, changed the FS, and started the array. It then showed the drive as unmountable. Then, for some reason, god knows why, I clicked Format on the drive. I know there are plenty of warnings that formatting is not part of a parity rebuild and that you should not do this, but for whatever reason I thought: yeah, it's fine in this case, it will rebuild from parity when I format the drive, because how else should I tell Unraid to do it? Well, that was obviously wrong. After that I thought "well, this is not working, let's find a guide" and then followed all of the steps of this guide: https://flemmingss.com/replacing-a-data-drive-in-unraid/ . About 5 minutes into the parity rebuild I realized that I had messed up. I stopped the array and exported diagnostics. Could someone have a look at the diagnostics and tell me what my next step should be? Maybe I didn't mess up, but I doubt it.

     nass-diagnostics-20230609-1214.zip
  12. Hello, I have a multi-router setup, basically like the picture below. I have configured port forwarding from R1 port 33443 to R2 port 33443, and from R2 port 33443 to Unraid port 443. I can actually connect to my WAN IP on port 33443, but Unraid Connect and Unraid itself say the server is unreachable. Do I need to pay attention to anything else in this specific setup? It works using the external IP, but I cannot use Unraid Connect because it doesn't recognize it. (A quick way to test each hop of the forwarding chain separately is sketched after this list.)
  13. Also having one small issue / question: running "Move Now" 3 times in a row somehow forces the move and ignores the rules, even though "Move Now button follows plug-in filters" is set to Yes. It does follow the rules the first 2 times I press "Move Now", though. Is this intended / a feature, or a bug?
  14. I'm having the same issue with Unraid 6.12.0-rc5 and Mover Tuning 2023.05.16. The 1st run is the default mover, without error; the 2nd is mover tuning, with the error; the 3rd is mover tuning with a normal file name, without error (a generic shell-quoting illustration follows after this list).

     May 17 11:25:42 Nass emhttpd: shcmd (221255): /usr/local/sbin/mover |& logger -t move &
     May 17 11:25:42 Nass root: Starting Mover
     May 17 11:25:42 Nass root: ionice -c 2 -n 0 nice -n 0 /usr/local/sbin/mover.old
     May 17 11:25:42 Nass move: mover: started
     May 17 11:25:42 Nass move: file: /mnt/cache/files/Y2Mate.is - PROJECT MASTER YI Login Screen - League of Legends-YjrL3WjaSN0-720p-1654334915588.mp4
     May 17 11:25:42 Nass move: mover: finished

     May 17 11:26:41 Nass emhttpd: shcmd (221928): /usr/local/sbin/mover |& logger -t move &
     May 17 11:26:41 Nass root: Starting Mover
     May 17 11:26:41 Nass root: ionice -c 2 -n 0 nice -n 0 /usr/local/emhttp/plugins/ca.mover.tuning/age_mover start 0 0 0 '' '' '' '' no 10 '' ''
     May 17 11:26:41 Nass move: Log Level: 1
     May 17 11:26:41 Nass move: mover: started
     May 17 11:26:41 Nass move: error: move, 380: No such file or directory (2): lstat: /mnt/cache/files/Y2Mate.is - PROJECT MASTER YI Login Screen - League of Legends-YjrL3WjaSN0-720p-1654334915588.mp4
     May 17 11:26:41 Nass move: mover: finished

     May 17 11:40:01 Nass root: Starting Mover
     May 17 11:40:01 Nass root: ionice -c 2 -n 0 nice -n 0 /usr/local/emhttp/plugins/ca.mover.tuning/age_mover start 0 0 0 '' '' '' '' no 10 '' ''
     May 17 11:40:01 Nass root: Log Level: 1
     May 17 11:40:01 Nass root: mover: started
     May 17 11:40:01 Nass root: skip: /mnt/cache/files/test.mp4
     May 17 11:40:01 Nass root: mover: finished
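
Route-check sketch referenced in posts 2 and 5 above: a quick way to see, for example right after a reboot, which running containers are missing the WireGuard route. This is only a minimal illustration; it assumes the WireGuard tunnel network is 10.253.0.0/24 (as in the outputs above) and simply greps each container's routing table from the host.

    #!/bin/bash
    # Print the route to the WireGuard tunnel network (assumed 10.253.0.0/24)
    # from inside every running container's network namespace.
    for name in $(docker ps --format '{{.Names}}'); do
        pid=$(docker inspect -f '{{.State.Pid}}' "$name")
        echo "== $name =="
        nsenter -n -t "$pid" ip -4 route | grep 10.253.0.0 || echo "(no WireGuard route)"
    done

A container that prints "(no WireGuard route)" matches the broken state shown in post 5; after restarting it from the web UI the 10.253.0.0/24 entry shows up again.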
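
Sketch referenced in post 4 above: one way to express the idea from that post, namely not hard-coding br0 as the device, since br0 does not exist inside the container's namespace. This is only an illustration of the suggestion, not necessarily the actual upstream fix; the variable names are taken from the quoted rc.docker line.

    # Quoted original (line 166 of rc.d/rc.docker):
    [[ -n $NETWORK ]] && nsenter -n -t $PID ip -4 route add $NETWORK via $THISIP dev br0 2>/dev/null

    # Illustration: let the kernel pick the interface that reaches $THISIP
    # (eth0 inside a br0 container) instead of forcing "dev br0":
    [[ -n $NETWORK ]] && nsenter -n -t $PID ip -4 route add $NETWORK via $THISIP 2>/dev/null

Dropping the "dev" argument works because "ip route add ... via <gateway>" resolves the outgoing interface from the route to the gateway, which inside the container is eth0.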
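
Sketch referenced in post 12 above: a simple way to test each hop of the forwarding chain (R1:33443 -> R2:33443 -> Unraid:443) separately. The addresses in angle brackets are placeholders, and -k is only there because the certificate will not match a raw IP.

    # From outside the LAN (e.g. a phone on mobile data), test the full chain:
    curl -kI https://<WAN-IP>:33443

    # From inside R1's LAN, test R2's forward on its own:
    curl -kI https://<R2-WAN-IP>:33443

    # Test Unraid directly:
    curl -kI https://<Unraid-IP>:443

If all three return an HTTP response from the Unraid web UI, the forwarding itself is fine and the problem is limited to how Unraid Connect detects reachability.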
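
Shell-quoting sketch referenced in post 14 above: file names like the one in that log, full of spaces and dashes, are a classic way unquoted variables break in shell scripts. Whether that is what actually happens inside age_mover cannot be told from the log alone; the snippet below is only a generic illustration with a made-up file name.

    # Hypothetical file with spaces in its name:
    f="/mnt/cache/files/some file with spaces.mp4"

    stat "$f"   # quoted: the whole path is passed as one argument and the stat succeeds
    stat $f     # unquoted: word splitting passes "/mnt/cache/files/some", "file",
                # "with", "spaces.mp4" as separate arguments, and each one fails
                # with "No such file or directory"

A file name such as test.mp4 (the 3rd run in the log) contains no whitespace and so would never trigger this particular problem, which would fit the behaviour described in that post.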