CaptainIntel

Everything posted by CaptainIntel

  1. Hi Unraid community, I have been running Unraid for a few years, but recently changed my host to a Gigabyte MJ11-EC0 (AMD EPYC SoC, mATX). The system runs well in general, but I noticed that when I put load on the network interface, the connection drops for a few seconds and then comes back up again. The syslog shows the following:

       Oct 19 17:09:49 Disklan kernel: igb 0000:04:00.0 eth0: igb: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX
       Oct 19 17:09:49 Disklan kernel: br0: port 1(eth0) entered blocking state
       Oct 19 17:09:49 Disklan kernel: br0: port 1(eth0) entered forwarding state
       Oct 19 17:10:00 Disklan kernel: igb 0000:04:00.0 eth0: igb: eth0 NIC Link is Down
       Oct 19 17:10:00 Disklan kernel: br0: port 1(eth0) entered disabled state
       Oct 19 17:10:03 Disklan kernel: igb 0000:04:00.0 eth0: igb: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX
       Oct 19 17:10:03 Disklan kernel: br0: port 1(eth0) entered blocking state
       Oct 19 17:10:03 Disklan kernel: br0: port 1(eth0) entered forwarding state
       Oct 19 17:10:08 Disklan kernel: igb 0000:04:00.0 eth0: igb: eth0 NIC Link is Down
       Oct 19 17:10:08 Disklan kernel: br0: port 1(eth0) entered disabled state
       Oct 19 17:10:11 Disklan kernel: igb 0000:04:00.0 eth0: igb: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX
       Oct 19 17:10:14 Disklan kernel: igb 0000:04:00.0: exceed max 2 second
       Oct 19 17:10:14 Disklan kernel: igb 0000:04:00.0 eth0: igb: eth0 NIC Link is Down
       Oct 19 17:10:15 Disklan kernel: igb 0000:04:00.0 eth0: igb: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX
       Oct 19 17:10:15 Disklan kernel: br0: port 1(eth0) entered blocking state
       Oct 19 17:10:15 Disklan kernel: br0: port 1(eth0) entered forwarding state
       Oct 19 17:10:28 Disklan kernel: igb 0000:04:00.0 eth0: igb: eth0 NIC Link is Down
       Oct 19 17:10:28 Disklan kernel: br0: port 1(eth0) entered disabled state
       Oct 19 17:10:31 Disklan kernel: igb 0000:04:00.0 eth0: igb: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX
       Oct 19 17:10:31 Disklan kernel: br0: port 1(eth0) entered blocking state
       Oct 19 17:10:31 Disklan kernel: br0: port 1(eth0) entered forwarding state
       Oct 19 17:10:37 Disklan kernel: igb 0000:04:00.0 eth0: igb: eth0 NIC Link is Down
       Oct 19 17:10:37 Disklan kernel: br0: port 1(eth0) entered disabled state
       Oct 19 17:10:40 Disklan kernel: igb 0000:04:00.0 eth0: igb: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX
       Oct 19 17:10:40 Disklan kernel: br0: port 1(eth0) entered blocking state
       Oct 19 17:10:40 Disklan kernel: br0: port 1(eth0) entered forwarding state

     I double-checked the cables and connectors and replaced them to rule out that kind of issue. Is it possible that the LAN interface is damaged?

     Additional information:
       Unraid version: 6.12.4
       Host: Gigabyte MJ11-EC0, AMD EPYC™ Embedded 3000 SoC processor
       Onboard LAN interface: Intel® I210-AT
       Cable: Cat 7 patch

     It is a really weird issue... Thank you for any suggestions
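     Before concluding the port is damaged, it may be worth ruling out power-saving features, since the I210 is sometimes reported to flap under load with Energy-Efficient Ethernet active. A diagnostic sketch (the eth0 name comes from the log above; that EEE or flow control is the culprit is only an assumption):

     ```shell
     # Show current link and driver state for the flapping port
     ethtool eth0

     # Check whether Energy-Efficient Ethernet is currently active on the link
     ethtool --show-eee eth0

     # Try disabling EEE, then put load on the link again and watch the syslog
     ethtool --set-eee eth0 eee off

     # Optionally also disable flow control (the log shows "Flow Control: RX")
     ethtool -A eth0 rx off tx off
     ```

     If the link stays up with EEE off, the settings can be made persistent via the Unraid go file; if it still flaps, a hardware or cabling fault becomes more likely.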
  2. Thank you for confirming the working connections. So it must be something else causing the issue...
  3. In version 6.11.5 it works with NetBIOS enabled... Is this a bug in the new release? EDIT: Tried it out; still not working (SMB and NFS)
  4. Same for me. Adding the tailscale0 interface restores WebUI access over the Tailscale IP, but neither SMB nor NFS work over the Tailscale IP!
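     A possible workaround to try, assuming the problem is simply that Samba is not listening on the Tailscale interface (the interface name tailscale0 is taken from the post above; that binding is the cause is an unconfirmed assumption):

     ```
     # Unraid: Settings -> SMB -> SMB Extras (sketch, not a confirmed fix)
     [global]
         bind interfaces only = no
         interfaces = lo br0 tailscale0
     ```

     After changing this, the SMB service (or the array) has to be restarted for Samba to pick up the new interface list. NFS has its own export/host rules and would need the Tailscale range allowed separately.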
  5. Hi, I have the following issue: I don't have a public IPv4 address but want to run some servers, so I rented a VPS with a public IPv4 and successfully forwarded ports like 8080 from a VM to the VPS. I tested a setup that lets me tunnel, say, an nginx server from my Unraid VM via my VPS (with its public IPv4 address) and make it reachable via the VPS's IP, e.g. "123.123.123.123:80". I used this tutorial and it works great: https://gist.github.com/Quick104/d6529ce0cf2e6f2e5b94c421a388318b I can access the server running in the VM (with WireGuard inside) on my Unraid server.

     Now I simply want to use the same method to reach my Docker containers, so I set up the built-in WireGuard feature in Unraid. The container can reach the internet via the VPS IP, and all port forwardings are set on the VPS as before. But I can't reach, say, port 8080 on the VPS IP. Curiously, a port scan shows port 8080 as open, and the web page looks like it is loading, but it never finishes. There must be a problem between the WireGuard endpoint on my Unraid and the container.

     The following iptables rules are set on the VPS (the VPS setup shouldn't be the problem):

       iptables -A FORWARD -i eth0 -o wg0 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
       iptables -A FORWARD -i wg0 -o eth0 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
       iptables -A FORWARD -i eth0 -o wg0 -p tcp --syn --dport 8080 -m conntrack --ctstate NEW -j ACCEPT
       iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 8080 -j DNAT --to-destination 10.66.66.2
       iptables -t nat -A POSTROUTING -o wg0 -p tcp --dport 8080 -d 10.66.66.2 -j SNAT --to-source 10.66.66.1

     These rules work when the tunnel runs directly between a VM (KVM) and the VPS. I also tried the gluetun Docker container; it produces exactly the same issue: I can't connect to ports like 8080 via the VPN-tunneled access for Docker. Has anyone had the same issue or solved it? Thanks, Unraid community!
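     One way to narrow down where the connection stalls is to watch each hop in turn. A diagnostic sketch (interface names eth0/wg0 and the 10.66.66.x addresses are taken from the post above; everything else is an assumption):

     ```shell
     # On the VPS: does the SYN for 8080 arrive on the public interface at all?
     tcpdump -ni eth0 'tcp port 8080'

     # On the VPS: does the DNAT-ed SYN actually leave through the tunnel?
     tcpdump -ni wg0 'tcp port 8080'

     # On the VPS: is the DNAT rule being hit? (watch the pkts counter)
     iptables -t nat -L PREROUTING -v -n | grep 8080

     # On the VPS: test the tunnel path directly, bypassing DNAT entirely
     curl -m 5 http://10.66.66.2:8080/
     ```

     If the SYN shows up on wg0 but the curl from the VPS also hangs, the problem is on the Unraid/container side of the tunnel (e.g. return traffic leaving via Docker's default route instead of back through WireGuard) rather than in the VPS rules.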
  6. I want Gluetun to tunnel certain Docker containers to the VPS with the public IP, so I can access them over that public IP (including ports like 8080).
  7. Hello, I have the following problem: I can't successfully forward ports to my VPS with a static public IPv4 address (116.203.XXX.XXX) via the Gluetun VPN container (custom provider).

     My setup: the Unraid server sits behind a Fritzbox home router with DS-Lite only, i.e. no public IPv4 address. This server runs the Gluetun Docker container, which successfully connects via WireGuard to the VPS. The VPS (Ubuntu 20.04) runs a WireGuard server and has no firewall; the connection to the server works. So I tried to set up port forwarding for some container ports (e.g. 8080/tcp for SABnzbd) to be reachable at VPSServerIP:8080. Every time: connection refused. UFW and other firewalls are not enabled. My Gluetun port-forwarding config is shown in the attachments.

     IPs of my setup:
       Unraid server: 192.168.178.21 (192.168.178.0/24 subnet)
       VPS server: public IP 116.203.XXX.XXX, WireGuard IP 10.66.66.1, endpoint 116.203.XXX.XXX, interfaces eth0, wg0
       Gluetun container: WireGuard IP 10.66.66.2

     I also enabled IP forwarding on the Linux VPS:

       echo 'net.ipv4.ip_forward = 1' | sudo tee -a /etc/sysctl.conf
       echo 'net.ipv6.conf.all.forwarding = 1' | sudo tee -a /etc/sysctl.conf
       sudo sysctl -p /etc/sysctl.conf

     What have I done wrong? Thank you very much
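     A quick sanity check that the forwarding switches above actually took effect on the VPS (nothing in this snippet is specific to this setup):

     ```shell
     # Both should print "... = 1" after "sysctl -p" has been applied;
     # a 0 here would explain the refused connections on their own.
     sysctl net.ipv4.ip_forward
     sysctl net.ipv6.conf.all.forwarding
     ```

     Note that "connection refused" (an active RST) usually means the packet reached a host but nothing was listening there, whereas a NAT/forwarding problem typically shows up as a timeout, so it may also be worth confirming which of the two is actually happening.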