Alexstrasza

Everything posted by Alexstrasza

  1. I think you've muddled your 🦈 and 🛡️'s 😉
  2. Ah... have you made sure that "Host access to custom networks" is set to "Enabled" in your UnRaid docker settings?
  3. Are you running the latest version of the container available (:latest)? If so our environments should be identical. The only thing I had to change to get mine working was turning IPv6 forwarding on for the host, which you've done too.
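     Something along these lines from the UnRaid console should turn the forwarding on; treat it as a sketch rather than the exact setup (note that on UnRaid a plain sysctl change doesn't survive a reboot, so you may want to re-apply it, e.g. from your go file):
     ```
     sysctl -w net.ipv6.conf.all.forwarding=1
     # IPv4 equivalent, if you need it: sysctl -w net.ipv4.ip_forward=1
     ```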
  4. Strange. Can you check cat /proc/sys/net/ipv6/conf/all/forwarding from the UnRaid console? I know it should return 1 based on your results above, but it's worth double checking.
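     For reference, this is what you're hoping to see (1 means forwarding is enabled, 0 means it's off):
     ```
     cat /proc/sys/net/ipv6/conf/all/forwarding
     # expected output: 1
     ```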
  5. That's a pretty strange issue. From what I know, that's not the error you should get if it's an SMBv1 problem; that error is normally much more specific. Have you tried posting a general thread in https://forums.unraid.net/forum/55-general-support/? They'll be much better prepared to help you there, as this is more of a support thread for Tailscale-specific issues. Unfortunately, if your non-Tailscale file transfers don't work properly, I think they are unlikely to work within Tailscale either, as the method is exactly the same.
  6. Hi there, no, no more complete panics since then. I've had quite a lot of general instability issues, though; the X470D4U motherboard seems to be plagued with them, and reliability varies hugely from BIOS to BIOS. If you're running that motherboard, I heavily recommend the P3.50 BIOS firmware and 2.20.00 BMC firmware, which have been the most stable I've seen. It's also worth noting that newer UnRaid versions have improved other issues a lot, so I heavily recommend checking you are on the latest version as well.
  7. There was a bug with older versions of QEMU where the host-passthrough CPU mode would not work; this is fixed in later versions of UnRaid. There's also a really understated warning in the motherboard manual which has caught me out before and might be your problem too. The setting isn't called exactly the same thing as the manual says (so that's helpful), but you need to manually switch the slots into 2x8 mode for either to work properly with two GPUs installed on a Ryzen 3000 series CPU.
  8. Correct on all three counts! However, there is a bit more nuance to it. Whilst WireGuard can be used with a kernel implementation (which I believe is more efficient, so less CPU usage), it can also be implemented in software. Tailscale at the moment exclusively uses the software implementation to ease cross-platform compatibility, although there are plans in the future to link in with the kernel implementation on systems that support it. This means it's technically not speaking to the system implementation at all at the moment. As for compatibility in general, as far as I'm aware any number of systems can use the underlying WireGuard technology, as long as they don't use conflicting address spaces (this is true with any VPN afaik, and in my experience). Since Tailscale uses the rarely used 100.x (CGNAT) address range, it's incredibly unlikely to conflict with anything else, provided you haven't manually specified that same range for the Unraid WireGuard tunnels.
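     If you're curious which 100.x address a node has actually been given, the Tailscale CLI can print it; run this wherever the tailscale binary lives, e.g. a console inside the container (the address shown is just an example):
     ```
     tailscale ip -4
     # prints something like: 100.101.102.103
     ```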
  9. They can both run independently, so feel free to try it out!
  10. Good to hear! I'm glad I'm not going insane :). Unfortunately I still can't see the button because of another error: "key [iManufacturer] doesn't exist". Sounds like it's the same thing again but for this key 😅. I wonder what it's trying to read that lacks these values?
  11. Hi John, am I going insane, or did there used to be a "Benchmark all" button or something to that effect? The only 'multiple benchmark' button I can find is under the controller info, and that seems to just check for a controller bottleneck. Also, when I look at the "USB Bus Tree" page, I see this error. Thanks for all your work!
  12. Generally speaking this isn't an issue with game containers as they can stop so quickly (within 15s or so). Do you have any specific examples?
  13. No problem, glad it was that setting and not something more messy!
  14. This is probably due to the fact that Docker containers are prevented from talking to the host by default. So the traffic will be trying to do this:
      You -> Tailscale tunnel -> Tailscale Docker container on the Unraid host -x> Pi-hole container
      Before, it was doing this:
      You -> LAN -> directly into the network interface of the Unraid host, and routed to the Pi-hole
      To fix, try going to Settings -> Docker and changing "Host access to custom networks" to "Enabled". You'll have to temporarily disable Docker to do this and then restart it. Let me know if that works!
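      Once that's enabled, a quick sanity check is to query the Pi-hole from a device connected over Tailscale; the addresses below are placeholders for your own Pi-hole IP:
      ```
      ping 192.168.1.50                # placeholder Pi-hole IP on your LAN
      dig @192.168.1.50 example.com    # confirms DNS queries actually get answered
      ```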
  15. It should just work, because I believe UnRaid IPv4 forwarding is on by default (it was for me, and everything worked without touching it). Try double checking with https://tailscale.com/kb/1104/enable-ip-forwarding/
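      If you'd rather verify than assume, the same style of check as for IPv6 applies (1 means forwarding is on):
      ```
      cat /proc/sys/net/ipv4/ip_forward
      # expected output: 1
      ```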
  16. It's very strange that no image is output yet the VM still works and is usable. I'm afraid I'm a bit stumped on this one. The only thing I can think of is that perhaps you've left the VNC screen enabled, and the VM is set up not to output to the GPU screen?
  17. I'll be really interested to learn your results. My VM is primarily for gaming, but I also have it mining in the background as well. I'm not seeing any slow speeds, but to be fair I don't have a non-VM version to compare it to.
  18. I think in theory you can dynamically stub it, but a lot of the problems I had before the isolation feature were caused by a terminal or Docker container latching onto the card and never letting go (which causes the VM to error with "busy", if I remember correctly). I would definitely recommend keeping it isolated and just doing whatever work you need in the VM until SR-IOV goes a bit more mainstream, which, as I understand it, will resolve the issue by allowing vGPU slices to be allocated.
  19. For me on up-to-date UnRaid (6.9.2) it was as simple as the following (there were more workarounds needed on older versions):
      Isolate the card in Tools -> System Devices, then reboot.
      Set up the VM as you want, apart from swapping graphics to the card; make sure to tick the NVIDIA devices in "Other PCI Devices", then save.
      To avoid the Ryzen 3000 VM bug (due to be fixed soon, I think), reopen the config and switch to XML mode.
      Change "host-passthrough" to "host-model" and delete the cache line 2 lines after that.
      Save and start, and you should be good.
      Let me know how you get on.
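      To illustrate that XML step: the CPU block changes roughly as below. Your generated XML will almost certainly have different attributes and topology values, so this is just a sketch of the lines that matter.
      ```
      <!-- before -->
      <cpu mode='host-passthrough' check='none'>
        <topology sockets='1' cores='4' threads='2'/>
        <cache mode='passthrough'/>
      </cpu>

      <!-- after -->
      <cpu mode='host-model' check='none'>
        <topology sockets='1' cores='4' threads='2'/>
      </cpu>
      ```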
  20. On a fresh reinstall I can confirm the template picked up :latest, so I have no idea why I got an old 2020 build when I first downloaded it. My best guess is some cursed CA caching or something, but it doesn't seem to be happening any more, so I guess it's fixed 😅? Did you have a chance to look into the warning about exit nodes I mentioned above? I'm definitely still getting it with the container but not with my Raspberry Pi, yet the subnet and exit route features are 100% working, so I'm not sure what's causing the warning. UPDATE: This turned out to be because I had IPv6 forwarding off on my host.
  21. Also, please can you see if it's possible to support https://tailscale.com/kb/1103/exit-nodes? If I try to enable it, it informs me that IP forwarding is disabled and directs me to https://tailscale.com/kb/1104/enable-ip-forwarding. Thanks for the container 🐳❤️! EDIT: Huh, in actual testing it seems to work fine...? Tailscale bug perhaps?
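      For anyone else trying this: exit-node advertising is just an extra flag to tailscale up, though how you pass it through this particular container will depend on its settings, so treat this as a sketch:
      ```
      tailscale up --advertise-exit-node
      ```
      The node then also needs to be approved as an exit node in the Tailscale admin console before clients can select it.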