Alexstrasza

Members · Content count: 46

Everything posted by Alexstrasza

  1. Good to hear! I'm glad I'm not going insane :). Unfortunately I still can't see the button because of another error: "key [iManufacturer] doesn't exist". Sounds like it's the same thing again, but for this key 😅. I wonder what it's trying to read that lacks these values?
  2. Hi John, am I going insane, or did there used to be a "Benchmark all" button, or something to that effect? The only 'multiple benchmark' button I can find is under the controller info, and that seems to just check for a controller bottleneck. Also, when I look at the "USB Bus Tree" page, I see this error. Thanks for all your work!
  3. Generally speaking this isn't an issue with game containers as they can stop so quickly (within 15s or so). Do you have any specific examples?
  4. No problem, glad it was that setting and not something more messy!
  5. This is probably due to the fact that Docker containers are prevented from talking to the host by default. So the traffic will be trying to do this:
     You -> Tailscale tunnel -> Tailscale container on the Unraid host -x> Pi-hole container
     Before, it was doing this:
     You -> LAN -> directly into the network interface of the Unraid host, then routed to the Pi-hole
     To fix it, try going to Settings -> Docker and changing "Host access to custom networks" to "Enabled". You'll have to temporarily disable Docker to do this and then restart it.
  6. It should just work, because I believe IPv4 forwarding is enabled by default on Unraid (it was for me). Try double-checking with https://tailscale.com/kb/1104/enable-ip-forwarding/
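In case it isn't, the linked KB boils down to a couple of sysctls. A quick way to check is below (a generic Linux sketch, not Unraid-specific; persisting the settings on Unraid may differ from the KB's /etc/sysctl.d approach):

```shell
# Check whether forwarding is currently enabled (1 = on, 0 = off):
cat /proc/sys/net/ipv4/ip_forward
cat /proc/sys/net/ipv6/conf/all/forwarding 2>/dev/null || echo "IPv6 unavailable"

# To enable both (as root), per the Tailscale KB:
#   sysctl -w net.ipv4.ip_forward=1
#   sysctl -w net.ipv6.conf.all.forwarding=1
```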
  7. It's very strange that no image is output, yet the VM itself still works and is usable. I'm afraid I'm a bit stumped on this one. The only thing I can think of is that perhaps you've left the VNC screen enabled, and the VM is set up not to output to the GPU screen?
  8. I'll be really interested to learn your results. My VM is primarily for gaming, but I also have it mining in the background as well. I'm not seeing any slow speeds, but to be fair I don't have a non-VM version to compare it to.
  9. I think in theory you can dynamically stub it, but a lot of the problems I had before the isolation feature existed were caused by a terminal or a Docker container latching onto the GPU and never letting go (which caused the VM to error with "busy", if I remember correctly). I would definitely recommend keeping it isolated and just doing whatever work you need inside the VM until SR-IOV goes a bit more mainstream, which as I understand it will resolve the issue by allowing vGPU slices to be allocated.
  10. For me on up-to-date Unraid (6.9.2) it was as simple as the following (more workarounds were needed on older versions):
      1. Isolate the card in Tools -> System Devices, then reboot.
      2. Set up the VM as you want, apart from: swap graphics to the card, and make sure to tick the NVIDIA devices in "Other PCI Devices". Save.
      3. To avoid the Ryzen 3000 VM bug (due to be fixed soon, I think), reopen the config and switch to XML mode. Change "host-passthrough" to "host-model" and delete the cache line two lines after it.
      4. Save and start, and you should be good.
      Let me know how you get on.
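The XML edit in the steps above looks roughly like this (a sketch of a libvirt domain config; the surrounding attributes and topology values are hypothetical and will differ per machine):

```xml
<!-- Before: -->
<cpu mode='host-passthrough' check='none'>
  <topology sockets='1' cores='8' threads='2'/>
  <cache mode='passthrough'/>   <!-- delete this line -->
</cpu>

<!-- After: -->
<cpu mode='host-model' check='none'>
  <topology sockets='1' cores='8' threads='2'/>
</cpu>
```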
  11. On a fresh reinstall I can confirm the template picked up :latest, so I have no idea why I got an old 2020 build when I first downloaded it. My best guess is some cursed CA caching or something, but it doesn't seem to be happening any more, so I guess it's fixed 😅? Did you have a chance to look into the warning about exit nodes I mentioned above? I'm definitely still getting it on the container versus my Raspberry Pi, but the subnet and exit-route features are 100% working, so I'm not sure what's causing the warning. UPDATE: This turned out to be because I had IPv6 forwarding disabled.
  12. Also, please can you see if it's possible to support https://tailscale.com/kb/1103/exit-nodes? If I try to enable it, it informs me that IP forwarding is disabled and directs me to https://tailscale.com/kb/1104/enable-ip-forwarding. Thanks for the container 🐳❤️! EDIT: Huh, in actual testing it seems to work fine...? Tailscale bug perhaps?
  13. Hi Dean, can you double check the template is set to use :latest? I did a fresh install from community apps today and it defaulted to a versioned tag (which is quite out of date at this point).
  14. That's what I've ended up doing, but why is it that the tunnel does not come back up even if "autostart" is on?
  15. Hi there all. Is it expected that adding a new peer to a tunnel will disable the tunnel when apply is pressed? I've ended up in a semi-locked out situation multiple times when adding a peer and hitting apply via another peer on an active tunnel.
  16. Thanks for the breakdown, and don't worry about the breakage - I was worried it was *me* doing something dumb 😅! Strangely the issue hasn't re-occurred since. If I notice the same symptoms though, at least I know what the cause is this time 🙂
  17. When I rebooted the issue re-occurred a while later.
  18. Conclusion of the problem: It was caused by a faulty update of the Parity Check Tuning plugin. Removing the plugin and rebooting resolves the issue.
  19. I've just come across this thread too. I think quite a few people will have their plugins on auto-update and suddenly find themselves unable to SSH in as a result of this - That's what happened to me.
  20. Ah, I've just seen this thread which somehow I overlooked earlier being used by others with this problem. It might be related to a plugin update:
  21. Hi there all, I've recently updated to 6.9.0-rc2, however this was occurring on the last beta too, so I'm pretty sure it's not related to an Unraid version itself. Something keeps changing the ownership of my root directory (/, not /root) from the normal root:root to nobody:users. This seems to upset SSH's strict ownership checks, preventing me from using a public key to log in. Does anyone know what might be causing this? I've seen the ownership change to both this and my "wolf" user. No Docker containers have the root directory mounted.
  22. @GaLaReN, I've got a working one with an RTX 2080 passed through.
  23. But can't this be done after the remove operation, to save time waiting for the rebalance across the "removed" SSD? I get that, but I don't understand why the command line calls it twice, the second time with "dconvert=single,soft" instead.