Alexstrasza

Members
  • Content Count: 39

Community Reputation
  • 4 Neutral

About Alexstrasza
  • Rank: Newbie


  1. It's very strange that no image is output yet the whole VM still works, including using the VM itself. I'm afraid I'm a bit stumped on this one. The only thing I can think of is that perhaps you've left the VNC screen enabled, and the VM is set up not to output to the GPU screen?
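     If it helps with debugging, this is the kind of leftover VNC configuration I mean. A sketch only, assuming a typical libvirt-style VM definition; the exact elements in your XML may differ:

     ```xml
     <!-- Sketch: if a VNC <graphics> block is still present alongside the
          passed-through GPU, the guest may treat the virtual display as
          primary and send no output to the physical card. -->
     <graphics type='vnc' port='-1' autoport='yes'/>
     <video>
       <model type='qxl'/>
     </video>
     <!-- Removing both, leaving only the GPU's <hostdev> entries, forces
          output to the card's own connectors. -->
     ```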
  2. I'll be really interested to hear your results. My VM is primarily for gaming, but I also have it mining in the background as well. I'm not seeing any slow speeds, but to be fair I don't have a non-VM version to compare it to.
  3. I think in theory you can dynamically stub it, but a lot of the problems I had before the isolation feature existed were caused by a terminal or Docker latching onto the device and never letting go (which causes the VM to error with "busy", if I remember correctly). I would definitely recommend keeping it isolated and doing whatever work you need inside the VM until SR-IOV goes a bit more mainstream, which as I understand it will resolve the issue by allowing vGPU slices to be allocated.
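     For anyone on an older build without the isolation checkbox, the manual way to stub the card at boot is a kernel command-line bind to vfio-pci. A sketch only; the `10de:...` vendor:device IDs below are placeholders, and you'd get your own from `lspci -nn` (the GPU and its HDMI audio function both need listing):

     ```
     # /boot/syslinux/syslinux.cfg (sketch; placeholder IDs)
     label Unraid OS
       kernel /bzimage
       append vfio-pci.ids=10de:1e87,10de:10f8 initrd=/bzroot
     ```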
  4. For me on up-to-date UnRaid (6.9.2) it was as simple as the following (older versions needed more workarounds):
     1. Isolate the card in Tools -> System Devices, then reboot.
     2. Set up the VM as you want, but switch graphics to the card and make sure to tick the NVIDIA devices under "Other PCI Devices", then save.
     3. To avoid the Ryzen 3000 VM bug (due to be fixed soon, I think), reopen the config and switch to XML mode: change "host-passthrough" to "host-model" and delete the cache line two lines after it.
     4. Save and start, and you should be good.
     Let me know how you get on.
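     The host-model workaround mentioned above looks roughly like this in the VM's XML; a sketch assuming a typical Unraid-generated `<cpu>` block (your topology line will differ):

     ```xml
     <!-- Before (Unraid default): -->
     <cpu mode='host-passthrough' check='none'>
       <topology sockets='1' cores='4' threads='2'/>
       <cache mode='passthrough'/>
     </cpu>

     <!-- After the Ryzen 3000 workaround: -->
     <cpu mode='host-model' check='none'>
       <topology sockets='1' cores='4' threads='2'/>
     </cpu>
     ```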
  5. On a fresh reinstall I can confirm the template it picked up had :latest, so I have no idea why I got an old 2020 build when I first downloaded it. My best guess is some cursed CA caching or something, but it doesn't seem to be happening any more, so I guess it's fixed 😅? Did you have a chance to look into the warning about exit nodes I mentioned above? I'm definitely still getting it on the container but not on my Raspberry Pi, yet the subnet and exit-route features are 100% working, so I'm not sure what's causing the warning.
  6. Also, please can you see if it's possible to support https://tailscale.com/kb/1103/exit-nodes? If I try to enable it, it informs me that IP forwarding is disabled and directs me to https://tailscale.com/kb/1104/enable-ip-forwarding. Thanks for the container 🐳❤️! EDIT: Huh, in actual testing it seems to work fine...? Tailscale bug perhaps?
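     For reference, the settings that KB article asks for amount to two kernel sysctls. A sketch of the host-side version; inside a Docker container they would instead need to be set at run time (for example via `--sysctl`), which may be why the warning can be a false positive:

     ```
     # /etc/sysctl.d/99-tailscale.conf -- sketch, per the linked KB article
     net.ipv4.ip_forward = 1
     net.ipv6.conf.all.forwarding = 1
     ```

     On a normal host these take effect after `sysctl -p /etc/sysctl.d/99-tailscale.conf`.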
  7. Hi Dean, can you double check the template is set to use :latest? I did a fresh install from community apps today and it defaulted to a versioned tag (which is quite out of date at this point).
  8. That's what I've ended up doing, but why is it that the tunnel does not come back up even if "autostart" is on?
  9. Hi there, all. Is it expected that adding a new peer to a tunnel will disable the tunnel when Apply is pressed? I've ended up in a semi-locked-out situation multiple times after adding a peer and hitting Apply from another peer on an active tunnel.
  10. Thanks for the breakdown, and don't worry about the breakage - I was worried it was *me* doing something dumb 😅! Strangely the issue hasn't re-occurred since. If I notice the same symptoms though, at least I know what the cause is this time 🙂
  11. When I rebooted, the issue re-occurred a while later.
  12. Conclusion of the problem: It was caused by a faulty update of the Parity Check Tuning plugin. Removing the plugin and rebooting resolves the issue.
  13. I've just come across this thread too. I think quite a few people will have their plugins on auto-update and suddenly find themselves unable to SSH in as a result of this; that's what happened to me.
  14. Ah, I've just seen this thread, which I somehow overlooked earlier; others with this problem are using it too. It might be related to a plugin update:
  15. @GaLaReN, I've got a working one with an RTX 2080 passed through