Alexstrasza

Everything posted by Alexstrasza

  1. It should just work, because I believe UnRaid has IPv4 forwarding on by default (it was for me, and it worked out of the box). Try double-checking with https://tailscale.com/kb/1104/enable-ip-forwarding/ (sketch of the relevant sysctls below).
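     In case it's useful, here's roughly what that KB page boils down to; a quick sketch of the sysctls involved, so double-check against the Tailscale doc:

         # Check whether forwarding is already on (1 = enabled)
         sysctl net.ipv4.ip_forward
         sysctl net.ipv6.conf.all.forwarding

         # Enable for the running kernel (this alone won't persist across reboots)
         sysctl -w net.ipv4.ip_forward=1
         sysctl -w net.ipv6.conf.all.forwarding=1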
  2. It's very strange that no image is output yet the whole VM still works and is usable. I'm afraid I'm a bit stumped on this one. The only thing I can think of is that perhaps you've left the VNC screen enabled, and the VM is set up not to output to the GPU screen?
  3. I'll be really interested to hear your results. My VM is primarily for gaming, but I also have it mining in the background. I'm not seeing any slow speeds, but to be fair I don't have a non-VM setup to compare it to.
  4. I think in theory you can dynamically stub it, but a lot of the problems I had before the isolation feature existed were caused by a terminal or Docker container latching onto the card and never letting go (which causes the VM to error with "busy", if I remember correctly). I would definitely recommend keeping it isolated and just doing whatever work you need inside the VM until SR-IOV goes a bit more mainstream, which, as I understand it, will resolve the issue by allowing vGPU slices to be allocated.
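     For reference, the older manual way of stubbing (before the System Devices isolation checkbox) was to bind the card to vfio-pci on the kernel command line. A sketch with placeholder IDs; substitute the vendor:device pairs for your own GPU and its audio function as listed in System Devices:

         # /boot/syslinux/syslinux.cfg (the IDs below are placeholders, not values to copy)
         append vfio-pci.ids=10de:1e87,10de:10f8 initrd=/bzroot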
  5. For me on up-to-date UnRaid (6.9.2) it was as simple as the following (there were more workarounds needed on the older versions):
     1. Isolate the card in Tools -> System Devices, then reboot.
     2. Set up the VM as you want, except: switch graphics to the card and make sure to tick the NVIDIA devices under "Other PCI Devices". Save.
     3. To avoid the Ryzen 3000 VM bug (due to be fixed soon, I think), reopen the config and switch to XML mode. Change "host-passthrough" to "host-model" and delete the cache line two lines after that (see the sketch below).
     4. Save and start, and you should be good.
     Let me know how you get on.
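     To illustrate step 3, the <cpu> block in the VM's XML looks something like this (the topology values here are placeholders for whatever your VM uses):

         <!-- Before -->
         <cpu mode='host-passthrough' check='none'>
           <topology sockets='1' cores='4' threads='2'/>
           <cache mode='passthrough'/>
         </cpu>

         <!-- After: mode swapped, <cache> line deleted -->
         <cpu mode='host-model' check='none'>
           <topology sockets='1' cores='4' threads='2'/>
         </cpu>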
  6. On a fresh reinstall I can confirm the template picked up :latest, so I have no idea why I got an old 2020 build when I first downloaded. My best guess is some cursed CA caching or something, but it doesn't seem to be happening any more, so I guess it's fixed 😅? Did you have a chance to look into the warning about exit nodes I mentioned above? I'm definitely still getting it on the container but not on my Raspberry Pi, yet the subnet and exit route features are 100% working, so I'm not sure what's causing the warning. UPDATE: This turned out to be because I had IPv6 forwarding off on my host.
  7. Also, please can you see if it's possible to support https://tailscale.com/kb/1103/exit-nodes? If I try to enable it, it informs me that IP forwarding is disabled and directs me to https://tailscale.com/kb/1104/enable-ip-forwarding. Thanks for the container 🐳❤️! EDIT: Huh, in actual testing it seems to work fine...? Tailscale bug perhaps?
  8. Hi Dean, can you double check the template is set to use :latest? I did a fresh install from community apps today and it defaulted to a versioned tag (which is quite out of date at this point).
  9. That's what I've ended up doing, but why doesn't the tunnel come back up even when "autostart" is on?
  10. Hi there all. Is it expected that adding a new peer to a tunnel disables the tunnel when Apply is pressed? I've ended up semi-locked out multiple times when adding a peer and hitting Apply while connected via another peer on the active tunnel.
  11. Thanks for the breakdown, and don't worry about the breakage - I was worried it was *me* doing something dumb 😅! Strangely the issue hasn't recurred since. If I notice the same symptoms again, at least I'll know the cause this time 🙂
  12. After I rebooted, the issue recurred a while later.
  13. Conclusion of the problem: It was caused by a faulty update of the Parity Check Tuning plugin. Removing the plugin and rebooting resolves the issue.
  14. I've just come across this thread too. I think quite a few people will have their plugins on auto-update and will suddenly find themselves unable to SSH in as a result; that's what happened to me.
  15. Ah, I've just found this thread, which I somehow overlooked earlier, where others are discussing this problem. It might be related to a plugin update:
  16. Hi there all, I've recently updated to 6.9.0-rc2, but this was occurring on the last beta too, so I'm pretty sure it's not related to the UnRaid version itself. Something keeps changing the ownership of my root directory (/, not /root) from the normal root:root to nobody:users. This seems to upset SSH's strict ownership checks, preventing me from using a public key to log in. Does anyone know what might be causing this? A few observations:
     • I've seen the ownership change to both nobody:users and my "wolf" user.
     • No Docker containers have the root directory mounted.
     • Ownership is correct at first boot, and for a random amount of time afterwards.
     • Running "chown root:root /" does not fix the problem; SSH still complains. Is a chmod needed as well? (sketch below)
     The SSH error: "Authentication refused: bad ownership or modes for directory /". Any help would be much appreciated.
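     For context, my understanding is that sshd's StrictModes checks both the owner and the write permissions of every directory in the path, so a chmod may indeed be needed alongside the chown. A sketch of what should satisfy it:

         # sshd refuses pubkey auth if / is not root-owned or is group/world-writable
         chown root:root /
         chmod 755 /
         # and the usual suspects are worth checking too:
         ls -ld / /root /root/.ssh /root/.ssh/authorized_keys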
  17. @GaLaReN, I've got a working one with an RTX 2080 passed through
  18. But can't this be done after the remove operation, to save time waiting for the rebalance across the "removed" SSD? I get that, but I don't understand why the command line calls it twice, the second time with "dconvert=single,soft" instead.
  19. I've found the original command line run by UnRaid:

     Oct 4 01:45:47 Sector5 emhttpd: shcmd (41): /sbin/btrfs balance start -f -dconvert=single -mconvert=single /mnt/cache && /sbin/btrfs balance start -f -dconvert=single,soft -mconvert=single,soft /mnt/cache && /sbin/btrfs device delete /dev/nvme0n1p1 /mnt/cache &

     That would seem to match up with what I've observed:
     1. Convert to single (striped?) mode
     2. Convert again?
     3. Then finally copy the data off the SSD I want to delete
     So I'm still confused as to why UnRaid chooses to do steps 1 and 2 rather than skipping straight to step 3.
  20. Hi all, just a general question about btrfs pools. I recently wanted to remove an SSD from my raid1 pool, so I unassigned it and then restarted the array. To my surprise, instead of the pool starting up in some degraded mode, it began a mandatory rebalance with the unassigned SSD to switch the drives to "single" operation, after which I assume it will remove it. According to this article, that seems to be normal for removing a device. So my question is: when going from a raid1 with two devices to a raid1 with one (degraded, but otherwise the same), why does a rebalance have to occur? If I were to simply remove the SSD physically, the pool would keep working, so why does it need to rebalance when the drive is unassigned instead? I'm running 6.9.0-beta29, but I'm about to downgrade to beta25 because of the vfio drive passthrough issues.
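     For anyone following along, these are the standard btrfs commands for watching what the pool is actually doing during the process (nothing UnRaid-specific; assumes the pool is mounted at /mnt/cache):

         # Show the current Data/Metadata profiles (raid1 vs single)
         btrfs filesystem df /mnt/cache

         # Watch the conversion/rebalance progress
         btrfs balance status /mnt/cache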
  21. Thanks for the information, that makes a lot more sense to me now.
  22. When these changes hit the release candidate, will we be automatically prompted to re-create our cache pools if needed? Or is this fix applied without needing to re-create the pool? I'm a little confused.
  23. Hi @martinf, I'm using two of these Crucial 1TB SSDs: https://uk.pcpartpicker.com/product/pxKcCJ/crucial-p1-1tb-m2-2280-solid-state-drive-ct1000p1ssd8. The read speed is around 1300MB/s according to DiskSpeed; unfortunately it doesn't test write speed (rough manual test below).
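     As a rough stand-in for the missing write benchmark, a quick sequential test can be run from the terminal. A sketch assuming the pool is mounted at /mnt/cache; note the numbers will be optimistic if compression is enabled, since it writes zeros:

         # Write 4GiB directly to the pool, bypassing the page cache
         dd if=/dev/zero of=/mnt/cache/ddtest bs=1M count=4096 oflag=direct status=progress
         rm /mnt/cache/ddtest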
  24. Did this debugging get added? I'm still not having drive images download automatically.