
Alexstrasza

Members
  • Posts: 65
  • Joined
  • Last visited

Posts posted by Alexstrasza

  1. 15 hours ago, Fillwe said:

     

    I've been trying to get subnet relay to work and have added a flag with --advertise-routes=192.168.5.0/24 (this is the subnet my Unraid box is on). It shows up in the Tailscale dashboard, but after I enabled it I can't ping any of my devices on that subnet. Did you have to change any other settings in Unraid to get it working?

     

     

    It should just work; I believe UnRaid has IPv4 forwarding on by default (it was for me). Try double-checking with https://tailscale.com/kb/1104/enable-ip-forwarding/
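
    If it isn't enabled, that KB essentially boils down to turning on the forwarding sysctls on the host. A rough sketch (how you persist it on UnRaid, e.g. via the go file, may differ):

    # Check the current values
    sysctl net.ipv4.ip_forward net.ipv6.conf.all.forwarding
    # Enable both for the running system
    sysctl -w net.ipv4.ip_forward=1
    sysctl -w net.ipv6.conf.all.forwarding=1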

  2. 1 hour ago, Fresh said:

    Unfortunately, not binding it to VFIO at start causes virtman to hang and ultimately the entire server to crash :( Binding to VFIO makes the VM start up, but no image is output.

    I can happily tell you, though, that mining performance hasn't dropped a single MH/s mining ETH compared to bare metal!

     

    It's very strange that no image is output yet the VM itself still works, mining included. I'm afraid I'm a bit stumped on this one. The only thing I can think of is that perhaps you've left the VNC display enabled and the VM is set up not to output to the GPU?

  3. 1 minute ago, Fresh said:

    Right, my Docker container is an ETH miner, so I want to get the most performance out of it, which I can't get through the VM. Will try with/without isolation and report back what my findings are :)

     

    I'll be really interested to see your results. My VM is primarily for gaming, but I also have it mining in the background. I'm not seeing any slow speeds, though to be fair I don't have a bare-metal setup to compare against.

  4. 4 minutes ago, Fresh said:

    Thanks, will try later today. Is there not an option without isolating the card? It's not the primary card, but I also use it for a Docker container when the VM isn't running. Docker works great though.


    In theory you can dynamically stub it, but a lot of the problems I had before the isolation feature existed were caused by a terminal or a Docker container latching onto the card and never letting go (which causes the VM to error with "busy", if I remember correctly). I'd definitely recommend keeping it isolated and doing whatever work you need inside the VM until SR-IOV goes a bit more mainstream, which as I understand it will resolve the issue by allowing vGPU slices to be allocated.
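
    For anyone curious, "dynamically stubbing" generally means rebinding the card to vfio-pci at runtime, roughly like this (the PCI address and vendor:device IDs are placeholders; check yours with lspci -nn):

    # Detach the GPU from whatever driver currently holds it
    echo 0000:0a:00.0 > /sys/bus/pci/devices/0000:0a:00.0/driver/unbind
    # Have vfio-pci claim devices with this vendor:device ID
    modprobe vfio-pci
    echo 10de 1b81 > /sys/bus/pci/drivers/vfio-pci/new_id

    It only works if nothing on the host still has the device open, which is exactly the "busy" failure described above.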

    • Like 1
  5. 3 hours ago, Fresh said:

    Sorry for resurrecting this, but how did you? I'm running an R5 2600 and can't pass through my GTX 1070. The VM manager is completely unresponsive after starting the VM. No video, nothing.

     

    For me, on up-to-date UnRaid (6.9.2) (more workarounds were needed on older versions), it was as simple as:

     

    1. Isolate the card in Tools -> System Devices (reboot).
    2. Set up the VM as you want, except switch the graphics to the card and make sure to tick the NVIDIA devices under "Other PCI Devices", then save.
    3. To avoid the Ryzen 3000 VM bug (due to be fixed soon, I think), reopen the config and switch to XML mode.
    4. Change "host-passthrough" to "host-model" and delete the cache line two lines below it (rough sketch after these steps).
    5. Save, start, and you should be good :)
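
    For step 4, the relevant bit of the XML looks roughly like this (surrounding elements left out; yours will differ):

    <!-- before -->
    <cpu mode='host-passthrough' check='none'>
      <cache mode='passthrough'/>
    </cpu>
    <!-- after: mode switched, cache line deleted, everything else untouched -->
    <cpu mode='host-model' check='none'>
    </cpu>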

    Let me know how you get on.

    • Like 1
  6. On 3/23/2021 at 9:46 AM, dsmith44 said:

    Can you totally remove the image and try?

     

    On a fresh reinstall I can confirm the template that was picked up had :latest, so I have no idea why I got an old 2020 build when I first downloaded it. My best guess is some cursed CA caching or something, but it doesn't seem to be happening any more, so I guess it's fixed 😅?

     

    Did you have a chance to look into the warning about exit nodes I mentioned above? I'm definitely still getting it on the container but not on my Raspberry Pi, even though the subnet and exit-node features are working 100%, so I'm not sure what's causing the warning.

     

    UPDATE: This turned out to be because I had IPv6 forwarding off on my host.

     


  7. On 10/19/2020 at 6:30 PM, dsmith44 said:

    A couple of updates.

     

    I have changed the template to pull :latest rather than versioned builds; Tailscale itself is developing more slowly now, so this feels appropriate.

    Please change the 'Repository' to deasmi/unraid-tailscale:latest to use this.

     

    Secondly, I've merged in support for passing flags to tailscale.

    If you want to use this, define a variable UP_FLAGS.

    Its contents will be appended to the command that invokes tailscale.

     

    Please note that if you are using UP_FLAGS I cannot provide support until it is removed, but I recognise some people may want to try subnet routing and the like.

    Thanks to @shayne for this.

     

    Dean

     

    Hi Dean, can you double-check the template is set to use :latest? I did a fresh install from Community Apps today and it defaulted to a versioned tag (which is quite out of date at this point).
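
    For anyone else wanting to try the UP_FLAGS variable described above, the flow is roughly this (the exact entrypoint of deasmi/unraid-tailscale may differ; the route below is just an example):

    # In the container template, add a Variable named UP_FLAGS, e.g.
    UP_FLAGS="--advertise-routes=192.168.5.0/24 --advertise-exit-node"
    # Inside the container this gets appended to the tailscale invocation,
    # conceptually something like:
    tailscale up ${UP_FLAGS}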

  8. 1 hour ago, ljm42 said:

     

    The tunnel has to be restarted when you add a peer. If you are connected to the tunnel at the time you do this, it goes down but does not come back up. If this is a common thing you need to do I would recommend creating a backup tunnel that you connect to when modifying the main tunnel.

     

    That's what I've ended up doing, but why doesn't the tunnel come back up even when "autostart" is on?

  9. Hi all. Is it expected that adding a new peer to a tunnel will disable the tunnel when Apply is pressed? I've ended up semi-locked-out multiple times after adding a peer and hitting Apply while connected via another peer on an active tunnel.

  10. 5 hours ago, itimpi said:

     

     

    It took me a while, but I think I got to the root cause. I was surprised it had suddenly happened when I had not changed anything in the package-creation area (for which I run a script). I found it depended on which of my various unRaid environments I used to run the package creation :( It still surprises me that the effect was to change the ownership of existing folders rather than just the new ones being added. The reason the problem arose in the first place might be of interest to other developers in case they encounter a similar scenario.

    • I have a Dropbox container using a Dropbox share on my main unRaid server which holds the source of the plugin.
    • I use my Windows machine (which is also running Dropbox) as the main editing environment as I have better tools there.
    • I use Dropbox to synchronise the source between all my unRaid environments so any change made in Windows appears in my unRaid environments within a few seconds.
    • I was mounting this share on the other environments using an Unassigned Devices SMB share.
    • My script that builds the package specifies that the ownership of the files should be root:root and if the package is built on the main server this is how the ownership ends up.
    • On the environments using the SMB share the ownership was showing as nobody:users and the command to change it to root:root was being silently ignored. 

    By changing the Dropbox share to an NFS share, the build script CAN change the ownership to the expected values.

     

     

    Thanks for the breakdown, and don't worry about the breakage - I was worried it was *me* doing something dumb 😅!
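
    For anyone hitting the same thing, the symptom boils down to a chown that silently does nothing on a CIFS/SMB mount but sticks on a local or NFS path. A quick check (paths hypothetical):

    touch /mnt/remotes/dropbox_smb/owner-test
    chown root:root /mnt/remotes/dropbox_smb/owner-test
    ls -l /mnt/remotes/dropbox_smb/owner-test   # may still show nobody:users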

     

    5 hours ago, weirdcrap said:

    Interesting, you may have more than one plugin to blame here then. As Ken-ji mentioned in that other thread, you will probably need to go through the plugins one by one to figure out which one is changing your ownership and ask the author to fix it.

     

    Strangely, the issue hasn't recurred since. If I notice the same symptoms again, at least I know what the cause is this time 🙂

     

  11. Hi all, I've recently updated to 6.9.0-rc2; however, this was already occurring on the last beta, so I'm pretty sure it's not related to the UnRaid version itself.

     

    Something keeps changing the ownership of my root directory (/, not /root) to nobody:users from the normal root:root. This seems to upset SSH's strict ownership checks, preventing me from using a public key to log in.


     

    Does anyone know what might be causing this?

     

    • I've seen the ownership change to both this and my "wolf" user.
    • No Docker containers have the root directory mounted
    • Ownership is correct at first boot, and for a random amount of time afterwards
    • using "chown root:root /" does not fix the problem, SSH still complains - A chmod needed?

     

    The SSH error: "Authentication refused: bad ownership or modes for directory /"
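
    For reference, my understanding is that sshd's StrictModes check walks up the directory path and requires each directory it inspects (here apparently / itself) to be owned by root and not group- or world-writable, so the full reset would be roughly:

    chown root:root /
    chmod 755 /      # clear any group/world write bits
    ls -ld /         # expect: drwxr-xr-x ... root root /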

     

    Any help would be much appreciated.

  12. 9 hours ago, JorgeB said:

    To make the pool not degraded.

    But can't this be done after the remove operation, to save time waiting for the rebalance across the "removed" SSD?

    9 hours ago, JorgeB said:

    These are only one operation, balance is what converts the pools to single mode.

    I get that, but I don't understand why the command line calls it twice, the second time with "dconvert=single,soft" instead.

  13. I've found the original command-line run by UnRaid:

    Oct  4 01:45:47 Sector5 emhttpd: shcmd (41): /sbin/btrfs balance start -f -dconvert=single -mconvert=single /mnt/cache && /sbin/btrfs balance start -f -dconvert=single,soft -mconvert=single,soft /mnt/cache && /sbin/btrfs device delete /dev/nvme0n1p1 /mnt/cache &

    That would seem to match up with what I've observed:

    1. Convert to single (striped?) mode
    2. Convert again?
    3. Then finally copy the data off the SSD I want to delete.

    So I'm still confused as to why UnRaid chooses to do steps 1 & 2 rather than skipping to step 3.
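
    For reference, the progress of each conversion step can be watched with read-only btrfs commands like these (run against the same pool as in the log above):

    # How much data is still in raid1 chunks vs already-converted single chunks
    btrfs filesystem usage /mnt/cache
    # Whether a balance is still running and how far along it is
    btrfs balance status /mnt/cache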

  14. Hi all, just a general question about btrfs pools. I recently wanted to remove an SSD from my raid1 pool, so I unassigned it and then restarted the array. To my surprise, instead of the pool starting up in some degraded mode, it seemed to begin a mandatory rebalance involving the unassigned SSD, switching the pool to "single" operation, after which I assume it will remove the drive.

     

    According to this article, that seems to be normal for removing a device. So my question is: going from a raid1 with two devices to a raid1 with one (degraded, but still the same data), why does a rebalance have to occur? If I were to simply remove the SSD physically, the pool would keep working, so why is a rebalance needed when the drive is removed gracefully instead?

     

    I'm running 6.9.0-beta29, but about to downgrade to beta25 because of the vfio drive passthrough issues.

     


  15. A way to set the Docker stop timeout (normally specified with the -t parameter of the `docker stop` command) per container rather than globally. Use case: containers which need extra time to stop in order to flush databases and finish other shutdown tasks (for example, the Storj container), without increasing the overall system stop time when a different container misbehaves. This should also apply to the "Stop" command in the GUI, which I'm not sure currently obeys the system default.
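
    In plain Docker terms this is roughly what the request boils down to (container names are just examples):

    # Give a database-heavy container plenty of time to shut down cleanly...
    docker stop -t 300 storagenode
    # ...while a misbehaving container is still killed quickly
    docker stop -t 10 flaky-app
    # docker run also accepts --stop-timeout to bake a per-container default in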

     

    See discussion thread: 

     
