cobhc

Members
  • Posts: 105

Everything posted by cobhc

  1. Hi all, I'm using Unraid's built-in WireGuard to tunnel the traffic of a few different dockers. The problem, however, is that I'm now also using my router to provide remote access to the network (also through WireGuard), and I can't access the WebUI of the dockers that use the above tunnel (wg0) as their network. Is this something I can configure in the tunnel on Unraid, or am I out of luck doing it this way?
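     My working theory is that replies from those containers to the router's WireGuard clients go out the wg0 VPN tunnel instead of back over the LAN, so the remote subnet never gets an answer. Conceptually, the kind of route I think is missing looks like the line below; the 10.253.0.0/24 subnet, the 192.168.2.1 gateway and the br0 interface are just placeholders for my setup, not anything taken from Unraid's defaults:

         # hypothetical fix: send traffic for the router's remote WireGuard clients
         # (10.253.0.0/24 here) back via the LAN gateway rather than out the VPN tunnel
         ip route add 10.253.0.0/24 via 192.168.2.1 dev br0

     Whether Unraid exposes something equivalent as a setting on the tunnel itself is exactly what I'm asking about.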
  2. I believe that's because the update process checks that all plugins are up to date before asking you to reboot, in case a plugin needs an update to be compatible with that version of Unraid. Not sure if this is something you can disable.
  3. Updated from rc5 without issue so far.
  4. To be clear, I wasn't suggesting it was to do with 6.10, just specifying which version of Unraid I'm running. I tried the solution shown in Q.2 here: https://github.com/binhex/documentation/blob/master/docker/faq/vpn.md My supervisord.log is attached, and below is my docker run command.

     root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name='binhex-qbittorrentvpn' --net='bridge' --privileged=true -e TZ="Europe/London" -e HOST_OS="Unraid" -e HOST_HOSTNAME="DarthUnraider" -e HOST_CONTAINERNAME="binhex-qbittorrentvpn" -e 'VPN_ENABLED'='yes' -e 'VPN_USER'='redacted' -e 'VPN_PASS'='redacted' -e 'VPN_PROV'='pia' -e 'VPN_CLIENT'='wireguard' -e 'VPN_OPTIONS'='' -e 'STRICT_PORT_FORWARD'='yes' -e 'ENABLE_PRIVOXY'='yes' -e 'WEBUI_PORT'='8080' -e 'LAN_NETWORK'='192.168.2.0/24' -e 'NAME_SERVERS'='84.200.69.80,37.235.1.174,1.1.1.1,37.235.1.177,84.200.70.40,1.0.0.1' -e 'VPN_INPUT_PORTS'='' -e 'VPN_OUTPUT_PORTS'='' -e 'DEBUG'='false' -e 'UMASK'='000' -e 'PUID'='99' -e 'PGID'='100' -l net.unraid.docker.managed=dockerman -l net.unraid.docker.webui='http://[IP]:[PORT:8080]/' -l net.unraid.docker.icon='https://raw.githubusercontent.com/binhex/docker-templates/master/binhex/images/qbittorrent-icon.png' -p '6881:6881/tcp' -p '6881:6881/udp' -p '8080:8080/tcp' -p '8118:8118/tcp' -v '/mnt/user/Data/Downloads/Torrents/':'/data/Downloads/Torrents/':'rw' -v '/mnt/user/appdata/binhex-qbittorrentvpn':'/config':'rw' --sysctl="net.ipv4.conf.all.src_valid_mark=1" 'binhex/arch-qbittorrentvpn'
     4550ced7ab48e96cb26f6a9bc6b7704191fbcfdab915ac54b1e02d4c7571e47a
     The command finished successfully!

     supervisord.log
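     In case it helps with diagnosing this, the only checks I've run so far are the obvious ones from inside the container: confirming there's a route back to the LAN_NETWORK subnet and that something is listening on the WebUI port. These are just standard docker exec calls against my container name above, assuming the image ships iproute2 (which it appears to, since the VPN scripts use it):

         # does the container have a route back to 192.168.2.0/24 (my LAN_NETWORK)?
         docker exec -it binhex-qbittorrentvpn ip route
         # is the WebUI actually listening on 8080 inside the container?
         docker exec -it binhex-qbittorrentvpn ss -lnt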
  5. No, to be honest I didn't try this before updating to RC3.
  6. Hi, I've tried the instructions relating to accessing the WebUI from outside the local network and it still doesn't appear to work. Any ideas? I'm on 6.10.0-RC3.
  7. Reported this here, but I still can't pass my GPU through to a VM without a server crash on RC3; only this time it's on shutdown and not on boot.
  8. Following on from my report here about RC2, VMs at least now boot in RC3 with my GPU passed through, but shutting the VM down results in the server locking up and I can't even SSH in. Diagnostics attached. Hopefully this can be looked into; otherwise it looks like I'm destined to stay on 6.10.0-RC1. darthunraider-diagnostics-20220311-0636.zip
  9. But it works fine on RC1 without having to bind my GPU, meaning there's a regression there. Also, not binding my GPU allows me to use it in dockers, etc. when the VM isn't running, which is a feature I'd rather not (and shouldn't have to) give up.
  10. Guess I'm going back to RC1 till another RC or the 6.10 official release comes out, as I can't use any VM with passthrough on this version.
  11. Guess I'm alone in this? Any ideas before I go back to RC1 again?
  12. Hi, after experiencing this before and rolling back to 6.10-RC1, where things run fine, I thought I'd check this out again. Trying to pass a GPU through in either an existing VM or a new one (with a fresh vdisk and no boot media inserted) results in no video output, and clicking on the VMs tab causes the whole WebUI to crash. Trying "virsh shutdown [vm name]" in a terminal just locks the terminal session up. I'm not binding my GPU to VFIO, and I've tried both Q35 and i440fx 6.0 and 6.1 with the same outcome. This happens regardless of whether I give it a vbios or not. All BIOS settings are the same and all XML settings are the same. Any help or ideas would be appreciated. The GPU is an AMD 5700XT, which works fine in 6.10.0-RC1 with the AMD Vendor Reset plugin, etc. Attached diagnostics after having to force a shutdown. Edit: Now with RC3, VMs are booting but locking the server up on shutdown. Diagnostics from the thread here added. darthunraider-diagnostics-20220311-0636.zip
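     For anyone comparing notes, the only state I've been able to capture before things lock up is which kernel driver owns the card and what libvirt thinks the VM is doing. Nothing clever, just the standard commands below ("Windows 10" is a placeholder for whatever the VM is called):

         # which kernel driver currently owns the GPU (amdgpu vs vfio-pci)?
         lspci -nnk | grep -A 3 VGA
         # what libvirt thinks the VM state is, plus a harder stop than 'virsh shutdown'
         virsh list --all
         virsh destroy "Windows 10"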
  13. Can we basically ignore the mcelog error on AMD CPUs if we're not going to use mce?
  14. Sorry for the delay, that works fine for me. Thanks!
  15. That's exactly what I've done. I didn't have a huge number of trackers set up, so it didn't take long, and it appears to work well and seems faster than Jackett to me.
  16. I wonder if it's as simple as the "bluez-firmware: version 1.2" upgrade in 6.10.0-rc1 causing the issue, as NMBluezManager is a different version in the two logs you've attached?
  17. Great, just updated and all good. Thank you!
  18. Thanks a lot for this! Any chance of the container being updated to the version that came out 7th August? Thanks.
  19. I'm just passing it through within the VM Manager for each VM; it's not bound to VFIO or anything, and yeah, it worked fine previously with the WiFi 6 part disabled.
  20. That's not included either. Thanks for the suggestion, by the way!
  21. No, hciconfig doesn't seem to be included. That sounds like a logical explanation though.
  22. Hi, on 6.9.2 I was passing through the onboard Bluetooth controller on my Asus B550 Strix-E and it was working fine; however, since updating to 6.10.0-RC1 I'm no longer able to do so. It works fine in a bare-metal Windows 10 install, but throws up a Code 10 in a Windows 10 VM and doesn't let me enable Bluetooth in a PopOS VM. It's picked up as "Intel Corp. (8087:0029)" when trying to pass it through and is twinned with the onboard WiFi 6 (which I have disabled, as I don't use it). All other settings are the same as they were before updating. Diagnostics attached. darthunraider-diagnostics-20210811-1629.zip
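     To rule out the obvious, the only host-side checks I know to run are confirming the controller still enumerates and seeing whether a host driver (btusb) has already claimed it before the VM grabs it. Both are stock usbutils commands, assuming lsusb is available on the Unraid side:

         # is the controller visible to the host at all?
         lsusb | grep -i 8087:0029
         # the tree view shows which host driver, if any, currently owns it
         lsusb -t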
  23. Upgraded fine, apart from the fact that I now can't pass through my onboard Bluetooth controller. It works fine in a bare-metal Windows 10 install, but throws up a Code 10 error in the Windows 10 VM and doesn't even let me enable Bluetooth in a PopOS VM. Edit: Created a separate thread here.
  24. Nope, forgot I posted this to be honest. I've just learnt to ignore the false reading I guess.