Posts posted by cobhc
-
17 hours ago, 1812 said:
why does it keep installing the newest Nvidia driver after every update? I use a gt for Plex and it always rolls to the latest vs keeping me on the 470.129.06 which is the one that works for this card.
I believe that's because the update process checks that all plugins are up to date before asking you to reboot, in case a plugin update is needed for compatibility with that version of Unraid. I'm not sure if this is something you can disable.
-
3 hours ago, wgstarks said:
Looks like you haven’t added all your lan networks. IIRC you need to add the network you are connecting from separated by a comma.
Great, that works now! Thank you!
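For anyone else who hits this: a minimal sketch of what fixed it for me. LAN_NETWORK takes a comma-separated list of CIDR ranges with no spaces; 192.168.2.0/24 is my home LAN from the docker run command above, while 192.168.1.0/24 stands in for whatever subnet you connect from (swap in your own).

```shell
# Comma-separated CIDR list, no spaces; both subnets are examples.
LAN_NETWORK='192.168.2.0/24,192.168.1.0/24'
echo "$LAN_NETWORK"
```

In the Unraid template this is simply the value you put in the LAN_NETWORK variable on the container's edit page.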
-
9 hours ago, wgstarks said:
So it may not have anything to do with 6.10. Why don't you post a brief outline of what you have tried along with your docker run command and supervisord log. Be sure to redact user/password from both.
To be clear, I wasn't suggesting it was to do with 6.10, just specifying which version of Unraid I'm running.
I tried the solution shown under Q2 here:
https://github.com/binhex/documentation/blob/master/docker/faq/vpn.md
My supervisord.log is attached, and below is my docker run command.
root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d \
  --name='binhex-qbittorrentvpn' \
  --net='bridge' \
  --privileged=true \
  -e TZ="Europe/London" \
  -e HOST_OS="Unraid" \
  -e HOST_HOSTNAME="DarthUnraider" \
  -e HOST_CONTAINERNAME="binhex-qbittorrentvpn" \
  -e 'VPN_ENABLED'='yes' \
  -e 'VPN_USER'='redacted' \
  -e 'VPN_PASS'='redacted' \
  -e 'VPN_PROV'='pia' \
  -e 'VPN_CLIENT'='wireguard' \
  -e 'VPN_OPTIONS'='' \
  -e 'STRICT_PORT_FORWARD'='yes' \
  -e 'ENABLE_PRIVOXY'='yes' \
  -e 'WEBUI_PORT'='8080' \
  -e 'LAN_NETWORK'='192.168.2.0/24' \
  -e 'NAME_SERVERS'='84.200.69.80,37.235.1.174,1.1.1.1,37.235.1.177,84.200.70.40,1.0.0.1' \
  -e 'VPN_INPUT_PORTS'='' \
  -e 'VPN_OUTPUT_PORTS'='' \
  -e 'DEBUG'='false' \
  -e 'UMASK'='000' \
  -e 'PUID'='99' \
  -e 'PGID'='100' \
  -l net.unraid.docker.managed=dockerman \
  -l net.unraid.docker.webui='http://[IP]:[PORT:8080]/' \
  -l net.unraid.docker.icon='https://raw.githubusercontent.com/binhex/docker-templates/master/binhex/images/qbittorrent-icon.png' \
  -p '6881:6881/tcp' -p '6881:6881/udp' -p '8080:8080/tcp' -p '8118:8118/tcp' \
  -v '/mnt/user/Data/Downloads/Torrents/':'/data/Downloads/Torrents/':'rw' \
  -v '/mnt/user/appdata/binhex-qbittorrentvpn':'/config':'rw' \
  --sysctl="net.ipv4.conf.all.src_valid_mark=1" \
  'binhex/arch-qbittorrentvpn'
4550ced7ab48e96cb26f6a9bc6b7704191fbcfdab915ac54b1e02d4c7571e47a
The command finished successfully!
-
41 minutes ago, wgstarks said:
So it was working before you updated to RC3 but now has stopped?
No, to be honest I didn't try this before updating to RC3.
-
Hi, I've tried the instructions for accessing the WebUI from outside the local network, and it still doesn't appear to work. Any ideas? I'm on 6.10.0-RC3.
-
On 8/25/2021 at 2:33 AM, tjb_altf4 said:
Any Jackett users switched over yet ?
Thinking about giving Prowlarr a go
That's exactly what I have done. I didn't have a huge number of trackers set up, so it didn't take long, and it appears to work well. It also seems faster than Jackett to me.
-
20 minutes ago, binhex said:
New image is currently building do a check for updates in about 30 minutes from now
Sent from my CLT-L09 using Tapatalk
Great, just updated and all good. Thank you!
-
Thanks a lot for this! Any chance of the container being updated to the version that came out on 7th August?
Thanks.
-
1 hour ago, KnifeFed said:
Did you get any clarity on this? I have the same mobo with the same issue.
Nope, I forgot I posted this to be honest. I've just learnt to ignore the false reading, I guess.
-
I'd be interested to know if anyone comes up with a good solution over Ethernet or the like, as I'd love to do something similar myself in the future. In the meantime, you can use Parsec on both machines and get pretty low latency (less than 10ms) with relatively minimal video compression artifacts, especially if both the host and client are wired.
-
Managed to get temps on my Strix B550-E with acpi_enforce_resources=lax in my boot options; otherwise I get nothing but CPU temps. But the SYSTIN temp, which seems to relate to the motherboard temperature, goes up to 80-90C and won't come down even at idle. I know that's not correct, as it drops back down after a restart and is okay for a while, and the BIOS shows nowhere near that figure. Could this be caused by the boot argument?
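For anyone wondering where the boot option goes: on Unraid, boot arguments live on the append line of the syslinux configuration (editable under Main > Flash > Syslinux Configuration). The sketch below shows roughly where acpi_enforce_resources=lax sits; the rest of the append line is whatever your setup already has.

```
label Unraid OS
  menu default
  kernel /bzimage
  append acpi_enforce_resources=lax initrd=/bzroot
```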
-
1 hour ago, jcabello7 said:
The AMD reset vendor is working fine for me with Unraid 6.8.3. Can I update to 6.9.2 with safety without breaking anything? What should be the method to update with the modified kernel?
Thanks in advance and nice work!
Not to discourage @thor2002ro's work, but as I only needed the vendor-reset patch, I moved to using this and compiling the kernel myself. You have to redo it after every update, but at least you can do it (relatively) safely.
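For anyone curious what that involves, the rough shape of building the gnif/vendor-reset module out of tree looks like this. Treat it as a sketch rather than exact Unraid instructions: it assumes you have the source and headers for the running kernel available, which on Unraid means going through the kernel-build process first.

```shell
# Sketch only: assumes kernel headers for the running kernel are present.
git clone https://github.com/gnif/vendor-reset.git
cd vendor-reset
make              # build the vendor-reset module against the running kernel
make install      # install the module (run as root)
modprobe vendor-reset
```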
-
Anyone else seeing higher-than-expected idle CPU usage from this docker? It doesn't show as being used much in the advanced view on the Docker tab, but it shows up in the main CPU usage, and when I look in top in a terminal it's the java process that's using it.
As soon as I stop the docker, my CPU clocks back down again.
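If anyone wants to pin down which container is responsible rather than eyeballing top, the standard docker CLI can break it down per container (nothing Unraid-specific here; the container name below is a placeholder, use your own):

```shell
# One-shot snapshot of per-container CPU and memory usage.
docker stats --no-stream

# List the processes running inside a specific container
# ('my-container' is a placeholder name).
docker top my-container
```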
-
Did you try with just acpi_enforce_resources=lax in your boot options? That worked for me.
-
Hi all,
Wondered if someone might have any tips on getting my 5700xt to passthrough without having to use vfio bind?
Without it I get a purple/corrupted screen and Windows 10 throws a Code 43. It's the only GPU in the system; I'm running 6.9.1 with legacy boot (UEFI boot stops any VMs getting past the TianoCore logo), and I'm passing through the GPU BIOS, as otherwise I get a black screen.
I would like to be able to get it working without the vfio bind so that I could use the card within dockers, etc. when the VM isn't running.
Thanks.
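For context, the vfio bind I'm trying to get away from is the one Unraid writes to the flash drive when you tick the device under Tools > System Devices. It ends up looking roughly like this (the PCI address and vendor:device IDs below are examples, not my actual card):

```
# /boot/config/vfio-pci.cfg (example address and IDs)
BIND=0000:0b:00.0|1002:731f 0000:0b:00.1|1002:ab38
```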
-
3 hours ago, ich777 said:
From what I see in your Diagnostics you have bound the card to VFIO since you use it in your Windows 10 VM.
That's one reason why it won't work, for testing purposes you can try to unbind the card and see if 'radeontop' works without the card bound to VFIO (if you bind the card to VFIO it's only "accessible" by the VM's).
That makes sense, and yes, it works without the VFIO bind. However, without the bind I cannot pass the GPU through to my VMs, so it looks like I'll have to give both GPU Statistics and any hardware passthrough in dockers a miss.
Thanks for your help anyway
-
4 minutes ago, ich777 said:
Eventually @b3rs3rk can help here since this seems like it has something to do with the front end but it is really weird since if you start up the VM it seems to work like you wrote above.
Can you please post your Diagnostics (Tools -> Diagnostics -> Download -> drop the downloaded file here in the textbox)?
Please see attached.
Apologies, I edited my previous post as the VM is now also showing 100% usage, and typing radeontop into the terminal shows full usage too. Maybe there's an issue with my particular card.
-
10 minutes ago, ich777 said:
Looks good, can you tell me if the VM is set to autostart like asked before?
No, there aren't any VMs set to autostart. I did put that in the previous post, but it was below the code block so it might have been hard to see!
Edit: Just rebooted and it's still the same; then I tried booting up the VM again and now it's also stuck at 100% usage after dropping back to 0% temporarily.
-
31 minutes ago, ich777 said:
Can you give me the output of 'lsmod' without a VM loaded up? Is the VM set to autostart?
Here you go:-
Module Size Used by
iptable_raw 16384 1
wireguard 86016 0
curve25519_x86_64 32768 1 wireguard
libcurve25519_generic 49152 2 curve25519_x86_64,wireguard
libchacha20poly1305 16384 1 wireguard
chacha_x86_64 28672 1 libchacha20poly1305
poly1305_x86_64 28672 1 libchacha20poly1305
ip6_udp_tunnel 16384 1 wireguard
udp_tunnel 20480 1 wireguard
libblake2s 16384 1 wireguard
blake2s_x86_64 20480 1 libblake2s
libblake2s_generic 20480 1 blake2s_x86_64
libchacha 16384 1 chacha_x86_64
xt_CHECKSUM 16384 1
ipt_REJECT 16384 2
ip6table_mangle 16384 1
ip6table_nat 16384 1
vhost_net 24576 0
tun 49152 2 vhost_net
vhost 32768 1 vhost_net
vhost_iotlb 16384 1 vhost
tap 24576 1 vhost_net
xt_nat 16384 38
veth 24576 0
xt_MASQUERADE 16384 31
iptable_nat 16384 4
nf_nat 36864 4 ip6table_nat,xt_nat,iptable_nat,xt_MASQUERADE
nfsd 196608 11
lockd 77824 1 nfsd
grace 16384 1 lockd
sunrpc 446464 14 nfsd,lockd
md_mod 45056 3
iptable_mangle 16384 2
nct6775 53248 0
hwmon_vid 16384 1 nct6775
ip6table_filter 16384 1
ip6_tables 28672 3 ip6table_filter,ip6table_nat,ip6table_mangle
iptable_filter 16384 2
ip_tables 28672 6 iptable_filter,iptable_raw,iptable_nat,iptable_mangle
amdgpu 4493312 0
gpu_sched 32768 1 amdgpu
i2c_algo_bit 16384 1 amdgpu
drm_kms_helper 167936 1 amdgpu
ttm 77824 1 amdgpu
drm 385024 4 gpu_sched,drm_kms_helper,amdgpu,ttm
backlight 16384 2 amdgpu,drm
agpgart 36864 2 ttm,drm
syscopyarea 16384 1 drm_kms_helper
sysfillrect 16384 1 drm_kms_helper
sysimgblt 16384 1 drm_kms_helper
fb_sys_fops 16384 1 drm_kms_helper
vendor_reset 81920 0
wmi_bmof 16384 0
mxm_wmi 16384 0
edac_mce_amd 32768 0
kvm_amd 98304 0
kvm 667648 1 kvm_amd
crct10dif_pclmul 16384 1
crc32_pclmul 16384 0
crc32c_intel 24576 6
ghash_clmulni_intel 16384 0
aesni_intel 364544 0
crypto_simd 16384 1 aesni_intel
cryptd 20480 2 crypto_simd,ghash_clmulni_intel
glue_helper 16384 1 aesni_intel
rapl 16384 0
btusb 45056 0
btrtl 16384 1 btusb
i2c_piix4 24576 0
btbcm 16384 1 btusb
btintel 24576 1 btusb
k10temp 16384 0
ccp 73728 1 kvm_amd
igc 90112 0
i2c_core 65536 5 drm_kms_helper,i2c_algo_bit,amdgpu,i2c_piix4,drm
ahci 40960 4
bluetooth 405504 5 btrtl,btintel,btbcm,btusb
libahci 32768 1 ahci
ecdh_generic 16384 1 bluetooth
ecc 28672 1 ecdh_generic
nvme 36864 1
nvme_core 81920 3 nvme
input_leds 16384 0
led_class 16384 1 input_leds
wmi 24576 2 wmi_bmof,mxm_wmi
acpi_cpufreq 16384 0
button 16384 0
And I don't have any VMs set to autostart.
-
1 hour ago, ich777 said:
Exactly, but please be also sure to build it with the gnif/vendor-reset patch (please also note that if you want to use it in VM's I don't think the GPU Statistics will work if it's currently used by a VM).
If you only plan to use it for Docker containers then you can also use the stock unRAID builds and install the Plugin from the CA App.
Can you post a screenshot?
I did also build the kernel with the vendor reset patch. I only really wanted to use RadeonTop for the GPU Statistics as it's nice to have showing on the main screen in the UI.
Here's a screenshot of what I'm seeing when the server is idle (GPU definitely not in use as the fans aren't spinning). Strangely, when I spin up a VM, the figures all drop back down and then appear to work correctly when the VM is running/under load.
-
Just to double-check: is the RadeonTop module in the kernel helper the right one to use with a Navi card for GPU Statistics?
I've installed it, but it's showing 100% usage when nothing is running. I wondered if I'd done something wrong, or whether I should report this in the GPU Statistics thread?
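In case it helps anyone reproduce this: radeontop can write a one-shot reading to stdout instead of running its interactive screen, which makes it easy to capture what the card reports while idle. These are standard radeontop flags (-d sets a dump file, with - meaning stdout; -l limits the number of samples).

```shell
# Dump a single GPU utilisation sample to stdout, then exit.
radeontop -d - -l 1
```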
-
11 minutes ago, OmgImAlexis said:
Does running `unraid-api restart` fix the graphql offline issue?
Strangely, I get "unraid-api: command not found" when I run that command in a terminal.
Is it something I need to install separately? It wasn't mentioned as a prerequisite on the wiki.
Edit: Second reboot fixed it. All good now, thanks.
-
Hi,
I get the error "Graphql is offline" on my server using this plug-in, any ideas?
Also I get an error during the flash backup, I guess this is because I'm using a custom kernel?
Thanks.
-
Is anyone else finding that the docker just sticks at "Selected macOS Product" when you check the logs?
I did have a working VM but screwed something up, so I decided to start fresh. Now, no matter which version of macOS I choose or which download method, it never moves past this line, even when leaving it running overnight and removing all traces of Macinabox beforehand.
WireGuard quickstart
in Plugins and Apps
Posted
Hi all, I'm using Unraid's built-in WireGuard to obfuscate my received/sent traffic on a few different dockers.
The problem, however, is that I'm now also using my router to provide remote access to the network (also through WireGuard), and I cannot access the WebUI of the dockers that use the above tunnel (wg0) as their network. Is this something I can configure in the tunnel on Unraid, or am I out of luck doing it this way?