cobhc

Members
  • Posts: 92
Everything posted by cobhc

  1. Can we basically ignore the mcelog error on AMD CPUs if we're not going to use mce?
  2. Sorry for the delay, that works fine for me. Thanks!
  3. That's exactly what I have done. I didn't have a huge number of trackers set up, so it didn't take long, and it appears to work well and seems faster than Jackett to me.
  4. I wonder if it's as simple as the "bluez-firmware: version 1.2" upgrade in 6.10.0-rc1 causing the issue, since the NMBluezManager is a different version in the two logs you've attached?
  5. Great, just updated and all good. Thank you!
  6. Thanks a lot for this! Any chance of the container being updated to the version that came out 7th August? Thanks.
  7. I'm just passing it through within the VM Manager for each VM, it's not bound to VFIO or anything and yeah it worked fine previously with the Wifi 6 part disabled.
  8. That's not included either, thanks for the suggestion by the way!
  9. No, hciconfig doesn't seem to be included. That sounds like a logical explanation though.
  10. Hi, On 6.9.2 I was passing through my onboard Bluetooth controller on my Asus B550 Strix-E, which was working fine; however, since updating to 6.10.0-RC1 I am no longer able to do so. It works fine in bare metal on Windows 10 but throws up a Code 10 in a Windows 10 VM and doesn't let me enable Bluetooth in a PopOS VM. It's picked up as "Intel Corp. (8087:0029)" when trying to pass it through and is twinned with the onboard WiFi 6 (which I have disabled as I don't use it). All other settings are the same as they were before updating (see the USB passthrough sketch after this list). Diagnostics attached: darthunraider-diagnostics-20210811-1629.zip
  11. Upgraded fine, apart from the fact that I now can't pass through my onboard Bluetooth controller. It works fine in a bare-metal Windows 10 install, but throws up a Code 10 error in the Windows 10 VM and doesn't even let me enable Bluetooth in a PopOS VM. Edit: Created a separate thread here.
  12. Nope, forgot I posted this to be honest. I've just learnt to ignore the false reading I guess.
  13. I'd be interested to know if anyone comes up with a good solution over ethernet or the like, as I'd love to do something like this myself in the future; but in the meantime, you can use Parsec on both machines and get pretty low latency (less than 10ms) with relatively minimal video compression artifacts especially if both the host and client are wired.
  14. Managed to get temps on my Strix B550-E with acpi_enforce_resources=lax in my boot options (see the boot-option sketch after this list); otherwise I get nothing but CPU temps. But the SYSTIN reading, which seems to relate to motherboard temperature, goes up to 80-90C and won't come down even at idle. I know that's not correct, as it goes back down after a restart and is okay for a while, and it isn't showing anywhere near that in the BIOS. Is this because of the boot argument?
  15. Not to discourage @thor2002ro's work, but as I only needed the Vendor Reset patch I moved to using this and compiling the kernel myself (see the vendor-reset sketch after this list). You have to re-do this after every update, but at least you can do it (relatively) safely.
  16. Anyone else getting higher-than-expected idle CPU usage from this docker? It's not showing as being used much in the advanced view on the Docker tab, but it shows up in the main CPU usage, and when I look in top in a terminal it's the java service that's using it (a couple of quick checks are sketched after this list). As soon as I stop the docker, my CPU clocks back down again.
  17. Did you try with just acpi_enforce_resources=lax in your boot options? That worked for me.
  18. Hi all, Wondered if someone might have any tips on getting my 5700 XT to pass through without having to use the VFIO bind? Without it I get a purple/corrupted screen and Windows 10 throws a Code 43. It is the only GPU in the system, I'm running 6.9.1 with legacy boot (UEFI stops any VMs getting past the TianoCore logo), and I'm passing through the GPU BIOS as otherwise I get a black screen (see the hostdev/ROM sketch after this list). I would like to get it working without the VFIO bind so that I can use the card within dockers, etc. when the VM isn't running. Thanks.
  19. That makes sense, and yes, it works without the VFIO bind; however, without that I cannot pass through my GPU to my VMs, so it looks like I'll have to give both GPU Statistics and any hardware passthrough in dockers a miss. Thanks for your help anyway.
  20. Please see attached. Apologies, I edited my previous post, as the VM is now also showing 100% usage, and typing radeontop into the terminal shows full usage too. Maybe there's an issue with my particular card. tower-diagnostics-20210322-1407.zip
  21. No, there aren't any VMs set to autostart. I did put that in the previous post, but it was below the code block so might have been hard to see! Edit: Just rebooted and it's still the same, and then I tried booting up the VM again and now it's also stuck at 100% usage after dropping back to 0% temporarily.
  22. Here you go:-

      Module  Size  Used by
      iptable_raw  16384  1
      wireguard  86016  0
      curve25519_x86_64  32768  1  wireguard
      libcurve25519_generic  49152  2  curve25519_x86_64,wireguard
      libchacha20poly1305  16384  1  wireguard
      chacha_x86_64  28672  1  libchacha20poly1305
      poly1305_x86_64  28672  1  libchacha20poly1305
      ip6_udp_tunnel  16384  1  wireguard
      udp_tunnel  20480  1  wireguard
      libblake2s  16384  1  wireguard
      blake2s_x86_64  20480  1  libblake2s
      libblake2s_generic  20480  1  blake2s_x86_64
      libchacha  16384  1  chacha_x86_64
      xt_CHECKSUM  16384  1
      ipt_REJECT  16384  2
      ip6table_mangle  16384  1
      ip6table_nat  16384  1
      vhost_net  24576  0
      tun  49152  2  vhost_net
      vhost  32768  1  vhost_net
      vhost_iotlb  16384  1  vhost
      tap  24576  1  vhost_net
      xt_nat  16384  38
      veth  24576  0
      xt_MASQUERADE  16384  31
      iptable_nat  16384  4
      nf_nat  36864  4  ip6table_nat,xt_nat,iptable_nat,xt_MASQUERADE
      nfsd  196608  11
      lockd  77824  1  nfsd
      grace  16384  1  lockd
      sunrpc  446464  14  nfsd,lockd
      md_mod  45056  3
      iptable_mangle  16384  2
      nct6775  53248  0
      hwmon_vid  16384  1  nct6775
      ip6table_filter  16384  1
      ip6_tables  28672  3  ip6table_filter,ip6table_nat,ip6table_mangle
      iptable_filter  16384  2
      ip_tables  28672  6  iptable_filter,iptable_raw,iptable_nat,iptable_mangle
      amdgpu  4493312  0
      gpu_sched  32768  1  amdgpu
      i2c_algo_bit  16384  1  amdgpu
      drm_kms_helper  167936  1  amdgpu
      ttm  77824  1  amdgpu
      drm  385024  4  gpu_sched,drm_kms_helper,amdgpu,ttm
      backlight  16384  2  amdgpu,drm
      agpgart  36864  2  ttm,drm
      syscopyarea  16384  1  drm_kms_helper
      sysfillrect  16384  1  drm_kms_helper
      sysimgblt  16384  1  drm_kms_helper
      fb_sys_fops  16384  1  drm_kms_helper
      vendor_reset  81920  0
      wmi_bmof  16384  0
      mxm_wmi  16384  0
      edac_mce_amd  32768  0
      kvm_amd  98304  0
      kvm  667648  1  kvm_amd
      crct10dif_pclmul  16384  1
      crc32_pclmul  16384  0
      crc32c_intel  24576  6
      ghash_clmulni_intel  16384  0
      aesni_intel  364544  0
      crypto_simd  16384  1  aesni_intel
      cryptd  20480  2  crypto_simd,ghash_clmulni_intel
      glue_helper  16384  1  aesni_intel
      rapl  16384  0
      btusb  45056  0
      btrtl  16384  1  btusb
      i2c_piix4  24576  0
      btbcm  16384  1  btusb
      btintel  24576  1  btusb
      k10temp  16384  0
      ccp  73728  1  kvm_amd
      igc  90112  0
      i2c_core  65536  5  drm_kms_helper,i2c_algo_bit,amdgpu,i2c_piix4,drm
      ahci  40960  4
      bluetooth  405504  5  btrtl,btintel,btbcm,btusb
      libahci  32768  1  ahci
      ecdh_generic  16384  1  bluetooth
      ecc  28672  1  ecdh_generic
      nvme  36864  1
      nvme_core  81920  3  nvme
      input_leds  16384  0
      led_class  16384  1  input_leds
      wmi  24576  2  wmi_bmof,mxm_wmi
      acpi_cpufreq  16384  0
      button  16384  0

      And I don't have any VM's set to autostart.
  23. I did also build the kernel with the vendor reset patch. I only really wanted to use RadeonTop for the GPU Statistics as it's nice to have showing on the main screen in the UI. Here's a screenshot of what I'm seeing when the server is idle (GPU definitely not in use as the fans aren't spinning). Strangely, when I spin up a VM, the figures all drop back down and then appear to work correctly when the VM is running/under load.
  24. Just to double check, is the RadeonTop module in the kernel helper the right one to use with a Navi card for GPU Statistics? I've installed it, but it's showing 100% usage when nothing is running. Wondered if I'd done something wrong or should report this in the GPU Statistics thread?
  25. Strangely, I get "unraid-api: command not found" when I run that command in a terminal. Is it something I need to install separately? It wasn't mentioned as a prerequisite on the wiki. Edit: Second reboot fixed it. All good now, thanks.
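
Sketch for the onboard Bluetooth passthrough posts above: a minimal example of checking that the host still sees the controller and of handing a USB device to a libvirt VM by vendor/product ID, which is roughly what the VM Manager generates when a USB device is ticked. The IDs come from the post; this is an illustration of the mechanism, not a fix for the Code 10 itself.

      # confirm the host still sees the controller after the upgrade
      lsusb -d 8087:0029

      <!-- USB hostdev entry in the VM's XML, selected by vendor/product ID -->
      <hostdev mode='subsystem' type='usb'>
        <source>
          <vendor id='0x8087'/>
          <product id='0x0029'/>
        </source>
      </hostdev>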
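
Sketch for the motherboard temperature post above: on Unraid, acpi_enforce_resources=lax goes on the kernel append line in /boot/syslinux/syslinux.cfg (editable from the flash device's Syslinux Configuration section on the Main tab). A rough example of the default boot stanza with the flag added; the label and any other options may differ on a given install.

      label Unraid OS
        menu default
        kernel /bzimage
        append initrd=/bzroot acpi_enforce_resources=lax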
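
Sketch for the Vendor Reset post above: building and loading gnif's vendor-reset as an out-of-tree module on a general distro. On Unraid the equivalent is baking the module into a custom bzimage with the kernel build container, so treat the commands and the PCI address below purely as examples.

      git clone https://github.com/gnif/vendor-reset.git
      cd vendor-reset
      make
      sudo insmod vendor-reset.ko
      # newer kernels also want the reset method selected per device;
      # 0000:0b:00.0 is a placeholder for the GPU's PCI address
      echo device_specific | sudo tee /sys/bus/pci/devices/0000:0b:00.0/reset_method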
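
Sketch for the idle CPU usage post above: a couple of quick checks with the standard Docker CLI and top on the host; the container name is a placeholder.

      # per-container CPU snapshot
      docker stats --no-stream

      # list the processes running inside the suspect container
      docker top my-container

      # on the host, sort top by CPU to confirm it's the java process
      top -o %CPU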
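
Sketch for the 5700 XT passthrough post above: roughly what the PCI hostdev section of the VM's XML looks like when a vBIOS ROM file is supplied. The bus address and ROM path are placeholders, and this on its own doesn't remove the need for the reset workaround that Navi cards typically require.

      <hostdev mode='subsystem' type='pci' managed='yes'>
        <source>
          <address domain='0x0000' bus='0x0b' slot='0x00' function='0x0'/>
        </source>
        <rom file='/mnt/user/isos/vbios/navi10.rom'/>
      </hostdev>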