fearlessknight

Members
  • Posts: 19
  • Joined
  • Last visited

fearlessknight's Achievements: Noob (1/14)

Reputation: 2

  1. Turns out my VM was just bad. After rebuilding it from scratch, everything, including CPU usage and idle behavior, was operating smoothly. Running on 6.9.2. Have you tried a fresh rebuild of the VM? I also disabled system updates and just have whatever came on the Windows 10 image, so it could possibly be a Microsoft update.
  2. Did you ever figure this out? I'm also having difficulty: I can select my HBA from the list of PCI devices in my VM, but when the VM starts, it is rendered inoperable. When the HBA is removed, the VM starts up just fine. I thought I might need to flash the HBA to IT mode. It's an LSI MegaRAID 9280-e. Any suggestions would help!
  3. Upgraded to 6.9.2 and noticed my Windows VMs operate very sluggishly despite GPU passthrough; the Linux VMs are fine. The pinned CPUs also run at high usage, though they are fine at idle. I'm not sure what could be causing it. I also created a new VM with no luck, and swapping hardware produced the same issue. Anyone else experiencing this?
  4. I'm also experiencing high, constant CPU usage on 6.9.2, but only when a VM is booted, which causes it to run sluggishly. I attempted to downgrade to 6.8.3 from the Unraid boot media and transfer my config folder over. However, I can't get a ping response from the gateway on 6.8.3, even though I've manually set the network settings in the network.cfg file (roughly the kind of static entries sketched after these posts). If I transfer the bz* files from any 6.9.* version, the system becomes pingable again and I'm able to log in to the console. Any ideas? The original issue I'm experiencing is described in this thread. Thanks!
  5. I've updated, even though the listed fixes don't mention any changes to passthrough. No luck.
  6. This still seems odd to me, as it has happened even after upgrading and despite using the script. I'm able to use 1 VM with 1 GPU, thankfully, now that the user script has worked. However, if I try to run 2 VMs with 2 GPUs, only one works; I can remote into the other, but its GPU shows an Error 43. No matter how I swap the hardware around, it's still the same. Is anyone else experiencing this? Unfortunately, I can't revert back to 6.8.3.
  7. I will look into this more when I get home tonight, try out TightVNC, and give you my results. I would also try remoting in with RDP to see if you can view your GPU drivers. Can you ping your VM after it's running?
  8. Script location: /tmp/user.scripts/tmpScripts/GPUPass/script
     Note that closing this window will abort the execution of this script
     /tmp/user.scripts/tmpScripts/GPUPass/script: line 3: /sys/class/vtconsole/vtcon1/bind: No such file or directory
     /tmp/user.scripts/tmpScripts/GPUPass/script: line 4: echo: write error: No such device
     Same as yours. What do you normally use to remote into your VM? Maybe try a fresh VM install and run the script. (A more defensive variant of the script that skips missing vtconsole entries is sketched after these posts.)
  9. This is exactly how I have it set up on my end, with the script set to run at the start of the array. It looks like you have it pointing to a different location; it should point to the config/plugins directory on the Unraid USB (see the note on script locations after these posts).
  10. For anyone having trouble with GPU passthrough on 6.9.0 (either freezing or being unable to install drivers), here is the fix. Credit goes to @Celsian:
      I resolved my issue. Having dug deeper, I found the following in my VM log:
      2021-03-06T06:32:32.186442Z qemu-system-x86_64: vfio_region_write(0000:09:00.0:region1+0x1be6ba, 0x0,1) failed: Device or resource busy
      Some searching yielded another thread. Adding a user script to the User Scripts plugin that runs when the array starts seems to have solved my issue; I was able to successfully boot into the Windows 10 VM with the GTX 1060 passed through, and I have installed the drivers successfully. The script:
      #!/bin/bash
      # Release the host's virtual consoles so nothing keeps hold of the GPU's console framebuffer
      echo 0 > /sys/class/vtconsole/vtcon0/bind
      echo 0 > /sys/class/vtconsole/vtcon1/bind
      # Unbind the EFI framebuffer so the GPU can be handed off to the VM
      echo efi-framebuffer.0 > /sys/bus/platform/drivers/efi-framebuffer/unbind
  11. This is excellent! I'm able to boot up my VM and GPU without any problems from the drivers. Thank you! I will be sharing this on the new 6.9.0 thread!
  12. Thanks again for finding the temporary workaround. I'll do some more in-depth testing and follow up with my results.
  13. Just tried this and was able to get the GPU to pass through successfully along with the Error 43. I'm going to pass this info along to another thread and see if they can't fix it during the next patch. I will mention you as well. Do you have any issues running software while Hyper-V is off? Or have you moved back to 6.8.3?
  14. Server specs: Gigabyte Technology Co., Ltd. Z490 VISION G, version F20b; Intel® Core™ i9-10850K CPU @ 3.60GHz; 64 GiB DDR4; GPU1: Asus GTX 1070; GPU2: Asus GTX 1070. I'm able to log in via VNC with no passthrough, but the moment I attempt to install the GPU drivers, the system freezes and boots into AutoRecovery.
  15. Were you able to pass through your GPU after disabling Docker? Or was it still booting to VNC?
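
Sketch referenced in post 4: a minimal example of the kind of static-IP entries that live in network.cfg on the Unraid flash drive. The addresses are placeholders, and the exact key names are from memory (some Unraid versions index them, e.g. IPADDR[0]), so treat this as a rough shape to compare against rather than a drop-in file:

    # /boot/config/network.cfg -- hypothetical static-IP example (placeholder values)
    USE_DHCP="no"
    IPADDR="192.168.1.50"        # the server's static address
    NETMASK="255.255.255.0"
    GATEWAY="192.168.1.1"        # the gateway that isn't answering pings in post 4
    DNS_SERVER1="192.168.1.1"

If the gateway still doesn't respond after the downgrade, comparing this file against a copy saved while on 6.9.x at least rules the config out as the culprit.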
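Sketch referenced in post 8: the "No such file or directory" line just means vtcon1 doesn't exist on that machine. A slightly more defensive variant of the same workaround (my rewrite, not the original script) loops over whatever vtconsole entries are actually present before unbinding the EFI framebuffer:

    #!/bin/bash
    # Unbind only the virtual consoles that actually exist on this host
    for vtcon in /sys/class/vtconsole/vtcon*/bind; do
        [ -e "$vtcon" ] && echo 0 > "$vtcon"
    done
    # Then release the EFI framebuffer so the GPU can be passed through
    echo efi-framebuffer.0 > /sys/bus/platform/drivers/efi-framebuffer/unbind 2>/dev/null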
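Note referenced in post 9: with the User Scripts plugin, the persistent copy of a script normally sits on the flash drive, while the path shown in the run window is a temporary copy. The folder name "GPUPass" is simply taken from post 8 and the /boot path is from memory, so verify both on your own system:

    /boot/config/plugins/user.scripts/scripts/GPUPass/script   # persistent copy on the Unraid USB
    /tmp/user.scripts/tmpScripts/GPUPass/script                # temporary copy created when the script runs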