yakboyslim

Everything posted by yakboyslim

  1. Looks like I got it solved. I saw a thread from someone with a similar issue who reinstalled unRAID and that fixed it for them. That didn't work for me, but switching to the next release (6.10 rc) did. Works great now, and the tips above got me audio out too.
  2. So I have narrowed the problem down to hyperthreaded pairs. The VM starts up and runs fine as long as I do not pin both CPUs of a thread pair (i.e. I select 0 OR 1, but not both 0 and 1). This works for me for now, but it definitely leaves a lot of performance on the table, and since the purpose of the VM was video editing for my wife, any CPU tradeoff is unfortunate. I have tried selecting and deselecting the "Hyper-V" setting in the VM. I have not started messing with the BIOS of the Unraid machine itself, but everything is set so hyperthreading should be supported, and it definitely works for everything else running on the machine. I am also trying to work out a plan to move the disk to a vdisk, but with this new revelation I'm not sure that is my problem any more. Any ideas what could cause this issue? Any settings to try?
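     For reference, this is roughly how I'm identifying which logical CPUs are hyperthread siblings on the host, just a quick sketch from the Unraid terminal (the CPU numbers will obviously differ per machine):

        # list each logical CPU with its core id so sibling pairs are obvious
        lscpu -e=CPU,CORE,SOCKET,ONLINE

        # or read the sibling list for a given CPU straight from sysfs, e.g. CPU 0
        cat /sys/devices/system/cpu/cpu0/topology/thread_siblings_list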
  3. Diagnostics attached. I'll look into not passing through the NVMe, but I will need to figure out how to move that install to a vdisk. Would the passthrough be the cause of the problem? I should add that I had occasional success restarting this VM, and only recently has it completely given up, so it has worked with the NVMe passed through, just not most of the time. codraid-diagnostics-20220209-1746.zip
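     If it comes to that, my rough plan for copying the passed-through NVMe into a vdisk is something like the below (the device path and destination are only examples, not my actual layout, and the VM would be shut down first):

        # copy the whole physical NVMe into a raw vdisk image
        qemu-img convert -p -O raw /dev/nvme0n1 /mnt/user/domains/Windows10/vdisk1.img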
  4. Thanks for the help. I just moved, so this has been on the back burner, but I just tried again with these changes. Same results. Unfortunately I now can't boot into Windows at all to disable fast startup. During Windows boot I get the TianoCore splash for the VM, then the spinning wheel for Windows. It may go to the recovery options start screen, but no matter what option I choose there it freezes. Usually the spinning wheel reappears and then stops spinning. When this happens, if I look at the CPU utilization on the Unraid dashboard, at least one thread is pegged at 100%. If I wait long enough, other threads get stuck at 100% too. I have waited over an hour, and this condition never improves, only gets worse with more threads sticking.
     I have also tried changing the CPU pinning, but this does not improve it. In fact, I have seen cases where the stuck threads are not even pinned to the VM. The stuck threads are fairly random; just when I think a certain thread is the problem, a different thread will stick first. As one last aside, I found a forum post on here about disabling WSD for samba shares due to random stuck threads. Figured it was worth a shot, but it didn't help. Any help is appreciated. I am so frustrated I am about to buy another computer, when this VM was supposed to be the plan all along.
  5. I have a Windows 10 VM set up on a passed-through NVMe drive. I am able to start this VM once with GPU passthrough, and it works fine. However, once I shut the VM down it will not start again. No errors, and nothing in the VM log stands out to me. It just won't start again. I think I was able to delete the VM, make a new one, and get it back once, but the most recent time this happened required deleting the libvirt image and rebooting Unraid. Diagnostics attached. (My apologies for all the SSH errors in there; I thought I had fixed the issues with my failover Pi-hole's gravity-sync, but it appears I did not.) codraid-diagnostics-20211130-1506.zip
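     For what it's worth, this is where I've been looking when it refuses to start, just a sketch (the VM name here is a placeholder for whatever the VM is called):

        # see what libvirt thinks the VM state is
        virsh list --all

        # follow the per-VM QEMU log while clicking start
        tail -f /var/log/libvirt/qemu/Windows10.log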
  6. I recently had an issue where my cache pool got messed up while swapping disks in and out (trying to remove an NVMe disk to use separately for VMs). Something happened and unRAID wanted to format all 4 disks. I was able to recover all the data using btrfs restore (I had appdata backups as well, via the plugin), but that did not work for my VM vdisk. The only way I could copy it off to my array was with tar, which appeared to work, so I have a copy of the vdisk that is at least the correct size.
     When I copy this back to the pool set up for VMs and configure a VM to use it, I cannot boot past the UEFI shell. Using a GParted live disk I can view the disk and I see 3 correctly sized partitions, but they all show unrecognized file systems. Running testdisk does not find any file systems to recover either: it sees the partitions, but after running analyze it doesn't find anything.
     I fear the vdisk is actually corrupt, but I'm hoping someone here has an idea to help. I hadn't quite gotten around to setting up a backup plan for this VM, so unfortunately I really need to recover the data or I am starting over on a lot of things. The VM is an Ubuntu 21 server, if that matters. Willing to post any logs that would help.
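     In case it helps, this is roughly how I've been inspecting the recovered image from the Unraid shell without touching the original copy (paths and loop device numbers are just examples):

        # map the vdisk's partitions onto a read-only loop device
        losetup --find --show --read-only --partscan /mnt/user/recovered/vdisk1.img

        # suppose it came back as /dev/loop3: see what blkid makes of each partition
        blkid /dev/loop3p1 /dev/loop3p2 /dev/loop3p3

        # detach when done
        losetup -d /dev/loop3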
  7. 6.10.0-rc2 and the v470.86 driver work great for me.
  8. I'd be willing to. I have never done an upgrade before, so I might need a guide, etc., but I would be willing to try it.
  9. Got it working finally. I rolled back my BIOS version from 1.70 to 1.40. It took me forever to get it to boot, but eventually I found out CSM is broken in this older BIOS version. I disabled CSM, Unraid booted fine, and everything works now! So apparently ASRock broke multi-GPU support in an update at some point. Either way, I have one GPU working in dockers and one passed through to the VM, and everything else seems to work as before! Thanks for the help! Now to learn all these other PCIe passthroughs...
  10. Complete. Still not working, but at least Nvidia knows I am on the most current version, and maybe I can troubleshoot further with them as well.
  11. I am on 6.9.2. You rock, thanks so much for all the help!
  12. Nvidia is advising I try driver version 470.82.00. That is not an option for me in the Unraid Nvidia plugin. They included some instructions for manually installing the drivers on Slackware and disabling nouveau. Is it advisable to follow those instructions, and will a manual install play nice with the Unraid Nvidia plugin?
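     For context, the nouveau part on a normal Linux install is usually just a modprobe blacklist, roughly like this; I have no idea whether this is the right approach on Unraid alongside the plugin, which is why I'm asking:

        # generic nouveau blacklist on a standard Linux install
        echo "blacklist nouveau" >> /etc/modprobe.d/blacklist-nouveau.conf
        echo "options nouveau modeset=0" >> /etc/modprobe.d/blacklist-nouveau.conf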
  13. Tried it enabled and disabled. No difference.
  14. I have tried without one. I have also tried with a VBIOS from TechPowerUp, and I have also tried following this guide: When I try to dump the VBIOS it says it succeeded and creates a rom file, but the script also throws an error about "qemu closed the monitor unexpectedly", so I don't think it is actually working. Regardless, I tried with that rom output and still got the same result: "qemu unexpectedly closed the monitor". I am going to try unbinding the VFIO just to show that the second card still does not appear in nvidia-smi.
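     As an alternative to the script, my understanding is a VBIOS can also be dumped by hand through sysfs while nothing is using the card; something like this (the PCI address is only an example):

        # enable the ROM read for the GPU, copy it out, then disable it again
        cd /sys/bus/pci/devices/0000:02:00.0
        echo 1 > rom
        cat rom > /mnt/user/isos/vbios/gpu.rom
        echo 0 > rom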
  15. The connections are all from gravity-sync. I'm running Pi-hole in a docker container as well as on an actual Raspberry Pi and using gravity-sync to keep them in sync, but it runs way too often and I am still working on that. The real problem is that the second card doesn't work with the VM. It gives the qemu error, which from my (possibly inaccurate) googling is related to a memory allocation issue, similar to what you said was the probable cause earlier. When I don't bind it to VFIO it doesn't appear in the Nvidia driver plugin either; only the first card appears there. If I VFIO-bind the first card then neither card appears in the Nvidia driver plugin. Everything points to a motherboard issue, but I don't want to just throw money at another one without having some idea of what the problem is here.
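     Next time it's in that state I'll also grab which kernel driver actually owns each card, roughly like this (bus addresses are examples from memory, not exact):

        # show the kernel driver currently bound to each GPU ("Kernel driver in use:" line)
        lspci -nnk -s 01:00.0
        lspci -nnk -s 04:00.0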
  16. It's an ASRock B460 Phantom Gaming 4. I currently have the Quadro P400 in the upper x16 slot (CPU lanes) and the GTX 1060 in the lower x16 slot (PCH lanes, running at x4 I believe), but that is just the arrangement I stopped troubleshooting on; I have tried both slots, with every combination of VFIO binding I can think of. Currently the GTX 1060 is VFIO-bound, since the hope is to one day pass that card to a VM while the P400 is used by docker containers. I have tried a few different ACS override settings with no effect. When I try to start my VM (currently an Ubuntu 18.04 server) it fails to start with the error "Execution Error: Internal error: qemu unexpectedly closed the monitor". If need be I can rerun the diagnostics, but I have included the diagnostics for how it is currently set up (GTX 1060 with VFIO binding). @ich777 Thanks for the help again. Also attached are the diagnostics from before I did any VFIO binding, from a few weeks ago. codraid-diagnostics-20211026-2318.zip codraid-diagnostics-20211109-0808.zip
  17. I am still unable to get a second GPU working in my computer. Contacting ASRock support has led to them telling me that two Nvidia cards are not supported, only two AMD cards. It is a B460 board, which does not support SLI. I have insisted to them many times that I do not need SLI, but am I incorrect in assuming that SLI is what they are saying is unsupported? Between having a niche use case and a language barrier, I don't think I am getting the right answers from them. As a follow-up question: if I do switch motherboards, does anyone have recommendations for LGA 1200 motherboards that are known to work with two Nvidia cards?
  18. I was in UEFI with no option to boot into CSM. I had to do some digging (renamed the 'EFI' folder on the flash drive to '-EFI'), but I was able to boot up into legacy mode. No change though.
  19. Well, it looks like swapping the cards explained a lot: now the Quadro is recognized instead. I had to swap them back to get a monitor working for the BIOS, so in the current state the 1060 is the one working (whichever card is in slot 1 works). I ensured my BIOS was up to date (it was) and checked that the PCIe slot speeds were all still set to AUTO (also tried GEN3, with no change). I tried enabling PCIe Native Mode as well. So the problem is with the second slot, but I don't know what to do past that. The slot has never been used before this, so maybe it's just bad? I feel like that is unlikely though.
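     If it would help, I can also report what that slot is actually negotiating, along these lines (the bus address is just an example for whichever card is in the lower slot):

        # compare the slot's advertised capability with the negotiated link
        lspci -vv -s 04:00.0 | grep -E 'LnkCap|LnkSta'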
  20. Thanks for the suggestion. I added pci=realloc=off and enabled "Above 4G Decoding". No luck though. After that I also saw SR-IOV disabled in the BIOS, so I tried enabling it since it sounded related (to my uneducated self). No change either.
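     For the record, this is roughly how the kernel parameter ended up in my syslinux configuration on the flash drive (trimmed to the relevant boot entry; the rest is stock):

        label Unraid OS
          menu default
          kernel /bzimage
          append pci=realloc=off initrd=/bzroot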
  21. Attached. Thanks in advance for any help anyone can give. codraid-diagnostics-20211026-2318.zip
  22. I'm having some trouble getting this to see my second GPU. I have a GTX 1060 that I will eventually set up to pass through to a VM, but I also have a Quadro P400 to use for Plex/Tdarr transcoding. I see the P400 in System Devices, but not in the Nvidia Driver plugin or in the output of nvidia-smi. I did my best to search, but I have not found a solution to this. Anything I can try to make it appear? I have checked, and the P400 is supported on the driver version I have selected.
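     These are the quick checks I'm basing that on, run from the Unraid terminal:

        # what the Nvidia driver can see
        nvidia-smi -L

        # what the PCI bus reports
        lspci -nn | grep -i nvidia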