Brydezen

Everything posted by Brydezen

  1. @ljm42 I have five unclean shutdown logs all pointing to the same issue (the latest two are already posted in this thread): the system is unable to unmount the cache drive where the docker.img file lives. I have since downgraded back to 6.1.5 and everything works just as expected again. All of the logs show the docker service being stopped, but the cache drive still can't be unmounted for whatever reason.
  2. Hi. I would like to add to this thread. This issue seems very random indeed. On the 4th of July (two days before posting this) I disabled docker in the web UI, since according to my logs from other days (with the same issue) it was docker that kept the disk unmountable. So I disabled docker before trying to shut down the server, and to my surprise that worked - or so I thought. Then yesterday (the 5th) I tried the same thing again, but with no luck this time. This issue seems to be somewhat random. I will add two logs, one from yesterday and one from the 4th. EDIT: For the time being I think I will go back to 6.15, since I don't even use any of the newer features like ZFS and haven't rearranged much of the UI. I hope this issue will be addressed and fixed somewhat quickly. tower-diagnostics-20230706-0047.zip tower-diagnostics-20230704-0055.zip
  3. Hello. I have a weird problem I cannot figure out. For some time now, every time I shut down my server (hitting the power button or using the web UI) my Unraid server reports that it shut down "unclean", and I have no idea why or how to fix it. I can't seem to find anything wrong, so I hope someone here is able to figure it out. I have looked through the logs on the USB stick multiple times but I'm totally lost. I have attached two diagnostics: one that was generated when the server shut down, and another right after it rebooted. Thank you! tower-diagnostics-20230227-1152 - shutdown.zip tower-diagnostics-20230227-1205 - rebooted.zip
  4. That's great. I have tried this on my own VM. It does get more usable, but I'm still unable to play games with it. So I'm shopping for new hardware soon. It was time for an upgrade anyway.
  5. That was going to be my next question: how did you install the Nvidia drivers? Loading the VM up with no graphics driver but MSI enabled works fine, but halfway through the driver install it might crash, because the update may change the device instance path. So you just installed them in safe mode? Having the installer crash midway is for sure gonna cause some kind of weird problem if it's not allowed to finish fully. Think I might prep a fresh new VM on Q35 5.1 on my 6.8.3 machine and try to migrate that to either 6.9.2 or 6.10-RC2.
  6. So you are saying that you now have a fully functional VM with a GPU passed through, with no hiccups at all, just by enabling MSI interrupts? I also have some other questions about your VM: What BIOS and version did you use? Did you do a fresh reinstall? What Nvidia driver did you install? Do you have Hyper-V enabled on the VM? If yes, what do you have in there? Any other special XML you have added? Tell me as much as you can, so I can try to recreate it on my own machine 🤞🏻 I thought MSI was mostly enabled if you had audio issues on your VM. Looking at lspci -v -s <ID>, I can see my current VM on 6.8.3 does have MSI enabled on the GPU. It just seems odd that it should all come down to that. Maybe someone can create, or has already created, a script to manually check if it's enabled on every boot. EDIT: This little snippet in PowerShell can grab the "DISPLAY" adapter, aka the installed GPU, and give you the path. Will see if I can get some sort of script up and running to check if MSISupported is set to 1 or not. gwmi Win32_PnPSignedDriver | ? DeviceClass -eq "DISPLAY" | Select DeviceID EDIT 2: Think I have most of the pieces for creating a noob script. I'm in no way good at PowerShell; this is my first ever attempt at creating one. But it will: check if there is a graphics card, get the device instance path, check if "MessageSignaledInterruptProperties" exists in the registry keys, and then check if "MSISupported" exists and what value it has. Based on that value it should change it, and if it does change it I will make it do an automatic reboot (maybe), or maybe just a popup saying it's been changed and you should reboot. A rough sketch of the idea is below.
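     A minimal sketch of what that script could look like (untested, run elevated, and assuming the usual MSISupported location under the device's Enum key in the registry):

       # Grab the first display adapter and its device instance path
       $gpu = Get-WmiObject Win32_PnPSignedDriver | Where-Object DeviceClass -eq "DISPLAY" | Select-Object -First 1
       if (-not $gpu) { Write-Output "No display adapter found."; exit }

       # MSI settings live under the device's instance path in the Enum tree
       $msiKey = "HKLM:\SYSTEM\CurrentControlSet\Enum\$($gpu.DeviceID)\Device Parameters\Interrupt Management\MessageSignaledInterruptProperties"

       # Create the key if it doesn't exist yet
       if (-not (Test-Path $msiKey)) { New-Item -Path $msiKey -Force | Out-Null }

       # Check MSISupported and flip it to 1 if needed
       $current = (Get-ItemProperty -Path $msiKey -Name MSISupported -ErrorAction SilentlyContinue).MSISupported
       if ($current -ne 1) {
           Set-ItemProperty -Path $msiKey -Name MSISupported -Value 1 -Type DWord
           Write-Output "MSISupported set to 1 - reboot for it to take effect."
       } else {
           Write-Output "MSI already enabled for $($gpu.DeviceID)."
       }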
  7. UPDATE: I pulled the plug after spending over 5 days trying to fix it, so I rolled back to 6.8.3 - before I did, I also tried 6.10-RC2 as a last straw. I read somewhere that the Linux kernel had problems with VFIO passthrough in versions 5.1, 5.2 and 5.3 - and Unraid just updated to 5.1 in 6.9.2 - so I blame it on the kernel choice. I hope later versions of Unraid can advance beyond those kernels with potential problems. https://www.heiko-sieger.info/running-windows-10-on-linux-using-kvm-with-vga-passthrough/#Kernel_51_through_53_having_Issues_with_VFIO-solved_with_latest_kernel_update EDIT: Not saying he is right, but it seems odd that so many are having problems with the 5.1 kernel in Unraid 6.9(.X)
  8. I wanted you to maybe try a new Windows 10 VM on 6.9.2, Q35, with full UEFI booting. I just tried it, and it still seemed to give me error 43. Then I tried pulling out my old main VM vdisk and created a new template for it, and lo and behold, it somewhat worked. It was not unusable, but I still wasn't able to play any games. I then tried to upgrade to the latest Nvidia GeForce Game Ready driver, using a "clean install" in the advanced section. And after doing that it went back to totally unusable. I blame Nvidia for the issue now, but it's hard to say for sure. Before, it was running 471.68 (Nvidia driver) - not sure what to do now. Maybe I will try this guide and see if it can fix the VM for good. https://forums.unraid.net/topic/103501-gpu-passthrough-doesnt-work-after-updating-to-unraid-69/?do=findComment&comment=961341
  9. If you don't mind trying it out? I have seen people talking about lower graphics driver versions, but I haven't tried that yet. My next move is trying to go back to legacy boot; right now I'm on UEFI. I want to see if that gets me further. Installing the machine on Q35 and adding those lines to my template for sure got me further. Don't have time until Monday to work on it a bit more, but I'm only going to give it two more days before migrating back to 6.8.3 - I can't be spending this much time on something that should just work.
  10. I got some good news. I have been able to reinstall a new Windows 10 machine with only the GPU passed through and connect to it over RDP, though Windows keeps reporting error 43 for the GPU in Device Manager. I followed this guide to set up the VM itself: I then also unticked the GPU from Tools > System Devices and added the IDs directly to the flash drive boot command using: pci-stub.ids=XXXX:XXXX,XXXX:XXXX (see the sketch below for where that line goes) but I have not overcome the error 43 yet. It is for sure a step further than I have ever come before. Think I will try and follow this long guide next:
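     For reference, a minimal sketch of where that pci-stub line ends up, assuming the stock /boot/syslinux/syslinux.cfg layout; the XXXX:XXXX values are only placeholders for the vendor:device IDs shown under Tools > System Devices:

       label Unraid OS
         menu default
         kernel /bzimage
         append pci-stub.ids=XXXX:XXXX,XXXX:XXXX initrd=/bzroot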
  11. I just tried a completely new install of Unraid, and at first glance everything seemed great. My main Windows VM actually started up and was somewhat usable - WAAAAAY more than after just updating the normal USB. But still not enough for me to be happy running it daily. So I decided to just try and make a completely new VM. Everything went great over VNC, and the first boot with a graphics card also seemed fine. But when the Nvidia drivers were about 15% into the installation, the VM just froze up. At this point I don't know what else to do. Can't be on 6.8.2 forever; I want to upgrade to 6.9.2 or beyond. But I don't know what to do at this point. I'm beginning to give up on Unraid. If it's this much hassle I might just end up switching to another hypervisor. I feel like I have done close to everything I can, trying to run the VM in a bazillion different configurations: Hypervisor: yes, no. USB: 2.0, 3.0, etc. More RAM, less RAM, and so on. Once in a while it will actually start up into the desktop itself, even with only the GPU and RDP, but it crashes after like 2-3 minutes.
  12. So you are running legacy and I'm running UEFI. Same motherboard and both having problems with Nvidia GPUs. Seems weird. Have you done any customization outside of the Unraid web GUI? I have done a few things related to CPUs, but now I can't remember exactly what it was. That's why I was thinking about doing a completely fresh Unraid install and testing. I haven't tried any SeaBIOS VMs; it seems like most people recommend OVMF, so it never really struck me to use SeaBIOS. If a completely fresh Unraid install doesn't work, I'm sadly moving to another hypervisor. It makes me kind of sad if it has to come down to that, but it seems like no one is really interested in helping anymore.
  13. Alright. Can you remember what boot option you were running? UEFI or legacy?
  14. What motherboard are you using? And are you using UEFI or legacy boot? My problems also start the second a graphics card is installed. It works just fine with VNC, but a graphics card makes the VM unusable.
  15. I'm using the same motherboard, but I doubt it's the motherboard. It could maybe be the UEFI boot option. I will maybe try to do a completely fresh install of Unraid with legacy and UEFI, and just copy over the disk and cache arrays, so everything else is totally clean. It's a long shot. But I can't keep staying on 6.8.3 - every plugin is getting outdated for my version. I just don't have that much time to deal with it. I need it to just work, so it's really frustrating. Please let me know if you find a way to fix your issue.
  16. What motherboard and CPU are you running?
  17. I could not get it to even boot into legacy mode for some reason. It just kept telling me it's not a bootable device. So I'm giving up now and going back to 6.8.3 - don't want to waste more time troubleshooting when nothing seems to help at all. Hopefully the next release will work for me.
  18. I might just redo the installation and try to go with legacy boot and see if that fixes anything for me.
  19. Pretty sure it's UEFI. I said yes to the UEFI option when running the make_bootable script, and made sure my boot order was using UEFI: General USB 1.0 or something. I do install all the virtio drivers the VM needs, but I'm not 100% sure I have installed the guest agent though. I might need to try and do that using VNC. EDIT: I just tried installing the guest agent. It had no positive effect on the virtual machine.
  20. I just did a fresh install of Unraid on my USB and copied all the files over. I tried 5.1, 5.0 and 4.2, and I still get the same result. Nothing seems to work for me.
  21. I have tried all the new machine types, and the 4.2 it was on when I ran 6.8.3. There doesn't seem to be any difference at all. I did see that in the announcement. I also had to reformat my drive for the 1MiB "alignment bug", as I was getting crazy reads and writes for some reason.
  22. Weird. Maybe I will just reinstall my USB using the Unraid tool and transfer my settings files over. If not, I will just downgrade I think. I'm on my second day of troubleshooting.
  23. I have tried using a vBIOS, both one I dumped myself and one I downloaded. Nothing works with that either.
  24. I recently updated from 6.8.3 to 6.9.2 - everything worked fine, but none of my virtual machines work when passing through any GPU to them. They just boot loop into Windows recovery or don't load up at all. I get the TianoCore loading screen on my display just fine, but after that it either freezes or boot loops. I have tried many different things I could find in other threads, but nothing seems to work for me. I have no idea what to do at this point. tower-diagnostics-20210412-1039.zip
  25. Hello. I just decided to take the jump to 6.9.2 today, as the threads seem to have slowed down on bug reports. Everything worked just fine on 6.8.3. After the first reboot into 6.9.2 my main Windows 10 VM was really slow. I checked the SSD and it seemed to have been affected by the 1 MiB alignment "bug". So I moved everything off it, reformatted using the Unassigned Devices plugin, and moved my VMs back. Everything seemed fine for around 30 minutes and then it just crashed on me. The funny thing is that it works just fine using VNC; there are no problems at all. Everything seems to work fine right up until the GPU driver gets loaded in Windows. Then everything freezes or crashes. I have tried rebooting, and removing the GPU from VFIO and adding it back in. Nothing seems to help me at all. - I then found this thread and followed the guide by removing all the GPU drivers using VNC. Added the GPU back in. Booted just fine into Windows with display on my screens. Then when I tried installing the GPU driver it just crashed, and now I'm back to square one. I have a flash drive backup from before upgrading, so I might just end up downgrading again. But I hope someone can help me out here. I get this line almost every time I boot up a VM with my GPU passed through to it: Tower kernel: vfio-pci 0000:81:00.0: vfio_ecap_init: hiding ecap 0x19@0x900 - not sure if this is intended or not. I have not seen it before, but it could of course be a 6.9 thing. Best regards, Brydezen tower-diagnostics-20210410-2236.zip