TPNuts

Everything posted by TPNuts

  1. Having the same issue here as well. It is truly annoying: it randomly works, but when it doesn't work, the failure is persistent.
  2. Does anyone know how to verify the IP address of the Docker container? The curl command is not working from the console, which is odd.
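     (A hedged sketch of one way to check from the Unraid console: when curl is missing inside the container, the container's IP can be read from Docker's own metadata on the host instead. The container name "pihole" below is only a placeholder; substitute a real name from docker ps.)

        # Host side: print the container's IP(s) straight from Docker.
        docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}' pihole

        # Or ask inside the container, if the image ships iproute2.
        docker exec pihole ip addr show eth0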
  3. I'm a bit late to the party, but is there a hardware passthrough issue in this RC version? I have been messing around and pushing the limits of my VM configs to see what works and what doesn't. Oddly enough, the USB controller I pass through (Vantec 3.1 Gen 2, 2-port USB-C + USB-A) is now glitchy and does not show any devices, so I am having to pass each device through individually instead of the entire controller. Edit: It may also just be that the ASMedia chipset is bad at passthrough, because it was working perfectly fine on 6.9.2.
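     (Aside: before blaming the chipset, it can be worth confirming the controller sits in its own IOMMU group; the sketch below is the standard generic-Linux check, nothing Unraid-specific.)

        #!/bin/bash
        # Print every IOMMU group with its member devices. A USB
        # controller that shares a group with other devices often
        # misbehaves when passed through whole.
        for g in /sys/kernel/iommu_groups/*; do
            echo "IOMMU group ${g##*/}:"
            for d in "$g"/devices/*; do
                echo -e "\t$(lspci -nns "${d##*/}")"
            done
        done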
  4. What was your performance increase, just out of curiosity? I am trying to configure a Win11 VM (Unraid 6.10.0-rc2) with passthrough of my 3090.
  5. I'm having the same issue (on Unraid 6.10.0-rc2) with my log filling up; I have to restart the server to get the log cleared. Mine is caused by a vbios misconfigured for passthrough, which produces this error over and over: "2021-12-27T03:40:02.510435Z qemu-system-x86_64: vfio_region_write(0000:21:00.0:region1+0xe5340, 0x0,1) failed: Device or resource busy". When you try to restart the VM after the log has been flooded, it errors out with "Unable to write to file /var/log/libvirt/qemu/Windows 11.log: No space left on device". I am amazed that we do not have a way to stop it from flooding the log so fast.
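     (A hedged aside: /var/log on Unraid is normally a small RAM-backed tmpfs, so a full restart shouldn't be needed; truncating the runaway file frees the space immediately. The path is taken from the error above.)

        # See how full the log filesystem is.
        df -h /var/log

        # Empty the runaway log in place; the quotes matter because
        # the VM name ("Windows 11") contains a space.
        truncate -s 0 "/var/log/libvirt/qemu/Windows 11.log"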
  6. So, a follow-up to this: I essentially got up and running, although 6.10.0-rc2 is not very friendly with USB passthrough for the devices I have. It turns out the vbios isn't even needed and that stubbing works well for the 3090. The only major issue is USB passthrough, as Windows 11 hates recognizing passed-through cards and even USB devices added individually. I don't know if I am doing this correctly, but the drivers do not recognize half the devices. More to follow. Make, break, and recreate.
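     (For anyone landing here: "stubbing" means binding the GPU to vfio-pci at boot so the host driver never claims it. A sketch of the classic kernel-parameter route follows; the 10de:2204 / 10de:1aef IDs are what lspci typically reports for a 3090 and its audio function, so verify them on your own card. On Unraid 6.9+ the same binding can also be done from Tools > System Devices.)

        # Find the vendor:device IDs of the GPU and its HDMI audio.
        lspci -nn | grep -i nvidia

        # Old-style stub: add the IDs to the kernel append line in
        # /boot/syslinux/syslinux.cfg, e.g.:
        #   append vfio-pci.ids=10de:2204,10de:1aef initrd=/bzroot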
  7. I'm in the same scenario with a 3090 Founders Edition. It seems some have been able to leave the vbios blank, but my setup doesn't post without it. When I run the script, I either get an error telling me to bind the device (even though the vfio binding is already in place) or, more often, it produces a file that is under 70 KB. I have looked over the script, and I think (and I may be very wrong) that it is no longer able to pull the vbios using the temporary VM it creates. I observed that it either doesn't post at all, or if it does, it errors out. I am currently stuck in limbo (or shutting the server down to go bare metal).
  8. It was a quarantine Christmas (my wife, who is a physician, got COVID on day one of Christmas break 🤷‍♂️), so I had a ton of time to troubleshoot, and I can pretty much confirm that the vbios dump script is broken with the new OS updates. I tried 6.9.2 and the Next branch 6.10.0-rc2, attempting to dump the vbios of an Nvidia 3090 Founders Edition and pass the card through, and I am getting vbios files in the 70 KB range, which is not right at all. I have updated the BIOS on the motherboard (TRX40 Aorus Master with a 3970X) and still no luck. I have narrowed the issue down to the vbios itself, as I cannot get anything beyond a black screen, and the log goes nuts with vfio errors on the binding to the Nvidia card when the VM runs, subsequently filling the entire log storage to 100% without fail every time. I am posting the diagnostics, logs, and XML below, along with the errors from the SpaceInvader vbios dump script (note that I have tried enabling and disabling ReBAR, and that didn't help). Some have had luck without having to dump the vbios, but in my case it isn't posting without it. Any help would be appreciated, and feel free to openly school me; I am always open to learning how to correct my mistakes. XML: syslog: VFIO-pci: SpaceInvader vbios log:
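     (A hedged fallback if the script keeps producing ~70 KB files: read the ROM through the plain sysfs interface that most dump scripts wrap anyway. The PCI address below is taken from the vfio errors above, and the card generally needs to be idle, i.e. not claimed by an active driver or a running VM, while the ROM is read.)

        #!/bin/bash
        GPU=0000:21:00.0   # PCI address of the 3090, from the log above

        # Enable ROM reads, copy the image out, then disable again.
        echo 1 > "/sys/bus/pci/devices/$GPU/rom"
        cat "/sys/bus/pci/devices/$GPU/rom" > /boot/vbios-3090.rom
        echo 0 > "/sys/bus/pci/devices/$GPU/rom"

        # Sanity check: a 3090 vbios should be on the order of 1 MB,
        # not ~70 KB.
        ls -lh /boot/vbios-3090.rom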
  9. Is it even worth changing the hidden state? Didn't Nvidia remove the KVM checks in their latest drivers?
  10. A bit of a newb here: how is this done? (I am having an issue with Pi-hole not giving permission to setupvars.conf.)
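     (A hedged sketch of the usual first step for that kind of appdata permission error; the path assumes the default Unraid appdata share and Pi-hole container name, so adjust it to your own mapping, and note the file is often spelled setupVars.conf.)

        # Assumed default appdata mapping for the Pi-hole container.
        CONF=/mnt/user/appdata/pihole/setupVars.conf

        # See who owns the file and what its mode is.
        ls -l "$CONF"

        # Blunt fix: make it readable and writable; which UID the
        # container runs as varies by image.
        chmod 644 "$CONF"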
  11. Just the GPU and audio, but I did get it figured out. I ended up deleting the VM and recreating it, and it just started working on its own, so I'm not going to touch it. As far as server stability goes, at 3200 it is rock solid. I'm just curious what will happen when it runs in quad channel; will that make any difference? I won't keep it higher than 3200 now that I know that was the actual issue (i.e., I'm an idiot and didn't even think of it being an issue even though I knew it; old-people stubbornness).
  12. So I reconfigured the BIOS and set the RAM multiplier to 32x (3200 MHz), and Unraid booted up with no issues, except that somehow the network was reconfigured, so I fixed that. I have now run into another issue after re-enabling virtualization in the BIOS. I had passed my 980 Ti through to my VM, but now I receive this error: "internal error: qemu unexpectedly closed the monitor: 2020-04-30T09:11:33.530626Z qemu-system-x86_64: -device vfio-pci,host=0000:4a:00.0,id=hostdev0,bus=pci.5,addr=0x0: vfio 0000:4a:00.0: failed to setup container for group 55: Failed to set iommu for container: Operation not permitted". Is there any way to get it back up on the same vdisk? Oddly enough, nothing else has changed that I know of; System Devices still shows IOMMU group 55 as the 980 Ti and the GM200 HD audio.
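     ("Failed to set iommu for container: Operation not permitted" is the error QEMU tends to throw when the IOMMU is no longer actually enabled, which would fit a BIOS reconfiguration. A quick hedged check from the console:)

        # Confirm the kernel brought the AMD IOMMU up at boot.
        dmesg | grep -i -e iommu -e amd-vi

        # If the IOMMU is active, this directory contains groups;
        # if it is empty, re-check SVM and IOMMU in the BIOS.
        ls /sys/kernel/iommu_groups/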
  13. As a preface, my first post is on another thread, linked here; I just didn't want to hijack the OP's thread. I am having a multitude of kernel panics and "unable to find rootfs" errors, along with the system sometimes not booting at all. Then sometimes it boots fine and stays up great! I suspect this is an ongoing issue, as my Unraid server was unstable on my i7-4770K prior to upgrading anyhow; I figured this would just be a hardware change, and for the most part it did work, until two days later, when the kernel panic crashes started and we were back to having issues. Specs are as follows:
      Gigabyte Aorus Master TRX40
      AMD Threadripper 3970X
      128 GB (currently 96 GB) G.Skill DDR4-3600, 2 x 16 GB kits, F4-3600C16D-32GVKC (I'm at 96 GB while waiting on a replacement two-DIMM set from G.Skill, as one stick was DOA), on the 3600 XMP profile <- testing with 3600 and 3200
      5 x 10 TB IronWolf Pro HDDs (4 in the array, 1 as parity)
      1 x 2 TB Sabrent Rocket 4.0 NVMe
      1 x 1 TB WD SN750 NVMe
      I ran Memtest in case it was a bad RAM stick or DIMM slot. With all six DIMMs installed, the first test produced thousands of errors, so I went one DIMM at a time and ran each stick through two passes with no errors, which was a bit weird. I then added each set back in, two at a time, and ran two passes: no errors. I am currently (Edit: testing) the set of six again, in the same DIMM slots, and will update accordingly. (Edit 2: no errors on all six sticks... what is going on!) Is there any instance where a bad USB drive could cause kernel panics on rootfs, even after multiple formats? And is there any instruction for rebuilding the array on a new USB drive without using the existing config? (I also forgot to state that I am running the system on a Samsung 128 GB BAR Plus (Metal) USB 3.1 flash drive, MUF-128BE3/AM.) Another question: while I was swapping the modules out and testing them one by one, sometimes the USB drive would not be recognized, and the BIOS would show only the NVMe drives, since the array runs off an LSI SAS 9300-8e card to an external drive enclosure. Once I shut the machine down, all I had to do was pull the USB drive and reinsert it into the same USB 2.0 port, and it would be recognized again and start the Unraid boot process. This is highly irregular, and any advice would be appreciated.
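     (On the "new USB without the existing config" question, a hedged sketch: everything Unraid knows about the array lives in the config folder on the flash drive, with super.dat holding the drive assignments, so the usual approach is to back that folder up, write a fresh image to the new stick, and copy back only what you trust. Note the license key is tied to the flash GUID, so a brand-new stick also needs a key transfer.)

        # With the old flash mounted at /boot, keep a dated copy.
        cp -r /boot/config "/mnt/user/backups/flash-config-$(date +%F)"

        # After writing a clean Unraid image to the new stick, copy
        # back selectively, e.g.:
        #   config/super.dat  -> array/parity drive assignments
        #   config/*.key      -> license (needs a GUID transfer for
        #                        a brand-new stick)
        # Omitting super.dat forces a manual reassignment of drives,
        # which is the "without using the existing config" route.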
  14. So I got the system up and running. Here is what I did: I pulled the flash drive, backed it up on my laptop, then wiped it and re-ran the OS install with 6.8.2 instead of the 6.8.3 it was on before. I also moved the flash drive to a USB 2.0 port (new motherboard, so I didn't realize I had placed it in a 3.0 port). It booted up perfectly. I think the issue is related either to the OS itself, as I did try a rebuild before and it was hit or miss on every reboot, or to the USB port, which could very well be it, as it is always wise to run the flash drive on a stable legacy port.
  15. Any progress with this? I have a 3970X on an Aorus Master TRX40 board with a VM and GPU passthrough of a 980 Ti, and it has given me some serious issues. I rebuilt the Unraid profile and copied over the config file. It works for a bit, but now I see some serious stability issues. You would think that after multiple generations there would be some clarity. I can't even pull diagnostics, because the system freezes up and requires a hard reset. Any guidance would be greatly appreciated!
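     (A hedged tip for the "can't pull diagnostics" part: if the console or SSH still responds when the GUI hangs, the diagnostics zip can be generated from the command line, and mirroring the syslog to the flash preserves the crash trail across a hard reset, since /var/log is RAM-backed and lost on reboot.)

        # Unraid's CLI diagnostics; the zip is written to /boot/logs.
        diagnostics

        # Keep a copy of the live syslog on the flash so it survives
        # the reset.
        cp /var/log/syslog "/boot/syslog-$(date +%F-%H%M).txt"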