geekazoid

Members
  • Content Count: 48
  • Joined
  • Last visited

Community Reputation: 2 Neutral

About geekazoid

  • Rank: Newbie


  1. Having this issue with a VM as well. It's an anomaly: Win10, 6 cores. The CPU usage reported is host-side only; inside the VM the CPUs are idle. So this is a hypervisor/VM configuration issue. I'm chasing it down as a libvirt/qemu issue, since I'm not seeing a solution in the Unraid searches. Check this thread out for example: link.
  2. Attached. It's been a while so I will quickly recap:
     GPU C - onboard VGA to 1080@60 (for UNRAID console only)
     GPU 1 - Quadro M4000, DisplayPort 1.2 to 4K@60 (for PCI Passthrough only - workstation 1)
     GPU 2 - GT 1030, HDMI 2.0 to 4K@60 (for PCI Passthrough only - workstation 2)
     In BIOS, the primary graphics is set to the onboard (GPU C). The other option is to set it to the PCIe, and then it will likely go to the first PCI ID, which is GPU 1. I only use the first: onboard. As desired, when the console is on GPU C, I have remote access to it via the integr
  3. Cool. Well, what it changes is this:
     - the IOMMU group numbers change for my passthrough devices, possibly more. The total number of IOMMU groups is diminished. All my passthrough devices are listed, though.
     - my PCIe USB cards are still not showing up as shareable in the VM manager
     - the devices connected to my passthrough USB cards are still connected to my host despite the blacklist
     - this appears in dmesg:
     root@fluffy:~# dmesg | grep vfio
     [ 0.000000] Command line: BOOT_IMAGE=/bzimage vfio-pci.ids=10de:13f1,10de:1d01,1912:0014,1b73:111i intel_iommu=on initrd=/bzroot
     [ 0
  4. Oh, in fact I believe that you meant this. Thus the correct form would be:
     append vfio-pci.ids=10de:13f1,10de:1d01,1912:0014,1b73:111 intel_iommu=on initrd=/bzroot
     Is this right?
  5. Sorry, I think I forgot to enable notify-on-reply. Thanks for this important comment on my syntax. By "provider" I assume that you mean domain? So basically use the notation from dmesg. So if I follow you, this:
     append vfio-pci.ids=01:00 81:00 02:00 03:00 intel_iommu=on initrd=/bzroot
     should be this:
     append vfio-pci.ids=0000:0100 0000:8100 0000:0200 0000:0300 intel_iommu=on initrd=/bzroot
     (See the ID-lookup sketch after this list.)
  6. Did some clean cycles to confirm my condition. Here's what happens when I employ the vfio settings "new method" as above, after a clean boot without it (no errors) and a reconfigure with it after a power cycle:
     [ 98.766638] vfio-pci 0000:01:00.0: BAR 1: can't reserve [mem 0xb0000000-0xbfffffff 64bit pref]
     (the above repeats ~2000 times)
     [ 98.932910] pcieport 0000:00:01.0: PCIe Bus Error: severity=Uncorrected (Non-Fatal), type=Transaction Layer, (Requester ID)
     [ 98.932914] pcieport 0000:00:01.0: device [8086:2f02] error status/mask=00004000/00000000
     [ 98.932918] pcieport 0000:00:0
  7. I should add that you need to power cycle the motherboard between changes to these settings. The PCI root device was throwing errors in dmesg after I added the vfio options, even after rolling them back and rebooting. It must put the device in a state that doesn't change until you power cycle the board. YMMV.
  8. What is the best way to blacklist devices so that the Unraid host's kernel doesn't try to use them? (See the blacklist sketch after this list.) For example, I have two GPUs and two USB cards dedicated to two VMs, not to mention all of the peripherals connected to those USB cards. I don't want Unraid to even bother trying to use these devices when the VMs are not started; connecting those USB devices to the host is problematic. Right now, when I stop a VM, I can see all the passed-through devices start appearing in dmesg. I used append vfio-pci.ids=01:00,81:00,02:00,03:00 but it didn't change this behavior at all. When I add
  9. Sorry for the late reply. This is an old thread. I can recreate this problem and compare if you still have the issue.
  10. Months ago I reported an issue where this plugin stopped working. It would stall while scanning and never refresh the page. If there were no errors, you could never see the plugin page because it would auto-scan (undesirable) and then hang with no feedback. This behavior is gone now. I don't know if you made changes, but assuming you didn't, there are some environmental changes on my end that could have contributed to this. I recently relocated my machine back to a network I manage (HOME) from the network where I was a guest (INLAWS). HOME is the same network my server was in for years before
  11. I'm aware of /boot/previous and that it's available to restore from https://<unraid_server>/Tools/Update; however, it really should be implemented so that you can boot from it in grub. I would work on this myself, but I really have a full plate this summer and just can't take on any other projects. I am not convinced that file corruption is the issue. Both times this has happened, I repaired it with make_bootable.bat. EFI is what was broken.
  12. This happened the last time I upgraded, to 6.7.1, and it happened exactly the same way this time. I stopped the array, made a backup of /boot, ran the Update Assistant, upgraded the OS, and it kernel panics on boot. From what I can tell, the update is breaking EFI. But I'm sure the comments here will tell me that it's normal and not fix it. Going back to 6.6.7.
  13. You know what would be smart? If the update script made a backup of the critical files it's manipulating, and verified its work before reporting SUCCESS! (See the backup sketch after this list.) It's not that hard to at least make sure you don't corrupt files. Why you guys keep telling users to do a job a script can do for them is beyond me. I was surprised when my 6.7.1 update blew up and the comments were "maybe the update failed". Like, you have a script doing this that didn't check file integrity!? Why suddenly is my storage system unable to write files reliably?? Nothing in 2yrs of running Unraid has made me feel less trus
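
A note on the vfio-pci.ids syntax in posts 4, 5, and 8 above: as far as I know, the kernel's vfio-pci.ids parameter takes vendor:device ID pairs (the bracketed numbers that lspci -nn prints), not bus addresses like 01:00 or 0000:01:00. Below is a minimal sketch of how to look the IDs up, assuming a stock syslinux.cfg; the addresses and the vvvv:dddd pairs are placeholders, not values confirmed for this hardware.

    # Look up the vendor:device ID of each device to be passed through
    # (the addresses 01:00.0 and 02:00.0 are examples - substitute your own):
    lspci -nn -s 01:00.0        # prints something like "... NVIDIA ... [10de:13f1]"
    lspci -nn -s 02:00.0        # USB controller, e.g. "[1912:0014]"

    # Put the bracketed pairs, comma-separated, on the append line in
    # /boot/syslinux/syslinux.cfg (vvvv:dddd and wwww:eeee are placeholders):
    #   append vfio-pci.ids=vvvv:dddd,wwww:eeee intel_iommu=on initrd=/bzroot

    # After a reboot (and a full power cycle, per post 7), confirm vfio-pci took the devices:
    dmesg | grep -i vfio
    lspci -nnk -s 01:00.0 | grep "Kernel driver in use"   # should report vfio-pci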
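
On post 8's question about keeping the host off the passthrough hardware, here is a minimal sketch of the approach I'd try. Two caveats: binding by vendor:device ID catches every device with that ID, and blacklisting a shared module such as a USB host-controller driver takes out every controller that uses it. The modprobe.d path is an assumption about the flash layout, not something confirmed above.

    # 1) Bind the devices to vfio-pci at boot so host drivers never claim them
    #    (same append line as in the sketch above):
    #      append vfio-pci.ids=vvvv:dddd,wwww:eeee intel_iommu=on initrd=/bzroot

    # 2) Optionally keep the host graphics driver from loading at all
    #    (path is an assumption - adjust to wherever your modprobe configs live):
    echo "blacklist nouveau" >> /boot/config/modprobe.d/vfio.conf

    # 3) After a reboot, verify the passthrough devices are held by vfio-pci:
    lspci -nnk | grep -B2 "Kernel driver in use: vfio-pci"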
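
On posts 11-13, here is a minimal sketch of the backup-and-verify step being argued for, which can also be run by hand before an upgrade. The backup destination is only an example; the idea is simply to copy the flash, checksum the boot files, and re-read them after the update before rebooting.

    # Back up the whole flash drive before upgrading (destination is an example):
    BACKUP=/mnt/user/backups/flash-$(date +%Y%m%d)
    mkdir -p "$BACKUP"
    rsync -a /boot/ "$BACKUP"/

    # Record checksums of the kernel/initrd the updater will replace:
    ( cd /boot && md5sum bzimage bzroot ) | tee "$BACKUP/pre-update.md5"

    # After the update, re-read the new files so a silent write failure is caught
    # before rebooting into a kernel panic:
    ( cd /boot && md5sum bzimage bzroot ) || echo "WARNING: boot files unreadable - do not reboot"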