• [6.10-RC1] - VFIO-PCI Log error


    gr021857
    • Minor

    Hi,

    I've run into an issue where I'm getting "file does not exist" errors in the VFIO-PCI log when it tries to bind my GPU. I upgraded to RC1 and also moved to the Nvidia plugin so I could use the GPU for Tdarr, and that was all working fine. I've since tried to go back to using the GPU for VMs and it won't pass through. The VFIO log says it can't adjust group ownership because the file or directory doesn't exist, but looking in a terminal I can see the folder and the files (3 files: 28, 29 and vfio).

    Not sure if this is what's causing the VM issue, but it's the first error I can find.

    Loading config from /boot/config/vfio-pci.cfg
    BIND=0000:0a:00.0|10de:1c03 0000:0a:00.1|10de:10f1
    ---
    Processing 0000:0a:00.0 10de:1c03
    Vendor:Device 10de:1c03 found at 0000:0a:00.0
    
    IOMMU group members (sans bridges):
    /sys/bus/pci/devices/0000:0a:00.0/iommu_group/devices/0000:0a:00.0
    
    Binding...
    chown: cannot access '/dev/vfio/28': No such file or directory
    Error: unable to adjust group ownership of /dev/vfio/28
    ---
    Processing 0000:0a:00.1 10de:10f1
    Vendor:Device 10de:10f1 found at 0000:0a:00.1
    
    IOMMU group members (sans bridges):
    /sys/bus/pci/devices/0000:0a:00.1/iommu_group/devices/0000:0a:00.1
    
    Binding...
    chown: cannot access '/dev/vfio/29': No such file or directory
    Error: unable to adjust group ownership of /dev/vfio/29
    ---
    vfio-pci binding complete
    
    Devices listed in /sys/bus/pci/drivers/vfio-pci:
    lrwxrwxrwx 1 root root 0 Nov 3 13:47 0000:0a:00.0 -> ../../../../devices/pci0000:00/0000:00:03.1/0000:0a:00.0
    lrwxrwxrwx 1 root root 0 Nov 3 13:47 0000:0a:00.1 -> ../../../../devices/pci0000:00/0000:00:03.1/0000:0a:00.1
    
    ls -l /dev/vfio/
    ls: cannot access '/dev/vfio/': No such file or directory
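
    For reference, the state the log is complaining about can be cross-checked from a terminal. A quick sketch (the device paths and the group numbers 28/29 are taken from the log above):

    # Confirm both GPU functions are actually bound to vfio-pci
    readlink /sys/bus/pci/devices/0000:0a:00.0/driver
    readlink /sys/bus/pci/devices/0000:0a:00.1/driver

    # Confirm the group character devices exist; on a working system this
    # lists the group nodes (28 and 29 here) plus the vfio control node
    ls -l /dev/vfio/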

     

    Thank you in advance for any help.





    Recommended Comments

    I had this same issue in RC1, but it went away in RC2. Unfortunately it didn't fix the Code 43 error in my specific case of trying to pass a primary GPU through to VMs.


    Hi there,

     

    One issue could be that if you're loading the GPU driver to use the GPU with a Docker container, then you can't also use that GPU with a virtual machine.  GPUs need to have their driver stubbed in order to be used in a VM.  When you use the NVIDIA plugin, it installs the driver for the card which prevents you from using it in a VM.  This is not a bug.
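
    For context, "stubbing" here means binding the card to the vfio-pci driver at boot instead of letting the NVIDIA driver claim it. On Unraid that's what the BIND line in /boot/config/vfio-pci.cfg does (the entry below is taken from the log above; both GPU functions, video and audio, have to be listed):

    # /boot/config/vfio-pci.cfg
    BIND=0000:0a:00.0|10de:1c03 0000:0a:00.1|10de:10f1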

    On 11/4/2021 at 2:28 PM, bigbangus said:

    I had this same issue in RC1, but it went away in RC2. Unfortunately it didn't fix the Code 43 error in my specific case of trying to pass a primary GPU through to VMs.

    Thank you, I upgraded to RC2 and can confirm I'm not getting the VFIO error any more.

     

    On 11/5/2021 at 5:09 PM, jonp said:

    Hi there,

     

    One issue could be that if you're loading the GPU driver to use the GPU with a Docker container, then you can't also use that GPU with a virtual machine.  GPUs need to have their driver stubbed in order to be used in a VM.  When you use the NVIDIA plugin, it installs the driver for the card which prevents you from using it in a VM.  This is not a bug.

     

    Thanks, I've managed to get the VM working now, although from what I've read I didn't expect this to work: I set the primary graphics to VNC and then added my passthrough GPU as a secondary card. It works, and I can VNC into the system too. If I run with the GPU as the primary, the VM starts but never loads into Windows. I don't have time at the moment to dig into what's happening and why it works this way (we've got a little one, so free time is scarce). For now it's working, so like a well-balanced plate I'm not going to touch it again for fear of breaking it, haha.
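
    For anyone trying to reproduce that layout, it corresponds roughly to a VM definition with VNC as the primary display and the GPU attached as a host device. A hypothetical libvirt XML fragment as a sketch (the PCI addresses come from the log above; everything else is illustrative):

    <!-- Primary display: VNC, so there's always a basic console -->
    <graphics type='vnc' port='-1' autoport='yes'/>
    <!-- Passed-through GPU (video function) and its HDMI audio function -->
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source>
        <address domain='0x0000' bus='0x0a' slot='0x00' function='0x0'/>
      </source>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source>
        <address domain='0x0000' bus='0x0a' slot='0x00' function='0x1'/>
      </source>
    </hostdev>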


    You can try adding "video=efifb:off" to the end of the boot command on the flash drive.

     

    This makes it so that when you boot Unraid, it won't ever get past the blue rectangle prompt on your display. Essentially it will never use the primary GPU to display the console, so it won't conflict with your VM when it tries to pass it through (in theory).
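
    On Unraid the boot command lives in /boot/syslinux/syslinux.cfg on the flash drive (it can also be edited from the flash device page in the webGUI). A sketch of what the default boot entry would look like with the flag appended, assuming a stock config:

    label Unraid OS
      menu default
      kernel /bzimage
      append initrd=/bzroot video=efifb:off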

     



    THANK YOU BIGBANGUS!
    Unraid 6.10 full release, Radeon RX 5700 XT & RX 580

    Unraid was initializing my 5700 XT and giving me driver issues!

    video=efifb:off fixed it! tyvm <3




