Unraid not showing NVMe drives after Proxmox passthrough


I have Unraid virtualized on Proxmox. I'm having trouble getting two Western Digital SN770 M.2 drives to show up as drives in Unraid after passing them through to the Unraid VM as PCIe devices. The drives do show up as PCI devices in the Unraid VM; they just aren't being recognized as drives. The output of lspci from the Unraid VM shows them as:

 

02:00.0 Non-Volatile memory controller: Sandisk Corp Device 5017 (rev 01)
03:00.0 Non-Volatile memory controller: Sandisk Corp Device 5017 (rev 01)
 

Is there anything else I need to do to have Unraid recognize them as drives?
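
For what it's worth, nothing shows up on the block device side either; a quick sanity check from the Unraid console (assuming a standard shell) comes back empty for these two:

ls /dev/nvme*    # no NVMe device nodes at all
lsblk            # the SN770s are missing from the block device list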

 

If I run Unraid bare metal, it has no problem picking them up as drives in the GUI. I'm also passing an HBA PCIe card through to Unraid, and the SATA HDDs attached to that card are detected in the Unraid VM.

 

A diagnostic file is attached. My Proxmox hardware settings are below. The last two entries are the M.2 drives I'm passing through.

 

[screenshot: Proxmox VM hardware settings]

tower-diagnostics-20230318-0913.zip

JorgeB replied:

Mar 18 05:47:24 Tower kernel: nvme nvme0: pci function 0000:02:00.0
Mar 18 05:47:24 Tower kernel: nvme nvme1: pci function 0000:03:00.0
Mar 18 05:47:24 Tower kernel: nvme nvme0: Removing after probe failure status: -19
Mar 18 05:47:24 Tower kernel: nvme nvme1: Removing after probe failure status: -19

 

Can only see that both devices are failing to initialize (status -19 is -ENODEV), but if they work bare metal it suggests the problem is Proxmox-related.
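
Those lines are from the syslog in your diagnostics; on a running system you can look for the same thing with something like:

dmesg | grep -i nvme
grep -i nvme /var/log/syslog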

8 hours ago, JorgeB said:
Mar 18 05:47:24 Tower kernel: nvme nvme0: pci function 0000:02:00.0
Mar 18 05:47:24 Tower kernel: nvme nvme1: pci function 0000:03:00.0
Mar 18 05:47:24 Tower kernel: nvme nvme0: Removing after probe failure status: -19
Mar 18 05:47:24 Tower kernel: nvme nvme1: Removing after probe failure status: -19

 

Can only see that both devices are failing to initialize (status -19 is -ENODEV), but if they work bare metal it suggests the problem is Proxmox-related.

 

Thanks for finding that. That gave me a lead on the right keywords to google and I was able to figure it out.

 

The key was to add "pci=nommconf" to the "GRUB_CMDLINE_LINUX_DEFAULT" line in /etc/default/grub on the Proxmox host. There were a few other minor issues to work through with my IOMMU groups after that, but that was the big one.
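
In case it helps anyone who finds this later, the change looks roughly like this on the Proxmox host (the intel_iommu/iommu flags here are just placeholders for a typical Intel passthrough setup; keep whatever your line already has and append pci=nommconf):

# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt pci=nommconf"

Then apply it and reboot:

update-grub
reboot

If your host boots with systemd-boot instead of GRUB (e.g. a UEFI install with root on ZFS), the kernel command line lives in /etc/kernel/cmdline instead, and you apply changes with proxmox-boot-tool refresh rather than update-grub.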

