NVMe drive not visible

Hello,

despite all efforts, I cannot get my NVMe drive to show up in Unraid, so I'd really appreciate any help.

 

Configuration:

  - Unraid v6.8.3
  - Asus Prime X299-A motherboard
  - a couple of HDDs and an SSD, which work perfectly OK
  - Samsung 970 PRO NVMe M.2

The Samsung drive is visible in the BIOS, and I'm booting Windows from it both bare metal and in an Unraid Windows VM (using SpaceInvaderOne's methods).

 

However, the drive is not visible on the Main page, where IMHO it should appear in the "Unassigned devices" section.

 

 

 

bbbserver-diagnostics-20200330-2025.zip


Does it happen to be the one you've stubbed in syslinux so that it's not visible to unRaid?

 

BOOT_IMAGE=/bzimage initrd=/bzroot,/bzroot-gui  vfio-pci.ids=144d:a808
02:00.0 Non-Volatile memory controller [0108]: Samsung Electronics Co Ltd NVMe SSD Controller SM981/PM981/PM983 [144d:a808]
	Subsystem: Samsung Electronics Co Ltd Device [144d:a801]
	Kernel driver in use: vfio-pci
	Kernel modules: nvme

Although it does appear that you've got 2 SSDs with the same ID, so you would probably have to use a different stubbing method (outside of my knowledge), assuming one of them is going to a VM.
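To see every device a by-ID stub will catch, you can count matches for the vendor:device pair. A minimal sketch (on the live server you'd run `lspci -nn -d 144d:a808` directly; here sample lspci output stands in for real hardware):

```shell
# Sample lspci output standing in for real hardware -- on Unraid you would
# pipe the real thing: lspci -nn | grep '144d:a808'
lspci_output="02:00.0 Non-Volatile memory controller [0108]: Samsung NVMe SSD Controller SM981/PM981/PM983 [144d:a808]
06:00.0 Non-Volatile memory controller [0108]: Samsung NVMe SSD Controller SM981/PM981/PM983 [144d:a808]"

# Count how many devices share the stubbed ID; vfio-pci.ids grabs ALL of them.
printf '%s\n' "$lspci_output" | grep -c '144d:a808'   # prints 2 -> both drives get stubbed
```

If the count is more than 1, stubbing by ID cannot separate the drives; you'd need to stub by bus address instead.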

26 minutes ago, dukra said:

The Samsung drive is visible in BIOS, and I'm booting Windows from it both bare metal and in Unraid Windows VM(using Space invader one methods).

You stubbed the device by ID:

Mar 30 19:34:07 BBBServer kernel: Command line: BOOT_IMAGE=/bzimage initrd=/bzroot,/bzroot-gui  vfio-pci.ids=144d:a808

So all devices with the same ID will be unavailable (i.e. invisible) to Unraid: they will not appear anywhere, including Unassigned Devices.

You look to have 2 devices with 144d:a808 (an ID shared among multiple Samsung M.2 SSDs, e.g. 970 Evo, PM983, and apparently now the 970 Pro too).

 

Are you looking to have 1 used by Unraid and 1 used by the Windows VM or something?

 


First of all, many thanks for the prompt support testdasi and Squid!

 

Sorry for some slight misinformation in my first post: there are actually 2 NVMe drives in my rig.

Both of them are visible in BIOS but not in the Unraid dashboard.

The first NVMe drive is listed in IOMMU devices with

 

   IOMMU group 15:[144d:a808] 02:00.0 Non-Volatile memory controller: Samsung Electronics Co Ltd NVMe SSD Controller SM981/PM981/PM983

 

and the 2nd one is 

 

    IOMMU group 18:[144d:a808] 06:00.0 Non-Volatile memory controller: Samsung Electronics Co Ltd NVMe SSD Controller SM981/PM981/PM983

 

The rest of my original description is correct: I'm dual-booting one drive, both from the BIOS directly and from an Unraid VM. This VM has the first NVMe (where Windows resides) checked in "Other PCI devices".

 

What I'd like is to either transfer my OSX image from a normal SSD to the 2nd NVMe drive, or install OSX (Hackintosh way) similarly to the Windows 10.

In addition to that, I'd like to have Docker containers and some other general data stored on the 2nd NVMe. For all this I need to have the drive visible at least in Unassigned devices.

 

Could you please point me in the right direction in terms of stubbing, since I don't have any clue what that is?

Thanks again, guys!

 

 

 

31 minutes ago, dukra said:

What I'd like is to either transfer my OSX image from normal SSD to the 2nd NVMe drive or install OSX(Hackintosh way) similarly to the Windows 10.

In addition to that, I'd like to have Docker containers and some other general data stored in the 2nd NVMe. For all this I need to have the drive visible at least in Unassigned devices.

So first and foremost, you cannot use the 2nd NVMe "similarly to the Windows 10" AND still "have Docker containers and some other general data" on it.

"Similar to the Windows 10" means passing it through as a PCIe device, which means it is exclusively used by the VM, so there's no way for Unraid to use it.

You can, however, have the OSX image as a vdisk, and that's the only way to share the NVMe with Unraid.

 

 

To make the NVMe show up in Unassigned Device, you need to remove vfio-pci.ids=144d:a808 from your syslinux.

Notes:

  • This doesn't automatically nullify your VM's ability to use it as a pass-through PCIe device. It just doesn't show up in the Other PCI Devices section of the VM template, but in the xml it is still being passed through.
    • From my own experience, as long as I don't mount the NVMe in Unraid, I can start the VM fine and it automatically grabs the NVMe used in its config. Of course, once the VM uses it, it will disappear from Unassigned Devices until the server reboots.
  • Be careful interacting with the NVMe, especially the one that you use for the Win10 VM. If you mount it and write stuff to it, there's a chance it will corrupt the data, making your Windows VM unbootable. Reads are usually fine (e.g. dd from /dev).
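The removal itself is a one-token edit of the `append` line. A sketch of the before/after, assuming the file is edited via Main -> Flash -> Syslinux Configuration (the underlying file lives at `/boot/syslinux/syslinux.cfg`):

```shell
# The append line as it stands (matching the diagnostics above), and the
# same line with the vfio-pci.ids stub stripped out.
append_line='append initrd=/bzroot,/bzroot-gui vfio-pci.ids=144d:a808'
cleaned=$(printf '%s\n' "$append_line" | sed 's/ *vfio-pci\.ids=[^ ]*//')
printf '%s\n' "$cleaned"   # append initrd=/bzroot,/bzroot-gui
```

After saving the change, reboot the server for the new command line to take effect.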

 

 

To make a single device appear in Other PCI Devices, install the VFIO-PCI Config plugin from the app store, then go to Settings -> VFIO-PCI.CFG, tick the device you want to appear in Other PCI Devices -> Build VFIO-PCI.CFG, and then reboot Unraid.

  • As mentioned above, any device that appears in Other PCI Devices will NOT appear in Unassigned Devices.
  • If you physically change your PCIe devices in any way (e.g. changing slot, adding, removing, swapping devices), you should disable (untick) all the ticked devices first and rebuild the VFIO-PCI.CFG before making the physical change.
    • This will ensure you don't accidentally stub the wrong device, because changing devices can change the bus number, which VFIO-PCI.CFG uses to stub.
    • If you are familiar enough with your config, you can accurately guess what is going to change and thus skip disabling VFIO-PCI.CFG first, but it's just safer to disable it.
    • "Stub" = making it appear in Other PCI Devices.
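For illustration, the plugin's output is just a small text file on the flash drive that stubs by bus address instead of by ID. To the best of my knowledge the 6.8-era plugin writes something like the following to `/boot/config/vfio-pci.cfg` (written to a temp file here; treat the exact format as an assumption):

```shell
# Hypothetical vfio-pci.cfg content: stub only the first NVMe (02:00.0),
# leaving the second (06:00.0) free to show up in Unassigned Devices.
cfg=$(mktemp)
printf 'BIND=02:00.0\n' > "$cfg"
cat "$cfg"   # BIND=02:00.0
```

Because it binds by bus address rather than vendor:device ID, two identical drives can be told apart, which is exactly the problem with `vfio-pci.ids=144d:a808`.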

 

 

Finally, if you are familiar with the xml, you can actually make a very simple edit in the xml to pass through the other NVMe without needing VFIO-PCI.CFG at all.

I have done that many times, and it's actually a lot simpler than it seems initially, so I recommend spending some time to understand the xml.
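For what it's worth, here's a sketch of the kind of edit meant here, with the bus/slot/function taken from the IOMMU listing earlier in the thread (06:00.0 for the second NVMe). With `managed='yes'`, libvirt detaches the device from the host and rebinds it to vfio on VM start, which is why the separate stubbing step becomes unnecessary:

```shell
# Emit the <hostdev> element you would add inside <devices> of the VM xml.
hostdev=$(cat <<'EOF'
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
  </source>
</hostdev>
EOF
)
printf '%s\n' "$hostdev"
```

Adjust bus/slot/function to match your own `lspci` output before using anything like this; the address here is just the one from this thread.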

 

Edited by testdasi
19 hours ago, testdasi said:

So first and foremost, you cannot use the 2nd NVMe "similarly to the Windows 10" AND still "have Docker containers and some other general data" on it.


Thank you very much, testdasi, that did the trick! 

NVMe disks are now visible in the Dashboard and I can try the scenarios you described!

Much appreciated!

