NVME M.2 Passthrough


Recommended Posts

On 4/30/2019 at 11:11 PM, limetech said:

This patch has been removed in 6.7.0-rc8 and a different workaround is recommended:

https://bugzilla.kernel.org/show_bug.cgi?id=202055#c42

Trying this results in an infinite boot loop for me (booting from a non-SM2263 device). Binding the drive does nothing either (I used the vfio-pci.ids kernel parameter, which I assume works the same way; the drive disappeared from the list of available drives after rebooting but was available in the VM).
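For anyone trying the same thing: on Unraid, binding a device to vfio-pci by ID is typically done by adding vfio-pci.ids to the append line in syslinux.cfg on the flash drive. A minimal sketch (the 8086:f1a8 vendor:device ID below is only an example; confirm yours with lspci -n before using it):

```
label Unraid OS
  kernel /bzimage
  append vfio-pci.ids=8086:f1a8 initrd=/bzroot
```

After a reboot the drive should no longer be claimed by the host's nvme driver and becomes available for passthrough, which matches the behavior described above.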

 

Where can I download rc7 to test whether it works with that? It's a new drive I just bought, so I don't know if it worked before.

 

The drive is Intel 660p 1TB.

 

@EDIT

I returned the drive; no point in hacking it in software to make it work when there are alternatives. A pity someone as big as Intel put out such a dud.

Edited by Krzaku
Link to comment
  • 2 weeks later...

Hello, after reading the thread I am trying to add my NVMe drive to a Windows VM, but it's taking forever to load.

 

    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
      </source>
      <alias name='ua-sm2262'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>
    </hostdev>
    <memballoon model='none'/>
  </devices>
  <seclabel type='dynamic' model='dac' relabel='yes'>
    <label>+0:+100</label>
    <imagelabel>+0:+100</imagelabel>
  </seclabel>
  <qemu:commandline>
    <qemu:arg value='-set'/>
    <qemu:arg value='device.ua-sm2262.x-msix-relocation=bar2'/>
  </qemu:commandline>
</domain>

After reading the fix I tried to implement it in my XML, however I can't seem to get this part of it right:

 

<alias name='ua-sm2262'/>
<address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>

Where should we get this information? I tried with bus='0x02' but I get an error and cannot save the XML. If I leave everything at 0 it populates as in the XML I posted above. I managed to get it to stop throwing an error, so I know I must be going in the right direction.
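In case it helps anyone hitting the same question: the <source> address in the hostdev block is the host-side PCI address of the drive, which you can read from lspci (or Tools > System Devices in Unraid), while the second <address> line is where the device appears inside the guest and is normally best left for libvirt to fill in. A small sketch of how an lspci-style address maps to the libvirt attribute values (the helper name is mine, not part of any libvirt tooling):

```python
# lspci reports addresses as "BB:SS.F" (bus:slot.function, hex).
# For example "03:00.0" is the host address used in the <source>
# element of the hostdev XML posted above.

def lspci_to_libvirt(addr: str, domain: str = "0000") -> dict:
    """Convert an lspci-style 'BB:SS.F' address into the attribute
    values libvirt expects on an <address> element."""
    bus, rest = addr.split(":")
    slot, func = rest.split(".")
    return {
        "domain": f"0x{int(domain, 16):04x}",
        "bus": f"0x{int(bus, 16):02x}",
        "slot": f"0x{int(slot, 16):02x}",
        "function": f"0x{int(func, 16):x}",
    }

print(lspci_to_libvirt("03:00.0"))
# {'domain': '0x0000', 'bus': '0x03', 'slot': '0x00', 'function': '0x0'}
```

So for a drive lspci lists at 03:00.0, the source line becomes bus='0x03' slot='0x00' function='0x0', exactly as in the XML above; the guest-side address only needs to avoid colliding with slots already used by other devices in the VM.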

 

Any of you guys have more info on this?

 

Link to comment

I had to patch the kernel to make it work with Windows.

Unfortunately, the last workaround currently implemented in Unraid did not work for me either.

 

You could try an older patched Unraid kernel.

Edited by Maker
grammar
Link to comment
  • 2 weeks later...
8 hours ago, zinkpro45 said:

Please someone respond to this. Weeks without an update is not acceptable.

It's a Linux kernel problem. People are already trying to help to find workarounds but if it's not fixed in the kernel, there is no guarantee whatsoever that these workarounds would work for everyone. Don't blame the tires when the engine breaks.

 

You can't possibly expect everyone on the forum to drop everything to give you (free) bespoke advice on an on-demand basis. Such expectation is also not acceptable.

 

Best course of action for you is to use the last Unraid version that works.

Link to comment
10 minutes ago, itimpi said:

Have you actually tried 6.7 stable? Once a release goes stable users are expected to stop running rc releases.

Yup. 6.7 straight up doesn't work. The release candidates before the patch was removed work great.

 

On 6/27/2019 at 6:49 AM, testdasi said:

It's a Linux kernel problem. People are already trying to help to find workarounds but if it's not fixed in the kernel, there is no guarantee whatsoever that these workarounds would work for everyone. Don't blame the tires when the engine breaks.

 

You can't possibly expect everyone on the forum to drop everything to give you (free) bespoke advice on an on-demand basis. Such expectation is also not acceptable.

 

Best course of action for you is to use the last Unraid version that works.

You know, honestly, if you pay for a product and something's broken in it, it's not unreasonable to expect communication about why it's broken and about fixes. On top of that, the fact that it was working on the release candidates means they should include the patch by default until Unraid can communicate how to make the new method work properly.

Link to comment
4 minutes ago, zinkpro45 said:

You know, honestly, if you pay for a product and something's broken in it, it's not unreasonable to expect communication about why it's broken and about fixes. On top of that, the fact that it was working on the release candidates means they should include the patch by default until Unraid can communicate how to make the new method work properly.

You know the root cause of this is that your NVMe device is broken, right?  It corrupts its own MSI-X capability on FLR.  Maybe point some of your frustration at the device vendor or buy an NVMe device that just works.

Link to comment
58 minutes ago, aw_ said:

You know the root cause of this is that your NVMe device is broken, right?  It corrupts its own MSI-X capability on FLR.  Maybe point some of your frustration at the device vendor or buy an NVMe device that just works.

Yes, which is why we had a patch that fixed it. Don't remove the patch until you have a proper method of supporting the devices that the patch fixed.


A very large portion of NVMe drives use this controller. There's no reason why Unraid can't re-add the patch and get this working until a suitable non-patch solution can be found.

Link to comment
  • 2 weeks later...

I'm a bit of a newb.  I have an HP EX920 and have run into this bug. Before I dig deep and try to understand and apply this work-around, https://bugzilla.kernel.org/show_bug.cgi?id=202055#c42, can someone confirm that this should work with my NVMe?  I don't want to spend a lot of time/effort and risk breaking my server if it's known to not work.

 

Also I want to say I am very grateful to the Unraid team.  Unraid provides great features, is stable, continues to evolve and support is beyond expectations.  Even if I can't get this NVMe to work I've certainly gotten good value from my license.

 

Thanks in advance.

Link to comment

Thanks. Also interested in passing through my NVMe:

 

1) Is my 760p suffering from the bug? If so, it's not clear from the link what the fix would be.

 

2) I'm not even sure I understand how to set up the VM. I'd like to install a new Windows VM directly on the NVMe. Is the only way to do so by passing the NVMe through manually? I don't mind if Unraid doesn't have access to it. If so, is there an easier way than manual edits to the XML to pass it through?

Link to comment
9 hours ago, steve1977 said:

Thanks. Also interested in passing through my NVMe:

 

1) Is my 760p suffering from the bug? If so, it's not clear from the link what the fix would be.

 

2) I'm not even sure I understand how to set up the VM. I'd like to install a new Windows VM directly on the NVMe. Is the only way to do so by passing the NVMe through manually? I don't mind if Unraid doesn't have access to it. If so, is there an easier way than manual edits to the XML to pass it through?

1) Yes, the 760p suffers from the bug.

2) I have installed Windows 10 directly on the NVMe and it works perfectly. Unfortunately, the only way to make it work with Windows is to patch the kernel (the XML solution does not really work) or to buy another drive.

On 7/12/2019 at 8:28 PM, unraid_user said:

I'm a bit of a newb.  I have an HP EX920 and have run into this bug. Before I dig deep and try to understand and apply this work-around, https://bugzilla.kernel.org/show_bug.cgi?id=202055#c42, can someone confirm that this should work with my NVMe?  I don't want to spend a lot of time/effort and risk breaking my server if it's known to not work.

 

Also I want to say I am very grateful to the Unraid team.  Unraid provides great features, is stable, continues to evolve and support is beyond expectations.  Even if I can't get this NVMe to work I've certainly gotten good value from my license.

 

Thanks in advance.

This workaround works fine with Linux guests, but I was not able to make it work with Windows.

Link to comment
  • 3 weeks later...
  • 4 months later...
35 minutes ago, danktankk said:

So I assume, since no one has replied to this, that the error is gone? Is this now working and I missed how to do it? I am using Unraid 6.8.0 and I get the same error as everyone else.

 

Please tell me there is a way to fix this now.

 

 

NOPE! The wonderful solution is to just buy a compatible NVMe drive.

Link to comment

Glad I have found this. I have just started migrating 3 hosts to Unraid and was about to buy 3 Pro licenses, but 2 of them have Intel SSD 660p NVMe drives and I've just come up against this on the first one.

 

I would say I'll hold off for a fix, but seeing as this thread has been around for over a year now, I guess that's not going to happen. I hear what's been said about it being a fault with the hardware, but the fact that there was a fix and it's been removed is pretty dull.

 

Nick

 

Link to comment
14 hours ago, nickb512 said:

Glad I have found this. I have just started migrating 3 hosts to Unraid and was about to buy 3 Pro licenses, but 2 of them have Intel SSD 660p NVMe drives and I've just come up against this on the first one.

 

I would say I'll hold off for a fix, but seeing as this thread has been around for over a year now, I guess that's not going to happen. I hear what's been said about it being a fault with the hardware, but the fact that there was a fix and it's been removed is pretty dull.

There isn't a "fix". A "fix" for a niche problem that breaks other stuff that works isn't a fix; it's called a "regression". So no, it's not dull to remove it.

If you even bother to read the topic, there are workarounds, albeit with limitations.

If you can't work around the limitation then buy new hardware, I guess. The 660p is dirt cheap for a reason.

Link to comment
There isn't a "fix". A "fix" for a niche problem that breaks other stuff that works isn't a fix; it's called a "regression". So no, it's not dull to remove it.
If you even bother to read the topic, there are workarounds, albeit with limitations.
If you can't work around the limitation then buy new hardware, I guess. The 660p is dirt cheap for a reason.


I thought the issue was that something else broke when the fix was applied. I remember that the last time I looked into it I was still in the return period for my 660p, so I exchanged it for a smaller, known-good Samsung NVMe drive.


Sent from my iPhone using Tapatalk
Link to comment
