  • [6.7.0] - DMAR handling fault


    Ambrotos
    • Solved Minor

    Since upgrading to 6.7.0 a couple days ago, I have started seeing the following message in my system log.

     

    May 16 07:00:02 nas kernel: DMAR: [DMA Read] Request device [03:00.0] fault addr ffabc000 [fault reason 06] PTE Read access is not set

     

Some quick Googling suggests that this is somehow related to IOMMU, though I don't use hardware passthrough for any of my VMs, and I've confirmed that unRAID reports IOMMU as enabled. The error message is similar to one raised during the 6.7 RC cycle. Maybe the patch that was included to fix that earlier issue had an unintended side effect?

     

     

Unlike the issue reported by Duggie264, I am not using any HP240 controllers. Mine are all IT-mode reflashed M1015s or H310s. Also, note that the PCI device it's complaining about is my Intel NVMe drive, which is currently not part of the array and is mounted by UA. Maybe that's related?
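For anyone wanting to double-check which device a DMAR fault points at: the `[03:00.0]` in the log line is a PCI bus address that you can feed straight to lspci. A rough sketch (the sed pattern is just an illustrative way to pull the address out):

```shell
# Pull the PCI address out of a DMAR fault line from the syslog
line='May 16 07:00:02 nas kernel: DMAR: [DMA Read] Request device [03:00.0] fault addr ffabc000 [fault reason 06] PTE Read access is not set'
addr=$(printf '%s\n' "$line" | sed -n 's/.*Request device \[\([0-9a-f:.]*\)\].*/\1/p')
echo "$addr"            # 03:00.0
# lspci -s "$addr"      # identifies the device (my Intel NVMe drive in this case)
```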

     

    Attached are my diagnostics.

     

    Does anyone have any thoughts on this?

     

    Cheers,

     

    -A

     

    P.S. - I should mention that I upgraded direct from 6.6.7. I don't play with RCs on this server, I have a test server for that.

     

    nas-diagnostics-20190516-1634.zip




    Recommended Comments

If you don't need IOMMU you can always disable it in the BIOS; that should get rid of those errors.


    I had considered that, but I did have visions of one day finding some spare time to install a graphics card and build a Steam in-home streaming VM. Ideally I'd like to figure out how to fix this without disabling IOMMU ;)

     

    -A


I had this problem with my NVMe drive. I had previously just disabled IOMMU, but recently set iommu=pt and that also seems to get around it.

    Edited by Taddeusz

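For anyone else landing here: on Unraid, kernel parameters like iommu=pt go on the append line in /boot/syslinux/syslinux.cfg (also editable via Main → Flash → Syslinux Configuration in the webGUI). A sketch of the relevant stanza, assuming an otherwise stock config:

```
label Unraid OS
  menu default
  kernel /bzimage
  append iommu=pt initrd=/bzroot
```

A reboot is needed for the change to take effect; you can confirm the parameter is active with `cat /proc/cmdline` and then watch the syslog for further DMAR lines.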

    @Taddeusz thanks for the tip. I made the change, rebooted, and haven't seen the message recur since yesterday.

     

    Out of curiosity, does anyone have an idea if this is actually a difference between 6.6 and 6.7, or did I just happen to coincidentally notice it after the upgrade and this has been happening for a while now?

     




