Disks missing after upgrading to 6.7.0


    40foot

    I had to revert to 6.6.7. Both of my WDC_WD40EFRX drives were not detected after upgrading; only the Toshiba was found, so only 1 out of 3 disks (the SSD cache was found too). The attached file is from after reverting - for analysis, do I have to upgrade once more and take the diagnostics under 6.7 again? Can this lead to more severe trouble? Or are there any log files somewhere that those upgrade errors were written to? I don't feel good about this, please help.

    durosrv-diagnostics-20190513-0050.zip




    User Feedback

    Recommended Comments



    There are known kernel/driver issues with some but not all Marvell SATA controllers.

    @40foot

    01:00.0 SATA controller [0106]: Marvell Technology Group Ltd. 88SE9230 PCIe SATA 6Gb/s Controller [1b4b:9230] (rev 11)

    @Kevlar75

    02:00.0 SATA controller [0106]: Marvell Technology Group Ltd. 88SE9230 PCIe SATA 6Gb/s Controller [1b4b:9230] (rev 11)

    See this answer under the Unraid 6.7 announcement topic for a possible workaround
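
    If you are not sure whether your own card or onboard ports use one of these controllers, a quick way to check from the Unraid console (or any shell on the server) is to list the SATA controllers with their vendor/device IDs:

    lspci -nn | grep -i sata

    Anything reporting "Marvell Technology Group" (vendor ID 1b4b) is a candidate for this issue.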

     

    Link to comment

    For those with drives dropping off Marvell controllers, please add this kernel parameter to your current syslinux boot selection on the append line, eg, from:

    append initrd=/bzroot

    to

    append initrd=/bzroot scsi_mod.use_blk_mq=1
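
    For reference, on a stock flash drive the relevant file is /boot/syslinux/syslinux.cfg, and the parameter goes on the append line of whichever boot entry you actually use; a default entry looks roughly like this (yours may differ if you have customized it):

    label Unraid OS
      menu default
      kernel /bzimage
      append initrd=/bzroot scsi_mod.use_blk_mq=1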

    Let me know if this makes the problem better or worse, or makes no change. Thanks.

    Link to comment

    This didn't help, but turning off IOMMU in the BIOS did the trick... 6.7 now boots with the array up, both VMs running, and all Dockers running. Looks good so far. Damn, this gave me something to think about...

    Link to comment

    No change for me.

     

    I've disabled IOMMU until I can get another card. I can test if you come up with another idea.

    Link to comment
    8 hours ago, limetech said:

    For those with drives dropping off Marvell controllers, please add this kernel parameter to your current syslinux boot selection on the append line, eg, from:

    
    append initrd=/bzroot

    to

    
    append initrd=/bzroot scsi_mod.use_blk_mq=1

    Let me know if this makes the problem better or worse, or makes no change. Thanks.

    No change for me. I amended my syslinux.cfg file under 6.6.7 and rebooted... all good. I then updated to 6.7 and I still have disks 4 and 5 showing as missing. So no change.

     

    tower-diagnostics-20190514-0226.zip

    Link to comment

    OK, forget about the scsi_mod.use_blk_mq parameter; try this kernel parameter instead:

    For Intel-based:

    iommu=pt

    For AMD-based:

    amd_iommu=pt
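
    Assuming an otherwise stock append line, the edited line would read:

    append initrd=/bzroot iommu=pt

    or, on AMD:

    append initrd=/bzroot amd_iommu=pt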

     

    • Like 1
    • Upvote 1
    Link to comment
    2 hours ago, limetech said:

    OK, forget about the scsi_mod.use_blk_mq parameter; try this kernel parameter instead:

    For Intel-based:

    
    iommu=pt

    For AMD-based:

    
    amd_iommu=pt

     

    You Legend!!!

     

    This solves my "Marvell" problem. 6.7 updates perfectly! For info, I wasn't using any of the Marvell-based ports on my motherboard; however, my PCI card ended up having a Marvell controller on it as well. I've ordered a Dell H310 6Gbps SAS HBA card from eBay, which should solve any future problems.

     

    Thanks so much for your help on this.

     

    Regards,

     

    Kev

    tower-diagnostics-20190514-0612.zip

    Link to comment
    On 5/13/2019 at 1:43 AM, bonienl said:

    There are known kernel/driver issues with some but not all Marvell SATA controllers.

    @40foot

    
    01:00.0 SATA controller [0106]: Marvell Technology Group Ltd. 88SE9230 PCIe SATA 6Gb/s Controller [1b4b:9230] (rev 11)

    @Kevlar75

    
    02:00.0 SATA controller [0106]: Marvell Technology Group Ltd. 88SE9230 PCIe SATA 6Gb/s Controller [1b4b:9230] (rev 11)

    See this answer under the Unraid 6.7 announcement topic for a possible workaround

     

    this one worked for me...

    Link to comment
    2 hours ago, Target-Bravo said:

    I am having the same issue with missing drives on one of my Unraid towers,

    This seems to be working for most.

    Link to comment
    On 5/14/2019 at 6:04 AM, limetech said:

    OK, forget about the scsi_mod.use_blk_mq parameter; try this kernel parameter instead:

    For Intel-based:

    
    iommu=pt

    For AMD-based:

    
    amd_iommu=pt

     

    This worked for me. 

     

    SATA controller: Marvell Technology Group Ltd. 88SE9230 PCIe SATA 6Gb/s Controller (rev 11)

    aka 

    StarTech.com 4 Port PCI Express SATA III 6Gbps RAID Controller Card with Heatsink

     

     

     

    Link to comment
    5 hours ago, calypsoSA said:

    Does using iommu=pt break anything else, i.e. break VMs?

     

    From the Red Hat description:

     

    Quote

    If intel_iommu=on or amd_iommu=on works, you can try replacing them with iommu=pt or amd_iommu=pt. The pt option only enables IOMMU for devices used in passthrough and will provide better host performance. However, the option may not be supported on all hardware. Revert to previous option if the pt option doesn't work for your host.

     

    In general, VMs, and PCI passthrough to VMs in particular, are highly hardware dependent. If the hardware is modern and chosen correctly, this works great, but if the BIOS is not up to date, or the hardware is old or misconfigured, you may end up with far less hair than you started with getting it to work.
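
    If you want to sanity-check the setting after rebooting (a rough sketch, assuming a standard shell on the server), the kernel command line and boot log will show whether the option was picked up:

    cat /proc/cmdline                    # should now include iommu=pt (or amd_iommu=pt)
    dmesg | grep -i -e dmar -e iommu     # boot messages mentioning IOMMU/passthrough mode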

    • Like 1
    Link to comment

    iommu=pt also worked for me!! Holy f_ck, there was some heavy breathing going on when ALL my drives were gone after the upgrade, I don't mind telling you, haha.

     

    Awesome to have it solved now, thanks everyone

    • Like 2
    Link to comment

    This worked for me too. Just so folks are clear and don't have to Google, I did the following:

     

    1) edited /boot/syslinux/syslinux.cfg

    2) added 'iommu=pt' to the line containing 'append initrd=/bzroot'

    3) in the end it looks like:

     

    Quote

    append initrd=/bzroot iommu=pt

     

    Thanks for the help, this is great. I was freaking out that I was going to have to buy a new card and figure it out on a weekend. Everything mostly seems to be starting up and running fine.
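
    For anyone more comfortable doing the same thing from the console, a rough equivalent of the steps above (assuming the flash drive is mounted at /boot, as on a stock Unraid box) is:

    cp /boot/syslinux/syslinux.cfg /boot/syslinux/syslinux.cfg.bak    # keep a backup first
    nano /boot/syslinux/syslinux.cfg                                  # or vi; add iommu=pt to the append line
    reboot
    cat /proc/cmdline                                                 # after the reboot, confirm iommu=pt is present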

    • Thanks 1
    Link to comment

    Is this going to be fixed in a future update? I have never edited a kernel before and I don't really want to experiment on a machine that has all my data on it.

    Link to comment
    8 minutes ago, Target-Bravo said:

    Is this going to be fixed in a future update? I have never edited a kernel before and I don't really want to experiment on a machine that has all my data on it.

    This is not editing a kernel; it is adding a parameter to the Unraid boot options.

    Link to comment
    3 hours ago, itimpi said:

    This is not editing a kernel; it is adding a parameter to the Unraid boot options.

    also, something I have not done before. 

    Link to comment
    2 minutes ago, Target-Bravo said:

    also, something I have not done before. 

    Fair enough - but it is quite trivial to do if you click on the 'flash' device on the Main tab and scroll down to the syslinux section.

    Link to comment

    I had to do

     

    amd_iommu=off

    to get it to work properly. Now booting with all disks available using 6.7.1-rc1.
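
    For reference, with an otherwise stock boot entry that works out to:

    append initrd=/bzroot amd_iommu=off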

     

    01:00.0 SATA controller: Marvell Technology Group Ltd. 88SE9230 PCIe SATA 6Gb/s Controller (rev 11)

     

    Link to comment

     

    On 5/28/2019 at 9:01 AM, Target-Bravo said:

    Is this going to be fixed in a future update? I have never edited a kernel before and I don't really want to experiment on a machine that has all my data on it.

    I am also curious about this, I assume it would be?

    I have an HPE ProLiant MicroServer Gen10; I am not sure which option should be applied to it, and honestly I would much prefer to wait for the next update that doesn't have this problem.

    Link to comment

    I have exactly the same machine, and disabling IOMMU in the BIOS/EFI works. What I haven't tried yet (I hate downtime) is the other option: editing the syslinux config and re-enabling IOMMU in the BIOS.

    Link to comment
    On 5/13/2019 at 10:04 PM, limetech said:

    OK, forget about the scsi_mod.use_blk_mq parameter; try this kernel parameter instead:

    For Intel-based:

    
    iommu=pt

    For AMD-based:

    
    amd_iommu=pt

     

    This worked for me on Intel. Thanks!

    Edited by rpj2
    Link to comment




