• Unraid OS version 6.9.0-beta30 available


    limetech

    Changes vs. 6.9.0-beta29 include:

     

    Added workaround for mpt3sas not recognizing devices with certain LSI chipsets. We created this file:

    /etc/modprobe.d/mpt3sas-workaround.conf

    which contains this line:

    options mpt3sas max_queue_depth=10000

    When the mpt3sas module is loaded at boot, that option will be specified.  If you added "mpt3sas.max_queue_depth=10000" to the syslinux kernel append line, you can remove it.  Likewise, if you manually load the module via the 'go' file, you can remove that as well.  When/if the mpt3sas maintainer fixes the core issue in the driver, we'll get rid of this workaround.
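
    If you want to confirm the option actually took effect after a reboot, a quick check along these lines should work (a minimal sketch, assuming the module is loaded):

    # show the modprobe option Unraid now ships
    cat /etc/modprobe.d/mpt3sas-workaround.conf

    # confirm the running module picked it up (should print 10000)
    cat /sys/module/mpt3sas/parameters/max_queue_depth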

     

    Reverted libvirt to v6.5.0 in order to restore storage device passthrough to VM's.

     

    A handful of other bug fixes, including 'unblacklisting' the ast driver (Aspeed GPU driver).  For those using that on-board graphics chip (primarily on Supermicro boards), this should increase the speed and resolution of the local console webGUI.

     


     

    Version 6.9.0-beta30 2020-10-05 (vs -beta29)

    Base distro:

    • libvirt: version 6.5.0 [revert from version 6.6.0]
    • php: version 7.4.11 (CVE-2020-7070, CVE-2020-7069)

    Linux kernel:

    • version 5.8.13
    • ast: removed blacklisting from /etc/modprobe.d
    • mpt3sas: added /etc/modprobe.d/mpt3sas-workaround.conf to set "max_queue_depth=10000"

    Management:

    • at: suppress session open/close syslog messages
    • emhttpd: correct 'Erase' logic for unRAID array devices
    • emhttpd: wipefs encrypted device removed from multi-device pool
    • emhttpd: yet another btrfs 'free/used' calculation method
    • webGUI: Update statuscheck
    • webGUI: Fix dockerupdate.php warnings

     




    User Feedback

    Recommended Comments



    Looks like beta30 doesn't detect my SATA controller anymore. I had to revert to beta29 in order for the device to be detected again. I'm not sure what driver the card uses. 

     

    Link to comment
    2 minutes ago, ultralaser24 said:

    I'm not sure what driver the card uses. 

    That's a JMB585; it uses the standard AHCI driver. I have multiple of those controllers working with beta30:

    [197b:0585] 04:00.0 SATA controller: JMicron Technology Corp. Device 0585
    [14:0:0:0]   disk    ATA      TOSHIBA HDWD260  0A    /dev/sdp   6.00TB
    [15:0:0:0]   disk    ATA      TOSHIBA HDWD260  0A    /dev/sds   6.00TB
    [16:0:0:0]   disk    ATA      TOSHIBA HDWD260  0A    /dev/sdv   6.00TB
    [17:0:0:0]   disk    ATA      TOSHIBA HDWD260  0A    /dev/sdy   6.00TB
    [18:0:0:0]   disk    ATA      TOSHIBA HDWD260  0A    /dev/sdz   6.00TB

    Full diags might give more clues.
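
    For anyone who isn't sure what driver a controller uses, lspci can show the kernel driver bound to it (a quick sketch; 04:00.0 is just this card's slot, substitute your own):

    lspci -nnk -s 04:00.0
    # the "Kernel driver in use:" line should show ahci for a JMB585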

    Link to comment

    Weird, maybe I'll install beta30 again and see if it shows up in System Devices. Honestly, I only noticed because my VMs are on that cache pool, and it disappeared after installing beta30, leading the VM manager to show that I didn't have any VMs configured. 

    Link to comment

    Coming from beta25, beta30 keeps throwing one of my drives into "Unsupported partition layout." Reverting to beta25 fixes it. Drive details attached, not sure if you would like anything else. It happened on both attempts. I am using the linuxserver.io version of beta30.


    Link to comment
    2 minutes ago, wickedathletes said:

    not sure if you would like anything else

    Please post the complete diagnostics after you get the partition error.

    Link to comment
    1 minute ago, JorgeB said:

    Please post the complete diagnostics after you get the partition error.

    I am kind of scared to do it again, for a 3rd time haha. Before I do: if the drive goes into that state, is that a state that parity can get it out of? I just don't want to lose 8TB of data because a drive is failing but not throwing a "failure" per se.

    Link to comment
    4 minutes ago, wickedathletes said:

    I just don't want to lose 8TB of data because a drive is failing but not throwing a "failure" per se.

    Very unlikely to be a drive problem.

    Link to comment
    39 minutes ago, wickedathletes said:

    I am kind of scared to do it again, for a 3rd time haha. Before I do: if the drive goes into that state, is that a state that parity can get it out of? I just don't want to lose 8TB of data because a drive is failing but not throwing a "failure" per se.

    Just don't click "Format"

    Link to comment
    2 hours ago, JorgeB said:

    That's a JMB585; it uses the standard AHCI driver. I have multiple of those controllers working with beta30:

    
    [197b:0585] 04:00.0 SATA controller: JMicron Technology Corp. Device 0585
    [14:0:0:0]   disk    ATA      TOSHIBA HDWD260  0A    /dev/sdp   6.00TB
    [15:0:0:0]   disk    ATA      TOSHIBA HDWD260  0A    /dev/sds   6.00TB
    [16:0:0:0]   disk    ATA      TOSHIBA HDWD260  0A    /dev/sdv   6.00TB
    [17:0:0:0]   disk    ATA      TOSHIBA HDWD260  0A    /dev/sdy   6.00TB
    [18:0:0:0]   disk    ATA      TOSHIBA HDWD260  0A    /dev/sdz   6.00TB

    Full diags might give more clues.

    Well, it appears to have been a fluke: I reinstalled beta30 and both my drives showed up after the reboot, as well as my cache pool with the VMs. Maybe I just needed to reboot 3 times....

    Link to comment

    ** Update: I tested this on a few of the past beta releases; the problem outlined below began in 6.9.0-beta29 and remains a problem in 6.9.0-beta30. On 6.9.0-beta25 I got the expected read speeds of around 150 MB/s over SMB from the drives on my H310 LSI card.

     

    This is my first post on this forum so please let me know if there is a separate place to report these bugs.

     

    I'm running a Broadcom / LSI SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon] (rev 03). The card does show up; however, it cuts my read speeds over Samba nearly in half, and it also spikes my CPU usage way up compared to 6.8.3.

     

    Over my 10Gb connection I get the full read speed of my 5200 rpm drives, around 130-150 MB/s, on 6.8.3. I'm now getting about 75 MB/s on average on the 6.9 beta. I imagine this is due to these driver problems?

     

    Funnily enough, if I run a DiskSpeed test I get my expected hard drive speeds...

     

    Does anyone have any suggestions to fix the Samba speeds? Or just let me know if this is something that needs to be worked out in the driver.

     

    I can post logs if needed. I am going to stay on the betas for now since I just bought a 10th-gen i3 and need the iGPU support in 6.9.
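
    If anyone wants to reproduce the numbers outside of a Windows file copy, smbclient from any Linux box reports the transfer rate directly (a rough sketch; the share, user and file names are placeholders):

    smbclient //tower/yourshare -U youruser -c 'get some_large_file /dev/null'
    # the "average ... KiloBytes/sec" figure at the end is the effective SMB read rate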

    Edited by trypowercycle
    Link to comment

    I've just been bitten again by the GPU bug whereby I have to shut down (not restart) the whole host to get it to work.  Logs have been submitted before.  Basically, the VM does start and run perfectly, but the screen stays in text mode.  Shutting the host down seems to get it back into gear.  Latest beta30.

     

    There is of course a possibility of faulty hardware occurring at the same time as the beta upgrade.  That's a hard one to test.

     

    Scratch that, that doesn't work either.  Downgrading: FYI, I can't downgrade to stable; even though it's an option, it just reverts back to beta25 all the time.

    OK, downgrading back to beta25 got it to work.  This might in fact be downgrading the machine version.  Haven't checked.

    Edited by Marshalleq
    Link to comment
    13 hours ago, ultralaser24 said:

    Maybe I just needed to reboot 3 times....

    That's strange, make sure you grab the diags if it happens again.

     

    1 hour ago, trypowercycle said:

    Over my 10Gb connection I get the full read speed of my 5200 rpm drives, around 130-150 MB/s, on 6.8.3. I'm now getting about 75 MB/s on average on the 6.9 beta. I imagine this is due to these driver problems?

    The same HBA is performing normally for me; if there's an issue I would suspect some kernel/Samba change first. Do you have devices connected to other controllers you could test?

    Link to comment
    6 minutes ago, JorgeB said:

    That's strange, make sure you grab the diags if it happens again.

     

    The same HBA is performing normally for me; if there's an issue I would suspect some kernel/Samba change first. Do you have devices connected to other controllers you could test?

    I have unassigned devices connected to the onboard SATA and my NVMe cache drive, both of which perform normally when I read from them over Samba, in terms of CPU usage as well as speeds. It seems to be only devices attached to the HBA... 

    Edited by trypowercycle
    Link to comment
    26 minutes ago, trypowercycle said:

    I have unassigned devices

    Those won't be using shfs. Try enabling disk shares, then read directly from a disk share, e.g. \\tower\disk1
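
    To also rule out the disks themselves, it's worth comparing against a direct, uncached local read on the server (a minimal sketch; the file path is just an example):

    dd if=/mnt/disk1/some_large_file of=/dev/null bs=1M count=4096 iflag=direct
    # if this is fast but the SMB read is slow, the bottleneck is likely Samba/shfs rather than the HBA or disks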

    Link to comment
    7 hours ago, JorgeB said:

    Those won't be using shfs. Try enabling disk shares, then read directly from a disk share, e.g. \\tower\disk1

    I figured out the problem.

     

    I compared the smb.conf between beta25 and beta30 and the only difference was that the lines below were missing. In beta25 there is even a comment in the file about how they are probably not needed anymore. After I put the lines back in on beta30, file transfers worked as expected on my array on the LSI card. It seems they are still needed for some reason and should probably be added back in.

     

            aio read size = 0
            aio write size = 4096
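
    One way to confirm which aio values Samba is actually running with is testparm, which prints the effective configuration (a minimal sketch, run on the server):

    testparm -sv 2>/dev/null | grep -i 'aio read size\|aio write size'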

     

    Edited by trypowercycle
    Link to comment
    6 minutes ago, JorgeB said:

    I remember this also caused an issue with btrfs failing to give an i/o error if data corruption was found; I still have those entries in my Samba extra settings to disable aio.

     

     

    Nice! That makes a ton of sense why yours was working then on the same hardware.

     

    Do you know if it is preferable to make these changes in the smb.conf file vs. the Samba extra settings? I imagine the smb.conf could possibly be overwritten when updates come out, whereas the Samba extra settings would persist?

    Link to comment
    16 minutes ago, trypowercycle said:

    I imagine the smb.conf could possibly be overwritten when updates come out, whereas the Samba extra settings would persist?

    I believe so.

    Link to comment
    28 minutes ago, trypowercycle said:

    Nice! That makes a ton of sense why yours was working then on the same hardware.

    Do you know if it is preferable to make these changes in the smb.conf file vs. the Samba extra settings? I imagine the smb.conf could possibly be overwritten when updates come out, whereas the Samba extra settings would persist?

    Yes, put user-defined settings/overrides in config/smb-extra.conf.
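
    For example, an smb-extra.conf along these lines would carry the override across updates (a sketch; it assumes the file is pulled into smb.conf's [global] section, so no section header is shown):

    # /boot/config/smb-extra.conf
    aio read size = 0
    aio write size = 4096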

     

    There were some changes made in Samba aio, I think starting with Samba v4.12.  In reviewing this, the 'man smb.conf' page changed (pasted below), where you can see the Samba defaults are now to enable aio for both read and write.  Further, the value '4096' doesn't matter: 0 means disabled and any non-zero value means enabled.  Interestingly, your card seems to perform better with read aio off.

     

     

           aio max threads (G)
    
               The integer parameter specifies the maximum number of threads each smbd process will create when doing parallel asynchronous IO calls. If the number of outstanding calls is
               greater than this number the requests will not be refused but go onto a queue and will be scheduled in turn as outstanding requests complete.
    
               Related command: aio read size
    
               Related command: aio write size
    
               Default: aio max threads = 100
    
           aio read size (S)
    
               If this integer parameter is set to a non-zero value, Samba will read from files asynchronously when the request size is bigger than this value. Note that it happens only for
               non-chained and non-chaining reads and when not using write cache.
    
               The only reasonable values for this parameter are 0 (no async I/O) and 1 (always do async I/O).
    
               Related command: write cache size
    
               Related command: aio write size
    
               Default: aio read size = 1
    
               Example: aio read size = 0 # Always do reads synchronously
    
           aio write behind (S)
    
               If Samba has been built with asynchronous I/O support, Samba will not wait until write requests are finished before returning the result to the client for files listed in
               this parameter. Instead, Samba will immediately return that the write request has been finished successfully, no matter if the operation will succeed or not. This might speed
               up clients without aio support, but is really dangerous, because data could be lost and files could be damaged.
    
               The syntax is identical to the veto files parameter.
    
               Default: aio write behind =
    
               Example: aio write behind = /*.tmp/
    
           aio write size (S)
    
               If this integer parameter is set to a non-zero value, Samba will write to files asynchronously when the request size is bigger than this value. Note that it happens only for
               non-chained and non-chaining reads and when not using write cache.
    
               The only reasonable values for this parameter are 0 (no async I/O) and 1 (always do async I/O).
    
               Compared to aio read size this parameter has a smaller effect, most writes should end up in the file system cache. Writes that require space allocation might benefit most
               from going asynchronous.
    
               Related command: write cache size
    
               Related command: aio read size
    
               Default: aio write size = 1
    
               Example: aio write size = 0 # Always do writes synchronously
    
    

     

    Link to comment
    6 minutes ago, limetech said:

    Do you know if anyone has done any testing with aio write enabled/disabled?

    On one of my main servers I have both aio read and write set to 0, on another just read. I never noticed any issue with writes, but I didn't do much testing; my original issue was btrfs ignoring checksum errors on reads, and performance was similar when I tested at the time.

    Link to comment

    How'd you introduce a checksum error?  In the topic from a couple of years ago I think you said it did report an error if the corruption was big enough?  Maybe the problem you were seeing had to do with caching, where somehow it was still returning cached "correct" data instead of fetching your corrupted data.

    Link to comment
    3 minutes ago, limetech said:

    How'd you introduce a checksum error?  In the topic from a couple of years ago I think you said it did report an error if the corruption was big enough?  Maybe the problem you were seeing had to do with caching, where somehow it was still returning cached "correct" data instead of fetching your corrupted data.

    IIRC I used dd to write some zeros over existing data. The larger the write (corruption), the more likely it would cause an i/o error during an SMB transfer; a single corrupt sector would always (or almost always) go through.
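
    For anyone wanting to reproduce that kind of test, a rough sketch of the idea (destructive, only on a scratch btrfs device; the device name, paths and offset below are made up for illustration):

    # create a test file and flush it to disk
    dd if=/dev/urandom of=/mnt/scratch/testfile bs=1M count=64
    sync

    # overwrite a chunk of the underlying device with zeros (corrupts whatever happens to live there)
    dd if=/dev/zero of=/dev/sdX bs=512 seek=2000000 count=2048

    # drop the page cache so the read actually hits the disk, then read the file back;
    # if the corrupted region overlaps the file, btrfs should return an i/o error
    echo 3 > /proc/sys/vm/drop_caches
    dd if=/mnt/scratch/testfile of=/dev/null bs=1M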

    Link to comment



