• Unraid OS version 6.9.0-rc2 available


    limetech

    As always, prior to updating, create a backup of your USB flash device:  "Main/Flash/Flash Device Settings" - click "Flash Backup".

     

    Hopefully spin-up/down is sorted:

    • External code (docker containers) using 'smartctl -n standby' should work ok with SATA drives.  This will remain problematic for SAS until/unless smartmontools v7.2 is released with support for '-n standby' with SAS.
    • SMART is unconditionally enabled on devices upon boot.  This solves a problem where some newly installed devices may not have SMART enabled.
    • Unassigned devices will get spun-down according to 'Settings/Disk Settings/Default spin down delay'.
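    For container authors, the exit-status behavior of 'smartctl -n standby' can be sketched as follows (a minimal shell sketch; /dev/sdX is a placeholder device, and the helper name is made up for illustration):

```shell
# Sketch of polling SMART without waking a sleeping SATA drive.
# Requires smartctl (smartmontools); /dev/sdX below is a placeholder.
# Under '-n standby', smartctl exits with status 2 (the default) when the
# drive is in standby, instead of spinning it up to read attributes.
interpret_smartctl_status() {
  case "$1" in
    0) echo "active: attributes read" ;;
    2) echo "standby: check skipped, drive left asleep" ;;
    *) echo "error: exit status $1" ;;
  esac
}

# Live usage:  smartctl -n standby -A /dev/sdX; interpret_smartctl_status $?
interpret_smartctl_status 2
```

    A container that polls like this should leave spun-down SATA drives asleep on 6.9.0-rc2; as noted above, SAS drives still spin up until smartmontools gains '-n standby' support for SAS.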

     

    Updated to 5.10 Linux kernel (5.10.1).

     

    Updated docker.

     

    Fixed bug joining AD domains.

     


     

    Version 6.9.0-rc2 2020-12-18 (vs -rc1)

    Base distro:

    • bind: version 9.16.8
    • docker: version 19.03.14
    • krb5: version 1.18.2

    Linux kernel:

    • version 5.10.1

    Management:

    • emhttpd: fix external 'smartctl -n standby' causing device spinup
    • emhttpd: enable SMART on devices upon startup
    • emhttpd: unassigned devices spin-down according to global default
    • emhttpd: restore 'poll_attributes' event callout
    • smb: fixed AD join issue
    • webgui: do not try to display SMART info that causes spin-up for devices that are spun-down
    • webgui: avoid php syntax error if autov() source file does not exist



    User Feedback

    Recommended Comments



    1 hour ago, noties said:

    Feb 19 15:13:06 teraserver kernel: general protection fault, probably for non-canonical address 0x315a4c61f4cef4d8: 0000 [#1] SMP PTI

    Have you done memtest?

     


     


    Hi

     

    Unsure if this is a bug.

     

    I have created a pool which I have an iSCSI connection to.

     

    The FileIO file totals 8TB.

     

    [screenshot]

     

    But in the Main section it shows 17TB used.

     

    [screenshot]

     

    How do I reclaim the unused space, if that's what it is doing?

    Link to comment
    Share on other sites
    2 hours ago, Joedy said:

    Unsure if this is a bug.

    Known BTRFS bug in how it reports used/free space for raid1 pools with an uneven number of drives.

    6 hours ago, tjb_altf4 said:

    Known BTRFS bug in how it reports used/free space for raid1 pools with an uneven number of drives.

    It is a btrfs bug, but it affects pools with an odd number of devices.
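    The raid1 accounting behind this can be sanity-checked by hand (a rough sketch; the 8TB device sizes are examples, not taken from the pool above):

```shell
# btrfs raid1 keeps two copies of every extent, so usable capacity is
# roughly half the raw total regardless of how many devices are in the
# pool. Example sizes in TB (not the actual pool above):
raw_total=$((8 + 8 + 8))    # three-device pool, raw capacity in TB
usable=$((raw_total / 2))   # two copies of everything
echo "raw=${raw_total}TB usable=${usable}TB"
# On a live pool, 'btrfs filesystem usage /mnt/poolname' reports space
# per allocation profile and is more trustworthy than the generic
# used/free figure while this reporting bug is present.
```

    'btrfs filesystem usage' ships with btrfs-progs and can be run from the Unraid console.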

    18 hours ago, Darksurf said:

    I could be wrong, but IIRC Unraid is based on Slackware. The delayed release of 6.9 may have something to do with this magical announcement: https://www.itsfoss.net/slackware-linux-15-0-alpha1-release/

    This is pure speculation on my part. I'm super happy and excited to see my first Linux distro is still alive and well! Thank you Slackware for being such an amazing distro over the years! 

    I doubt they'd consider putting Slackware 15.0 into the Unraid OS until it's released as stable; it's not even in the beta phase yet.

    On 2/20/2021 at 9:54 AM, trurl said:

    Have you done memtest?

     

    Yes, ran a memtest on each of the RAM sticks individually.  I was having BTRFS issues with my previous RAM and that sparked me to test the memory.  I've not seen BTRFS errors since swapping out the RAM and the RAM passed memtest.

     

    I changed a few things all at once and this seems to be biting me.  I swapped motherboards, RAM, and upgraded to 6.9rc2.  I had zero problems with my previous mobo, RAM and 6.8.3.

    4 hours ago, noties said:

    Yes, ran a memtest on each of the RAM sticks individually.  I was having BTRFS issues with my previous RAM and that sparked me to test the memory.  I've not seen BTRFS errors since swapping out the RAM and the RAM passed memtest.

     

    I changed a few things all at once and this seems to be biting me.  I swapped motherboards, RAM, and upgraded to 6.9rc2.  I had zero problems with my previous mobo, RAM and 6.8.3.

    Note that the version of memtest built into Unraid is a much older version that lacks the newer testing capabilities of the current release on memtest's website. This is no fault of Unraid; the people behind memtest won't allow anything newer to be installed by third parties.

    On 2/24/2021 at 1:01 AM, jbartlett said:

    I doubt they'd consider putting Slackware 15.0 into the Unraid OS until it's released as stable; it's not even in the beta phase yet.

     

    We're actually on Slackware-current already, so we've been on Slackware 15.0 pre-alpha for a while now.

    ### Version 6.8.0 2019-12-10
    
    Base distro:
    
    - aaa_elflibs: version 15.0 build 16

     


    I took RC2 for a test run and all was well until the mpt2sas driver had an issue with my Seagate Ironwolf 8TB set up as parity (ST8000VN004-2M2101).

     

    No issues for weeks under 6.8.3.  I didn't manage to capture the logs (sorry!) for a myriad of reasons.  Looks like the driver <-> drive didn't like something and the drive dropped out, leading to read errors.

     

    I didn't see any errors under my other drives, all WD 40E* and 80E* units.

     

    Not sure if there is a quirk here with the drive.  A quick SMART test shows fine, so I'm blaming the driver in 5.10.1.

    I'm rebuilding the parity now under 6.8.3 with the 4.19 kernel to give a solid test of things.

     

    FYI.

     

    TY!

     

    Kev.

     

    34 minutes ago, TDD said:

    I didn't manage to capture the logs

     

    Without diagnostics to prove otherwise I'd assume it's a cable problem - either data or power to the drive.
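    For anyone hitting the same drop-out, a minimal way to preserve logs before a reboot (paths are Unraid defaults; 'save_syslog' is a made-up helper name, and the built-in 'diagnostics' console command captures a fuller zip):

```shell
# Copy the syslog to persistent storage so kernel/driver errors survive
# a reboot. /var/log/syslog and /boot/logs are Unraid defaults; the
# 'save_syslog' helper name is invented for this sketch.
save_syslog() {
  src="$1"
  dest_dir="${2:-/boot/logs}"
  mkdir -p "$dest_dir"
  dest="$dest_dir/syslog-$(date +%Y%m%d-%H%M%S).txt"
  cp "$src" "$dest" && echo "$dest"
}

# Live usage:  save_syslog /var/log/syslog
```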

     

    On 2/24/2021 at 11:38 AM, jbartlett said:

    Note that the version of memtest built into Unraid is a much older version that lacks the newer testing capabilities of the current release on memtest's website. This is no fault of Unraid; the people behind memtest won't allow anything newer to be installed by third parties.

    I ran a full memtest with the newest version of memtest (9.0) and all memory passed testing.  I'm now thinking this is related to my docker networking, user-defined networks, and the specific ethernet hardware I have.  I think it is related to the following:

     

     

    I have since moved my containers off of the Unraid IP physical port and moved them to another physical port.  I have not had crashes or errors in the last 24 hours, but will report back after a longer period of time.  I believe this to be my issue.

     

    1 hour ago, TDD said:

     

     


    Impossible.  Nothing internally was touched.  Only change was software.

     

    I will add that the rebuild now (at 10%) under 6.8.3, still untouched, is maxing the said drive and array across 14 disks and has given zero log/drive errors.

     

    The logs were pretty clear about the connection being reset and the drive dropping - I have seen enough of them when it comes to sketchy hardware to pick out SATA errors over a link.  Clearly a driver/kernel issue - which may be troubling to deal with as the software as a whole marches ahead.  I will give this another go once we enter release of 6.9 and perhaps a shift to kernel 5.11?

     

     

     

     





