• [6.9.2] Parity Drive will not spin down via GUI or Schedule


    SimonF
    • Minor

    Since upgrading to 6.9.2 I cannot spin down the parity drive via the GUI; as soon as I spin it down it comes back up.

     

    Apr 8 10:18:45 Tower emhttpd: spinning down /dev/sdf
    Apr 8 10:18:58 Tower SAS Assist v0.85: Spinning down device /dev/sdf
    Apr 8 10:18:58 Tower emhttpd: read SMART /dev/sdf

     

    Reverting to 6.9.1, the issue no longer happens.

     

    I can manually spin down the drive. All other array drives which are also SAS spin down fine.

    root@Tower:~# sg_start -rp 3 /dev/sdf
    root@Tower:~# sdparm -C sense /dev/sdf
        /dev/sdf: SEAGATE   ST4000NM0023      XMGJ
    Additional sense: Standby condition activated by command

     

     

    Also, is it possible to get an updated version of smartctl added?

     

    Will continue to do more testing.

     




    User Feedback

    Recommended Comments



    @limetech is it possible to revert/disable these changes so we can look to see if it's kernel/driver specific?

     

    emhttpd: detect out-of-band device spin-up

     

    I have reverted to 6.9.1 for now.

     

    For info, I have replaced doron's smartctl wrapper with r5215 of smartctl and it's working fine in 6.9.1 and 6.9.2 for both SAS and SATA. Could it be updated for 6.10 or the next 6.9 release?

     

    root@Tower:/usr/sbin# ls smart*
    smartctl*  smartctl.doron*  smartctl.real*  smartd*
    root@Tower:/usr/sbin# smartctl
    smartctl 7.3 2021-04-07 r5215 [x86_64-linux-5.10.21-Unraid] (CircleCI)
    Copyright (C) 2002-21, Bruce Allen, Christian Franke, www.smartmontools.org
    
    ERROR: smartctl requires a device name as the final command-line argument.
    
    
    Use smartctl -h to get a usage summary
    
    root@Tower:/usr/sbin# smartctl -n standby /dev/sde
    smartctl 7.3 2021-04-07 r5215 [x86_64-linux-5.10.21-Unraid] (CircleCI)
    Copyright (C) 2002-21, Bruce Allen, Christian Franke, www.smartmontools.org
    
    Device is in ACTIVE or IDLE mode
    root@Tower:/usr/sbin# smartctl -n standby /dev/sdf
    smartctl 7.3 2021-04-07 r5215 [x86_64-linux-5.10.21-Unraid] (CircleCI)
    Copyright (C) 2002-21, Bruce Allen, Christian Franke, www.smartmontools.org
    
    Device is in STANDBY BY COMMAND mode, exit(2)
    root@Tower:/usr/sbin# 
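    As an aside, the exit(2) above makes the state easy to script against: smartctl -n standby reports the power mode in both its message text and its exit status. A minimal Python sketch (the classify_power_mode helper is my own name; the message strings are taken verbatim from the output above):

    ```python
    def classify_power_mode(exit_code: int, output: str) -> str:
        """Classify a drive's power state from `smartctl -n standby`.

        smartctl exits with status 2 when the power-mode check stops the
        query (the "exit(2)" seen above); the message substrings below
        are taken from that same output. Anything else is "unknown".
        """
        if exit_code == 2 or "STANDBY" in output:
            return "standby"
        if "ACTIVE or IDLE" in output:
            return "active"
        return "unknown"
    ```

    A cron job could call this per device and log only transitions, rather than waking the drive for a full SMART read.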

     

    Edited by SimonF
    • Like 1
    Link to comment

    At the behest of SimonF - here is an issue that I've found (and only corrected by reverting back to 6.9.1)

    I recently got my first HBA (LSI 9305-16i) with SAS-to-4xSATA cables to be able to add more SATA drives to my rig...the drives are a mixture of:
     

    • ST8000VN004
    • ST4000VN008
    • WDC_WD60EFAX

     

    I did the hardware change AND the update to 6.9.2 at the same time, and noticed that none of my drives would spin down automatically (which they had done before), so I thought it was due to the change in hardware. I could force the drives to spin down, but they would not do so automatically after the set '15 minute' timeout ('Enable Spinup Groups' was also set to 'No'). Looking at the logs, I never see spindown requests.

     

    Reverted back to 6.9.1 and it automagically worked with no additional changes.

     

    HTH and looking forward to 6.9.3?

     

    -dave

    Link to comment

    @limetech Are you able to advise on the out-of-band checking in 6.9.2?

     

    Regarding the changelog entry "emhttpd: detect out-of-band device spin-up": how is this being done? It may also be a kernel or driver issue, since not all devices are affected.
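    For what it's worth, one generic way such out-of-band detection could work (this is only a guess at the technique, not how emhttpd actually implements it; the SpinStateMonitor name and state strings are made up) is to remember the last commanded state per device and flag any observation that contradicts it:

    ```python
    class SpinStateMonitor:
        """Track commanded vs. observed spin state per device and flag
        out-of-band spin-ups (device observed active while the daemon
        last commanded standby).

        Purely an illustrative sketch of the general technique; not
        emhttpd's actual implementation.
        """

        def __init__(self) -> None:
            # device path -> "active" | "standby"
            self.commanded: dict[str, str] = {}

        def command(self, dev: str, state: str) -> None:
            """Record the state the daemon just commanded for `dev`."""
            self.commanded[dev] = state

        def observe(self, dev: str, state: str) -> bool:
            """Return True if this observation is an out-of-band spin-up."""
            return self.commanded.get(dev) == "standby" and state == "active"
    ```

    If the observer itself queries the drive with a command that wakes it (e.g. a full SMART read rather than a standby-safe check), it would cause exactly the spin-up loop described in this thread, which is why the polling method matters.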

    Link to comment

    So what's the answer? This is happening to me and I don't like it at all. It's a replicable bug that is causing wear and tear on the system. This is a 40-day-old bug that would not even be acceptable in FREE open source software. Does anyone know what is up?

    Link to comment
    16 hours ago, SFord said:

    So what's the answer? This is happening to me and I don't like it at all. It's a replicable bug that is causing wear and tear on the system. This is a 40-day-old bug that would not even be acceptable in FREE open source software. Does anyone know what is up?

    Have you tried installing 6.9.1?

    Link to comment
    1 hour ago, SimonF said:

    Have you tried installing 6.9.1?

    No. I don’t like that idea. Does a roll back clean up every bit of code? I’ll have to check every single change I made to the system; that’s kind of maddening. Then do I need to delete and reinstall all the Community plug-ins? It also looks like problems started showing up in 6.9, not just 6.9.2, so there were many changes to that code base. I’m surprised official SAS support took this long, but hell, you can’t even set up UnRAID as an iSCSI initiator. I guess this is all a “minor” bug and data is not lost. I’m not going to mess with a running system.

     

    I’m ranting a bit. Thanks for the suggestion. It will get fixed at some point it’s just weird that it’s taking so long. 

    Link to comment
    53 minutes ago, SFord said:

    Does a roll back clean up every bit of code?

    The rootfs is stored in the bz files, so it will revert to the 6.9.1 code at reboot.

     

    54 minutes ago, SFord said:

    Then do I need to delete and reinstall all the Community plug-ins

    Plugins are reloaded each reboot; you don't need to delete and reinstall.

     

    As long as they are compatible with 6.9.1 they will be loaded.

    55 minutes ago, SFord said:

    started showing up in 6.9

    6.9.1 for me spun down SATA and SAS drives correctly, as long as you had doron's SAS helper plugin.

     

     

    Link to comment

    Hi Guys,

    I'm facing some strange issues. My disks do spin down, but spin up again after a few minutes. I did try turning off docker and all scripts, but it still behaves the same way. VMs are not even installed.

    I have no clue how to track down the issue. I tried the SAS plugin, but nothing changed there.

    natasha-diagnostics-20210524-1829.zip

    Link to comment
    48 minutes ago, Jaster said:

    Hi Guys,

    I'm facing some strange issues. My disks do spin down, but spin up again after a few minutes. I did try turning off docker and all scripts, but it still behaves the same way. VMs are not even installed.

    I have no clue how to track down the issue. I tried the SAS plugin, but nothing changed there.

    natasha-diagnostics-20210524-1829.zip 198.57 kB · 0 downloads

    If you only have SATA drives then the SAS plugin is unlikely to help.

     

    Have you tried installing 6.9.1?

    Link to comment
    14 hours ago, SimonF said:

    If you only have SATA drives then the SAS plugin is unlikely to help.

     

    Have you tried installing 6.9.1?

    "solved" :(

    Link to comment
    On 4/11/2021 at 11:18 AM, SimonF said:

    @limetech is it possible to revert/disable these changes so we can look to see if it's kernel/driver specific?

     

    emhttpd: detect out-of-band device spin-up

     

    I have reverted to 6.9.1 for now.

     

    For info, I have replaced doron's smartctl wrapper with r5215 of smartctl and it's working fine in 6.9.1 and 6.9.2 for both SAS and SATA. Could it be updated for 6.10 or the next 6.9 release?

     

    
    root@Tower:/usr/sbin# ls smart*
    smartctl*  smartctl.doron*  smartctl.real*  smartd*
    root@Tower:/usr/sbin# smartctl
    smartctl 7.3 2021-04-07 r5215 [x86_64-linux-5.10.21-Unraid] (CircleCI)
    Copyright (C) 2002-21, Bruce Allen, Christian Franke, www.smartmontools.org
    
    ERROR: smartctl requires a device name as the final command-line argument.
    
    
    Use smartctl -h to get a usage summary
    
    root@Tower:/usr/sbin# smartctl -n standby /dev/sde
    smartctl 7.3 2021-04-07 r5215 [x86_64-linux-5.10.21-Unraid] (CircleCI)
    Copyright (C) 2002-21, Bruce Allen, Christian Franke, www.smartmontools.org
    
    Device is in ACTIVE or IDLE mode
    root@Tower:/usr/sbin# smartctl -n standby /dev/sdf
    smartctl 7.3 2021-04-07 r5215 [x86_64-linux-5.10.21-Unraid] (CircleCI)
    Copyright (C) 2002-21, Bruce Allen, Christian Franke, www.smartmontools.org
    
    Device is in STANDBY BY COMMAND mode, exit(2)
    root@Tower:/usr/sbin# 

     

    @limetech Do you have any feedback? There are users that cannot move off 6.9.1, as their drives no longer spin down on anything newer than 6.9.1.

    • Like 1
    Link to comment

    There appears to be another condition (related/unrelated?) that this applies to. As you can see in my image, I have an "Unmountable: No file system" status assigned to Disk 8 of the array (a drive I do not want Unraid to use at this time).

     

    Like everyone else here, I am not able to keep this drive spun down, and Unraid keeps trying to read from it, as you can see from the 13 million reads.

     

    In my mind Unraid shouldn't be doing anything with this drive until a file system is created

    Unmount.jpg

    Link to comment
    11 minutes ago, Draffutt said:

    In my mind Unraid shouldn't be doing anything with this drive until a file system is created

    If it's a member of the parity-protected array, every bit of it is used to calculate parity. File system or not, usable data or corrupt data, all bits of all drives in the main array are part of parity.

     

    Formatting and the resulting empty file system are all bits that must be emulated if the drive fails, so nothing is left out.
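    As a toy illustration of why the raw bits matter: single parity here is a bitwise XOR across the corresponding bytes of all array members, so an unformatted drive's contents participate exactly like a formatted one's (the tiny "blocks" below are made-up placeholder bytes):

    ```python
    from functools import reduce

    def parity(blocks):
        """XOR-parity across corresponding bytes of each drive's block."""
        return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

    def rebuild(parity_block, surviving_blocks):
        """Reconstruct a failed drive's block from parity plus the survivors
        (XOR is its own inverse, so the same operation recovers the data)."""
        return parity(surviving_blocks + [parity_block])

    disk1 = b"\x0fdata"              # formatted drive's bytes
    disk2 = b"\xf0junk"              # "unformatted" drive: raw bytes count the same
    disk3 = b"\x00\x00\x00\x00\x00"  # zeroed drive
    p = parity([disk1, disk2, disk3])
    assert rebuild(p, [disk1, disk3]) == disk2
    ```

    This is why an array member with no file system still gets read during parity operations: its bytes are needed to keep parity valid and to emulate any other drive that fails.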

    Link to comment

    @limetech any ETA on the spin down issue here? I basically have my server and basement AC fighting each other over this issue, due to the extra heat the spinners are generating. To make things more complicated, I am using Areca controllers, which have their own challenges in Unraid. Everything worked fine in 6.9.1. I tried to downgrade from 6.9.2 to 6.9.1 and the issue is still present.

    • Like 1
    Link to comment

    I hate silence from tech companies.  At least give some BS canned answer like "we are looking into the issue".  When you email support they say go to the forums and check..........  🙄

    • Like 1
    Link to comment
    On 7/28/2021 at 6:28 AM, chris0583 said:

    I hate silence from tech companies.  At least give some BS canned answer like "we are looking into the issue".  When you email support they say go to the forums and check..........  🙄

    Sorry I've been head down in development and other responsibilities the last couple of months and didn't see this until a developer we've been working with pointed it out.

     

    We don't always have all the answers but you'll get no B.S. from me and you can always email directly:

    [email protected]

    Link to comment

    Tom,

     

    Thanks for the reply.  Do you have any update for the community on this issue?  Is it currently being worked on, beyond having a bug submitted?  Will there be a patch, or will we have to wait until the next full release?  Do you need any further information to help correct the issue (logs etc.)?  I am willing to provide what I have, and I am sure others will do the same.

    Link to comment

    If I may jump in: I have upgraded my server to 6.10 RC1 because I was annoyed with this behavior, and so far it's looking fine. I can't report on the auto-spindown issue, as that part was working for me; my problem was the disks spinning up again immediately, with the "read SMART" message in the log.

    So far, the disks have remained spun down after telling them to spin down manually. That is encouraging: on 6.9.2 they wouldn't stay spun down for even 5 seconds before spinning back up, with the aforementioned "read SMART" output in the log.

     

    Additionally, I have not used any of the plugins autofan, telegraf or turbo write (which often get referred to as the offenders), either now or on 6.9.2. Spindown still didn't work in 6.9.2 but seems to in 6.10 RC1, so this is looking good. Now I just hope it stays this way (especially in future releases and the final).

    Link to comment

    I tried 6.10 RC1 but no luck.  The drives revert back to 'active' when I manually set the drive into 'stand-by'.

    I've tried with and without the Spin Down SAS plugin, no luck.

     

    Reverting back to 6.9.1 and confirmed I can set my drives to idle again.

     

    My drives are ST4000NM0023.

    Edited by kroms
    Link to comment
    37 minutes ago, kroms said:

    My drives are ST4000NM0023.

     

    Curious: Does anyone have this problem in 6.9.2 or in 6.10.0-rc1 with drives that are not Seagate?

    Trying to narrow down on the root cause.

    Link to comment
    13 minutes ago, doron said:

     

    Curious: Does anyone have this problem in 6.9.2 or in 6.10.0-rc1 with drives that are not Seagate?

    Trying to narrow down on the root cause.

    Yes, bonienl's case was SATA drives, non-Seagate, via an Adaptec HBA.

     

     

    Link to comment




