  • 6.6.0-rc1 Disk Reporting Read Error and Disabled


    fmp4m
    • Closed
    disk6 (MB4000GCWDC_Z1Z963EC) is disabled  

    disk6 (MB4000GCWDC_Z1Z963EC) has read errors

     

    The problem is, the SMART short and extended self-tests show no errors (attribute table below, with a smartctl sketch after it). This drive also has less than 30 days of power-on time.

     

    ID# Attribute name           Flag   Value Worst Thresh Type     Updated Failed Raw value
      1 Raw read error rate      0x000f 074   063   044    Pre-fail Always  Never  27582192
      3 Spin up time             0x0003 092   092   070    Pre-fail Always  Never  0
      5 Reallocated sector count 0x0033 100   100   010    Pre-fail Always  Never  0
      7 Seek error rate          0x000f 076   060   030    Pre-fail Always  Never  4340119908
      9 Power on hours           0x0032 100   100   000    Old age  Always  Never  659 (27d, 11h)
     10 Spin retry count         0x0013 100   100   097    Pre-fail Always  Never  0
    180 Unknown HDD attribute    0x003b 100   100   030    Pre-fail Always  Never  1559174187
    194 Temperature celsius      0x0022 042   049   000    Old age  Always  Never  42 (0 24 0 0 0)
    196 Reallocated event count  0x0033 100   100   010    Pre-fail Always  Never  0
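
    Note that the very high raw values on attributes 1 and 7 appear to be normal Seagate behavior, since those fields encode operation counts rather than plain error counts; the normalized values are all well above their thresholds. For reference, a minimal smartctl sketch for repeating the self-tests, assuming the drive is still /dev/sdl as in the syslog below:

    smartctl -t short /dev/sdl     # queue a short self-test (a few minutes)
    smartctl -t long /dev/sdl      # queue an extended self-test (several hours)
    smartctl -l selftest /dev/sdl  # read the self-test log once a test finishes
    smartctl -A /dev/sdl           # dump the attribute table shown above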


    User Feedback

    Sep 2 16:43:19 NAS kernel: mdcmd (7): import 6 sdl 64 3907018532 0 MB4000GCWDC_Z1Z963EC
    Sep 2 16:43:19 NAS kernel: md: import disk6: (sdl) MB4000GCWDC_Z1Z963EC size: 3907018532
    Sep 2 16:43:20 NAS emhttpd: shcmd (39): /usr/local/sbin/set_ncq sdl 1
    Sep 2 16:43:20 NAS root: set_ncq: setting sdl queue_depth to 1
    Sep 2 16:43:20 NAS emhttpd: shcmd (40): echo 128 > /sys/block/sdl/queue/nr_requests
    Sep 3 16:07:58 NAS kernel: sd 15:0:6:0: [sdl] tag#2 CDB: opcode=0x85 85 06 20 00 00 00 00 00 00 00 00 00 00 40 e5 00
    Sep 3 16:08:01 NAS kernel: sd 15:0:6:0: [sdl] tag#0 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08
    Sep 3 16:08:01 NAS kernel: sd 15:0:6:0: [sdl] tag#0 Sense Key : 0x2 [current]
    Sep 3 16:08:01 NAS kernel: sd 15:0:6:0: [sdl] tag#0 ASC=0x4 ASCQ=0x0
    Sep 3 16:08:01 NAS kernel: sd 15:0:6:0: [sdl] tag#0 CDB: opcode=0x8a 8a 00 00 00 00 00 b7 15 60 88 00 00 04 00 00 00
    Sep 3 16:08:01 NAS kernel: print_req_error: I/O error, dev sdl, sector 3071631496
    Sep 3 16:08:01 NAS kernel: sd 15:0:6:0: [sdl] tag#0 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08
    Sep 3 16:08:01 NAS kernel: sd 15:0:6:0: [sdl] tag#0 Sense Key : 0x2 [current]
    Sep 3 16:08:01 NAS kernel: sd 15:0:6:0: [sdl] tag#0 ASC=0x4 ASCQ=0x0
    Sep 3 16:08:01 NAS kernel: sd 15:0:6:0: [sdl] tag#0 CDB: opcode=0x8a 8a 00 00 00 00 00 b7 15 64 88 00 00 04 00 00 00
    Sep 3 16:08:01 NAS kernel: print_req_error: I/O error, dev sdl, sector 3071632520
    Sep 3 16:08:01 NAS kernel: sd 15:0:6:0: [sdl] tag#0 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08
    Sep 3 16:08:01 NAS kernel: sd 15:0:6:0: [sdl] tag#0 Sense Key : 0x2 [current]
    Sep 3 16:08:01 NAS kernel: sd 15:0:6:0: [sdl] tag#0 ASC=0x4 ASCQ=0x0
    Sep 3 16:08:01 NAS kernel: sd 15:0:6:0: [sdl] tag#0 CDB: opcode=0x8a 8a 00 00 00 00 00 b7 15 68 88 00 00 04 00 00 00
    Sep 3 16:08:01 NAS kernel: print_req_error: I/O error, dev sdl, sector 3071633544
    Sep 3 16:08:01 NAS kernel: sd 15:0:6:0: [sdl] tag#0 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08
    Sep 3 16:08:01 NAS kernel: sd 15:0:6:0: [sdl] tag#0 Sense Key : 0x2 [current]
    Sep 3 16:08:01 NAS kernel: sd 15:0:6:0: [sdl] tag#0 ASC=0x4 ASCQ=0x0
    Sep 3 16:08:01 NAS kernel: sd 15:0:6:0: [sdl] tag#0 CDB: opcode=0x8a 8a 00 00 00 00 00 b7 15 6c 88 00 00 04 00 00 00
    Sep 3 16:08:01 NAS kernel: print_req_error: I/O error, dev sdl, sector 3071634568
    Sep 3 16:08:01 NAS kernel: sd 15:0:6:0: [sdl] tag#0 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08
    Sep 3 16:08:01 NAS kernel: sd 15:0:6:0: [sdl] tag#0 Sense Key : 0x2 [current]
    Sep 3 16:08:01 NAS kernel: sd 15:0:6:0: [sdl] tag#0 ASC=0x4 ASCQ=0x0
    Sep 3 16:08:01 NAS kernel: sd 15:0:6:0: [sdl] tag#0 CDB: opcode=0x8a 8a 00 00 00 00 00 b7 15 70 88 00 00 04 00 00 00
    Sep 3 16:08:01 NAS kernel: print_req_error: I/O error, dev sdl, sector 3071635592
    Sep 3 16:08:01 NAS kernel: sd 15:0:6:0: [sdl] tag#0 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08
    Sep 3 16:08:01 NAS kernel: sd 15:0:6:0: [sdl] tag#0 Sense Key : 0x2 [current]
    Sep 3 16:08:01 NAS kernel: sd 15:0:6:0: [sdl] tag#0 ASC=0x4 ASCQ=0x0
    Sep 3 16:08:01 NAS kernel: sd 15:0:6:0: [sdl] tag#0 CDB: opcode=0x8a 8a 00 00 00 00 00 b7 15 74 88 00 00 04 00 00 00
    Sep 3 16:08:01 NAS kernel: print_req_error: I/O error, dev sdl, sector 3071636616
    Sep 3 16:08:01 NAS kernel: sd 15:0:6:0: [sdl] tag#0 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08
    Sep 3 16:08:01 NAS kernel: sd 15:0:6:0: [sdl] tag#0 Sense Key : 0x2 [current]
    Sep 3 16:08:01 NAS kernel: sd 15:0:6:0: [sdl] tag#0 ASC=0x4 ASCQ=0x0
    Sep 3 16:08:01 NAS kernel: sd 15:0:6:0: [sdl] tag#0 CDB: opcode=0x8a 8a 00 00 00 00 00 b7 15 78 88 00 00 04 00 00 00
    Sep 3 16:08:01 NAS kernel: print_req_error: I/O error, dev sdl, sector 3071637640
    Sep 3 16:08:01 NAS kernel: sd 15:0:6:0: [sdl] tag#0 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08
    Sep 3 16:08:01 NAS kernel: sd 15:0:6:0: [sdl] tag#0 Sense Key : 0x2 [current]
    Sep 3 16:08:01 NAS kernel: sd 15:0:6:0: [sdl] tag#0 ASC=0x4 ASCQ=0x0
    Sep 3 16:08:01 NAS kernel: sd 15:0:6:0: [sdl] tag#0 CDB: opcode=0x8a 8a 00 00 00 00 00 b7 15 7c 88 00 00 04 00 00 00
    Sep 3 16:08:01 NAS kernel: print_req_error: I/O error, dev sdl, sector 3071638664
    Sep 3 16:08:01 NAS kernel: sd 15:0:6:0: [sdl] tag#0 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08
    Sep 3 16:08:01 NAS kernel: sd 15:0:6:0: [sdl] tag#0 Sense Key : 0x2 [current]
    Sep 3 16:08:01 NAS kernel: sd 15:0:6:0: [sdl] tag#0 ASC=0x4 ASCQ=0x0
    Sep 3 16:08:01 NAS kernel: sd 15:0:6:0: [sdl] tag#0 CDB: opcode=0x8a 8a 00 00 00 00 00 b7 15 80 88 00 00 04 00 00 00
    Sep 3 16:08:01 NAS kernel: print_req_error: I/O error, dev sdl, sector 3071639688
    Sep 3 16:08:01 NAS kernel: sd 15:0:6:0: [sdl] tag#0 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08
    Sep 3 16:08:01 NAS kernel: sd 15:0:6:0: [sdl] tag#0 Sense Key : 0x2 [current]
    Sep 3 16:08:01 NAS kernel: sd 15:0:6:0: [sdl] tag#0 ASC=0x4 ASCQ=0x0
    Sep 3 16:08:01 NAS kernel: sd 15:0:6:0: [sdl] tag#0 CDB: opcode=0x8a 8a 00 00 00 00 00 b7 15 84 88 00 00 04 00 00 00
    Sep 3 16:08:01 NAS kernel: print_req_error: I/O error, dev sdl, sector 3071640712
    Sep 3 22:50:04 NAS emhttpd: shcmd (360): /usr/local/sbin/set_ncq sdl 1
    Sep 3 22:50:04 NAS emhttpd: shcmd (361): echo 128 > /sys/block/sdl/queue/nr_requests
    Sep 3 22:52:02 NAS emhttpd: shcmd (388): /usr/local/sbin/set_ncq sdl 1
    Sep 3 22:52:02 NAS emhttpd: shcmd (389): echo 128 > /sys/block/sdl/queue/nr_requests
    Sep 3 22:52:44 NAS emhttpd: shcmd (416): /usr/local/sbin/set_ncq sdl 1
    Sep 3 22:52:44 NAS emhttpd: shcmd (417): echo 128 > /sys/block/sdl/queue/nr_requests
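
    For reference, opcode 0x8a in the repeated CDBs above is SCSI WRITE(16): bytes 2-9 carry the 64-bit LBA and bytes 10-13 the transfer length, so each failed command lines up with the sector in the print_req_error line that follows it. Sense Key 0x2 with ASC/ASCQ 0x04/0x00 decodes to "Not Ready - logical unit not ready, cause not reportable", i.e. the drive stopped responding rather than reporting a media error. A quick sketch of the arithmetic for the first failing command:

    # CDB: 8a 00 00 00 00 00 b7 15 60 88 00 00 04 00 00 00
    printf 'LBA:    %d\n' 0xb7156088  # 3071631496, matches print_req_error
    printf 'length: %d\n' 0x400       # 1024 sectors, matches the spacing of
                                      # the successive failing sectors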


    This almost certainly has nothing to do with this release. You have simply gotten a disabled disk, which could happen on any version of unRAID going back to the beginning. unRAID disables a disk when a write to it fails. Often the reason for the write failure has nothing to do with the disk itself but is something else, like a bad connection.

     

    Instead of adding even more lengthy but incomplete excerpts from some of your diagnostic information, you should just go to Tools - Diagnostics and post the complete diagnostics zip.
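
    If the webGUI becomes unresponsive, I believe the same zip can also be generated from a console or SSH session; a sketch, assuming the stock diagnostics script is present on your release:

    diagnostics   # writes the zip to the logs folder on the flash drive (/boot/logs)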

     

    This whole thing should have been put in a General Support thread instead of here.



    TRURL:

     

    I disagree, and I do not like the attitude. I understand I may deserve it for being slightly vague, but I also have 9,000 other things I am working on at the same time.

     

    I reported this as a prerelease bug because it ONLY occurs on 6.6.0-rc1; my downgraded drive has zero issues. I am now attaching my diagnostics, as I believe a driver is not playing nice: just moments ago ALL drives reported the exact same error, with over 22,000 errors on all drives and the cache pool disappearing.

     

    nas-diagnostics-20180904-2246.zip


    The diags are unfortunately from after rebooting, but disk6 dropped offline and is not being detected; check connections.

     

    4 hours ago, fmp4m said:

    just moments ago ALL drives reported the exact same error, with over 22,000 errors on all drives and the cache pool disappearing.

    You should post diagnostics showing this.

    8 hours ago, fmp4m said:

    My downgraded drive has zero issues.

    Nobody ever said the drive had issues.

     

    On 9/4/2018 at 12:16 AM, trurl said:

    unRAID disables a disk when a write to it fails. Often the reason for the write failure has nothing to do with the disk itself but is something else, like a bad connection.

     


    Close this. I will remain on 6.5.3, where there are no issues with any of the disks or controllers. I can see you would rather belittle me than help here.

     

    By downgraded drive, I meant the unRAID 6.5.3 boot drive. 6.6.0-rc1 is the only one that has the issues; it is ISOLATED to this build.


    You may want to try 6.6.0-rc2, which has a newer kernel, and see if the problem returns.
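
    A quick way to confirm the kernel actually changed after the upgrade, and to watch whether the same errors come back, might be:

    uname -r                     # report the running kernel version
    dmesg | grep -i 'I/O error'  # check for recurring write failures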

    2 hours ago, fmp4m said:

    Close this. I will remain on 6.5.3, where there are no issues with any of the disks or controllers. I can see you would rather belittle me than help here.

     

    By downgraded drive, I meant the unRAID 6.5.3 boot drive. 6.6.0-rc1 is the only one that has the issues; it is ISOLATED to this build.

    Pretty sure no disrespect was intended; it's very easy to misinterpret things posted on forums by busy people.

    Thank you for the report; we will take a look at your diags.



    I apologize if I was a little terse.

     

    If you didn't rebuild the disabled disk, don't be surprised if you have some parity sync errors.
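
    A correcting parity check can be started from the webGUI (Main > Array Operation > Check) or, if memory serves, from the console; the exact mdcmd options below are an assumption on my part, based on common forum usage:

    mdcmd check            # start a correcting parity check (syntax assumed)
    mdcmd check NOCORRECT  # read-only check; only counts sync errors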



