Posts posted by c010rb1indusa

  1. 13 minutes ago, JorgeB said:

    If there's no old parity2, and assuming old disk2 is also dead, your best bet is to try to copy everything you can from the emulated disk2; that way you'll know what data fails to copy. You could also clone the old parity with ddrescue and then rebuild, but that way there's no way of knowing the affected files on the rebuilt disk, unless you have pre-existing checksums for all files.

     

    Oh duh why didn't I think of that :). Will give that a go and report back.

  2. 23 minutes ago, trurl said:

    There's so much going on in syslog with disk assignments that I'm not sure I fully understand how you got to this point. It looks like the original parity disk is assigned as disk2 now, it thinks parity2 is OK, and parity is new.

     

    The parity swap probably wasn't even necessary since you had dual parity, and trying to copy the failing parity was the beginning of the confusion.

     

    The main question is can the array still emulate disk2 correctly.

     

    I wonder if it will let you start the array with nothing assigned as disk2.

     

    Wait and see if anybody else has any ideas.

     

    Yeah it's a mess, no excuses. Thank you for bearing with my stupidity.

     

    Yes, disk2 still seems to be emulating correctly as far as I can tell. In case there's any confusion, I've attached a screenshot of how the array is set up right now.

     

    If disk2 is emulated correctly, should it be okay to proceed with replacing the failing 6TB drive? I realize nothing is guaranteed with this mess.

    [Screenshot attached: Screen Shot 2022-06-13 at 12.58.27 PM]

     

     

  3. So I'm in a bit of a bind concerning my array. I have a dual-parity system (2x 6TB parity drives) and had both a parity drive and a data drive fail. I ordered 3x 16TB drives with the intention of replacing the failed drives, as well as a 16TB upgrade for the 'working' parity drive so I can raise the maximum drive size in the array.

     

    I was going to begin by doing the parity copy/swap procedure outlined in SpaceInvaderOne's video 'How to Replace a Failed Data Drive with one LARGER than the Parity' https://youtu.be/MMlR0TMeKsI?t=238 @ 3:58

     

    However, when I began the copy it would fail almost immediately and stop the array: read errors on the remaining 6TB parity drive...

     

    This is where I might have messed up. Thinking I was going to lose the data on the failed data drive, I just added the 16TB drive as a second parity drive and hoped it would build its parity successfully. The parity build completed, but with 800+ errors.

     

    Now my first question: was the new second parity drive able to build its parity correctly even though there is a missing data drive in the array?

     

    If the new parity on the new drive was built correctly, can I now replace the remaining 6TB parity drive, the one throwing errors, with a new 16TB drive and still keep the data on the failed/missing data drive intact? And if so, should I do a manual error-correcting parity check before I replace the remaining 6TB parity disk with the errors?

     

    Thank you in advance for the help. I've attached the SMART report for the 6TB parity drive throwing errors and a diagnostics log. I can post other relevant logs/info if need be; just let me know what I need to provide.

    zeus-diagnostics-20220613-1205.zip zeus-smart-20220613-1204.zip
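On the question of whether parity2 can build with a data disk missing: Unraid's first parity is a bitwise XOR across the data disks, so the missing disk is emulated from parity plus the surviving disks, and a new parity2 is built from that emulated content. A one-byte sketch (the values are made up purely for illustration):

```shell
#!/bin/sh
# Sketch: single (P) parity is the XOR of every data disk, so a missing
# disk's contents can be emulated as P XOR the surviving disks. This is
# what the array reads when building a new parity2 with disk2 missing.
d1=$((0xA5)); d2=$((0x3C))   # two data disks (hypothetical byte values)
p=$(( d1 ^ d2 ))             # parity built while both disks were healthy
emulated=$(( p ^ d1 ))       # disk2 lost: emulate it from parity + disk1
echo "$emulated"             # matches the original d2
```

This is also why read errors on the remaining parity disk matter: the emulated data, and hence the new parity2, is only as good as what parity can still read.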

  4. Updated my LSI firmware, so everything should be good to go. Also tried the on-board SATA ports, but still no luck booting without the drives showing as disabled; the system can still see them. Followed @constructor's link on rebuilding a drive onto itself, and it's rebuilding now. Will report back tomorrow morning when it's hopefully completed.

    44 minutes ago, trurl said:

    You can't move or delete open files. Go to Settings - Docker, disable it, then try again to move or delete.

     

    Thanks, will give that a try.
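Before disabling Docker wholesale, it can help to identify which process holds the undeletable folder open (on the server, `lsof` pointed at the appdata path would name it). A portable stand-in using /proc, with a temp file playing the role of the stuck appdata file (the paths and the lsof suggestion are my assumptions, not from the thread):

```shell
#!/bin/sh
# Sketch: find which process holds a file open. We hold a descriptor
# ourselves and locate it via /proc, the way lsof would on the server.
F=$(mktemp)
exec 3< "$F"    # an open descriptor, like a running container holding appdata
HOLDER=$(find "/proc/$$/fd" -lname "$F" 2>/dev/null | head -n 1)
echo "${HOLDER:-none}"
exec 3<&-       # once the holder closes the file, the delete succeeds
```

If nothing holds the file and it still won't delete, that points at filesystem corruption rather than an open-file problem.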

  5. 2 hours ago, trurl said:

    For future reference, Diagnostics already includes SMART for all attached disks, syslog, and many other things. Take a look.

     

    SMART for both disks looks OK. Syslog resets on reboot, so unless you have an older syslog we can't see what happened. Bad connections are much, much more common than bad disks.

     

    You have to rebuild them. You can rebuild both at once.

     

    https://wiki.unraid.net/Manual/Storage_Management#Rebuilding_a_drive_onto_itself

     

    Okay I will try to see if other connection options result in anything different.

     

    2 hours ago, trurl said:

    Also, your appdata has files on the array.

     

    I know; there is a folder that I can't delete, either via console or via tools like Krusader.

    2 hours ago, JorgeB said:

    The 1st LSI should be updated to the latest firmware; this one has known issues:

    
    Jun 29 10:04:09 Zeus kernel: mpt2sas_cm0: LSISAS2008: FWVersion(20.00.04.00), ChipRevision(0x03), BiosVersion(07.39.02.00)

     

    The 2nd one is fine:

    
    Jun 29 10:04:09 Zeus kernel: mpt2sas_cm1: LSISAS2008: FWVersion(20.00.07.00), ChipRevision(0x03), BiosVersion(07.39.02.00)

     

     

    Thank you for pointing this out. Will do this and report back.
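The firmware versions JorgeB quoted can be checked from a shell. This sketch parses the version out of a captured syslog line so you can tell the problem release (20.00.04.00) from the good one (20.00.07.00); on the live box you would grep /var/log/syslog for `mpt2sas` instead of a temp file:

```shell
#!/bin/sh
# Sketch: extract the mpt2sas firmware version from a syslog line.
# The sample line below is the one quoted in the thread.
LOG=$(mktemp)
cat > "$LOG" <<'EOF'
Jun 29 10:04:09 Zeus kernel: mpt2sas_cm0: LSISAS2008: FWVersion(20.00.04.00), ChipRevision(0x03), BiosVersion(07.39.02.00)
EOF
# Pull out "FWVersion(...)" and strip everything but the digits and dots.
FW=$(grep -o 'FWVersion([0-9.]*)' "$LOG" | head -n 1 | tr -d 'FWVersion()')
echo "$FW"
```

A controller reporting 20.00.04.00 here is the one to flash, per JorgeB's note above.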

  6. I had two HDDs show up as disabled overnight: one is the second parity drive, the other a data drive. Both were active in a mover operation overnight. I just want to make sure the failures are legit and aren't caused by something else. I've moved the drives to different backplanes in my case (although they all still go through the same HBA card), and they both still showed up as disabled.

     

    I've attached the SMART reports for both drives as well as my system diagnostics file. Does anything look funny to the experts out there?

     

    If the drive failures are real, should I rebuild the parity drive first or the data drive? Thanks in advance for any help with this.

    zeus-smart-20210629-1223 (1).zip zeus-smart-20210629-1223.zip zeus-diagnostics-20210629-1208.zip
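For a quick read of SMART reports like the ones attached, the attributes that usually decide whether a failure is "legit" are reallocated and pending sectors; zeros there point toward cabling or backplane issues rather than the disk itself. A sketch that parses a `smartctl -A` style attribute table (the two lines and their values are hypothetical example data, not taken from the actual reports):

```shell
#!/bin/sh
# Sketch: pull the raw values of the two tell-tale SMART attributes from a
# smartctl -A style table. The values below are hypothetical examples.
REPORT=$(mktemp)
cat > "$REPORT" <<'EOF'
  5 Reallocated_Sector_Ct   0x0033   100   100   010    Pre-fail  Always       -       0
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       0
EOF
# Column 2 is the attribute name, column 10 the raw value.
REALLOC=$(awk '$2=="Reallocated_Sector_Ct"  {print $10}' "$REPORT")
PENDING=$(awk '$2=="Current_Pending_Sector" {print $10}' "$REPORT")
# Both zero: the disk itself looks healthy; suspect connections/HBA first.
echo "$REALLOC $PENDING"
```

This matches trurl's point in the later reply that bad connections are far more common than bad disks.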
