
Posts posted by Gov

  1. @JorgeB Don't have time for that, so I've put in a new drive, as this is the second time this drive has come up unmounted.

     

    However, the array has started to rebuild, but I am a bit confused as to why it says "format". I assume the rebuild will take care of this, or should I have ticked it?

     

    I've also ordered two of the JMB585 cards.

     

    I'm not too worried about the data on the disks, as I have a 10TB USB drive backing up the Unraid data from the arrays, so I can always restore that at a later stage.

     

    Thanks for your help today, Gov

     

    [attached screenshot: image.thumb.png.34822ec99654befa6055efd9bebcb358.png]

  2. @JorgeB No idea what this means, but I read it as this drive being gone; maybe I am wrong.

     

    Quote

    :~# xfs_repair -v /dev/md7
    Phase 1 - find and verify superblock...
            - block cache size set to 1322784 entries
    Phase 2 - using internal log
            - zero log...
    zero_log: head block 4 tail block 4
            - scan filesystem freespace and inode maps...
    clearing needsrepair flag and regenerating metadata
    sb_icount 448, counted 32
    sb_ifree 43, counted 29
    sb_fdblocks 201490920, counted 244071377
            - found root inode chunk
    Phase 3 - for each AG...
            - scan and clear agi unlinked lists...
            - process known inodes and perform inode discovery...
            - agno = 0
            - agno = 1
            - agno = 2
            - agno = 3
            - process newly discovered inodes...
    Phase 4 - check for duplicate blocks...
            - setting up duplicate extent list...
            - check for inodes claiming duplicate blocks...
            - agno = 0
            - agno = 3
            - agno = 2
            - agno = 1
    Phase 5 - rebuild AG headers and trees...
            - agno = 0
            - agno = 1
            - agno = 2
            - agno = 3
            - reset superblock...
    Phase 6 - check inode connectivity...
            - resetting contents of realtime bitmap and summary inodes
            - traversing filesystem ...
            - agno = 0
            - agno = 1
            - agno = 2
            - agno = 3
            - traversal finished ...
            - moving disconnected inodes to lost+found ...
    Phase 7 - verify and correct link counts...
    SB summary counter sanity check failed
    Metadata corruption detected at 0x47a15b, xfs_sb block 0x0/0x200
    libxfs_bwrite: write verifier failed on xfs_sb bno 0x0/0x1
    SB summary counter sanity check failed
    Metadata corruption detected at 0x47a15b, xfs_sb block 0x0/0x200
    libxfs_bwrite: write verifier failed on xfs_sb bno 0x0/0x1
    xfs_repair: Releasing dirty buffer to free list!
    xfs_repair: Refusing to write a corrupt buffer to the data device!
    xfs_repair: Lost a write to the data device!

    fatal error -- File system metadata writeout failed, err=117.  Re-run xfs_repair.
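
    The `sb_icount` / `sb_ifree` / `sb_fdblocks` lines in the log above are where xfs_repair reports that the superblock's summary counters disagree with what it actually counted on disk; that mismatch is also why it ends by asking you to re-run it. As a small editor's sketch (not part of the original post), the mismatch can be pulled out of those lines programmatically:

    ```python
    # Sketch: extract the "sb_<field> <stored>, counted <actual>" lines from
    # the xfs_repair output quoted above and list where the superblock
    # summary counters disagree with the on-disk counts.
    import re

    # The three counter lines copied verbatim from the log above.
    LOG = """\
    sb_icount 448, counted 32
    sb_ifree 43, counted 29
    sb_fdblocks 201490920, counted 244071377
    """

    mismatches = {}
    for field, stored, counted in re.findall(r"(sb_\w+) (\d+), counted (\d+)", LOG):
        if stored != counted:
            mismatches[field] = (int(stored), int(counted))

    for field, (stored, counted) in mismatches.items():
        print(f"{field}: superblock says {stored}, xfs_repair counted {counted}")
    ```

    All three counters disagree here, which matches the "SB summary counter sanity check failed" messages later in the log.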

     

  3. @JorgeB My bad, one of these disks is connected to the Marvell controller.

     

    The motherboard shows six, so it looks like the Marvell controller is the issue.

     

    Going to order two of the JMB585 cards.

     

    Quote

    PCIE SATA Card 5 Port, PCIE to SATA Expansion Card PCI‑E to 5 Ports SATA3.0 Module Adapter Converter

     

    Would you recommend this one? I don't really have time to be flashing the ASMedia ASM1166.

     

    [attached screenshot: image.png.a47ead8a8115c8a795ba6c6c114f2ff7.png]

  4. Hi 

    I read in another topic that it's best to open your own topic, since results and advice differ for each situation.

     

    I have been running Unraid for the last 2 years with no issues, until last week, when my disk 5 had errors and was failing.

     

    I replaced the drive with a brand new Seagate IronWolf 1TB; the drive rebuilt, but after a reboot three of my disks were shown as "No Device".

     

    I read in another topic that it could be a SATA cable issue, so I bought 12 high-speed 6Gbps SATA III cables and replaced each one.

     

    All disks came back up and the array started.

     

    Then the next day disks 7 and 3 were in an unmounted state; no major issue here, I just reformatted the disks and they all came back up.

     

    The new disk 5 started to show errors, but I accepted them, as I thought a brand new drive should have no issues.

     

    Restored my data and logged in this morning to see where the status was at: it was "success with errors", 100 items could not be copied over, with the error "No access".

     

    So I opened the folder from my Windows desktop and tried to create a folder directly; same thing, "No access".

     

    Returned to the Unraid dashboard, and disk 7 was in an "Unmountable: disk present" state.

     

    Ran xfs_repair -v /dev/md7, which was a success; after a reboot, I now have three disks missing, and I am baffled as to why, see the screenshot below.

     

    If anyone can review the attached diagnostics logs and advise me on what is causing this, I would be very grateful, as I don't know if it's a failing RAID card or a power supply issue.

     

    Thanks in advance Gov

     

     

    [attached screenshot: image.png.bd3dc6c66a765cbf27b125a22899f577.png]

     

    mcgovern-diagnostics-20230331-0953.zip
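
    The `xfs_repair -v /dev/md7` step above (and the log in the later post that ends with "Re-run xfs_repair") could be approached as a two-pass run: a check-only pass with `-n` first, then the actual repair. This is an editor's sketch, not something from the original post; the `DEV` value and the `DRY_RUN` guard are illustrative, and the guard only prints the commands, since running xfs_repair against the wrong device is destructive.

    ```shell
    # Sketch: dry-run wrapper around a check-then-repair xfs_repair sequence.
    DEV=/dev/md7   # device taken from the post above; verify yours first
    DRY_RUN=1      # set to 0 only when you are sure about DEV

    run() {
      if [ "$DRY_RUN" = 1 ]; then
        echo "would run: $*"   # print instead of executing
      else
        "$@"
      fi
    }

    run xfs_repair -n "$DEV"   # -n: check-only pass, reports problems, changes nothing
    run xfs_repair -v "$DEV"   # actual repair, as the fatal error asks to re-run
    ```

    With `DRY_RUN=1` this only prints the two commands it would execute.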
