Pjhal

Posts posted by Pjhal

  1. Something is very wrong with my system.

     

I am using five 8TB 5400 rpm HDDs. They are now installed in two backplanes with 4 SATA slots each in a Silverstone C381 case.
Using a Silverstone-branded Mini-SAS HD SFF-8643 cable, these then connect to an LSI/Broadcom SAS 9300-8i Host Bus Adapter, which is inserted in an ASRock Rack X470D4U motherboard PCIe slot.

My CPU is a 3700X and I am running a single stick of 16 GB ECC (unregistered) RAM, set to auto (ECC on). (I want to run more than one RAM stick, but the second one hasn't been delivered yet.)

The system has had issues previously, including Unraid reporting 8,900,000 errors on the parity drive.

    No important data is stored on the system. I'm just starting out with new hardware and am new to Unraid.

I have had multiple XFS corruptions so far. I suspect one of my new devices is broken, but I don't know how to narrow this down properly.
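One thing I plan to try, to tell cabling/backplane trouble apart from failing drives, is reading the SMART counters on each disk; roughly like this, where sdX is just a placeholder for each drive's actual device name:

    smartctl -a /dev/sdX | grep -Ei 'Reallocated_Sector|Current_Pending|Offline_Uncorrectable|UDMA_CRC'
    # A rising UDMA_CRC_Error_Count usually points at the cable/backplane path,
    # while reallocated/pending/uncorrectable sectors point at the drive itself.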

     

    tower-diagnostics-20191202-1526.zip

  2. 12 minutes ago, itimpi said:

    Rerun the xfs_repair without the -n (no-modify) flag and the drive should mount fine once it has finished.

     

Regarding your original issue: if the drive had shown with the same serial number when being moved from the USB enclosure to the internal connection, Unraid would have just picked it up fine. However, it appears that the drive was reported differently when in the USB enclosure compared to when it was directly attached, which is why Unraid did not realise it was the same drive.

Thank you for your response! Yes, it was listed as WD_My_Book ..... For future reference, is it possible to "explain" to Unraid that it is the same device?
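I suppose one way to see what is going on is to compare how the drive is identified in both situations, e.g. by listing the stable device links (read-only, nothing gets changed):

    ls -l /dev/disk/by-id/
    # Behind a USB-SATA bridge the disk typically shows up under a usb-... name
    # (here something like usb-WD_My_Book_...) instead of the ata-... entry with the
    # bare drive serial, which is presumably why Unraid treated it as a different disk.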
I did run xfs_repair without the -n flag and the drive does mount now.
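For reference, my understanding is that the repair should be run against the disk's md device with the array started in maintenance mode, so that parity stays in sync; for disk 1 that would look roughly like:

    xfs_repair /dev/md1
    # /dev/md1 corresponds to array disk 1 (adjust the number for the slot);
    # repairing the raw /dev/sdX1 partition directly would bypass parity.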

I'm still a little skeptical about trusting this system with my data; I previously had 8,900,000 errors on the parity drive, all confined to a certain moment in time.

All 5 drives are now hooked up to one of the backplanes in a Silverstone C381 case.
Using a Silverstone-branded Mini-SAS HD SFF-8643 cable, this then connects to an LSI/Broadcom SAS 9300-8i Host Bus Adapter, which is inserted in an ASRock Rack X470D4U motherboard PCIe slot.

If I have a faulty component, I don't really know how to narrow down which one it is.

Are there any recommended tests? I did run pre-clears on the disks.
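Beyond the pre-clears, I was thinking of running an extended SMART self-test on each drive overnight, something along these lines (sdX again a placeholder for each drive):

    smartctl -t long /dev/sdX      # start the extended self-test; it runs in the drive's background
    smartctl -l selftest /dev/sdX  # check the self-test log once it has finished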

     

    errors.parity.disk log.2019-11-28.png

So, I am new to Unraid.

I have had dozens of issues so far; the latest one is that I moved a hard drive that was part of the array from an external USB enclosure to an internal bay.

Now, I had hoped that Unraid would be smart enough to figure this out automatically and just recognize the disk.

It wasn't. I imagine there is perhaps some way to manually intervene and get Unraid to recognize the disk under its new name.

But it was getting late and I just wanted to go to bed, so I set it to rebuild. This setup is completely new and stores no vital data (only test files), so it's not like I could lose anything.

After it completed the rebuild with zero errors, it was reporting:

    Quote

    Unmountable: unsupported partition layout.

So I rebooted the server and now it reports:

    Quote

    Unmountable: No file system.

Kind of strange, because it also reports the disk as having the XFS file system.
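A way I found to double-check what filesystem signature is actually on the partition, without touching anything (the device name is just whatever the disk currently shows up as):

    blkid /dev/sdX1
    # Prints TYPE="xfs" if the superblock signature is still there,
    # even when the filesystem refuses to mount.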

And it won't even mount in Unassigned Devices.

This is not the first time I have had problems; I had to format a drive two times after a pre-clear before Unraid was willing to mount it.

Now, suddenly, while I was typing this, Unraid decided that:

    Quote

     

    Unraid Disk 1 message: 2019-12-01 17:53

    Notice [TOWER] - Disk 1 returned to normal operation

     

I assume this is just a delayed reaction to me putting it back in the array, after seeing if Unassigned Devices could mount it.

    After starting the array it reports:

    Quote

    Unmountable: No file system

    Again.

    What is going on here?

Edit: I checked the disk log. I forgot that was a thing you could click on; I'm still getting used to this OS. See image:

    Edit 2:

xfs_repair -n:
     

    Quote

     

    Phase 1 - find and verify superblock...
            - block cache size set to 706952 entries
    Phase 2 - using internal log
            - zero log...
    zero_log: head block 184488 tail block 184488
            - scan filesystem freespace and inode maps...
            - found root inode chunk
    Phase 3 - for each AG...
            - scan (but don't clear) agi unlinked lists...
            - process known inodes and perform inode discovery...
            - agno = 0
            - agno = 1
            - agno = 2
            - agno = 3
            - agno = 4
            - agno = 5
            - agno = 6
            - agno = 7
            - process newly discovered inodes...
    Phase 4 - check for duplicate blocks...
            - setting up duplicate extent list...
            - check for inodes claiming duplicate blocks...
            - agno = 0
            - agno = 1
            - agno = 4
            - agno = 7
            - agno = 3
            - agno = 5
            - agno = 6
            - agno = 2
    No modify flag set, skipping phase 5
    Phase 6 - check inode connectivity...
            - traversing filesystem ...
            - agno = 0
            - agno = 1
            - agno = 2
            - agno = 3
            - agno = 4
            - agno = 5
            - agno = 6
            - agno = 7
            - traversal finished ...
            - moving disconnected inodes to lost+found ...
    Phase 7 - verify link counts...
    No modify flag set, skipping filesystem flush and exiting.

            XFS_REPAIR Summary    Sun Dec  1 18:46:12 2019

    Phase        Start        End        Duration
    Phase 1:    12/01 18:46:12    12/01 18:46:12
    Phase 2:    12/01 18:46:12    12/01 18:46:12
    Phase 3:    12/01 18:46:12    12/01 18:46:12
    Phase 4:    12/01 18:46:12    12/01 18:46:12
    Phase 5:    Skipped
    Phase 6:    12/01 18:46:12    12/01 18:46:12
    Phase 7:    12/01 18:46:12    12/01 18:46:12

    Total run time:

     

     

    2019-12-01.18.24.Disc1.png
