Posts posted by PandaGod

  1. I ran the Disk test and these were the results. I don't see anything that clearly indicates success or failure:

    Phase 1 - find and verify superblock...
            - block cache size set to 720848 entries
    Phase 2 - using internal log
            - zero log...
    zero_log: head block 871803 tail block 871781
    ALERT: The filesystem has valuable metadata changes in a log which is being
    ignored because the -n option was used.  Expect spurious inconsistencies
    which may be resolved by first mounting the filesystem to replay the log.
            - scan filesystem freespace and inode maps...
    sb_fdblocks 657257137, counted 664072051
            - found root inode chunk
    Phase 3 - for each AG...
            - scan (but don't clear) agi unlinked lists...
            - process known inodes and perform inode discovery...
            - agno = 0
            - agno = 1
            - agno = 2
            - agno = 3
            - process newly discovered inodes...
    Phase 4 - check for duplicate blocks...
            - setting up duplicate extent list...
            - check for inodes claiming duplicate blocks...
            - agno = 0
            - agno = 1
            - agno = 2
            - agno = 3
    No modify flag set, skipping phase 5
    Phase 6 - check inode connectivity...
            - traversing filesystem ...
            - agno = 0
            - agno = 1
            - agno = 2
            - agno = 3
            - traversal finished ...
            - moving disconnected inodes to lost+found ...
    Phase 7 - verify link counts...
    Maximum metadata LSN (5:873552) is ahead of log (5:871803).
    Would format log to cycle 8.
    No modify flag set, skipping filesystem flush and exiting.
    
            XFS_REPAIR Summary    Wed May  3 09:26:43 2023
    
    Phase		Start		End		Duration
    Phase 1:	05/03 09:24:02	05/03 09:24:02
    Phase 2:	05/03 09:24:02	05/03 09:24:07	5 seconds
    Phase 3:	05/03 09:24:07	05/03 09:25:31	1 minute, 24 seconds
    Phase 4:	05/03 09:25:31	05/03 09:25:32	1 second
    Phase 5:	Skipped
    Phase 6:	05/03 09:25:32	05/03 09:26:43	1 minute, 11 seconds
    Phase 7:	05/03 09:26:43	05/03 09:26:43
    
    Total run time: 2 minutes, 41 seconds

     So I am not sure what I need to do next.
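    For reference, the output above came from a read-only run (the -n flag), so nothing was modified. A sketch of what I understand the follow-up commands to be, assuming the array is started in maintenance mode; the device name /dev/md1 is a placeholder for the actual array device:

    ```shell
    # Read-only check (this is what produced the output above; -n = no modify)
    xfs_repair -n /dev/md1

    # Actual repair: mounts/replays the metadata log first, then fixes
    # the inconsistencies the dry run reported
    xfs_repair /dev/md1

    # Only as a last resort, if xfs_repair refuses because the log cannot
    # be replayed: -L zeroes the log and may lose recent metadata changes
    #xfs_repair -L /dev/md1
    ```
    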

  2. Thanks for the response. I have since rebooted the machine, so I only have the logs from this boot. It was initially working, but the following keeps showing up in my drive logs.

     

    May  2 15:01:20 NAS kernel: ata5.00: exception Emask 0x50 SAct 0xc000 SErr 0x4090800 action 0xe frozen
    May  2 15:01:20 NAS kernel: ata5.00: irq_stat 0x00400040, connection status changed
    May  2 15:01:20 NAS kernel: ata5: SError: { HostInt PHYRdyChg 10B8B DevExch }
    May  2 15:01:20 NAS kernel: ata5.00: failed command: READ FPDMA QUEUED
    May  2 15:01:20 NAS kernel: ata5.00: cmd 60/80:70:00:24:2f/01:00:79:00:00/40 tag 14 ncq dma 196608 in
    May  2 15:01:20 NAS kernel: ata5.00: status: { DRDY }
    May  2 15:01:20 NAS kernel: ata5.00: failed command: READ FPDMA QUEUED
    May  2 15:01:20 NAS kernel: ata5.00: cmd 60/c8:78:58:0a:72/00:00:74:00:00/40 tag 15 ncq dma 102400 in
    May  2 15:01:20 NAS kernel: ata5.00: status: { DRDY }
    May  2 15:01:20 NAS kernel: ata5: hard resetting link
    May  2 15:01:25 NAS kernel: ata5: link is slow to respond, please be patient (ready=0)
    May  2 15:01:27 NAS kernel: ata5: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
    May  2 15:01:27 NAS kernel: ata5.00: ACPI cmd f5/00:00:00:00:00:00(SECURITY FREEZE LOCK) filtered out
    May  2 15:01:27 NAS kernel: ata5.00: ACPI cmd b1/c1:00:00:00:00:00(DEVICE CONFIGURATION OVERLAY) filtered out
    May  2 15:01:27 NAS kernel: ata5.00: ACPI cmd f5/00:00:00:00:00:00(SECURITY FREEZE LOCK) filtered out
    May  2 15:01:27 NAS kernel: ata5.00: ACPI cmd b1/c1:00:00:00:00:00(DEVICE CONFIGURATION OVERLAY) filtered out
    May  2 15:01:27 NAS kernel: ata5.00: configured for UDMA/133
    May  2 15:01:27 NAS kernel: ata5: EH complete

    nas-diagnostics-20230502-1511.zip

  3. Today my scheduled parity check ran and completed without finding any errors. However, I did get an "Unraid array errors" notification for three of my drives with the following info.

     

    Parity disk - (sdd) (errors 190)
    Disk 1 - (sdc) (errors 197)
    Disk 2 - (sde) (errors 183)

     

    I have attached my logs below; they keep repeating the following line, which has only shown up today.

     

    program smartctl is using a deprecated SCSI ioctl, please convert it to SG_IO

     

    As there seem to be multiple disk-related issues, I am not sure where I should begin to solve this.

    nas-syslog-20230501-2112.zip
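    In case it helps, a sketch of how I could pull SMART data for the three flagged drives before going further; device names are taken from the notification above:

    ```shell
    # Full SMART reports for the drives the notification flagged.
    # The deprecated-ioctl warning in the syslog is from smartctl's
    # query method and does not itself indicate a drive problem.
    for dev in /dev/sdd /dev/sdc /dev/sde; do
        smartctl -a "$dev"
    done
    ```
    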

  4. Hi,

     

    I am currently trying to get Unraid running on my ProLiant DL360e Gen8. I have managed to get it to boot once from the SD card, but that is not usable due to the unique GUID requirement.

     

    To work around that, I made an image of the working SD card and wrote it to a USB stick. That worked once: the system rebooted and I was able to activate. However, I have now restarted the server and can't get it to boot again.

     

    I have followed the wiki (https://wiki.unraid.net/USB_Flash_Drive_Preparation) and used the HP tool, as well as creating a 1GB partition on my USB stick and manually copying the files, but I can't seem to get it to boot more than once.
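    For completeness, a sketch of how I imaged the SD card onto the USB stick; /dev/sdX and /dev/sdY are placeholders for the actual SD-card and USB devices on my machine:

    ```shell
    # Image the working SD card, then write that image to the USB stick
    dd if=/dev/sdX of=unraid-sd.img bs=4M status=progress
    dd if=unraid-sd.img of=/dev/sdY bs=4M status=progress
    sync

    # The wiki notes the FAT partition must carry the volume label UNRAID,
    # or the bootloader will not find the flash device
    fatlabel /dev/sdY1 UNRAID
    ```
    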
