chybber

Members · 5 posts

Posts posted by chybber

  1. Ok, get this:

    Phase 1 - find and verify superblock...
    Phase 2 - using internal log
            - zero log...
    ERROR: The filesystem has valuable metadata changes in a log which needs to
    be replayed.  Mount the filesystem to replay the log, and unmount it before
    re-running xfs_repair.  If you are unable to mount the filesystem, then use
    the -L option to destroy the log and attempt a repair.
    Note that destroying the log may cause corruption -- please attempt a mount
    of the filesystem before doing this.


    Should I stop the array, start it again, and repeat?
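
    Or, if I'm reading the message right, it wants the filesystem mounted once so the journal gets replayed, then unmounted, before running xfs_repair again. Something like this, I guess (the mount point is made up, and /dev/md3p1 is the device named in the kernel log in my other post, so treat this as a sketch, not something I know is right for Unraid):

    # mounting an XFS filesystem replays its journal
    mkdir -p /mnt/test
    mount /dev/md3p1 /mnt/test
    umount /mnt/test
    # re-run the repair once the log has been replayed
    xfs_repair -v /dev/md3p1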

  2. Hi,

    I got this error in the logs:

    Apr 19 04:30:02 Tower kernel: XFS (md3p1): Metadata CRC error detected at xfs_dir3_data_read_verify+0x7c/0xf1 [xfs], xfs_dir3_data block 0x300000020 
    Apr 19 04:30:02 Tower kernel: XFS (md3p1): Unmount and run xfs_repair
    Apr 19 04:30:02 Tower kernel: XFS (md3p1): First 128 bytes of corrupted metadata buffer:
    Apr 19 04:30:02 Tower kernel: 00000000: a6 a9 fe 8c 9f 21 9f d8 69 01 00 01 65 52 2f 59  .....!..i...eR/Y
    Apr 19 04:30:02 Tower kernel: 00000010: 00 00 00 a8 69 1e d3 4e d8 6e 1c ec 63 9d 4c 7d  ....i..N.n..c.L}
    Apr 19 04:30:02 Tower kernel: 00000020: fe ad 67 05 a2 95 33 c1 65 52 2f 7a 00 00 00 9e  ..g...3.eR/z....
    Apr 19 04:30:02 Tower kernel: 00000030: 68 40 0e c0 3c 12 02 00 00 90 01 00 00 00 00 00  h@..<...........
    Apr 19 04:30:02 Tower kernel: 00000040: 00 00 00 00 01 00 32 91 04 32 38 34 67 52 2f 39  ......2..284gR/9
    Apr 19 04:30:02 Tower kernel: 00000050: 00 00 01 80 e9 00 00 8a 5b 74 6b 77 02 00 10 50  ........[tkw...P
    Apr 19 04:30:02 Tower kernel: 00000060: 00 00 00 01 ca 6a 69 26 03 32 38 36 02 00 00 60  .....ji&.286...`
    Apr 19 04:30:02 Tower kernel: 00000070: 00 00 00 00 00 00 23 6b 12 e6 20 18 06 7b 41 68  ......#k.. ..{Ah

     

    xfs_repair -n (check-only) status:

    
    Phase 1 - find and verify superblock...
            - block cache size set to 18479768 entries
    Phase 2 - using internal log
            - zero log...
    zero_log: head block 3458494 tail block 3457332
    ALERT: The filesystem has valuable metadata changes in a log which is being
    ignored because the -n option was used.  Expect spurious inconsistencies
    which may be resolved by first mounting the filesystem to replay the log.
            - scan filesystem freespace and inode maps...
    agf_freeblks 44852899, counted 44852876 in ag 6
    sb_ifree 67448, counted 67446
    sb_fdblocks 358863294, counted 379292305
            - found root inode chunk
    Phase 3 - for each AG...
            - scan (but don't clear) agi unlinked lists...
            - process known inodes and perform inode discovery...
            - agno = 0
    data fork in ino 3684145 claims free block 460632
    data fork in ino 3684145 claims free block 460642
    data fork in ino 3684145 claims free block 460711
    data fork in ino 3684145 claims free block 461157
    data fork in ino 3684151 claims free block 460645
    data fork in ino 3684151 claims free block 460695
    data fork in ino 3684151 claims free block 460699
    data fork in ino 3684151 claims free block 460731
    data fork in ino 3684151 claims free block 461169
            - agno = 1
            - agno = 2
            - agno = 3
            - agno = 4
            - agno = 5
            - agno = 6
    Metadata CRC error detected at 0x4626c8, xfs_dir3_data block 0x300000020/0x1000
    bad directory block magic # 0xa6a9fe8c in block 1 for directory inode 12884902022
    corrupt block 1 in directory inode 12884902022
    	would junk block
            - agno = 7
            - agno = 8
            - agno = 9
            - agno = 10
    data fork in ino 21478349691 claims free block 721475607
            - process newly discovered inodes...
    Phase 4 - check for duplicate blocks...
            - setting up duplicate extent list...
            - check for inodes claiming duplicate blocks...
            - agno = 0
            - agno = 2
            - agno = 3
            - agno = 7
            - agno = 1
            - agno = 4
            - agno = 6
            - agno = 8
            - agno = 9
            - agno = 10
            - agno = 5
    bad directory block magic # 0xa6a9fe8c in block 1 for directory inode 12884902022
    corrupt block 1 in directory inode 12884902022
    	would junk block
    setting reflink flag on inode 3684145
    setting reflink flag on inode 3684146
    setting reflink flag on inode 3684151
    Incorrect reference count: saw (0/459209) len 61 nlinks 2; should be (0/461161) len 8 nlinks 2
    No modify flag set, skipping phase 5
    Phase 6 - check inode connectivity...
            - traversing filesystem ...
            - agno = 0
            - agno = 1
            - agno = 2
            - agno = 3
            - agno = 4
            - agno = 5
            - agno = 6
    Metadata CRC error detected at 0x4626c8, xfs_dir3_data block 0x300000020/0x1000
    corrupt block 1 in directory inode 12884902022: would junk block
    bad hash table for directory inode 12884902022 (no data entry): would rebuild
    would rebuild directory inode 12884902022
            - agno = 7
            - agno = 8
            - agno = 9
            - agno = 10
            - traversal finished ...
            - moving disconnected inodes to lost+found ...
    disconnected dir inode 9067, would move to lost+found
    disconnected dir inode 12945, would move to lost+found
    disconnected dir inode 22103, would move to lost+found
    disconnected dir inode 28238514, would move to lost+found
    disconnected dir inode 2147483786, would move to lost+found
    disconnected dir inode 2203176189, would move to lost+found
    disconnected dir inode 2203345648, would move to lost+found
    disconnected dir inode 2203424215, would move to lost+found
    disconnected dir inode 2206619842, would move to lost+found
    disconnected dir inode 4325480149, would move to lost+found
    disconnected dir inode 4325480156, would move to lost+found
    disconnected dir inode 4325487749, would move to lost+found
    disconnected dir inode 4347179621, would move to lost+found
    disconnected dir inode 6476893262, would move to lost+found
    disconnected dir inode 17180194241, would move to lost+found
    disconnected dir inode 19327372056, would move to lost+found
    Phase 7 - verify link counts...
    would have reset inode 12884902022 nlinks from 268 to 252
    No modify flag set, skipping filesystem flush and exiting.
    
            XFS_REPAIR Summary    Fri Apr 19 08:30:55 2024
    
    Phase		Start		End		Duration
    Phase 1:	04/19 08:30:30	04/19 08:30:30
    Phase 2:	04/19 08:30:30	04/19 08:30:32	2 seconds
    Phase 3:	04/19 08:30:32	04/19 08:30:46	14 seconds
    Phase 4:	04/19 08:30:46	04/19 08:30:46
    Phase 5:	Skipped
    Phase 6:	04/19 08:30:46	04/19 08:30:55	9 seconds
    Phase 7:	04/19 08:30:55	04/19 08:30:55
    
    Total run time: 25 seconds

     

    I've managed to figure out that I should run xfs_repair, so I ran xfs_repair -v /dev/sdj (disk3).
    It started, but it has now been running for about 7 hours, just echoing .......... endlessly.
    Is it normal for it to take this long? Searching the forum/Google, most people seem to see runtimes of under 60 seconds for xfs_repair.
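
    In case it helps, this is how I'm checking whether it's still actually doing anything (iostat comes from the sysstat package, so this assumes that's installed; it just watches disk activity, it doesn't touch the repair):

    # report extended I/O stats for sdj every 5 seconds
    iostat -x sdj 5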

    tower-diagnostics-20240419-0821.zip

  3. Having issues with Let's Encrypt TLS certificates.

     

    [2022-08-01 15:40:07] LEScript.INFO: Getting list of URLs for API
    [2022-08-01 15:40:07] LEScript.INFO: Requesting new nonce for client communication
    [2022-08-01 15:40:08] LEScript.INFO: Account already registered. Continuing.
    [2022-08-01 15:40:08] LEScript.INFO: Sending registration to letsencrypt server
    [2022-08-01 15:40:08] LEScript.INFO: Sending signed request to https://acme-v02.api.letsencrypt.org/acme/new-acct
    [2022-08-01 15:40:08] LEScript.INFO: Account: https://acme-v02.api.letsencrypt.org/acme/acct/656824306
    [2022-08-01 15:40:08] LEScript.INFO: Starting certificate generation process for domains
    [2022-08-01 15:40:08] LEScript.INFO: Requesting challenge for ctuning.se
    [2022-08-01 15:40:08] LEScript.INFO: Sending signed request to https://acme-v02.api.letsencrypt.org/acme/new-order
    [2022-08-01 15:40:09] LEScript.INFO: Sending signed request to https://acme-v02.api.letsencrypt.org/acme/authz-v3/137072252896
    [2022-08-01 15:40:09] LEScript.INFO: Got challenge token for ctuning.se
    [2022-08-01 15:40:09] LEScript.INFO: Token for ctuning.se saved at /opt/www//.well-known/acme-challenge/tl2Yeb_5abIUPYyRhDMJeLyJow5D5MdxfWrC2IZB7r4 and should be available at http://ctuning.se/.well-known/acme-challenge/tl2Yeb_5abIUPYyRhDMJeLyJow5D5MdxfWrC2IZB7r4
    [2022-08-01 15:40:09] LEScript.ERROR: Please check http://ctuning.se/.well-known/acme-challenge/tl2Yeb_5abIUPYyRhDMJeLyJow5D5MdxfWrC2IZB7r4 - token not available
    [2022-08-01 15:40:09] LEScript.ERROR: #0 /opt/admin/src/Base/Handler/LeHandler.php(62): Analogic\ACME\Lescript->signDomains(Array)
    [2022-08-01 15:40:09] LEScript.ERROR: #1 /opt/admin/src/Base/Controller/LeController.php(71): App\Base\Handler\LeHandler->renew(true)
    [2022-08-01 15:40:09] LEScript.ERROR: #2 /opt/admin/vendor/symfony/http-kernel/HttpKernel.php(158): App\Base\Controller\LeController->issueAction(Object(Symfony\Component\HttpFoundation\Request))
    [2022-08-01 15:40:09] LEScript.ERROR: #3 /opt/admin/vendor/symfony/http-kernel/HttpKernel.php(80): Symfony\Component\HttpKernel\HttpKernel->handleRaw(Object(Symfony\Component\HttpFoundation\Request), 1)
    [2022-08-01 15:40:09] LEScript.ERROR: #4 /opt/admin/vendor/symfony/http-kernel/Kernel.php(201): Symfony\Component\HttpKernel\HttpKernel->handle(Object(Symfony\Component\HttpFoundation\Request), 1, true)
    [2022-08-01 15:40:09] LEScript.ERROR: #5 /opt/admin/public/index.php(25): Symfony\Component\HttpKernel\Kernel->handle(Object(Symfony\Component\HttpFoundation\Request))
    [2022-08-01 15:40:09] LEScript.ERROR: #6 {main}
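
    The last INFO line says where the token was saved and where it should be reachable. Fetching that URL by hand should return the token contents if the webserver is serving it; this is just me re-checking what the script itself checks, using the URL from the log above (best done from outside the LAN):

    # the ACME server fetches this URL during the HTTP-01 challenge
    curl -v http://ctuning.se/.well-known/acme-challenge/tl2Yeb_5abIUPYyRhDMJeLyJow5D5MdxfWrC2IZB7r4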

    Anyone have any ideas?

  4. Password set successfull
    *** Booting runit daemon...
    *** Runit started as PID 67
    [Nov 27 10:51:51 PM] Starting the server in 5 seconds. See the log directory in your config directory for server logs.
    [Nov 27 10:51:51 PM] Not using archive cache
    Nov 27 22:51:51 20f92adf2710 cron[71]: (CRON) INFO (pidfile fd = 3)
    Nov 27 22:51:51 20f92adf2710 cron[71]: (CRON) INFO (Running @reboot jobs)
    /bin/sh: 1: ip: not found
    /bin/sh: 1: ip: not found
    /bin/sh: 1: ip: not found
    /bin/sh: 1: ip: not found
    /bin/sh: 1: ip: not found
    /bin/sh: 1: ip: not found
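
    Looks like a cron @reboot job inside the container is calling the ip command, which apparently isn't in the image. This is how I'd check (20f92adf2710 is the container ID from the log above; the install line assumes a Debian/Ubuntu base image, which is a guess on my part):

    # see whether the ip binary exists inside the container
    docker exec 20f92adf2710 sh -c 'command -v ip || echo "ip is missing"'
    # if the base image is Debian/Ubuntu (a guess), the iproute2 package provides ip
    docker exec 20f92adf2710 sh -c 'apt-get update && apt-get install -y iproute2'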

    What to do? Anyone got any advice?
