
Posts posted by AlbertoGa

  1. 11 minutes ago, itimpi said:

    According to your screen shot there is still an Extended SMART test running (bottom left image).

     

    The FAILED column in the upper image is normally irrelevant unless it says something like "Failing Now".

    Yes, I've been reattempting the test. Is there anything there that would indicate the problem is this drive? Just so I can use it when claiming the warranty.

  2. How can I find out if it's a read failure?

    9 minutes ago, itimpi said:

    Has it failed with an indication there was a read failure?   If so you can probably make a warranty claim.

     

    Note that the SMART overall health assessment takes no account of the effect of running tests. It is based purely on whether any of the attributes have a "failing now" status, so it is frequently not a useful indication of failure.
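An aside for readers following along: the "failing now" check described above can be done programmatically against the attribute table that `smartctl -A` prints. A minimal sketch, assuming smartctl's usual column layout; the sample table below is invented for illustration and is not from this thread:

```python
# Sketch: scan a smartctl -A style attribute table and report any attribute
# whose WHEN_FAILED column reads FAILING_NOW (the only status the overall
# health assessment reacts to). SAMPLE is made-up illustrative data.
SAMPLE = """\
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  5 Reallocated_Sector_Ct   0x0033   100   100   005    Pre-fail  Always       -       0
197 Current_Pending_Sector  0x0012   001   001   000    Old_age   Always   FAILING_NOW 1528
"""

def failing_now(table: str) -> list[str]:
    """Return attribute names whose WHEN_FAILED column says FAILING_NOW."""
    hits = []
    for line in table.splitlines():
        cols = line.split()
        # Data rows start with a numeric attribute ID; skip the header.
        if len(cols) >= 9 and cols[0].isdigit() and "FAILING_NOW" in cols:
            hits.append(cols[1])  # ATTRIBUTE_NAME column
    return hits

print(failing_now(SAMPLE))
```

A drive with an empty result is "passing" this particular check, which is exactly why the overall assessment so often says healthy even when a drive is misbehaving.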

     

  3. 57 minutes ago, itimpi said:

    In principle that should be fine - with Molex you can normally go to 4xSATA.

     

    However beware of those splitters where the cable goes vertically into a moulded connector at the SATA end - I have seen them being reported as a potential fire risk as the connectors can accidentally touch if badly manufactured.

    I've checked the SMART attributes. I've also run both a SMART short self-test and a SMART extended self-test. The first one was OK; the second one doesn't complete. It says: Interrupted (host reset)

    1 hour ago, AlbertoGa said:

    I'm, quite honestly, a novice. How do I get to the bottom of this?

     

    smarttest.png
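For the warranty question raised earlier in the thread, the distinction that matters is whether the self-test log shows a read failure or merely an interruption. A small sketch of classifying entries from `smartctl -l selftest` output; the sample lines are invented and simplified, real output carries more columns:

```python
# Sketch: separate self-test log entries that indicate a read failure
# (warranty-relevant) from ones that were merely interrupted, e.g. by a
# host reset. SAMPLE is illustrative, not taken from this thread.
SAMPLE = """\
Num  Test_Description    Status                    Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Extended offline    Interrupted (host reset)      90%     12345            -
# 2  Short offline       Completed without error       00%     12344            -
"""

def read_failures(log: str) -> list[str]:
    """Return log entries whose status mentions a read failure."""
    return [line for line in log.splitlines()
            if line.startswith("#") and "read failure" in line.lower()]

def interrupted(log: str) -> list[str]:
    """Return entries that were interrupted rather than failed."""
    return [line for line in log.splitlines()
            if line.startswith("#") and "Interrupted" in line]

print(len(read_failures(SAMPLE)), len(interrupted(SAMPLE)))
```

An "Interrupted (host reset)" entry, as seen here, points at the controller/cabling side resetting the link mid-test rather than at the drive media itself.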

  4. 1 minute ago, itimpi said:

    Those errors cause retries, and you will only get an error reported if all retries fail.   

     

     

    That is expected as the continual retries are slowing everything down.   Personally I would not bother checking until you can get to the bottom of why you keep getting these retries.   

     

    You are not by any chance using power splitters on the cabling to the drive?

     

    I'm, quite honestly, a novice. How do I get to the bottom of this?

  5. Just now, itimpi said:

    Those errors cause retries, and you will only get an error reported if all retries fail.   

     

     

    That is expected as the continual retries are slowing everything down.   Personally I would not bother checking until you can get to the bottom of why you keep getting these retries.   

     

    You are not by any chance using power splitters on the cabling to the drive?

     

    Yes, I am using splitters. Is this bad? Shouldn't it have been a problem from the start?

  6. Just now, AlbertoGa said:

     

    I shut down the server, checked the wiring and booted again. I've started a new check but the disk still looks more or less the same. The weird thing is that the disk status (disk 2) shows as healthy:

    disklog2.png

    DISKSTATUS.png

    Also, the check is VERY slow. It may take several days according to the estimate.

  7.  

    39 minutes ago, itimpi said:

    That tells you which drive is generating those errors.

    I shut down the server, checked the wiring and booted again. I've started a new check but the disk still looks more or less the same. The weird thing is that the disk status (disk 2) shows as healthy:

    disklog2.png

    DISKSTATUS.png

  8. 1 minute ago, itimpi said:

    Were you doing this via the GUI or the command line? If the command line, what device name were you using?

    I used the GUI, and followed the instructions in that thread.

     

    1 minute ago, itimpi said:

    Not sure if it can do it directly.    However it could definitely cause cables to start working themselves loose.

    I'll have to check the cables.

     

    5 minutes ago, itimpi said:

    The diagnostics show you are getting continual resets on whatever device is ata3.   This will be badly slowing down performance.  The diagnostics do not go back far enough for me to see which device that is, but you should be able to easily find out by clicking on the icon at the beginning of the Identification column on each drive.   

     

    You should carefully check the cabling (both power and SATA) to the drive, as it looks like the sort of error we see if a SATA cable is not properly seated or there are issues getting sufficient power to the drive.

    The disk icon opens this log:

    ata3.png
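itimpi's advice to find out which device is ata3 can also be approached from the syslog itself: counting link resets per ATA port points straight at the suspect drive. A minimal sketch over invented sample lines (real Unraid syslog entries carry timestamps and more detail):

```python
# Sketch: tally "hard resetting link" events per ATA port in a syslog
# excerpt. The noisiest port is the device worth checking first.
# SAMPLE is made-up illustrative data.
import re
from collections import Counter

SAMPLE = """\
kernel: ata3.00: exception Emask 0x10 SAct 0x0 SErr 0x4050000 action 0xe frozen
kernel: ata3: hard resetting link
kernel: ata3: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
kernel: ata3: hard resetting link
kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
"""

def reset_counts(syslog: str) -> Counter:
    """Count 'hard resetting link' events per ata port."""
    counts = Counter()
    for m in re.finditer(r"(ata\d+): hard resetting link", syslog):
        counts[m.group(1)] += 1
    return counts

print(reset_counts(SAMPLE).most_common())
```

Continual resets like these also explain the very slow parity check mentioned earlier: every reset stalls I/O on that port while the link renegotiates.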

  9. 5 minutes ago, itimpi said:

    You definitely want to get to the bottom of this!  It might be indicative of an underlying issue.  Are these checks correcting or non-correcting?  Have you had any unclean shutdowns?

     

    You should post your system's diagnostics zip file in your next post in this thread to get more informed feedback.  It is always a good idea to post this if your question might involve us seeing how you have things set up or to look at recent logs.

     

    BTW:  You might want to consider installing the Parity Check Tuning plugin.  Even if you do not make use of its other features the Parity History entries will start being enhanced to give more information about the check such as whether it was correcting or not.

     

     

    Thanks for the reply, here's the ZIP.

     

    I've had a disk show up as unmountable a couple of times but fixed it with this:

     

    I've installed the plugin but I don't know how to distinguish between correcting and non-correcting checks.

    nas-diagnostics-20240225-1822.zip

    I use Octoprint in Docker to control my 3D printer via USB connected to my Unraid server. Since the last update, I've been having trouble I didn't have before with my BLTouch. There are M112 and M999 errors that require a reset of the printer. Sometimes it works again after this, sometimes it doesn't.

     

     

    How do I go to the previous Octoprint version?

     

    Thank you for your help.

    Hi! Newbie here.

     

    I have a disk in my array (disk 2) that suddenly decided to show as unmountable. It had data in it and I don't think the parity was updated soon before this happened. Is there any way to repair this without losing data?

     

    [See picture attached]

     

     

    I followed this guide until step 8 of Running the Test using the webGui. This gives the output seen below. I've stopped the array and parity check.

     

    What do I do? I appreciate any help you can give me.

     

    This is the output of Check Filesystem Status.

     

    Phase 1 - find and verify superblock...
    Phase 2 - using internal log
            - zero log...
    ALERT: The filesystem has valuable metadata changes in a log which is being
    ignored because the -n option was used.  Expect spurious inconsistencies
    which may be resolved by first mounting the filesystem to replay the log.
            - scan filesystem freespace and inode maps...
    agf_freeblks 265551874, counted 265547915 in ag 6
    agf_freeblks 261338108, counted 261336673 in ag 11
    agf_freeblks 266317171, counted 266324713 in ag 9
    agi_freecount 8, counted 12 in ag 9
    agi_freecount 8, counted 12 in ag 9 finobt
    sb_ifree 425, counted 433
    sb_fdblocks 3820086862, counted 3847370238
            - found root inode chunk
    Phase 3 - for each AG...
            - scan (but don't clear) agi unlinked lists...
            - process known inodes and perform inode discovery...
            - agno = 0
            - agno = 1
            - agno = 2
    data fork in ino 4318270386 claims free block 540663306
    data fork in ino 4318270387 claims free block 540664904
            - agno = 3
            - agno = 4
    data fork in ino 8603859092 claims free block 1075482721
    data fork in ino 8603859093 claims free block 1075483880
    data fork in ino 8603859094 claims free block 1075484155
    data fork in ino 8603859095 claims free block 1075496492
    data fork in ino 8603859096 claims free block 1075500123
    data fork in ino 8603859097 claims free block 1075502180
    data fork in ino 8603859098 claims free block 1075506861
    data fork in ino 8603859099 claims free block 1075507938
            - agno = 5
            - agno = 6
    imap claims a free inode 12907952427 is in use, would correct imap and clear inode
    imap claims a free inode 12907952428 is in use, would correct imap and clear inode
            - agno = 7
            - agno = 8
    imap claims a free inode 17210296093 is in use, would correct imap and clear inode
            - agno = 9
    imap claims a free inode 19343621299 is in use, would correct imap and clear inode
            - agno = 10
            - agno = 11
    Metadata corruption detected at 0x4379a3, xfs_inode block 0x583628cc8/0x4000
    bad CRC for inode 23679110432
    bad magic number 0x0 on inode 23679110432
    bad version number 0x0 on inode 23679110432
    bad next_unlinked 0x0 on inode 23679110432
    inode identifier 0 mismatch on inode 23679110432
    bad CRC for inode 23679110433
    bad magic number 0x0 on inode 23679110433
    bad version number 0x0 on inode 23679110433
    bad next_unlinked 0x0 on inode 23679110433
    inode identifier 0 mismatch on inode 23679110433
    bad CRC for inode 23679110434
    bad magic number 0x0 on inode 23679110434
    bad version number 0x0 on inode 23679110434
    bad next_unlinked 0x0 on inode 23679110434
    inode identifier 0 mismatch on inode 23679110434
    bad CRC for inode 23679110435
    bad magic number 0x0 on inode 23679110435
    bad version number 0x0 on inode 23679110435
    bad next_unlinked 0x0 on inode 23679110435
    inode identifier 0 mismatch on inode 23679110435
    bad CRC for inode 23679110436
    bad magic number 0x0 on inode 23679110436
    bad version number 0x0 on inode 23679110436
    bad next_unlinked 0x0 on inode 23679110436
    inode identifier 0 mismatch on inode 23679110436
    bad CRC for inode 23679110437
    bad magic number 0x0 on inode 23679110437
    bad version number 0x0 on inode 23679110437
    bad next_unlinked 0x0 on inode 23679110437
    inode identifier 0 mismatch on inode 23679110437
    bad CRC for inode 23679110438
    bad magic number 0x0 on inode 23679110438
    bad version number 0x0 on inode 23679110438
    bad next_unlinked 0x0 on inode 23679110438
    inode identifier 0 mismatch on inode 23679110438
    bad CRC for inode 23679110439
    bad magic number 0x0 on inode 23679110439
    bad version number 0x0 on inode 23679110439
    bad next_unlinked 0x0 on inode 23679110439
    inode identifier 0 mismatch on inode 23679110439
    bad CRC for inode 23679110440
    bad magic number 0x0 on inode 23679110440
    bad version number 0x0 on inode 23679110440
    bad next_unlinked 0x0 on inode 23679110440
    inode identifier 0 mismatch on inode 23679110440
    bad CRC for inode 23679110441
    bad magic number 0x0 on inode 23679110441
    bad version number 0x0 on inode 23679110441
    bad next_unlinked 0x0 on inode 23679110441
    inode identifier 0 mismatch on inode 23679110441
    bad CRC for inode 23679110442
    bad magic number 0x0 on inode 23679110442
    bad version number 0x0 on inode 23679110442
    bad next_unlinked 0x0 on inode 23679110442
    inode identifier 0 mismatch on inode 23679110442
    bad CRC for inode 23679110443
    bad magic number 0x0 on inode 23679110443
    bad version number 0x0 on inode 23679110443
    bad next_unlinked 0x0 on inode 23679110443
    inode identifier 0 mismatch on inode 23679110443
    bad CRC for inode 23679110444
    bad magic number 0x0 on inode 23679110444
    bad version number 0x0 on inode 23679110444
    bad next_unlinked 0x0 on inode 23679110444
    inode identifier 0 mismatch on inode 23679110444
    bad CRC for inode 23679110445
    bad magic number 0x0 on inode 23679110445
    bad version number 0x0 on inode 23679110445
    bad next_unlinked 0x0 on inode 23679110445
    inode identifier 0 mismatch on inode 23679110445
    bad CRC for inode 23679110446
    bad magic number 0x0 on inode 23679110446
    bad version number 0x0 on inode 23679110446
    bad next_unlinked 0x0 on inode 23679110446
    inode identifier 0 mismatch on inode 23679110446
    bad CRC for inode 23679110447
    bad magic number 0x0 on inode 23679110447
    bad version number 0x0 on inode 23679110447
    bad next_unlinked 0x0 on inode 23679110447
    inode identifier 0 mismatch on inode 23679110447
    bad CRC for inode 23679110448
    bad magic number 0x0 on inode 23679110448
    bad version number 0x0 on inode 23679110448
    bad next_unlinked 0x0 on inode 23679110448
    inode identifier 0 mismatch on inode 23679110448
    bad CRC for inode 23679110449
    bad magic number 0x0 on inode 23679110449
    bad version number 0x0 on inode 23679110449
    bad next_unlinked 0x0 on inode 23679110449
    inode identifier 0 mismatch on inode 23679110449
    bad CRC for inode 23679110450
    bad magic number 0x0 on inode 23679110450
    bad version number 0x0 on inode 23679110450
    bad next_unlinked 0x0 on inode 23679110450
    inode identifier 0 mismatch on inode 23679110450
    bad CRC for inode 23679110451
    bad magic number 0x0 on inode 23679110451
    bad version number 0x0 on inode 23679110451
    bad next_unlinked 0x0 on inode 23679110451
    inode identifier 0 mismatch on inode 23679110451
    bad CRC for inode 23679110452
    bad magic number 0x0 on inode 23679110452
    bad version number 0x0 on inode 23679110452
    bad next_unlinked 0x0 on inode 23679110452
    inode identifier 0 mismatch on inode 23679110452
    bad CRC for inode 23679110453
    bad magic number 0x0 on inode 23679110453
    bad version number 0x0 on inode 23679110453
    bad next_unlinked 0x0 on inode 23679110453
    inode identifier 0 mismatch on inode 23679110453
    bad CRC for inode 23679110454
    bad magic number 0x0 on inode 23679110454
    bad version number 0x0 on inode 23679110454
    bad next_unlinked 0x0 on inode 23679110454
    inode identifier 0 mismatch on inode 23679110454
    bad CRC for inode 23679110455
    bad magic number 0x0 on inode 23679110455
    bad version number 0x0 on inode 23679110455
    bad next_unlinked 0x0 on inode 23679110455
    inode identifier 0 mismatch on inode 23679110455
    bad CRC for inode 23679110456
    bad magic number 0x0 on inode 23679110456
    bad version number 0x0 on inode 23679110456
    bad next_unlinked 0x0 on inode 23679110456
    inode identifier 0 mismatch on inode 23679110456
    bad CRC for inode 23679110457
    bad magic number 0x0 on inode 23679110457
    bad version number 0x0 on inode 23679110457
    bad next_unlinked 0x0 on inode 23679110457
    inode identifier 0 mismatch on inode 23679110457
    bad CRC for inode 23679110458
    bad magic number 0x0 on inode 23679110458
    bad version number 0x0 on inode 23679110458
    bad next_unlinked 0x0 on inode 23679110458
    inode identifier 0 mismatch on inode 23679110458
    bad CRC for inode 23679110459
    bad magic number 0x0 on inode 23679110459
    bad version number 0x0 on inode 23679110459
    bad next_unlinked 0x0 on inode 23679110459
    inode identifier 0 mismatch on inode 23679110459
    bad CRC for inode 23679110460
    bad magic number 0x0 on inode 23679110460
    bad version number 0x0 on inode 23679110460
    bad next_unlinked 0x0 on inode 23679110460
    inode identifier 0 mismatch on inode 23679110460
    bad CRC for inode 23679110461
    bad magic number 0x0 on inode 23679110461
    bad version number 0x0 on inode 23679110461
    bad next_unlinked 0x0 on inode 23679110461
    inode identifier 0 mismatch on inode 23679110461
    bad CRC for inode 23679110462
    bad magic number 0x0 on inode 23679110462
    bad version number 0x0 on inode 23679110462
    bad next_unlinked 0x0 on inode 23679110462
    inode identifier 0 mismatch on inode 23679110462
    bad CRC for inode 23679110463
    bad magic number 0x0 on inode 23679110463
    bad version number 0x0 on inode 23679110463
    bad next_unlinked 0x0 on inode 23679110463
    inode identifier 0 mismatch on inode 23679110463
    imap claims inode 23679110432 is present, but inode cluster is sparse, would correct imap
    imap claims inode 23679110433 is present, but inode cluster is sparse, would correct imap
    imap claims inode 23679110434 is present, but inode cluster is sparse, would correct imap
    imap claims inode 23679110435 is present, but inode cluster is sparse, would correct imap
    imap claims inode 23679110436 is present, but inode cluster is sparse, would correct imap
    imap claims inode 23679110437 is present, but inode cluster is sparse, would correct imap
    imap claims inode 23679110438 is present, but inode cluster is sparse, would correct imap
    imap claims inode 23679110439 is present, but inode cluster is sparse, would correct imap
    imap claims inode 23679110440 is present, but inode cluster is sparse, would correct imap
    imap claims inode 23679110441 is present, but inode cluster is sparse, would correct imap
    imap claims inode 23679110442 is present, but inode cluster is sparse, would correct imap
    imap claims inode 23679110443 is present, but inode cluster is sparse, would correct imap
    imap claims inode 23679110444 is present, but inode cluster is sparse, would correct imap
    imap claims inode 23679110445 is present, but inode cluster is sparse, would correct imap
    imap claims inode 23679110446 is present, but inode cluster is sparse, would correct imap
    imap claims inode 23679110447 is present, but inode cluster is sparse, would correct imap
    imap claims inode 23679110448 is present, but inode cluster is sparse, would correct imap
    imap claims inode 23679110449 is present, but inode cluster is sparse, would correct imap
    imap claims inode 23679110450 is present, but inode cluster is sparse, would correct imap
    imap claims inode 23679110451 is present, but inode cluster is sparse, would correct imap
    imap claims inode 23679110452 is present, but inode cluster is sparse, would correct imap
    imap claims inode 23679110453 is present, but inode cluster is sparse, would correct imap
    imap claims inode 23679110454 is present, but inode cluster is sparse, would correct imap
    imap claims inode 23679110455 is present, but inode cluster is sparse, would correct imap
    imap claims inode 23679110456 is present, but inode cluster is sparse, would correct imap
    imap claims inode 23679110457 is present, but inode cluster is sparse, would correct imap
    imap claims inode 23679110458 is present, but inode cluster is sparse, would correct imap
    imap claims inode 23679110459 is present, but inode cluster is sparse, would correct imap
    imap claims inode 23679110460 is present, but inode cluster is sparse, would correct imap
    imap claims inode 23679110461 is present, but inode cluster is sparse, would correct imap
    imap claims inode 23679110462 is present, but inode cluster is sparse, would correct imap
    imap claims inode 23679110463 is present, but inode cluster is sparse, would correct imap
            - agno = 12
            - agno = 13
    imap claims in-use inode 27930508196 is free, would correct imap
    imap claims in-use inode 27930508197 is free, would correct imap
    imap claims in-use inode 27930508198 is free, would correct imap
    imap claims in-use inode 27930508199 is free, would correct imap
            - agno = 14
            - process newly discovered inodes...
    Phase 4 - check for duplicate blocks...
            - setting up duplicate extent list...
            - check for inodes claiming duplicate blocks...
            - agno = 0
            - agno = 1
            - agno = 2
            - agno = 3
            - agno = 4
            - agno = 5
    entry "Season 2" in shortform directory 4318270390 references free inode 17210296093
    would have junked entry "Season 2" in directory inode 4318270390
    would have corrected i8 count in directory 4318270390 from 2 to 1
            - agno = 6
            - agno = 7
    entry "1rLliuramentCA_Solucionat (2)(1).pdf" in shortform directory 12907952426 references free inode 12907952427
    would have junked entry "1rLliuramentCA_Solucionat (2)(1).pdf" in directory inode 12907952426
    entry "2nLliuramentCA_Solucionat(1).pdf" in shortform directory 12907952426 references free inode 12907952428
    would have junked entry "2nLliuramentCA_Solucionat(1).pdf" in directory inode 12907952426
    would have corrected i8 count in directory 12907952426 from 3 to 1
            - agno = 8
            - agno = 9
            - agno = 10
            - agno = 11
    entry "Foundation.S02E04.Where.the.Stars.Are.Scattered.Thinly.2160p.10bit.ATVP.WEB-DL.DDP5.1.HEVC-Vyndros.mkv" in shortform directory 19343621298 references free inode 19343621299
    would have junked entry "Foundation.S02E04.Where.the.Stars.Are.Scattered.Thinly.2160p.10bit.ATVP.WEB-DL.DDP5.1.HEVC-Vyndros.mkv" in directory inode 19343621298
    would have corrected i8 count in directory 19343621298 from 2 to 1
            - agno = 12
    entry "2023-2024" in shortform directory 25769829481 references free inode 21491243199
    would have junked entry "2023-2024" in directory inode 25769829481
    would have corrected i8 count in directory 25769829481 from 6 to 5
            - agno = 13
            - agno = 14
    No modify flag set, skipping phase 5
    Phase 6 - check inode connectivity...
            - traversing filesystem ...
    entry "Season 2" in shortform directory inode 4318270390 points to free inode 17210296093
    would junk entry
    would fix i8count in inode 4318270390
    entry "1rLliuramentCA_Solucionat (2)(1).pdf" in shortform directory inode 12907952426 points to free inode 12907952427
    would junk entry
    entry "2nLliuramentCA_Solucionat(1).pdf" in shortform directory inode 12907952426 points to free inode 12907952428
    would junk entry
    would fix i8count in inode 12907952426
    entry "Foundation.S02E04.Where.the.Stars.Are.Scattered.Thinly.2160p.10bit.ATVP.WEB-DL.DDP5.1.HEVC-Vyndros.mkv" in shortform directory inode 19343621298 points to free inode 19343621299
    would junk entry
    would fix i8count in inode 19343621298
    entry "2023-2024" in shortform directory inode 25769829481 points to free inode 21491243199
    would junk entry
    would fix i8count in inode 25769829481
            - traversal finished ...
            - moving disconnected inodes to lost+found ...
    disconnected dir inode 19343621298, would move to lost+found
    Phase 7 - verify link counts...
    would have reset inode 4318270390 nlinks from 4 to 3
    would have reset inode 25769829481 nlinks from 7 to 6
    Maximum metadata LSN (1:364353) is ahead of log (1:362649).
    Would format log to cycle 4.
    No modify flag set, skipping filesystem flush and exiting.

     

    error.png
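One thing worth spelling out about the report above: it was produced in check-only mode (the -n flag), so nothing was modified; every line containing "would" describes a repair that a real run, without -n, would actually perform. A quick sketch of summarizing such a report, using a few invented sample lines rather than the full output above:

```python
# Sketch: summarize a read-only xfs_repair (-n) report. With -n the
# filesystem is untouched; "would ..." lines list the pending repairs.
# SAMPLE is a short illustrative excerpt, not the full report.
def summarize(report: str) -> dict[str, int]:
    lines = report.splitlines()
    return {
        "proposed_fixes": sum(1 for l in lines if "would" in l),
        "corrupt_inodes": sum(1 for l in lines
                              if l.strip().startswith("bad CRC for inode")),
    }

SAMPLE = """\
Phase 1 - find and verify superblock...
bad CRC for inode 23679110432
imap claims a free inode 12907952427 is in use, would correct imap and clear inode
would have junked entry "Season 2" in directory inode 4318270390
No modify flag set, skipping phase 5
"""
print(summarize(SAMPLE))
```

The "would have junked entry" lines in the full report name the files and directories that stand to be detached (or moved to lost+found) by an actual repair, which is useful to note before letting it run.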
