brambo23

Posts posted by brambo23

  1. I recently upgraded my Unraid server from 6.9.2 to 6.12.8 due to other issues.

     

    After the upgrade, it seemed the Docker settings changed slightly (I also accidentally upgraded Nextcloud to the latest version while I was on 22.2.2, but I have since set the tag back to the version I'm currently running). The port had switched back to 443 (the default), so I changed it back to my previous setting, port 4445 (noted in the config file below).

     

    I did run the New Permissions tool, as I had found an issue with my Plex server after upgrading past 6.10.

     

    Since the upgrade, my Nextcloud returns a 404 error on every page I visit.

     

    I'm struggling to find any additional clues about where to look for the problem.

     

    The Docker logs show:

    Quote

    GID/UID
    -------------------------------------

    User uid:    99
    User gid:    100
    -------------------------------------

    [cont-init.d] 10-adduser: exited 0.
    [cont-init.d] 20-config: executing...
    [cont-init.d] 20-config: exited 0.
    [cont-init.d] 30-keygen: executing...
    using keys found in /config/keys
    [cont-init.d] 30-keygen: exited 0.
    [cont-init.d] 40-config: executing...
    [cont-init.d] 40-config: exited 0.
    [cont-init.d] 50-install: executing...
    [cont-init.d] 50-install: exited 0.
    [cont-init.d] 60-memcache: executing...
    [cont-init.d] 60-memcache: exited 0.
    [cont-init.d] 70-aliases: executing...
    [cont-init.d] 70-aliases: exited 0.
    [cont-init.d] 90-custom-folders: executing...
    [cont-init.d] 90-custom-folders: exited 0.
    [cont-init.d] 99-custom-files: executing...
    [custom-init] no custom files found exiting...
    [cont-init.d] 99-custom-files: exited 0.
    [cont-init.d] done.
    [services.d] starting services
    [services.d] done.

     

    But when I checked it again later, I saw this repeated over and over:

     

    Quote

    s6-applyuidgid: fatal: unable to exec php: No such file or directory

     

     

    The port on the Docker container matches the config file:

    Quote

    'trusted_domains' =>
      array (
        0 => 'x.x.x.x:4445',
        1 => 'cloud.<domain>.net',
      ),
      'dbtype' => 'mysql',
      'version' => '22.2.0.2',
      'overwrite.cli.url' => 'https://cloud.<domain>.net',
      'overwritehost' => 'cloud.<domain>.net',
      'overwriteprotocol' => 'https'

     

    The access log shows the following regardless of which page I visit (nextcloud/log/nginx/access.log):

    Quote

    [18/Mar/2024:19:46:54 -0400] "GET /favicon.ico HTTP/1.1" 404 153 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:123.0) Gecko/20100101 Firefox/123.0"

     

    Any tips on where I can find the source of the problem here?
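    For reference, this is a rough sketch of the checks I plan to run next, assuming a linuxserver.io-style container named "nextcloud" and the default appdata path (the container name and paths here are assumptions, not what I have actually run):

    # Confirm the image tag really is pinned back to the 22.x release
    docker inspect nextcloud --format '{{.Config.Image}}'
    # Confirm the host port mapping still points 4445 at the container's 443
    docker port nextcloud
    # "unable to exec php" suggests something is broken inside the container,
    # so check whether a php binary is present at all
    docker exec nextcloud which php
    # Double-check the trusted_domains / overwrite settings in config.php
    grep -E -A 3 'trusted_domains|overwrite' /mnt/user/appdata/nextcloud/www/nextcloud/config/config.php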

  2. 1 minute ago, trurl said:

    You did this on the physical disks again? I ask because repairing the emulated disks is the usual method.

    On the physical disks, yes. I did not do anything with the emulated disks (that I know of).

    2 minutes ago, trurl said:

    If the physical disks are mountable then yes. Do they have lost+found folders after this additional repair?

    I repaired and mounted the drive, then looked through it again, and found no folder or anything else named "lost+found".
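    (For reference, the equivalent check from the console would be something like the line below; the /mnt/disks path is an assumption based on where Unassigned Devices normally mounts drives.)

    # List any top-level lost+found directory on the mounted disks
    ls -ld /mnt/disks/*/lost+found 2>/dev/null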

     

  3. 5 minutes ago, trurl said:

    What exactly do you mean by this? If you started rebuilding on the physical disks, you have already altered their contents and they are most likely unmountable now.

    What I meant is: I started the array and it immediately began the rebuild process. Out of fear I cancelled the process, and the drives were not mountable until I ran a file system check. That repaired the issue again, and then I was able to mount the drives.

     

    It would appear the corruption is already reflected in parity.

     

    3 minutes ago, trurl said:

    Do you have another copy of anything important and irreplaceable? Parity is not a substitute for backups. Plenty of ways to lose data besides failed disks, including user error.

    There's nothing terribly important lost, mostly just time. I don't store important personal data on this array for that reason. If it were to die, all I would have lost is time, and my goal is to save myself time if at all possible. I built this server many years ago, but when it comes to the details of how everything works, I will admit I am not the most knowledgeable (if that isn't already apparent).

     

    7 minutes ago, trurl said:

    Rebuild makes the physical disks have the exact same contents as the emulated disks. That is all it can do.

     

    So, having said that, it seems that if I want to keep the data and the repaired drives, the best move is to reset the array configuration?

  4. 7 hours ago, trurl said:

    That is exactly what I mean by "reassign".

     

    It will rebuild them with the contents of the emulated disks, which were unmountable. 

    So the second problem with that is that I realized the emulated disks are missing data that is on those physical disks.

     

    What happens if I rebuild the array in that state? Will it rebuild the array from the emulated disks and then add the data from the physical disks? Or will the physical disks end up matching the emulated disks, so I lose that data anyway?

  5. 3 minutes ago, trurl said:

     

    Can you actually see your data on each of the disks?

     

    If you reassign them it will want to rebuild. You will have to New Config them back into the array and rebuild parity.

     

    https://docs.unraid.net/unraid-os/manual/storage-management/#reset-the-array-configuration

    I can see data on the disks.

     

    Do I have to reassign them? Can I just add them back to the slots they were originally assigned to?

    [screenshot attachments]

  6. 2 minutes ago, trurl said:

    Do it again without -n. If it asks for it use -L

    I ran it again (I didn't change any flags) and now it says no file system corruption was detected.

     

    Quote

    FS: xfs

    Executing file system check: /sbin/xfs_repair -n '/dev/sdc1' 2>&1
    Phase 1 - find and verify superblock...
    Phase 2 - using internal log
    - zero log...
    - scan filesystem freespace and inode maps...
    - found root inode chunk
    Phase 3 - for each AG...
    - scan (but don't clear) agi unlinked lists...
    - process known inodes and perform inode discovery...
    - agno = 0
    - agno = 1
    - agno = 2
    - agno = 3
    - agno = 4
    - agno = 5
    - agno = 6
    - agno = 7
    - agno = 8
    - agno = 9
    - agno = 10
    - agno = 11
    - agno = 12
    - process newly discovered inodes...
    Phase 4 - check for duplicate blocks...
    - setting up duplicate extent list...
    - check for inodes claiming duplicate blocks...
    - agno = 0
    - agno = 1
    - agno = 4
    - agno = 7
    - agno = 8
    - agno = 10
    - agno = 5
    - agno = 2
    - agno = 9
    - agno = 12
    - agno = 6
    - agno = 3
    - agno = 11
    No modify flag set, skipping phase 5
    Phase 6 - check inode connectivity...
    - traversing filesystem ...
    - traversal finished ...
    - moving disconnected inodes to lost+found ...
    Phase 7 - verify link counts...
    No modify flag set, skipping filesystem flush and exiting.

    No file system corruption detected!
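    (For my own notes, the repair pass without -n that trurl suggested would look roughly like the commands below when run from the console; the device name is taken from the output above, and -L should only be used if xfs_repair explicitly asks for it, since it zeroes the log.)

    # Read-write repair on the unassigned disk's partition
    xfs_repair /dev/sdc1
    # Only if the tool refuses to run and asks for it:
    # xfs_repair -L /dev/sdc1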

     

  7. So after this test, I mounted and unmounted the drives (as unassigned devices) and it was able to read the amount of data on each drive. I ran the file system check on both drives, and now both of them say no file system corruption was detected.

     

    Also, I did buy and install a new card based on the recommendation of @trurl.

    Should I be OK to assign these drives back to the array and try to start the array again?

  8. On 3/15/2024 at 6:18 AM, JonathanM said:

    Pretty sure you can; I can't think of any changes that would affect your ability to recover.

    So I upgraded and installed it.

     

    On 3/14/2024 at 8:51 PM, trurl said:

    Check filesystem on each of those disks, using the webUI. Capture the output and post it.

    For the drive at sdc (252MG):

    Quote

    FS: xfs

    Executing file system check: /sbin/xfs_repair -n '/dev/sdc1' 2>&1
    Phase 1 - find and verify superblock...
    Phase 2 - using internal log
    - zero log...
    ALERT: The filesystem has valuable metadata changes in a log which is being
    ignored because the -n option was used. Expect spurious inconsistencies
    which may be resolved by first mounting the filesystem to replay the log.
    - scan filesystem freespace and inode maps...
    sb_fdblocks 3210815618, counted 3234651569
    - found root inode chunk
    Phase 3 - for each AG...
    - scan (but don't clear) agi unlinked lists...
    - process known inodes and perform inode discovery...
    - agno = 0
    - agno = 1
    - agno = 2
    - agno = 3
    - agno = 4
    - agno = 5
    - agno = 6
    - agno = 7
    - agno = 8
    - agno = 9
    - agno = 10
    - agno = 11
    - agno = 12
    - process newly discovered inodes...
    Phase 4 - check for duplicate blocks...
    - setting up duplicate extent list...
    - check for inodes claiming duplicate blocks...
    - agno = 0
    - agno = 1
    - agno = 3
    - agno = 8
    - agno = 10
    - agno = 4
    - agno = 5
    - agno = 6
    - agno = 7
    - agno = 11
    - agno = 9
    - agno = 2
    - agno = 12
    No modify flag set, skipping phase 5
    Phase 6 - check inode connectivity...
    - traversing filesystem ...
    - traversal finished ...
    - moving disconnected inodes to lost+found ...
    Phase 7 - verify link counts...
    No modify flag set, skipping filesystem flush and exiting.

    File system corruption detected!

     

    For the drive at sdb (RD0B):

    Quote

    FS: xfs

    Executing file system check: /sbin/xfs_repair -n '/dev/sdb1' 2>&1
    Phase 1 - find and verify superblock...
    Phase 2 - using internal log
    - zero log...
    - scan filesystem freespace and inode maps...
    - found root inode chunk
    Phase 3 - for each AG...
    - scan (but don't clear) agi unlinked lists...
    - process known inodes and perform inode discovery...
    - agno = 0
    - agno = 1
    - agno = 2
    - agno = 3
    - agno = 4
    - agno = 5
    - agno = 6
    - agno = 7
    - agno = 8
    - agno = 9
    - agno = 10
    - agno = 11
    - agno = 12
    - process newly discovered inodes...
    Phase 4 - check for duplicate blocks...
    - setting up duplicate extent list...
    - check for inodes claiming duplicate blocks...
    - agno = 0
    - agno = 1
    - agno = 4
    - agno = 5
    - agno = 9
    - agno = 11
    - agno = 12
    - agno = 7
    - agno = 8
    - agno = 3
    - agno = 6
    - agno = 2
    - agno = 10
    No modify flag set, skipping phase 5
    Phase 6 - check inode connectivity...
    - traversing filesystem ...
    - traversal finished ...
    - moving disconnected inodes to lost+found ...
    Phase 7 - verify link counts...
    No modify flag set, skipping filesystem flush and exiting.

    No file system corruption detected!
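    (Side note: the ALERT in the sdc output says the spurious inconsistencies may be resolved by first mounting the filesystem to replay the log; from the console that would be roughly the commands below, with the mount point being an assumption.)

    mkdir -p /mnt/check
    mount /dev/sdc1 /mnt/check    # mounting an XFS volume replays its journal
    umount /mnt/check
    xfs_repair -n /dev/sdc1       # then re-run the read-only check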

     

  9. 1 minute ago, itimpi said:

    NO.   

     

    The drives are not 'needing' a format. They need their corrupt file system to be repaired. If you attempt to format the drives while they are disabled, it will simply format the 'emulated' drives to contain an empty file system and update parity to reflect that, so in effect you wipe all your data. A format operation is NEVER part of a data recovery action unless you WANT to remove the data on the drives you format.

     

     

    Understood. So if I replaced those drives with NEW drives, it would repopulate the data, correct?

  10. 8 hours ago, itimpi said:

    Since the drives are currently marked as 'disabled', Unraid has stopped using them and should be emulating them. You can see if the process for handling unmountable drives (described in the online documentation, accessible via the Manual link at the bottom of the Unraid GUI) works for the emulated drives.

     

    If not, there is a good chance that all (or at least most) of the contents can be recovered from the physical drives.

    So out of curiosity,

     

    since there are two drives that seem to need a reformat: if I ended up just reformatting those, wouldn't parity restore the data to those drives?

  11. 3 hours ago, JorgeB said:

    I would really recommend avoiding SATA port multipliers:

     

    Mar 13 04:02:57 LLNNAS1337 kernel: ata4.15: Port Multiplier detaching
    Mar 13 04:02:57 LLNNAS1337 kernel: ahci 0000:03:00.0: FBS is disabled
    Mar 13 04:02:57 LLNNAS1337 kernel: ata4.01: disabled
    Mar 13 04:02:57 LLNNAS1337 kernel: ata4.02: disabled
    Mar 13 04:02:57 LLNNAS1337 kernel: ata4.03: disabled
    Mar 13 04:02:57 LLNNAS1337 kernel: ata4.04: disabled
    Mar 13 04:02:57 LLNNAS1337 kernel: ata4.00: disabled

     

    It detached and dropped all connected disks; reboot and post new diags after array start.

    I assume that's referring to my PCIe SATA expansion card?
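    (To confirm which controller those messages refer to, something like the command below from the console should show it; the 0000:03:00.0 address comes from the log above.)

    # List SATA/AHCI controllers with their PCI addresses
    lspci -nn | grep -i -E 'sata|ahci'
    # The entry at 03:00.0 should be the add-on card the log is complaining about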

  12. Hello,

     

    I woke up this morning and found that some of my Docker containers weren't working. However, I was busy with work, so I couldn't get to it until the evening. When I looked at my Unraid server, I saw that my 3 most recent drives were listed as missing from the array (the ones in the first screenshot).

     

    I have a 12-drive array with 2 parity drives. The server is in a Silverstone SST-CS308B case with a StarTech 3-drive hot-swap bay connected to a PCIe SATA expansion card (I'm not sure which one, but I know I bought it from the known-working list over 5 years ago).

     

    The 3 drives in question are in that 3-drive hot-swap bay. I stopped the array (and downloaded the diagnostics) and tried to get the drives detected again, but to no avail. After rebooting the server, 1 of the drives is now usable, but 2 are listed as detected but unmountable.

     

    They've been in the system since December or so. They don't have much data on them, but I want to see if there is a way to make them mountable at this point.
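    (For what it's worth, a quick way to check whether the OS sees the drives at all is the command below, run from the Unraid console.)

    # List block devices with size, model, and serial to confirm the drives are detected
    lsblk -o NAME,SIZE,MODEL,SERIAL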

     

     

    Thanks in advance.

    [screenshot attachment]

    llnnas1337-diagnostics-20240313-1909.zip
