
Lost my Appdata! No Backups Present


Omeed
Solved by Omeed


I am really frustrated. A couple of weeks ago I messed around with my appdata share like an idiot and changed its cache setting to "no". I had a cache pool that I was trying to break up due to differing drive sizes, and I thought that having the mover move everything to the array would be the safest way forward.

Last night, my wife told me that Plex wasn't working. I checked this morning and the Docker service was not running, and Fix Common Problems had an error which I don't recall. My appdata share was missing. I went to Shares to add it back, thinking this would fix the problem, but I wasn't able to add the share. In fact, I can't add any shares at all now. I restarted my server hoping something would kick in, but nothing. I went into Backup/Restore Appdata, but there are no backups! All the backups are missing even though they run on a schedule. Fix Common Problems alerts me that it is unable to write to disk 3, but the Main tab shows no notifications. I stopped my array, restarted in maintenance mode, and ran a read-only check (xfs_repair -n) on disk 3. Here are the results:

Phase 1 - find and verify superblock...
        - block cache size set to 702768 entries
Phase 2 - using internal log
        - zero log...
zero_log: head block 232535 tail block 231826
ALERT: The filesystem has valuable metadata changes in a log which is being
ignored because the -n option was used.  Expect spurious inconsistencies
which may be resolved by first mounting the filesystem to replay the log.
        - scan filesystem freespace and inode maps...
sb_ifree 42382, counted 42395
sb_fdblocks 1108438279, counted 1122532236
        - found root inode chunk
Phase 3 - for each AG...
        - scan (but don't clear) agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - agno = 6
Metadata CRC error detected at 0x44d87d, xfs_bmbt block 0x3116e9cf0/0x1000
btree block 6/36557732 is suspect, error -74
bad magic # 0x241a9c92 in inode 12968150401 (data fork) bmbt block 1647170468
bad data fork in inode 12968150401
would have cleared inode 12968150401
        - agno = 7
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 4
        - agno = 5
        - agno = 3
        - agno = 6
        - agno = 7
bad magic # 0x241a9c92 in inode 12968150401 (data fork) bmbt block 1647170468
bad data fork in inode 12968150401
would have cleared inode 12968150401
entry "docker.img" in shortform directory 13017619199 references free inode 12968150401
would have junked entry "docker.img" in directory inode 13017619199
would have corrected i8 count in directory 13017619199 from 2 to 1
No modify flag set, skipping phase 5
Phase 6 - check inode connectivity...
        - traversing filesystem ...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - agno = 6
entry "docker.img" in shortform directory inode 13017619199 points to free inode 12968150401
would junk entry
would fix i8count in inode 13017619199
        - agno = 7
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
Phase 7 - verify link counts...
Maximum metadata LSN (1984442613:-916355275) is ahead of log (3:232535).
Would format log to cycle 1984442616.
No modify flag set, skipping filesystem flush and exiting.

        XFS_REPAIR Summary    Sun Oct 23 09:39:11 2022

Phase		Start		End		Duration
Phase 1:	10/23 09:37:51	10/23 09:37:52	1 second
Phase 2:	10/23 09:37:52	10/23 09:37:55	3 seconds
Phase 3:	10/23 09:37:55	10/23 09:38:34	39 seconds
Phase 4:	10/23 09:38:34	10/23 09:38:34
Phase 5:	Skipped
Phase 6:	10/23 09:38:34	10/23 09:39:11	37 seconds
Phase 7:	10/23 09:39:11	10/23 09:39:11

Total run time: 1 minute, 20 seconds
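For reference, that was the read-only check, which passes -n and makes no changes (that's why the output says "No modify flag set"). If I understand correctly, the equivalent command line would be something like this (assuming disk 3 is /dev/md3 while the array is in maintenance mode; substitute your actual disk number):

```shell
# Read-only check: -n reports problems but writes nothing.
# /dev/md3 is an assumption -- use your actual disk device.
xfs_repair -n /dev/md3

# An actual repair drops the -n. It may refuse and ask for -L to
# zero the journal if the log can't be replayed; -L can lose the
# most recent metadata changes, so it's a last resort.
xfs_repair /dev/md3
```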

 

What do I do next? I would really appreciate some help. I have so many Docker containers that took years to get all the settings and integrations right, and I'm afraid I'll have to start from scratch.


Please tell me you're backing them up with the Backup / Restore Appdata plug-in?

 

If not and the appdata folder is gone then I think you’re up a creek.

 

ETA: So the zip files aren’t in the path specified in CA Backup / Restore?

 

ETA: Were you getting emails from Backup / Restore when it ran?

Edited by jlficken

I tried to run a repair and this is what I got next:

Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - zero log...
ALERT: The filesystem has valuable metadata changes in a log which is being
destroyed because the -L option was used.
        - scan filesystem freespace and inode maps...
sb_ifree 42382, counted 42395
sb_fdblocks 1108438279, counted 1122532236
        - found root inode chunk
Phase 3 - for each AG...
        - scan and clear agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - agno = 6
Metadata CRC error detected at 0x44d87d, xfs_bmbt block 0x3116e9cf0/0x1000
btree block 6/36557732 is suspect, error -74
bad magic # 0x241a9c92 in inode 12968150401 (data fork) bmbt block 1647170468
bad data fork in inode 12968150401
cleared inode 12968150401
        - agno = 7
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - agno = 6
        - agno = 7
entry "docker.img" in shortform directory 13017619199 references free inode 12968150401
junking entry "docker.img" in directory inode 13017619199
corrected i8 count in directory 13017619199, was 2, now 1
Phase 5 - rebuild AG headers and trees...
        - reset superblock...
Phase 6 - check inode connectivity...
        - resetting contents of realtime bitmap and summary inodes
        - traversing filesystem ...
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
Phase 7 - verify and correct link counts...
Metadata corruption detected at 0x44d778, xfs_bmbt block 0x3116e9cf0/0x1000
libxfs_bwrite: write verifier failed on xfs_bmbt bno 0x3116e9cf0/0x1000
Maximum metadata LSN (1984442613:-916355275) is ahead of log (1:2).
Format log to cycle 1984442616.
xfs_repair: Releasing dirty buffer to free list!
xfs_repair: Refusing to write a corrupt buffer to the data device!
xfs_repair: Lost a write to the data device!

fatal error -- File system metadata writeout failed, err=117.  Re-run xfs_repair.
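Per that last line, it sounds like the tool just wants another pass (same assumed device path as before):

```shell
# The previous pass made most of its fixes but bailed on a corrupt
# buffer; re-running lets it finish. /dev/md3 is assumed for disk 3.
xfs_repair /dev/md3
```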

 

3 minutes ago, jlficken said:

Please tell me you're backing them up with the Backup / Restore Appdata plug-in?

 

If not and the appdata folder is gone then I think you’re up a creek.

I have the Backup / Restore Appdata plug-in and it was running fine, but when I went to restore there were no backups available! I don't know what happened!


Another run of xfs_repair:

Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - zero log...
        - scan filesystem freespace and inode maps...
        - found root inode chunk
Phase 3 - for each AG...
        - scan and clear agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - agno = 6
        - agno = 7
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - agno = 6
        - agno = 7
Phase 5 - rebuild AG headers and trees...
        - reset superblock...
Phase 6 - check inode connectivity...
        - resetting contents of realtime bitmap and summary inodes
        - traversing filesystem ...
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
Phase 7 - verify and correct link counts...
Maximum metadata LSN (3:231829) is ahead of log (1:2).
Format log to cycle 6.
done
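That looks like a clean finish. If I have this right, the follow-up steps would be roughly the following (assuming disk 3 mounts at /mnt/disk3 once the array is started normally; paths may differ on your system):

```shell
# See whether the repair dropped any recovered files into lost+found
ls -la /mnt/disk3/lost+found/

# The earlier pass junked the docker.img entry, so the Docker image
# probably has to be recreated: Settings -> Docker -> disable the
# service, delete the image, re-enable it, then reinstall containers
# from Apps -> Previous Apps (container templates live on the flash
# drive, so the settings should come back).
```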

 

