
Lost appdata, docker.img and disk2 error (SOLVED)


Solved by JorgeB


Hoping someone can help me out here. I browsed the forums and the very thorough FAQ, but I'm a little worried that there is more going on here than I realize. There was no power outage or unclean shutdown, but this morning I was unable to access my dockers, and when I logged in I was greeted with an "unable to write disk2" error and an "unable to write to Docker Image" error. I then noticed that my appdata share is missing in the Shares tab but is present if I browse the cache disk manually. When I browse the file system of Disk2, it is present but carries a lot of files I'm not familiar with, as if Disk 2 had become some kind of system drive and not part of the data pool.

 

A scheduled start-of-month parity check is currently in progress, so I'm hoping this isn't being written to parity.

 

(Attached screenshots: Screenshot 2023-04-02 142523.png, Screenshot 2023-04-02 142543.png)

 

I can't get my Docker to restart and haven't dug too deep, as I'm afraid of data loss. I will be out of town for 3 days panicking that this takes out my pfSense VM as well... the gf will kill me if the internet tanks while I'm gone. Diagnostics zip attached as well.

Screenshot 2023-04-02 114654.png

tower-diagnostics-20230402-1353.zip

24 minutes ago, Squid said:

Run the check disk filesystem against disk 2

Well, things have descended further. I cancelled the parity check so that I could stop the array and enter maintenance mode. The array refused to stop on command (it still displayed as started after 15 minutes), so I requested a safe reboot, which completed after about 10 minutes. On reboot, Disk 2 now displays as unmountable, and when in maintenance mode the check filesystem option is unavailable.

19 hours ago, JorgeB said:

Post the complete output you are getting.

That was in fact the full output. I am new to this kind of error, so I think I have missed a step based on my reading. Disk2 shows unmountable, and I don't think my array is actually entering maintenance mode when requested, but I can't find any info regarding this in the Unraid manual. The storage management guide simply says to start the array in maintenance mode, but when I select that box and start the array, the disks remain selectable as if the array were off, and the "Check Filesystem Status" option in the GUI informs me that I need to be in maintenance mode.

 

I am away from home for work until Thursday morning, but I will post further output and diagnostics when I can. I may also buy another 8TB HDD to swap in if this can be solved with a drive replacement (SMART and Unraid both list the current disk as healthy, though). Hopefully I'm not missing something super simple here; I've watched SpaceInvader One's video, read the manual on storage, and browsed the forum, but I can't seem to get xfs_repair to actually run on this disk.

5 hours ago, JorgeB said:

Are you using Firefox? If yes try with a different browser.

I am, only because I am administering the server directly and Firefox appears to be what loads by default in the GUI. I lost remote management when I stopped the array and my pfSense VM died (it is also now missing after the reboot, so that's another nightmare to add to this). I will re-route my network temporarily when I get home and give Chrome a shot. It still seems strange that running xfs_repair from the terminal came back as "killed" in verbose mode.
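
For reference, the command I have been trying from the terminal is roughly the one below (the device path is my assumption for disk2; depending on the Unraid version it may be /dev/md2 or /dev/md2p1):

    xfs_repair -nv /dev/md2    # -n = read-only check, no modifications; -v = verbose

That is the run that comes back as "killed" for me.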


 

Sorry about the bad formatting; there are lots of errors, as well as some directory entry errors indicating they would be "junked".

Finally got a file system check done and I was greeted with this:

 

Phase 1 - find and verify superblock... - reporting progress in intervals of 15 minutes - block cache size set to 700248 entries Phase 2 - using internal log - zero log... zero_log: head block 102922 tail block 102715 ALERT: The filesystem has valuable metadata changes in a log which is being ignored because the -n option was used. Expect spurious inconsistencies which may be resolved by first mounting the filesystem to replay the log. - 09:25:03: zeroing log - 119233 of 119233 blocks done - scan filesystem freespace and inode maps... bad magic number bad on-disk superblock 16 - bad magic number primary/secondary superblock 16 conflict - AG superblock geometry info conflicts with filesystem geometry would zero unused portion of secondary superblock (AG #16) would fix compat feature mismatch in AG 16 super, 0x0 != 0x50ddc0f9 would fix incompat feature mismatch in AG 16 super, 0x3 != 0xfd6ffbf2 would fix ro compat feature mismatch in AG 16 super, 0x5 != 0x46cb2ed0 would fix log incompat feature mismatch in AG 16 super, 0x0 != 0x39423c7e would reset bad sb for ag 16 bad uncorrected agheader 16, skipping ag... sb_icount 1472, counted 2240 sb_ifree 1365, counted 1343 sb_fdblocks 1806905172, counted 1428095951 - 09:25:04: scanning filesystem freespace - 31 of 32 allocation groups done - found root inode chunk Phase 3 - for each AG... - scan (but don't clear) agi unlinked lists... - 09:25:04: scanning agi unlinked lists - 32 of 32 allocation groups done - process known inodes and perform inode discovery... - agno = 15 - agno = 30 - agno = 0 - agno = 16 - agno = 17 - agno = 1 - agno = 18 - agno = 19 - agno = 20 - agno = 21 - agno = 22 - agno = 23 - agno = 31 Metadata corruption detected at 0x4379a3, xfs_inode block 0x3868765c0/0x4000 bad CRC for inode 16645100832 bad magic number 0x241a on inode 16645100832 bad version number 0x6d on inode 16645100832 bad next_unlinked 0xdeda645c on inode 16645100832 inode identifier 1575615604729810958 mismatch on inode 16645100832 bad CRC for inode 16645100833 bad magic number 0x241a on inode 16645100833 bad version number 0x6d on inode 16645100833 bad next_unlinked 0xdeda645c on inode 16645100833 inode identifier 1575615604729810958 mismatch on inode 16645100833 bad CRC for inode 16645100834 bad magic number 0x241a on inode 16645100834 bad version number 0x6d on inode 16645100834 bad next_unlinked 0xdeda645c on inode 16645100834 inode identifier 1575615604729810958 mismatch on inode 16645100834 bad CRC for inode 16645100835 bad magic number 0x241a on inode 16645100835 bad version number 0x6d on inode 16645100835 bad next_unlinked 0xdeda645c on inode 16645100835 inode identifier 1575615604729810958 mismatch on inode 16645100835 bad CRC for inode 16645100836 bad magic number 0x241a on inode 16645100836 bad version number 0x6d on inode 16645100836 bad next_unlinked 0xdeda645c on inode 16645100836 inode identifier 1575615604729810958 mismatch on inode 16645100836 bad CRC for inode 16645100837 bad magic number 0x241a on inode 16645100837 bad version number 0x6d on inode 16645100837 bad next_unlinked 0xdeda645c on inode 16645100837 inode identifier 1575615604729810958 mismatch on inode 16645100837 bad CRC for inode 16645100838 bad magic number 0x241a on inode 16645100838 bad version number 0x6d on inode 16645100838 bad next_unlinked 0xdeda645c on inode 16645100838 inode identifier 1575615604729810958 mismatch on inode 16645100838 bad CRC for inode 16645100839 bad magic number 0x241a on inode 16645100839 bad version number 0x6d on inode 16645100839 bad next_unlinked 
0xdeda645c on inode 16645100839 inode identifier 1575615604729810958 mismatch on inode 16645100839 bad CRC for inode 16645100840 bad magic number 0x241a on inode 16645100840 bad version number 0x6d on inode 16645100840 bad next_unlinked 0xdeda645c on inode 16645100840 inode identifier 1575615604729810958 mismatch on inode 16645100840 bad CRC for inode 16645100841 bad magic number 0x241a on inode 16645100841 bad version number 0x6d on inode 16645100841 bad next_unlinked 0xdeda645c on inode 16645100841 inode identifier 1575615604729810958 mismatch on inode 16645100841 bad CRC for inode 16645100842 bad magic number 0x241a on inode 16645100842 bad version number 0x6d on inode 16645100842 bad next_unlinked 0xdeda645c on inode 16645100842 inode identifier 1575615604729810958 mismatch on inode 16645100842 bad CRC for inode 16645100843 bad magic number 0x241a on inode 16645100843 bad version number 0x6d on inode 16645100843 bad next_unlinked 0xdeda645c on inode 16645100843 inode identifier 1575615604729810958 mismatch on inode 16645100843 bad CRC for inode 16645100844 bad magic number 0x241a on inode 16645100844 bad version number 0x6d on inode 16645100844 bad next_unlinked 0xdeda645c on inode 16645100844 inode identifier 1575615604729810958 mismatch on inode 16645100844 bad CRC for inode 16645100845 bad magic number 0x241a on inode 16645100845 bad version number 0x6d on inode 16645100845 bad next_unlinked 0xdeda645c on inode 16645100845 inode identifier 1575615604729810958 mismatch on inode 16645100845 bad CRC for inode 16645100846 bad magic number 0x241a on inode 16645100846 bad version number 0x6d on inode 16645100846 bad next_unlinked 0xdeda645c on inode 16645100846 inode identifier 1575615604729810958 mismatch on inode 16645100846 bad CRC for inode 16645100847 bad magic number 0x241a on inode 16645100847 bad version number 0x6d on inode 16645100847 bad next_unlinked 0xdeda645c on inode 16645100847 inode identifier 1575615604729810958 mismatch on inode 16645100847 bad CRC for inode 16645100848 bad magic number 0x241a on inode 16645100848 bad version number 0x6d on inode 16645100848 bad next_unlinked 0xdeda645c on inode 16645100848 inode identifier 1575615604729810958 mismatch on inode 16645100848 bad CRC for inode 16645100849 bad magic number 0x241a on inode 16645100849 bad version number 0x6d on inode 16645100849 bad next_unlinked 0xdeda645c on inode 16645100849 inode identifier 1575615604729810958 mismatch on inode 16645100849 bad CRC for inode 16645100850 bad magic number 0x241a on inode 16645100850 bad version number 0x6d on inode 16645100850 bad next_unlinked 0xdeda645c on inode 16645100850 inode identifier 1575615604729810958 mismatch on inode 16645100850 bad CRC for inode 16645100851 bad magic number 0x241a on inode 16645100851 bad version number 0x6d on inode 16645100851 bad next_unlinked 0xdeda645c on inode 16645100851 inode identifier 1575615604729810958 mismatch on inode 16645100851 bad CRC for inode 16645100852 bad magic number 0x241a on inode 16645100852 bad version number 0x6d on inode 16645100852 bad next_unlinked 0xdeda645c on inode 16645100852 inode identifier 1575615604729810958 mismatch on inode 16645100852 bad CRC for inode 16645100853 bad magic number 0x241a on inode 16645100853 bad version number 0x6d on inode 16645100853 bad next_unlinked 0xdeda645c on inode 16645100853 inode identifier 1575615604729810958 mismatch on inode 16645100853 bad CRC for inode 16645100854 bad magic number 0x241a on inode 16645100854 bad version number 0x6d on inode 16645100854 bad next_unlinked 
0xdeda645c on inode 16645100854 inode identifier 1575615604729810958 mismatch on inode 16645100854 bad CRC for inode 16645100855 bad magic number 0x241a on inode 16645100855 bad version number 0x6d on inode 16645100855 bad next_unlinked 0xdeda645c on inode 16645100855 inode identifier 1575615604729810958 mismatch on inode 16645100855 bad CRC for inode 16645100856 bad magic number 0x241a on inode 16645100856 bad version number 0x6d on inode 16645100856 bad next_unlinked 0xdeda645c on inode 16645100856 inode identifier 1575615604729810958 mismatch on inode 16645100856 bad CRC for inode 16645100857 bad magic number 0x241a on inode 16645100857 bad version number 0x6d on inode 16645100857 bad next_unlinked 0xdeda645c on inode 16645100857 inode identifier 1575615604729810958 mismatch on inode 16645100857 bad CRC for inode 16645100858 bad magic number 0x241a on inode 16645100858 bad version number 0x6d on inode 16645100858 bad next_unlinked 0xdeda645c on inode 16645100858 inode identifier 1575615604729810958 mismatch on inode 16645100858 bad CRC for inode 16645100859 bad magic number 0x241a on inode 16645100859 bad version number 0x6d on inode 16645100859 bad next_unlinked 0xdeda645c on inode 16645100859 inode identifier 1575615604729810958 mismatch on inode 16645100859 bad CRC for inode 16645100860 bad magic number 0x241a on inode 16645100860 bad version number 0x6d on inode 16645100860 bad next_unlinked 0xdeda645c on inode 16645100860 inode identifier 1575615604729810958 mismatch on inode 16645100860 bad CRC for inode 16645100861 bad magic number 0x241a on inode 16645100861 bad version number 0x6d on inode 16645100861 bad next_unlinked 0xdeda645c on inode 16645100861 inode identifier 1575615604729810958 mismatch on inode 16645100861 bad CRC for inode 16645100862 bad magic number 0x241a on inode 16645100862 bad version number 0x6d on inode 16645100862 bad next_unlinked 0xdeda645c on inode 16645100862 inode identifier 1575615604729810958 mismatch on inode 16645100862 bad CRC for inode 16645100863 bad magic number 0x241a on inode 16645100863 bad version number 0x6d on inode 16645100863 bad next_unlinked 0xdeda645c on inode 16645100863 inode identifier 1575615604729810958 mismatch on inode 16645100863 imap claims inode 16645100832 is present, but inode cluster is sparse, correcting imap imap claims inode 16645100833 is present, but inode cluster is sparse, correcting imap imap claims inode 16645100834 is present, but inode cluster is sparse, correcting imap imap claims inode 16645100835 is present, but inode cluster is sparse, correcting imap imap claims inode 16645100836 is present, but inode cluster is sparse, correcting imap imap claims inode 16645100837 is present, but inode cluster is sparse, correcting imap imap claims inode 16645100838 is present, but inode cluster is sparse, correcting imap imap claims inode 16645100839 is present, but inode cluster is sparse, correcting imap imap claims inode 16645100840 is present, but inode cluster is sparse, correcting imap imap claims inode 16645100841 is present, but inode cluster is sparse, correcting imap imap claims inode 16645100842 is present, but inode cluster is sparse, correcting imap imap claims inode 16645100843 is present, but inode cluster is sparse, correcting imap imap claims inode 16645100844 is present, but inode cluster is sparse, correcting imap imap claims inode 16645100845 is present, but inode cluster is sparse, correcting imap imap claims inode 16645100846 is present, but inode cluster is sparse, correcting imap imap claims inode 
16645100847 is present, but inode cluster is sparse, correcting imap imap claims inode 16645100848 is present, but inode cluster is sparse, correcting imap imap claims inode 16645100849 is present, but inode cluster is sparse, correcting imap imap claims inode 16645100850 is present, but inode cluster is sparse, correcting imap imap claims inode 16645100851 is present, but inode cluster is sparse, correcting imap imap claims inode 16645100852 is present, but inode cluster is sparse, correcting imap imap claims inode 16645100853 is present, but inode cluster is sparse, correcting imap imap claims inode 16645100854 is present, but inode cluster is sparse, correcting imap imap claims inode 16645100855 is present, but inode cluster is sparse, correcting imap imap claims inode 16645100856 is present, but inode cluster is sparse, correcting imap imap claims inode 16645100857 is present, but inode cluster is sparse, correcting imap imap claims inode 16645100858 is present, but inode cluster is sparse, correcting imap imap claims inode 16645100859 is present, but inode cluster is sparse, correcting imap imap claims inode 16645100860 is present, but inode cluster is sparse, correcting imap imap claims inode 16645100861 is present, but inode cluster is sparse, correcting imap imap claims inode 16645100862 is present, but inode cluster is sparse, correcting imap imap claims inode 16645100863 is present, but inode cluster is sparse, correcting imap - agno = 24 - agno = 2 - agno = 25 - agno = 3 - agno = 4 - agno = 26 - agno = 27 - agno = 5 - agno = 6 - agno = 7 - agno = 28 - agno = 8 - agno = 9 - agno = 10 - agno = 29 - agno = 11 - agno = 12 - agno = 13 - agno = 14 - 09:25:18: process known inodes and inode discovery - 2240 of 1472 inodes done - process newly discovered inodes... - 09:25:18: process newly discovered inodes - 64 of 32 allocation groups done Phase 4 - check for duplicate blocks... - setting up duplicate extent list... - 09:25:18: setting up duplicate extent list - 32 of 32 allocation groups done - check for inodes claiming duplicate blocks... - agno = 0 - agno = 1 - agno = 2 entry "appdata" in shortform directory 128 references non-existent inode 8589935744 would have junked entry "appdata" in directory inode 128 would have corrected i8 count in directory 128 from 3 to 2 entry "Wrong Turn (2021)" at block 0 offset 1320 in directory inode 132 references non-existent inode 8589935745 would clear inode number in entry at offset 1320... - agno = 3 - agno = 4 - agno = 5 - agno = 6 - agno = 7 - agno = 8 - agno = 9 - agno = 10 - agno = 11 - agno = 12 entry "Season 4" in shortform directory 5905580227 references non-existent inode 8589935747 would have junked entry "Season 4" in directory inode 5905580227 would have corrected i8 count in directory 5905580227 from 7 to 6 - agno = 13 - agno = 14 - agno = 15 - agno = 16 - agno = 17 - agno = 18 - agno = 19 - agno = 20 - agno = 21 - agno = 22 - agno = 23 - agno = 24 - agno = 25 - agno = 26 - agno = 27 - agno = 28 - agno = 29 - agno = 30 - agno = 31 - 09:25:18: check for inodes claiming duplicate blocks - 2240 of 1472 inodes done No modify flag set, skipping phase 5 Phase 6 - check inode connectivity... - traversing filesystem ... 
- agno = 0 - agno = 30 - agno = 15
entry "appdata" in shortform directory 128 references non-existent inode 8589935744
would junk entry
would fix i8count in inode 128
entry "Wrong Turn (2021)" in directory inode 132 points to non-existent inode 8589935745, would junk entry
bad hash table for directory inode 132 (no data entry): would rebuild
would rebuild directory inode 132
- agno = 1 - agno = 31 - agno = 16 - agno = 2 - agno = 17 - agno = 3 - agno = 18 - agno = 19 - agno = 20 - agno = 21 - agno = 22 - agno = 23 - agno = 24 - agno = 25 - agno = 26 - agno = 27 - agno = 28 - agno = 29 - agno = 4 - agno = 5 - agno = 6 - agno = 7 - agno = 8 - agno = 9 - agno = 10 - agno = 11
entry "Season 4" in shortform directory 5905580227 references non-existent inode 8589935747
would junk entry
would fix i8count in inode 5905580227
- agno = 12 - agno = 13 - agno = 14
- traversal finished ...
- moving disconnected inodes to lost+found ...
disconnected dir inode 9128415644, would move to lost+found
Phase 7 - verify link counts...
would have reset inode 128 nlinks from 8 to 7
would have reset inode 132 nlinks from 33 to 32
would have reset inode 5905580227 nlinks from 8 to 7
- 09:25:18: verify and correct link counts - 32 of 32 allocation groups done
No modify flag set, skipping filesystem flush and exiting.

XFS_REPAIR Summary    Thu Apr 6 09:25:18 2023

Phase       Start           End             Duration
Phase 1:    04/06 09:25:03  04/06 09:25:03
Phase 2:    04/06 09:25:03  04/06 09:25:04  1 second
Phase 3:    04/06 09:25:04  04/06 09:25:18  14 seconds
Phase 4:    04/06 09:25:18  04/06 09:25:18
Phase 5:    Skipped
Phase 6:    04/06 09:25:18  04/06 09:25:18
Phase 7:    04/06 09:25:18  04/06 09:25:18

Total run time: 15 seconds

 

 

14 minutes ago, JorgeB said:

Run it again without -n, and if it asks for it use -L

Done, and I have managed to mount the disk again. Now I just need to figure out how to restore my VMs and Docker. I am following the FAQ for Docker recreation posted by Squid, but the delete button is missing for me. It tells me the docker path doesn't exist, so I will just make a new one on the cache.
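
In case it helps anyone else who lands here, the sequence I ended up running against disk2 from the terminal was roughly the following (the device path is my assumption for my setup; -L destroys the filesystem log and should only be used if the repair refuses to run and asks for it):

    xfs_repair -v /dev/md2       # full repair, modifications allowed
    xfs_repair -L -v /dev/md2    # only if the first run stops and asks for -L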

4 minutes ago, JorgeB said:

Ideally you would have a backup of that file, but you can create a new one, the VMs will not appear, you can create new ones with the same settings, then point them to the existing vdisks and it should be OK.

 

Lesson learned on that one, I guess. I do not have a backup, unfortunately; I'll do my best to recreate them. Thank you for all your help. I'll mark this as solved and create a new topic if I run into more issues. Do you have any best guesses as to what would have caused such a cascading failure of filesystem corruption, Docker, and VM failure?

  • Randomhero changed the title to Lost appdata, docker.img and disk2 error (SOLVED)
