zoo Posted December 22, 2023

I'm at a loss. I had my Unraid server down for a week while redoing the room where it lives. I think it got angry with me. It starts up fine, but all shares are missing and thus no other services start: Docker, VMs, etc. I don't really know where to start other than asking here. I've attached my diagnostics. Please help.

tower-diagnostics-20231222-1328.zip
itimpi Posted December 22, 2023

Your syslog in the diagnostics has:

XFS (md1p1): corrupt dinode 1073741953, (btree extents).
Dec 22 13:27:45 Tower kernel: XFS (md1p1): Metadata corruption detected at xfs_iread_bmbt_block+0x76/0x1dd [xfs], inode 0x40000081 xfs_iread_bmbt_block
Dec 22 13:27:45 Tower kernel: XFS (md1p1): Unmount and run xfs_repair

so at the very least you need to run a filesystem check on disk1, as filesystem corruption can stop shares from showing up as expected. It would not do any harm to also check the other drives, just in case.
zoo (Author) Posted December 22, 2023

Thanks for the reply, I'll check back after the xfs_repair has finished.
zoo (Author) Posted December 22, 2023

xfs_repair does not seem to sort it out. Any suggestions on how to move forward? What options would be useful here?

Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - zero log...
ALERT: The filesystem has valuable metadata changes in a log which is being ignored because the -n option was used. Expect spurious inconsistencies which may be resolved by first mounting the filesystem to replay the log.
        - scan filesystem freespace and inode maps...
        - found root inode chunk
Phase 3 - for each AG...
        - scan (but don't clear) agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
data fork in ino 1073741953 claims free block 300776786
data fork in ino 1073741953 claims free block 301055346
data fork in ino 1073741953 claims free block 301055348
data fork in ino 1073741953 claims free block 304081467
data fork in ino 1073741953 claims free block 304081477
data fork in ino 1073741953 claims free block 304081479
data fork in ino 1073741953 claims free block 304081496
data fork in ino 1073741953 claims free block 304081683
data fork in ino 1073741953 claims free block 304081692
data fork in ino 1073741953 claims free block 136792407
data fork in ino 1073741953 claims free block 135861395
data fork in ino 1073741953 claims free block 135855666
data fork in ino 1073741953 claims free block 135857768
data fork in ino 1073741953 claims free block 135857787
data fork in ino 1073741953 claims free block 135857753
data fork in ino 1073741953 claims free block 135857811
data fork in ino 1073741953 claims free block 135853331
data fork in ino 1073741953 claims free block 135858278
data fork in ino 1073741953 claims free block 135862823
data fork in ino 1073741953 claims free block 135861689
data fork in ino 1073741953 claims free block 135862322
data fork in ino 1073741953 claims free block 136065710
data fork in ino 1073741953 claims free block 135862900
data fork in ino 1073741953 claims free block 135932088
data fork in ino 1073741953 claims free block 135864090
data fork in ino 1073741953 claims free block 135864445
data fork in ino 1073741953 claims free block 135855564
data fork in ino 1073741953 claims free block 135864784
data fork in ino 1073741953 claims free block 135865045
data fork in ino 1073741953 claims free block 135851372
data fork in ino 1073741953 claims free block 135865365
data fork in ino 1073741953 claims free block 135865802
data fork in ino 1073741953 claims free block 135865805
data fork in ino 1073741953 claims free block 135865808
data fork in ino 1073741953 claims free block 135865811
data fork in ino 1073741953 claims free block 135865814
data fork in ino 1073741953 claims free block 135865816
data fork in ino 1073741953 claims free block 135865825
data fork in ino 1073741953 claims free block 135865833
data fork in ino 1073741953 claims free block 135865329
data fork in ino 1073741953 claims free block 135866011
data fork in ino 1073741953 claims free block 135866017
data fork in ino 1073741953 claims free block 135866020
data fork in ino 1073741953 claims free block 135867283
data fork in ino 1073741953 claims free block 135867286
data fork in ino 1073741953 claims free block 135867288
out-of-order bmap key (file offset) in inode 1073741953, data fork, fsbno 135867306
bad data fork in inode 1073741953
would have cleared inode 1073741953
        - agno = 2
        - agno = 3
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 0
        - agno = 3
        - agno = 2
        - agno = 1
entry "docker.img" in shortform directory 1073741952 references free inode 1073741953
would have junked entry "docker.img" in directory inode 1073741952
out-of-order bmap key (file offset) in inode 1073741953, data fork, fsbno 135867306
would have cleared inode 1073741953
xfs_repair: rmap.c:1279: fix_inode_reflink_flags: Assertion `(irec->ino_is_rl & irec->ir_free) == 0' failed.
Missing reference count record for (2/32341329) len 1 count 2
JorgeB Posted December 22, 2023

Run it again without -n or nothing will actually be repaired, and if it asks for -L, use it.
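For reference, the sequence JorgeB describes can be sketched as a shell session. The device path /dev/md1p1 matches the syslog above, but the dry-run wrapper and DRY_RUN flag are illustrative additions, not part of any tool; on Unraid the array must be started in Maintenance mode before running xfs_repair on an array disk.

```shell
# Hedged sketch of the repair sequence described above.
# DEV is assumed to be disk1's partition; adjust for your system.
# DRY_RUN=1 only prints each command instead of executing it.
DEV=/dev/md1p1
DRY_RUN=1

run() {
  if [ "$DRY_RUN" = 1 ]; then
    echo "would run: $*"
  else
    "$@"
  fi
}

# 1. Check-only pass: -n reports problems but writes nothing.
run xfs_repair -n "$DEV"

# 2. Real repair pass: without -n, fixes are actually written.
run xfs_repair "$DEV"

# 3. Only if step 2 refuses because of a dirty log: -L zeroes the log,
#    discarding unreplayed metadata updates (a last resort).
run xfs_repair -L "$DEV"
```

With DRY_RUN=1 this just echoes the commands; on a real system set DRY_RUN=0, and only with the array in Maintenance mode so the filesystem is unmounted.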
zoo (Author) Posted December 22, 2023

Ok, progress. Shares are back, but the Docker page in the GUI contains no containers. xfs_repair did complain about docker.img, but I can't seem to find the xfs_repair log again. At this point I'm thinking about swapping in a hot spare and rebuilding from the parity disk. But if something fails there I'll be screwed, so I'd like to exhaust this path first. Is it possible to just rebuild docker.img from the parity disk?
JorgeB Posted December 22, 2023

5 minutes ago, zoo said:
Shares are back but the docker page in the GUI contains no dockers.

The docker image can easily be recreated, or post new diags.
zoo (Author) Posted December 22, 2023

Here are the diags. If there is nothing of value there, I'll just go ahead and recreate my containers.

tower-diagnostics-20231222-1959.zip
JorgeB Posted December 22, 2023

The docker image is mounting, but it's a new one; see the instructions above to recreate it with your containers.
zoo (Author) Posted December 22, 2023

Ok, that is unfortunate. Anyway, I'll recreate the containers. Is there any feasible way to check for other files that may have gone bad? I'm searching for an xfs_repair log or something.
JorgeB Posted December 22, 2023

Look for a lost+found folder.
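To make that suggestion concrete, here is a small sketch that scans each array disk for a lost+found folder. The /mnt/disk* mount layout is standard on Unraid; the function name and everything else are illustrative.

```shell
# Hedged sketch: list anything xfs_repair salvaged into lost+found
# on each array disk. Salvaged files are typically named after their
# inode number (e.g. 1073741953) rather than their original name.
scan_lost_found() {
  root=${1:-/mnt}   # Unraid mounts array disks as /mnt/disk1, /mnt/disk2, ...
  for d in "$root"/disk*; do
    [ -d "$d/lost+found" ] || continue
    echo "recovered files on $d:"
    ls -la "$d/lost+found"
  done
}

scan_lost_found
```

If this prints nothing, xfs_repair didn't orphan any files into lost+found on those disks.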
trurl Posted December 22, 2023

34 minutes ago, zoo said:
rebuild from the parity disk

Parity is in sync with that disk, so a rebuild can't give any different results.

10 minutes ago, zoo said:
that is unfortunate. Anyway, I'll recreate the dockers

Just in case you didn't read far enough to know how to reinstall your containers after recreating docker.img, here is a link to that part a little further down:
https://docs.unraid.net/unraid-os/manual/docker-management/#re-create-the-docker-image-file

Not related to your problems, but if your appdata, domains, or system shares have files on the array, Docker/VM performance will be impacted by the slower array, and array disks can't spin down since these files are always open.
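A quick way to check trurl's last point is to look for those three shares on the array disks. The share names (appdata, domains, system) are the Unraid defaults mentioned above; the helper itself is an illustrative sketch.

```shell
# Hedged sketch: report which of the Docker/VM-related shares have
# files sitting on array disks (ideally they live entirely on cache).
check_shares_on_array() {
  root=${1:-/mnt}
  for share in appdata domains system; do
    for d in "$root"/disk*/"$share"; do
      if [ -e "$d" ]; then
        echo "$share has files on $(dirname "$d")"
      fi
    done
  done
}

check_shares_on_array
```

Any line of output names a disk that still holds part of that share; moving those files to the cache pool addresses the spin-down and performance concern.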
zoo (Author) Posted December 22, 2023

Thanks for all the help! Saved me a bunch of time. Some of my containers lost settings. No problem, just a tad frustrating. My biggest gripe is that wireguard-easy completely disappeared, like it never existed. And I can't get the built-in VPN working.
trurl Posted December 22, 2023

19 minutes ago, zoo said:
Some of my dockers lost settings

I assume you mean the settings within the app, not the settings on the Docker page, which are stored in the templates on flash. If you lost appdata for an app, then it would be like a new install of that container.
zoo (Author) Posted December 22, 2023

52 minutes ago, trurl said:
I assume you mean the settings within the app, not the settings on the Docker page, which are stored in the templates on flash. If you lost appdata for an app, then it would be like a new install of that container.

I'll break it down:
- Plex was deprecated
- wireguard-easy is missing
- qBittorrent had only default appdata

So one or two can be connected to my disk problem, but more likely my ignorance. I want to blame someone, but I can't.
trurl Posted December 22, 2023

Did you use Previous Apps on the Apps page?
zoo (Author) Posted January 6

On 12/22/2023 at 11:29 PM, trurl said:
Did you use Previous Apps on the Apps page?

Took me a while to get my head around it, but I'm slowly getting it.
trurl Posted January 7

If you want further advice, post new diagnostics.