DiskTech Posted April 1

Have an Unraid 6.12.3 system in a community radio station that had been working perfectly until a recent power event (the available runtime on the UPS ran out). Performed a reboot after power was restored and all came back up fine; the VMs all fired up OK. There were still a couple of power issues at the station, and one of the volunteers - thinking that they would be 'helpful' - started powering off circuits in the main switch box... with the (almost) inevitable unclean shutdown of the running Unraid server. That's when the following occurred.

Shares are no longer visible ("There are no exportable user shares"). Same with VMs and Dockers (the libvirt and Docker services failed to start). The shares do still exist within the config on the USB flash drive.

Have set the array to maintenance mode and run a file system check on disk 1 (using the -n option), but just wanted to get some advice on the best steps to take before going any further - or doing anything that might cause further issues. Output of the disk check is below:

Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - zero log...
ALERT: The filesystem has valuable metadata changes in a log which is being ignored because the -n option was used. Expect spurious inconsistencies which may be resolved by first mounting the filesystem to replay the log.
        - scan filesystem freespace and inode maps...
block (8,9007624-9008232) multiply claimed by bno space tree, state - 1
block (8,7890368-7890450) multiply claimed by cnt space tree, state - 2
agf_freeblks 8499350, counted 8500856 in ag 8
sb_fdblocks 625233138, counted 638871888
        - found root inode chunk
Phase 3 - for each AG...
        - scan (but don't clear) agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
data fork in ino 1076434240 claims free block 1522923626
data fork in ino 1076434240 claims free block 815855494
data fork in ino 1076434240 claims free block 815855596
data fork in ino 1076434240 claims free block 1570823075
data fork in ino 1076434240 claims free block 815855800
data fork in ino 1076434240 claims free block 1524938185
out-of-order bmap key (file offset) in inode 1076434240, data fork, fsbno 244038259
bad data fork in inode 1076434240
would have cleared inode 1076434240
        - agno = 2
        - agno = 3
data fork in ino 3524192129 claims free block 815855195
data fork in ino 3524192129 claims free block 845106376
        - agno = 4
        - agno = 5
        - agno = 6
        - agno = 7
        - agno = 8
        - agno = 9
        - agno = 10
        - agno = 11
        - agno = 12
        - agno = 13
        - agno = 14
        - agno = 15
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
free space (8,5325878-5325882) only seen by one free space btree
free space (8,5539848-5539848) only seen by one free space btree
free space (8,5855840-5855863) only seen by one free space btree
free space (8,7805644-7806016) only seen by one free space btree
free space (8,7889354-7890367) only seen by one free space btree
free space (8,8379179-8379184) only seen by one free space btree
        - check for inodes claiming duplicate blocks...
        - agno = 0
        - agno = 3
        - agno = 9
        - agno = 1
        - agno = 4
        - agno = 6
        - agno = 5
        - agno = 10
entry "docker.img" in shortform directory 1073741954 references free inode 1076434240
would have junked entry "docker.img" in directory inode 1073741954
        - agno = 12
        - agno = 11
        - agno = 13
        - agno = 14
        - agno = 15
        - agno = 2
        - agno = 8
        - agno = 7
out-of-order bmap key (file offset) in inode 1076434240, data fork, fsbno 244038259
would have cleared inode 1076434240
xfs_repair: rmap.c:1279: fix_inode_reflink_flags: Assertion `(irec->ino_is_rl & irec->ir_free) == 0' failed.
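For reference, a read-only check like the one above is normally run from the console with the array started in maintenance mode. The device path below is an assumption - confirm which md device corresponds to disk 1 on your system (on 6.12.x releases the partition device may be /dev/md1p1):

```shell
# Read-only XFS check of disk 1 (array must be started in maintenance mode).
# /dev/md1 is an assumption -- verify the device for your disk 1 first;
# on Unraid 6.12.x it may appear as /dev/md1p1.
xfs_repair -n /dev/md1
```

With -n, nothing is written to the disk, so this is safe to run while deciding on next steps.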
DiskTech Posted April 1

Diagnostics added: rocket-diagnostics-20240401-2024.zip
DiskTech Posted April 1

Thanks for the reply, JorgeB - realised I left them off and was uploading at the same time as your reply.
JorgeB Posted April 1

Run xfs_repair again without -n, and if it asks for -L, use it.
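Roughly, the sequence JorgeB describes looks like this from the console, with the array still in maintenance mode (the device path is an assumption - check which md device maps to disk 1 on your system):

```shell
# Repair pass on disk 1 -- without -n, changes ARE written to the disk.
# /dev/md1 is an assumption; on 6.12.x it may be /dev/md1p1.
xfs_repair /dev/md1

# If xfs_repair refuses to run because the metadata log is dirty and it
# tells you to use -L, re-run with -L to zero the log. This can discard
# the last few in-flight metadata updates, but is often the only way
# forward after an unclean shutdown.
xfs_repair -L /dev/md1
```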
DiskTech Posted April 1

Thanks JorgeB... in maintenance mode, or as normal (disks mounted)?
DiskTech Posted April 1

Report added below... (it did not ask to use -L)...

Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - zero log...
ALERT: The filesystem has valuable metadata changes in a log which is being ignored because the -n option was used. Expect spurious inconsistencies which may be resolved by first mounting the filesystem to replay the log.
        - scan filesystem freespace and inode maps...
block (8,9007624-9008232) multiply claimed by bno space tree, state - 1
block (8,7890368-7890450) multiply claimed by cnt space tree, state - 2
agf_freeblks 8499350, counted 8500856 in ag 8
sb_fdblocks 625233138, counted 638871888
        - found root inode chunk
Phase 3 - for each AG...
        - scan (but don't clear) agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
data fork in ino 1076434240 claims free block 1522923626
data fork in ino 1076434240 claims free block 815855494
data fork in ino 1076434240 claims free block 815855596
data fork in ino 1076434240 claims free block 1570823075
data fork in ino 1076434240 claims free block 815855800
data fork in ino 1076434240 claims free block 1524938185
out-of-order bmap key (file offset) in inode 1076434240, data fork, fsbno 244038259
bad data fork in inode 1076434240
would have cleared inode 1076434240
        - agno = 2
        - agno = 3
data fork in ino 3524192129 claims free block 815855195
data fork in ino 3524192129 claims free block 845106376
        - agno = 4
        - agno = 5
        - agno = 6
        - agno = 7
        - agno = 8
        - agno = 9
        - agno = 10
        - agno = 11
        - agno = 12
        - agno = 13
        - agno = 14
        - agno = 15
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
free space (8,5325878-5325882) only seen by one free space btree
free space (8,5539848-5539848) only seen by one free space btree
free space (8,5855840-5855863) only seen by one free space btree
free space (8,7805644-7806016) only seen by one free space btree
free space (8,7889354-7890367) only seen by one free space btree
free space (8,8379179-8379184) only seen by one free space btree
        - check for inodes claiming duplicate blocks...
        - agno = 0
        - agno = 3
        - agno = 8
        - agno = 11
        - agno = 13
        - agno = 2
        - agno = 6
        - agno = 7
        - agno = 10
        - agno = 1
        - agno = 12
        - agno = 4
        - agno = 14
entry "docker.img" in shortform directory 1073741954 references free inode 1076434240
would have junked entry "docker.img" in directory inode 1073741954
        - agno = 15
        - agno = 9
        - agno = 5
out-of-order bmap key (file offset) in inode 1076434240, data fork, fsbno 244038259
would have cleared inode 1076434240
Missing reference count record for (1/92743954) len 4 count 2
Missing reference count record for (1/92744070) len 8 count 2
Missing reference count record for (1/92746486) len 64 count 2
xfs_repair: rmap.c:1279: fix_inode_reflink_flags: Assertion `(irec->ino_is_rl & irec->ir_free) == 0' failed.
JorgeB Posted April 1

1 hour ago, DiskTech said: "Report added below."

That's not without -n.
DiskTech Posted April 1

Apologies JorgeB, re-ran the check and it did indeed ask for the -L option... Resultant log is below...

Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - zero log...
ALERT: The filesystem has valuable metadata changes in a log which is being destroyed because the -L option was used.
        - scan filesystem freespace and inode maps...
clearing needsrepair flag and regenerating metadata
block (8,9007624-9008232) multiply claimed by bno space tree, state - 1
block (8,7890368-7890450) multiply claimed by cnt space tree, state - 2
agf_freeblks 8499350, counted 8500856 in ag 8
sb_fdblocks 625233138, counted 638871888
        - found root inode chunk
Phase 3 - for each AG...
        - scan and clear agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
data fork in ino 1076434240 claims free block 1522923626
data fork in ino 1076434240 claims free block 815855494
data fork in ino 1076434240 claims free block 815855596
data fork in ino 1076434240 claims free block 1570823075
data fork in ino 1076434240 claims free block 815855800
data fork in ino 1076434240 claims free block 1524938185
out-of-order bmap key (file offset) in inode 1076434240, data fork, fsbno 244038259
bad data fork in inode 1076434240
cleared inode 1076434240
        - agno = 2
        - agno = 3
data fork in ino 3524192129 claims free block 815855195
data fork in ino 3524192129 claims free block 845106376
        - agno = 4
        - agno = 5
        - agno = 6
        - agno = 7
        - agno = 8
        - agno = 9
        - agno = 10
        - agno = 11
        - agno = 12
        - agno = 13
        - agno = 14
        - agno = 15
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 0
        - agno = 2
        - agno = 4
        - agno = 7
        - agno = 11
        - agno = 1
        - agno = 15
        - agno = 9
        - agno = 10
        - agno = 3
        - agno = 12
entry "docker.img" in shortform directory 1073741954 references free inode 1076434240
junking entry "docker.img" in directory inode 1073741954
        - agno = 5
        - agno = 14
        - agno = 6
        - agno = 8
        - agno = 13
clearing reflink flag on inodes when possible
Phase 5 - rebuild AG headers and trees...
        - reset superblock...
Phase 6 - check inode connectivity...
        - resetting contents of realtime bitmap and summary inodes
        - traversing filesystem ...
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
Phase 7 - verify and correct link counts...
Maximum metadata LSN (3:689913) is ahead of log (1:2).
Format log to cycle 6.
done
DiskTech Posted April 1

Restarted the array, and shares and VMs appear to have returned. Dockers are still empty. Thank you JorgeB... that is at least much better than before. Any suggestions re the missing Dockers, or do they just need to be reinstalled?
JorgeB Posted April 1 (Solution)

If you still have appdata you can recreate the image:
https://docs.unraid.net/unraid-os/manual/docker-management/#re-create-the-docker-image-file

Also see below if you have any custom docker networks:
https://docs.unraid.net/unraid-os/manual/docker-management/#docker-custom-networks
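As a rough sketch of what the linked procedure amounts to (the image path below is an assumption - the default location is shown, but check yours under Settings > Docker): stop the Docker service, delete the corrupt image, re-enable the service so Unraid creates a fresh one, then reinstall containers with their previous settings via Apps > Previous Apps.

```shell
# With the Docker service stopped (Settings > Docker > Enable Docker: No):
# the default image path is an assumption -- confirm it in Settings > Docker.
rm /mnt/user/system/docker/docker.img

# Re-enable the Docker service in the GUI; Unraid creates a new, empty image.
# Then restore your containers (templates keep their old settings) from
# Apps > Previous Apps in the Community Applications plugin.
```

Since the repair junked the docker.img entry, the old image is gone either way; as long as appdata survived, the containers come back with their data intact.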
DiskTech Posted April 1

Thank you so much for your help, JorgeB... sincerely appreciated.