Shares missing


zoo


I'm at a loss. I had my unraid down for a week while redoing the room where it lives. I think it got angry with me. It starts up fine, but all shares are missing and thus no other services start: Docker, VMs, etc.

 

I don't really know where to start other than asking here. I've attached my diagnostics.

 

Please help.

tower-diagnostics-20231222-1328.zip


Your syslog in the diagnostics has:

XFS (md1p1): corrupt dinode 1073741953, (btree extents).
Dec 22 13:27:45 Tower kernel: XFS (md1p1): Metadata corruption detected at xfs_iread_bmbt_block+0x76/0x1dd [xfs], inode 0x40000081 xfs_iread_bmbt_block
Dec 22 13:27:45 Tower kernel: XFS (md1p1): Unmount and run xfs_repair

so at the very least you need to run a filesystem check on disk1, as filesystem corruption can stop shares showing up as expected.  It would not do any harm to also check the other drives just in case.
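A minimal command-line sketch of that check, assuming disk1 maps to the md1p1 device named in the syslog; the array has to be started in maintenance mode first, and the same check can also be run from the GUI on the disk's settings page:

xfs_repair -n /dev/md1p1    # -n = no modify: report problems only, change nothing

Review what the -n run reports before letting xfs_repair make any changes.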


xfs_repair does not seem to sort it out. Any suggestions on how to move forward? What options would be useful here?

 

Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - zero log...
ALERT: The filesystem has valuable metadata changes in a log which is being
ignored because the -n option was used.  Expect spurious inconsistencies
which may be resolved by first mounting the filesystem to replay the log.
        - scan filesystem freespace and inode maps...
        - found root inode chunk
Phase 3 - for each AG...
        - scan (but don't clear) agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
data fork in ino 1073741953 claims free block 300776786
data fork in ino 1073741953 claims free block 301055346
data fork in ino 1073741953 claims free block 301055348
data fork in ino 1073741953 claims free block 304081467
data fork in ino 1073741953 claims free block 304081477
data fork in ino 1073741953 claims free block 304081479
data fork in ino 1073741953 claims free block 304081496
data fork in ino 1073741953 claims free block 304081683
data fork in ino 1073741953 claims free block 304081692
data fork in ino 1073741953 claims free block 136792407
data fork in ino 1073741953 claims free block 135861395
data fork in ino 1073741953 claims free block 135855666
data fork in ino 1073741953 claims free block 135857768
data fork in ino 1073741953 claims free block 135857787
data fork in ino 1073741953 claims free block 135857753
data fork in ino 1073741953 claims free block 135857811
data fork in ino 1073741953 claims free block 135853331
data fork in ino 1073741953 claims free block 135858278
data fork in ino 1073741953 claims free block 135862823
data fork in ino 1073741953 claims free block 135861689
data fork in ino 1073741953 claims free block 135862322
data fork in ino 1073741953 claims free block 136065710
data fork in ino 1073741953 claims free block 135862900
data fork in ino 1073741953 claims free block 135932088
data fork in ino 1073741953 claims free block 135864090
data fork in ino 1073741953 claims free block 135864445
data fork in ino 1073741953 claims free block 135855564
data fork in ino 1073741953 claims free block 135864784
data fork in ino 1073741953 claims free block 135865045
data fork in ino 1073741953 claims free block 135851372
data fork in ino 1073741953 claims free block 135865365
data fork in ino 1073741953 claims free block 135865802
data fork in ino 1073741953 claims free block 135865805
data fork in ino 1073741953 claims free block 135865808
data fork in ino 1073741953 claims free block 135865811
data fork in ino 1073741953 claims free block 135865814
data fork in ino 1073741953 claims free block 135865816
data fork in ino 1073741953 claims free block 135865825
data fork in ino 1073741953 claims free block 135865833
data fork in ino 1073741953 claims free block 135865329
data fork in ino 1073741953 claims free block 135866011
data fork in ino 1073741953 claims free block 135866017
data fork in ino 1073741953 claims free block 135866020
data fork in ino 1073741953 claims free block 135867283
data fork in ino 1073741953 claims free block 135867286
data fork in ino 1073741953 claims free block 135867288
out-of-order bmap key (file offset) in inode 1073741953, data fork, fsbno 135867306
bad data fork in inode 1073741953
would have cleared inode 1073741953
        - agno = 2
        - agno = 3
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 0
        - agno = 3
        - agno = 2
        - agno = 1
entry "docker.img" in shortform directory 1073741952 references free inode 1073741953
would have junked entry "docker.img" in directory inode 1073741952
out-of-order bmap key (file offset) in inode 1073741953, data fork, fsbno 135867306
would have cleared inode 1073741953
xfs_repair: rmap.c:1279: fix_inode_reflink_flags: Assertion `(irec->ino_is_rl & irec->ir_free) == 0' failed.
Missing reference count record for (2/32341329) len 1 count 2
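That run was read-only: the -n flag means no modify, which is also what the ALERT near the top is pointing at, so nothing was actually repaired. The usual next step (a sketch, assuming the same md1p1 device and the array still in maintenance mode) is to run it again without -n, and only to reach for -L if it refuses to start because the log cannot be replayed:

xfs_repair /dev/md1p1     # real repair; replays the journal first if it can
xfs_repair -L /dev/md1p1  # last resort: zeroes the log and may discard the most recent metadata changes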

 

 


Ok, progress. Shares are back but the docker page in the GUI contains no dockers.

 

xfs_repair did complain about docker.img, but I can't seem to find the xfs_repair log again.

 

At this point I'm thinking about swapping in a hot spare and rebuilding from the parity disk. But if something fails there I'll be screwed, so I'd like to exhaust this path first. Is it possible to just rebuild docker.img from the parity disk?

34 minutes ago, zoo said:

rebuild from the parity disk

Parity is in sync with that disk, so a rebuild can't give any different results.

 

10 minutes ago, zoo said:

that is unfortunate. Anyway, I'll recreate the dockers

Just in case you didn't read far enough to know how to reinstall your containers after recreating docker.img, here is a link to that part a little further down.

https://docs.unraid.net/unraid-os/manual/docker-management/#re-create-the-docker-image-file
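Roughly, the flow described there (check the linked page for the exact wording, which varies a little between Unraid versions): Settings → Docker → disable the Docker service, delete the docker.img vDisk, re-enable the service, then use Apps → Previous Apps to reinstall the containers. The templates on the flash drive keep the container settings, so only the image file itself is rebuilt.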

 

Not related to your problems, but if your appdata, domains, or system shares have files on the array, Docker/VM performance will be impacted by the slower array, and array disks can't spin down since these files are always open.
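The usual fix, sketched loosely since the setting names differ between Unraid versions: set those shares to prefer (or only use) the cache pool, stop the Docker and VM services so the files aren't held open, run mover so they migrate off the array, then re-enable the services.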

 


Thanks for all the help! Saved me a bunch of time.

 

Some of my dockers lost settings. No problem, just a tad frustrating. My biggest gripe is that wireguard-easy completely disappeared, like it never existed. And I can't get the built-in VPN working.

52 minutes ago, trurl said:

I assume you mean the settings within the app, not the settings within the dockers page, which are on the templates on flash. If you lost appdata for an app, then it would be like a new install of that container.

I'll break it down.

Plex was deprecated.

Wireguard-easy is missing.

qBittorrent had only default appdata.

 

So one or two of those can be connected to my disk problem, but more likely it's my ignorance. I want to blame someone, but I can't. :-)
