helpermonkey

I made a booboo - I think I messed up with New Config

54 posts in this topic


Posted (edited)

Okay, here is the report from xfs_repair:

root@Buddha:~# xfs_repair -v /dev/md5
Phase 1 - find and verify superblock...
        - block cache size set to 737008 entries
Phase 2 - using internal log
        - zero log...
zero_log: head block 505816 tail block 505816
        - scan filesystem freespace and inode maps...
sb_ifree 1411, counted 1417
sb_fdblocks 8636304, counted 9131157
        - found root inode chunk
Phase 3 - for each AG...
        - scan and clear agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
data fork in ino 156931303 claims free block 19618443
data fork in ino 156931303 claims free block 19618444
imap claims in-use inode 156931303 is free, correcting imap
data fork in ino 159222676 claims free block 19903014
attr fork in ino 159222676 claims free block 19906546
imap claims in-use inode 159222676 is free, correcting imap
data fork in ino 159222697 claims free block 19905936
data fork in ino 159222697 claims free block 19905937
imap claims in-use inode 159222697 is free, correcting imap
imap claims in-use inode 159222699 is free, correcting imap
data fork in ino 159222706 claims free block 19906096
data fork in ino 159222706 claims free block 19906097
imap claims in-use inode 159222706 is free, correcting imap
imap claims in-use inode 159222708 is free, correcting imap
        - agno = 1
        - agno = 2
        - agno = 3
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 3
        - agno = 0
        - agno = 1
        - agno = 2
Phase 5 - rebuild AG headers and trees...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - reset superblock...
Phase 6 - check inode connectivity...
        - resetting contents of realtime bitmap and summary inodes
        - traversing filesystem ...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
Phase 7 - verify and correct link counts...
Maximum metadata LSN (1:505836) is ahead of log (1:505816).
Format log to cycle 4.

        XFS_REPAIR Summary    Wed Jun 12 10:58:04 2019

Phase           Start           End             Duration
Phase 1:        06/12 10:56:10  06/12 10:56:14  4 seconds
Phase 2:        06/12 10:56:14  06/12 10:56:14
Phase 3:        06/12 10:56:14  06/12 10:56:16  2 seconds
Phase 4:        06/12 10:56:16  06/12 10:56:16
Phase 5:        06/12 10:56:16  06/12 10:56:16
Phase 6:        06/12 10:56:16  06/12 10:56:17  1 second
Phase 7:        06/12 10:56:17  06/12 10:56:17

Total run time: 7 seconds
done
root@Buddha:~#

 

So it's flipping back!!! You rock!!

 

So, two outstanding issues that I have questions about:

1) My Plex docker seems to have disappeared. However, my settings and directories are still on my drive, so can I just reinstall it? Or should I wipe those and start with a fresh install?

2) Is there any reason to (or effective way to) move some of the data from my almost-full drives to either of my drives with a good chunk of free space?

Edited by helpermonkey

25 minutes ago, helpermonkey said:

1) My Plex docker seems to have disappeared. However, my settings and directories are still on my drive, so can I just reinstall it? Or should I wipe those and start with a fresh install?

You should be able to just add it back from CA's Previous Apps.

25 minutes ago, helpermonkey said:

2) Is there any reason to (or effective way to) move some of the data from my almost-full drives to either of my drives with a good chunk of free space?

Not really. As long as there are a few GB free you're fine, but drives should never be filled to 100%; always leave at least 10 or 20 GB free.
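The 10-20 GB headroom rule is easy to keep an eye on from a script. A minimal sketch (the mount point and threshold are examples; Unraid data disks usually live under /mnt/disk1, /mnt/disk2, and so on):

```python
import shutil

def headroom_gb(path):
    """Return free space at `path` in GB (decimal)."""
    usage = shutil.disk_usage(path)
    return usage.free / 1e9

def needs_attention(path, min_free_gb=20):
    """True if the filesystem at `path` has less free space than min_free_gb."""
    return headroom_gb(path) < min_free_gb

# Example: check the root filesystem (swap in /mnt/disk1 etc. on Unraid).
print(f"/ has {headroom_gb('/'):.1f} GB free; low: {needs_attention('/')}")
```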

Posted (edited)

You rock! Thank you so much for all your help. This is why I love Unraid: the software is cool, and the people are fantastic.

[Screenshot attached: Screen Shot 2019-06-12 at 11.54.10 PM]

Edited by helpermonkey

