Tried to rename pool drive, now it's unmountable. Can I recover it?



In short, I had a single-drive NVMe pool ("cache") holding appdata, system & downloads (currently empty).  I added an HDD to a new pool ("cachepool") that I wanted to use for downloads.

 

I decided, stupidly and in haste, to also rename the original "cache" pool to "systempool".  Upon reboot, both drives are unmountable.

 

I know the new "cachepool" just needs to be formatted.  That's easy.

 

But I would like to get the original "cache" back up and running rather than rebuilding all my dockers.  I tried changing the name back to "cache", but it is still unmountable.

 

Is there any way to fix this, or do I need to start rebuilding my dockers all over again?

 

Thanks for any help!


Diagnostics attached.  I have restarted the array two other times since renaming, but besides changing "systempool" back to "cache" I haven't done anything.

 

Also, I originally added the HDD to the "cache" pool, but I don't believe I ever started the array before I moved it to its own pool, "cachepool".  Not sure this matters, but I figured I'd mention it.

 

Please let me know if there's more needed, thanks again.

illmatic-diagnostics-20240326-1325.zip
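
For anyone following along: the archive above can be generated from Tools > Diagnostics in the web UI, or (a minimal sketch, assuming the stock Unraid CLI) from a terminal with:

diagnostics

which writes a zip like the one attached to /boot/logs on the flash drive.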


Yes, originally I added the HDD to the existing "cache" pool with the NVMe drive, but I don't remember whether I started the array before I moved the HDD into its own pool.

 

I did all of this in a few minutes during lunch, so I was moving quickly and clearly not paying full attention.  Which, I've learned, always gets me in trouble with Unraid, yet here I am again...

/dev/sda1: LABEL_FATBOOT="UNRAID" LABEL="UNRAID" UUID="7013-DA05" BLOCK_SIZE="512" TYPE="vfat"
/dev/loop1: TYPE="squashfs"
/dev/nvme0n1p1: UUID="71d7119e-49bf-45a6-b170-d4d72162860c" BLOCK_SIZE="512" TYPE="xfs"
/dev/sdd1: UUID="9312d6d1-8645-4fbe-934b-b32720144681" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="e851406a-cbd6-47c0-92a9-a5a3abf4a819"
/dev/md2p1: UUID="9312d6d1-8645-4fbe-934b-b32720144681" BLOCK_SIZE="512" TYPE="xfs"
/dev/sdb1: PARTUUID="85897617-0c65-4ac8-9127-e801a434c7c2"
/dev/md1p1: UUID="cb1e31c1-9a7f-4e6d-91fe-62caeb5e4563" BLOCK_SIZE="512" TYPE="xfs"
/dev/loop0: TYPE="squashfs"
/dev/sdc1: UUID="cb1e31c1-9a7f-4e6d-91fe-62caeb5e4563" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="3b44a867-05dc-450e-a026-5e3021db7281"

 

Output from blkid above.
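
For completeness, that listing is just the output of blkid run from the Unraid terminal; a single device can also be queried directly (the device path here is taken from the listing above, so adjust for your own system):

blkid /dev/nvme0n1p1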


Yes it was.  Output below:

 

Phase 1 - find and verify superblock...
        - block cache size set to 755024 entries
Phase 2 - using internal log
        - zero log...
zero_log: head block 367882 tail block 367882
        - scan filesystem freespace and inode maps...
        - found root inode chunk
Phase 3 - for each AG...
        - scan and clear agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
Phase 5 - rebuild AG headers and trees...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - reset superblock...
Phase 6 - check inode connectivity...
        - resetting contents of realtime bitmap and summary inodes
        - traversing filesystem ...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
Phase 7 - verify and correct link counts...

        XFS_REPAIR Summary    Tue Mar 26 14:52:46 2024

Phase           Start           End             Duration
Phase 1:        03/26 14:52:45  03/26 14:52:45
Phase 2:        03/26 14:52:45  03/26 14:52:45
Phase 3:        03/26 14:52:45  03/26 14:52:46  1 second
Phase 4:        03/26 14:52:46  03/26 14:52:46
Phase 5:        03/26 14:52:46  03/26 14:52:46
Phase 6:        03/26 14:52:46  03/26 14:52:46
Phase 7:        03/26 14:52:46  03/26 14:52:46

Total run time: 1 second
done
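
For reference, roughly the same check can be run from the terminal against the unmounted partition. This is a sketch only; the device path is the NVMe partition from the blkid output above and will differ on other systems:

xfs_repair -n /dev/nvme0n1p1   # -n reports problems but changes nothing
xfs_repair -v /dev/nvme0n1p1   # verbose repair run

The -L option (zero the log) should only be added if xfs_repair refuses to run because of a dirty log, since it can drop the most recent metadata changes.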

 


Yes, it was still set to two slots, but I changed it back to one slot during that quick process.

 

Either way, it is working, so thanks.  I'm going to try renaming these now and nothing else.  Hopefully it still works like it should.

