
Cache - Unmountable: No pool uuid


Solved by itimpi


Hello everyone. I have tried a bunch of solutions from other users that I found on Google, but to no avail. This is my issue.

 

I had my cache pool running on one NVMe drive, and it had been working fine for weeks.

Now I have added a second NVMe so that the pool will have two drives for redundancy. Both drives are 2TB NVMe, same brand.

However, adding the second drive broke my system.

Now I'm getting the error Unmountable: no pool uuid.

Things I tried:

Removing the second drive and reverting to just one drive. Same error.

In the cache settings, changing the format to auto, then back to btrfs. Same error.

Then I tried running the command:

btrfs-select-super -s 1 /dev/nvme0n1

but I got the error message:

No valid Btrfs found on /dev/nvme0n1

 

After each attempt I rebooted the system, but the error remained.
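For context on that command: btrfs-select-super -s 1 asks btrfs to restore the filesystem from its first backup superblock copy, so "No valid Btrfs found" means none of the copies on the device carried a btrfs signature. The copies can be inspected read-only first; a minimal sketch, using the device path from this thread (adjust for your system):

```shell
# Read-only dump of the btrfs superblock copies on the device.
# /dev/nvme0n1 is the device path from this thread; adjust as needed.
DEV=/dev/nvme0n1

if [ -e "$DEV" ]; then
    # -a prints every superblock copy; the device is not modified
    out=$(btrfs inspect-internal dump-super -a "$DEV" 2>&1)
else
    out="device $DEV not present on this machine"
fi
echo "$out"
```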

 

Note:

Following a Spaceinvaderone tutorial, I created two different cache pools.

 

The NVMe cache pool is the one with this issue, and it is where my "appdata" and "system" shares are located; thanks to the magic of Unraid, my server is still running.

 

Any ideas on how to solve this issue?

Link to comment

The original device was:

nvme0n1

Then added:

nvme1n1

 

Here are the results of btrfs fi show:

 

Label: none  uuid: 1bd0e1e9-6de2-4ee0-960d-15b56e76552b
        Total devices 2 FS bytes used 144.00KiB
        devid    1 size 931.51GiB used 3.03GiB path /dev/sde1
        devid    2 size 931.51GiB used 3.03GiB path /dev/sdf1

Label: none  uuid: c7ace299-0623-44cf-947d-63136c927b24
        Total devices 1 FS bytes used 2.73GiB
        devid    1 size 20.00GiB used 3.52GiB path /dev/loop2

Label: none  uuid: 92d8dabc-0f64-4309-886d-635b8865f2c2
        Total devices 1 FS bytes used 412.00KiB
        devid    1 size 1.00GiB used 126.38MiB path /dev/loop3

Label: none  uuid: 64aae5f5-4d68-4f61-87f5-1b788f38e7f7
        Total devices 1 FS bytes used 144.00KiB
        devid    1 size 1.82TiB used 2.02GiB path /dev/nvme1n1p1

 

Thanks

 

 

Link to comment

OK,

btrfs select super -s 1 /dev/nvme0n1p1 

That command gave me the help screen for btrfs, showing all its options.

So I tried:

btrfs-select-super -s 1 /dev/nvme0n1p1 (I had already tried this solution from another post)

That gave me the following:

No valid Btrfs found on /dev/nvme0n1p1
ERROR: open ctree failed

 

Then I typed the second command:

btrfs fi show

Output:

Label: none  uuid: 1bd0e1e9-6de2-4ee0-960d-15b56e76552b
        Total devices 2 FS bytes used 144.00KiB
        devid    1 size 931.51GiB used 3.03GiB path /dev/sde1
        devid    2 size 931.51GiB used 3.03GiB path /dev/sdf1

Label: none  uuid: c7ace299-0623-44cf-947d-63136c927b24
        Total devices 1 FS bytes used 2.73GiB
        devid    1 size 20.00GiB used 3.52GiB path /dev/loop2

Label: none  uuid: 92d8dabc-0f64-4309-886d-635b8865f2c2
        Total devices 1 FS bytes used 412.00KiB
        devid    1 size 1.00GiB used 126.38MiB path /dev/loop3

Label: none  uuid: 64aae5f5-4d68-4f61-87f5-1b788f38e7f7
        Total devices 1 FS bytes used 144.00KiB
        devid    1 size 1.82TiB used 2.02GiB path /dev/nvme1n1p1

Edited by beachbum
Link to comment

After running:

xfs_repair -v /dev/nvme0n1p1

 

Output:

Phase 1 - find and verify superblock...
        - block cache size set to 1519704 entries
Phase 2 - using internal log
        - zero log...
zero_log: head block 772597 tail block 772597
        - scan filesystem freespace and inode maps...
        - found root inode chunk
Phase 3 - for each AG...
        - scan and clear agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 1
        - agno = 0
        - agno = 3
        - agno = 2
Phase 5 - rebuild AG headers and trees...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - reset superblock...
Phase 6 - check inode connectivity...
        - resetting contents of realtime bitmap and summary inodes
        - traversing filesystem ...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
Phase 7 - verify and correct link counts...

        XFS_REPAIR Summary    Wed May  3 15:11:13 2023

Phase           Start           End             Duration
Phase 1:        05/03 15:11:13  05/03 15:11:13
Phase 2:        05/03 15:11:13  05/03 15:11:13
Phase 3:        05/03 15:11:13  05/03 15:11:13
Phase 4:        05/03 15:11:13  05/03 15:11:13
Phase 5:        05/03 15:11:13  05/03 15:11:13
Phase 6:        05/03 15:11:13  05/03 15:11:13
Phase 7:        05/03 15:11:13  05/03 15:11:13

Total run time: 
done

 

----------

FYI:

This is the status under the MAIN tab.

 

[screenshot attached]

Edited by beachbum
added image
Link to comment

From the xfs_repair output, it appears that when you had it set up as a single-drive pool it was explicitly set to xfs (the default for pools is btrfs), but when you added an additional drive the format was reconfigured to btrfs, as xfs does not support multi-drive pools. If you set it back to being a single-drive pool and set the format to xfs, I think you will find the drive mounts fine.
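That diagnosis can be confirmed directly: blkid reports the on-disk filesystem signature without modifying anything. A minimal read-only sketch, using the partition from this thread (adjust for your system):

```shell
# Read-only check of which filesystem signature is actually on disk.
# /dev/nvme0n1p1 is the partition from this thread; adjust as needed.
DEV=/dev/nvme0n1p1

if [ -e "$DEV" ]; then
    # prints e.g. "xfs" or "btrfs"; empty output means no known signature
    fstype=$(blkid -o value -s TYPE "$DEV")
else
    fstype="(device not present on this machine)"
fi
echo "filesystem on $DEV: $fstype"
```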

 

If you still want to move to a multi-drive pool, you first need to copy the data off that drive so it can be reformatted as btrfs, which does support multi-drive pools.
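One way to do that copy is rsync from the command line; a sketch, assuming the pool is mounted at /mnt/cache and that /mnt/disk1/cache_backup is a destination you choose on the main array (both paths are examples, not taken from this thread; stop the docker and VM services first):

```shell
# Copy everything off the pool before reformatting it.
# SRC and DST are example paths; adjust to your own system.
SRC=/mnt/cache
DST=/mnt/disk1/cache_backup

if [ -d "$SRC" ]; then
    mkdir -p "$DST"
    # -a preserves permissions and ownership, -H preserves hard links
    # (both matter for appdata); --info=progress2 shows overall progress
    rsync -aH --info=progress2 "$SRC"/ "$DST"/
    status="copy finished"
else
    status="source $SRC not mounted on this machine"
fi
echo "$status"
```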

Link to comment

Ok,

 

I stopped the array and selected only one drive for the pool.

Then I went to the settings, selected xfs, and restarted the array. Voila!! The drive mounted and the pool is now working again.

 

I will copy the data off the drive, but then:

Do I delete the pool, create a new one as multi-drive, and let the drives get formatted automatically?

or

Do I manually format each drive as btrfs and just re-add them to the existing pool?

 

Will the data copy back automatically, or do I do it manually?

 

I want to make sure I do it correctly, as this pool holds my appdata, domains, and system folders, and I want to avoid any corruption of the files in those folders.

 

Thank you for all your help.

 

 

 

 

 

Link to comment
  • Solution
19 minutes ago, beachbum said:

Do I delete the pool, create a new one as multi-drive, and let the drives get formatted automatically?

or

Do I manually format each drive as btrfs and just re-add them to the existing pool?

If you now add the second drive to the pool, the file system will automatically be set to btrfs, but the drives will not initially be formatted. You then need to format from within Unraid (which formats both drives).

 

Note that with the soon-to-be-released 6.12 you will also have the option of using zfs in multi-drive pools as an alternative to btrfs, so if you are interested in going that route you can install 6.12-rc5 (or wait until it goes stable, which is expected within days) and then create the multi-drive pool selecting zfs as the file system.

 

20 minutes ago, beachbum said:

Will the data copy back automatically, or do I do it manually?

It depends where you copied the data to. Copying it back manually will work and is probably fastest, but if it was copied to a share of the correct name on the main array, you can set the relevant shares to Use Cache: Prefer and run mover (with the docker and VM services disabled) to copy the files back.
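For the manual route, the copy back is the reverse of the backup step; a sketch with the same example paths as before (adjust to your layout, and keep the docker and VM services stopped until it finishes):

```shell
# Manual copy back onto the freshly formatted pool.
# SRC and DST are example paths; adjust to your own system.
SRC=/mnt/disk1/cache_backup
DST=/mnt/cache

if [ -d "$SRC" ] && [ -d "$DST" ]; then
    rsync -aH --info=progress2 "$SRC"/ "$DST"/
    result="restore finished"
else
    result="example paths not present on this machine"
fi
echo "$result"
```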

Link to comment

Following your instructions, I reformatted as a two-drive btrfs pool and then copied the data back.

Happy to report the system is now working as expected.

 

Thank you for your help on this issue.

 

Have a great day !!

[screenshot attached]

Link to comment
