
SOS: Cache disk "Unmountable: Wrong or no file system"


Solved by JorgeB

Hello everyone! Hopefully someone can offer me some advice . . .

 

I was on a mission today to remove a 1TB SSD from Array Devices and add it to my cache pool alongside a 1TB NVMe. I roughly followed this video from @SpaceInvaderOne to "safely shrink" my array. He offers two methods in the video, and I opted for the latter despite not even having any parity disks. Don't ask me why... I don't know, lol.

 

  • I shut down Docker and my VMs.
  • I used unBalance to move the data to the other disks, then ran the script he linked to zero out that SSD (see the sketch after this list).
  • I stopped the array, ran New Config with "Preserve current assignments" set to "All".
  • Unassigned that SSD. 
  • Started the array.
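For anyone curious, my understanding is that the zeroing script boils down to a dd write of zeros against the array's md device, so parity (if you have any) stays in sync while the disk is cleared. This is just a rough sketch from memory, not the actual script, and the md number is only a placeholder; the real script runs safety checks before it writes anything:

# Rough sketch only -- the real script verifies the array is started and the
# disk is empty before writing; /dev/md1 is a placeholder for the slot being cleared
dd bs=1M if=/dev/zero of=/dev/md1 status=progress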

 

This is where I deviated from the steps in his video . . . The SSD was now an unassigned device, and instead of shutting down the server to pull the drive, I figured it was now safe to add it to the cache pool.

 

At first, both SSDs in the cache pool said they couldn't be mounted because they had no UUID. I stopped the array once again, unassigned that 1TB SSD, and started the array. My thought was that at least my original cache drive would mount and I could carry on with a working server, surviving another day to try adding the second SSD to the cache later. Wrong!

 

The original cache disk was still saying it didn't have a UUID. Instead of realizing that I had to change the cache slots back to 1, I changed the file system from btrfs to auto (I thought I had seen that suggested somewhere once). That didn't work, so I changed it back to btrfs. Now the drive is saying "Unmountable: Wrong or no file system". Despite being unmountable, it still appears in my cache pool instead of under unassigned devices.
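Before I touch anything else: my understanding is that the way to see what signature the disk actually carries would be something along these lines (the device name is just my guess at the cache SSD, and wipefs with --no-act only lists signatures, it doesn't erase anything):

blkid /dev/sde1            # reports TYPE= (btrfs, xfs, ...) and the UUID, if any
wipefs --no-act /dev/sde1  # lists on-disk filesystem signatures without changing them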

 

I briefly read through the documentation for handling Unmountable Disks and saw that it is recommended to scrub rather than repair a btrfs drive, but I can't: even with the array started, UnRaid tells me that "Scrub is only available when the array is Started". I'm going to wait for some feedback before proceeding to screw anything else up.
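From what I've read, I suspect the scrub option is unavailable because a btrfs scrub only runs against a mounted filesystem, and this pool isn't mounting. On the command line the difference would look roughly like this (mount point and device are examples, and the check is run read-only):

btrfs scrub start -B /mnt/cache    # scrub needs the filesystem mounted
btrfs check --readonly /dev/sde1   # read-only consistency check on the unmounted device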

 

void-diagnostics-20230416-0157.zip


Thanks for your time, @JorgeB

 

 I am fairly certain it was btrfs, but now that you have me thinking about it . . . I'm questioning my memory 🙃.

 

Anyway, here is the output you requested:

root@void:~# blkid
/dev/sda1: LABEL_FATBOOT="UNRAID" LABEL="UNRAID" UUID="2732-64F5" BLOCK_SIZE="512" TYPE="vfat"
/dev/loop1: TYPE="squashfs"
/dev/sdb1: UUID="b05e35d3-1bfb-4be9-b396-471c733eaba5" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="1433ad33-cf37-44eb-bd36-c83981f24c2f"
/dev/loop0: TYPE="squashfs"
/dev/sde1: UUID="0a83b1a8-338d-4de9-a744-8c5cbb7a28d0" BLOCK_SIZE="512" TYPE="xfs"
/dev/sdc1: UUID="d424188f-bf7b-490c-a130-0b7db11b6546" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="70116823-ebca-4d02-bf75-7ef2d941149d"
/dev/md2: UUID="b05e35d3-1bfb-4be9-b396-471c733eaba5" BLOCK_SIZE="512" TYPE="xfs"
/dev/md1: UUID="d424188f-bf7b-490c-a130-0b7db11b6546" BLOCK_SIZE="512" TYPE="xfs"

 

root@void:~# xfs_repair -v /dev/sde1
Phase 1 - find and verify superblock...
bad primary superblock - bad magic number !!!

attempting to find secondary superblock...
.found candidate secondary superblock...
verified secondary superblock...
writing modified primary superblock
        - block cache size set to 6166120 entries
sb realtime bitmap inode value 18446744073709551615 (NULLFSINO) inconsistent with calculated value 129
resetting superblock realtime bitmap inode pointer to 129
sb realtime summary inode value 18446744073709551615 (NULLFSINO) inconsistent with calculated value 130
resetting superblock realtime summary inode pointer to 130
Phase 2 - using internal log
        - zero log...
zero_log: head block 236210 tail block 236210
        - scan filesystem freespace and inode maps...
sb_icount 0, counted 506304
sb_ifree 0, counted 84912
sb_fdblocks 244071381, counted 174348275
        - found root inode chunk
Phase 3 - for each AG...
        - scan and clear agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 1
        - agno = 3
        - agno = 0
        - agno = 2
clearing reflink flag on inode 1611998035
clearing reflink flag on inode 1612335074
clearing reflink flag on inode 1075665249
clearing reflink flag on inode 937479
clearing reflink flag on inode 1612335078
clearing reflink flag on inode 1075736762
clearing reflink flag on inode 937481
clearing reflink flag on inode 581027892
clearing reflink flag on inode 1612701675
clearing reflink flag on inode 937483
clearing reflink flag on inode 937485
clearing reflink flag on inode 1075767383
clearing reflink flag on inode 581101113
clearing reflink flag on inode 1075767397
clearing reflink flag on inode 1075769280
clearing reflink flag on inode 1613499577
clearing reflink flag on inode 1613499579
clearing reflink flag on inode 1613499581
clearing reflink flag on inode 581101115
clearing reflink flag on inode 581101117
clearing reflink flag on inode 581101118
clearing reflink flag on inode 937487
clearing reflink flag on inode 1613499583
clearing reflink flag on inode 1613503680
clearing reflink flag on inode 1201332
clearing reflink flag on inode 1401909
Phase 5 - rebuild AG headers and trees...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - reset superblock...
Phase 6 - check inode connectivity...
        - resetting contents of realtime bitmap and summary inodes
        - traversing filesystem ...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
Phase 7 - verify and correct link counts...
Note - stripe unit (0) and width (0) were copied from a backup superblock.
Please reset with mount -o sunit=<value>,swidth=<value> if necessary

        XFS_REPAIR Summary    Tue Apr 18 12:25:59 2023

Phase           Start           End             Duration
Phase 1:        04/18 12:25:56  04/18 12:25:56
Phase 2:        04/18 12:25:56  04/18 12:25:56
Phase 3:        04/18 12:25:56  04/18 12:25:57  1 second
Phase 4:        04/18 12:25:57  04/18 12:25:58  1 second
Phase 5:        04/18 12:25:58  04/18 12:25:58
Phase 6:        04/18 12:25:58  04/18 12:25:59  1 second
Phase 7:        04/18 12:25:59  04/18 12:25:59

Total run time: 3 seconds
done

 

Is this next?

mount -o sunit=0,swidth=0
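If so, I'm guessing the full command would also need the device and a mount point, roughly like the below (the mount point is just a placeholder for a manual test; I'd expect starting the array to handle the actual mount, and xfs_repair says the sunit/swidth reset is only needed "if necessary"):

mkdir -p /mnt/test
mount /dev/sde1 /mnt/test    # plain mount first; add -o sunit=...,swidth=... only if XFS refuses
ls /mnt/test                 # sanity-check that the data is visible
umount /mnt/test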

 
