RS7588 Posted April 16, 2023

Hello everyone! Hopefully someone can offer me some advice . . .

I was on a mission today to remove a 1TB SSD from Array Devices and add it to my cache pool alongside a 1TB NVMe. I roughly followed this video from @SpaceInvaderOne on how to "safely shrink" my array. He offers two methods in the video and I opted for the latter, despite not even having any parity disks. Don't ask me why...I don't know lol.

I shut down Docker and the VMs, used unBalance to move the data to the other disks, and then ran a script he linked to zero out that SSD. I stopped the array, ran New Config with "Preserve current assignments" set to "All", unassigned that SSD, and started the array.

This is where I deviated from the steps in his video . . . The SSD was now an unassigned device, and instead of shutting down the server to pull the drive, I figured it was now safe to add it to the cache pool. At first, both SSDs in the cache pool said they couldn't be mounted because they had no UUID. I stopped the array once again, unassigned that 1TB SSD, and started the array. My thought was that at least my original cache drive would mount and I could carry on with a working server, surviving another day to try adding the second SSD to the cache later. Wrong! The original cache disk still said it had no UUID.

Instead of realizing that I had to change the cache slots back to 1, I changed the file system from btrfs to auto (I thought I had seen that somewhere once). That didn't work, so I changed it back to btrfs. Now the drive says "Unmountable: Wrong or no file system". Despite being unmountable, it still appears in my cache pool instead of under Unassigned Devices. I briefly read through the documentation on handling unmountable disks and saw that it is recommended to scrub rather than repair a btrfs drive, but I can't: even with the array started, Unraid tells me "Scrub is only available when the array is Started".

I'm going to wait for some feedback before proceeding to screw anything else up.

void-diagnostics-20230416-0157.zip
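P.S. For anyone who finds this later: the zero-out script basically overwrites the (already emptied) drive with zeros. A rough sketch of the idea only, not the actual script from the video, and /dev/sdX is just a placeholder that would have to be the correct device:

# WARNING: destructive - writes zeros over the entire device.
# /dev/sdX is a placeholder for the drive being removed, which must already
# be emptied and no longer assigned to anything.
dd if=/dev/zero of=/dev/sdX bs=1M status=progress

The actual script includes safeguards and confirmations, so the bare dd above is only meant to illustrate what "zeroing" means here.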
JorgeB Posted April 16, 2023

Was the original cache btrfs or xfs? I'm not seeing any btrfs filesystem detected at boot time. Please post the output of blkid.
RS7588 Posted April 16, 2023

Thanks for your time, @JorgeB! I am fairly certain it was btrfs, but now that you have me thinking about it . . . I'm questioning my memory 🙃. Anyway, here is the output you requested:

root@void:~# blkid
/dev/sda1: LABEL_FATBOOT="UNRAID" LABEL="UNRAID" UUID="2732-64F5" BLOCK_SIZE="512" TYPE="vfat"
/dev/loop1: TYPE="squashfs"
/dev/sdb1: UUID="b05e35d3-1bfb-4be9-b396-471c733eaba5" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="1433ad33-cf37-44eb-bd36-c83981f24c2f"
/dev/loop0: TYPE="squashfs"
/dev/sde1: UUID="0a83b1a8-338d-4de9-a744-8c5cbb7a28d0" BLOCK_SIZE="512" TYPE="xfs"
/dev/sdc1: UUID="d424188f-bf7b-490c-a130-0b7db11b6546" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="70116823-ebca-4d02-bf75-7ef2d941149d"
/dev/md2: UUID="b05e35d3-1bfb-4be9-b396-471c733eaba5" BLOCK_SIZE="512" TYPE="xfs"
/dev/md1: UUID="d424188f-bf7b-490c-a130-0b7db11b6546" BLOCK_SIZE="512" TYPE="xfs"
JorgeB Posted April 17, 2023

15 hours ago, RS7588 said:
/dev/sde1: UUID="0a83b1a8-338d-4de9-a744-8c5cbb7a28d0" BLOCK_SIZE="512" TYPE="xfs"

Assuming this was the original cache, it's xfs: set the pool slots to 1 and the file system to xfs.
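If there's any doubt about which physical drive sde actually is, a quick read-only sanity check (nothing here writes to the disk) would be something like:

ls -l /dev/disk/by-id/ | grep sde    # shows the model/serial behind the sde device node
lsblk -f /dev/sde                    # shows partitions, filesystem types and UUIDs

Just a sketch to confirm it really is the original cache SSD before changing the pool settings.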
RS7588 Posted April 17, 2023

The issue persists.
RS7588 Posted April 18, 2023

void-diagnostics-20230417-2042.zip
JorgeB Posted April 18, 2023

Apr 17 18:44:34 void emhttpd: shcmd (11484): mount -t btrfs -o noatime,space_cache=v2 /dev/sde1 /mnt/cache

The last boot attempt was still trying btrfs. Did you start the array after changing the cache to xfs?
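If it helps, you can also check directly what signature is actually on the partition, independent of what the GUI is set to mount. A read-only sketch:

wipefs /dev/sde1     # with no options it only lists the filesystem signatures found
file -s /dev/sde1    # identifies what the superblock looks like

Neither command changes anything on the disk.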
RS7588 Posted April 18, 2023

Yes, I did. For kicks and giggles . . . I will stop the array, make sure the drive is set to xfs and start the array again.

void-diagnostics-20230418-0726.zip
JorgeB Posted April 18, 2023 (Solution)

Apr 18 07:24:57 void emhttpd: shcmd (13188): mount -t xfs -o noatime,nouuid /dev/sde1 /mnt/cache
Apr 18 07:24:57 void kernel: XFS (sde1): Invalid superblock magic number

Now it's trying to mount xfs. Post the output of:

xfs_repair -v /dev/sde1
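(As an aside, if you'd rather preview what a repair would do before letting it write anything, xfs_repair also has a check-only mode:

xfs_repair -n /dev/sde1    # -n = no-modify mode: reports problems but changes nothing

but the command above, without -n, is what actually repairs the superblock.)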
RS7588 Posted April 18, 2023

root@void:~# xfs_repair -v /dev/sde1
Phase 1 - find and verify superblock...
bad primary superblock - bad magic number !!!
attempting to find secondary superblock...
.found candidate secondary superblock...
verified secondary superblock...
writing modified primary superblock
        - block cache size set to 6166120 entries
sb realtime bitmap inode value 18446744073709551615 (NULLFSINO) inconsistent with calculated value 129
resetting superblock realtime bitmap inode pointer to 129
sb realtime summary inode value 18446744073709551615 (NULLFSINO) inconsistent with calculated value 130
resetting superblock realtime summary inode pointer to 130
Phase 2 - using internal log
        - zero log...
zero_log: head block 236210 tail block 236210
        - scan filesystem freespace and inode maps...
sb_icount 0, counted 506304
sb_ifree 0, counted 84912
sb_fdblocks 244071381, counted 174348275
        - found root inode chunk
Phase 3 - for each AG...
        - scan and clear agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 1
        - agno = 3
        - agno = 0
        - agno = 2
clearing reflink flag on inode 1611998035
clearing reflink flag on inode 1612335074
clearing reflink flag on inode 1075665249
clearing reflink flag on inode 937479
clearing reflink flag on inode 1612335078
clearing reflink flag on inode 1075736762
clearing reflink flag on inode 937481
clearing reflink flag on inode 581027892
clearing reflink flag on inode 1612701675
clearing reflink flag on inode 937483
clearing reflink flag on inode 937485
clearing reflink flag on inode 1075767383
clearing reflink flag on inode 581101113
clearing reflink flag on inode 1075767397
clearing reflink flag on inode 1075769280
clearing reflink flag on inode 1613499577
clearing reflink flag on inode 1613499579
clearing reflink flag on inode 1613499581
clearing reflink flag on inode 581101115
clearing reflink flag on inode 581101117
clearing reflink flag on inode 581101118
clearing reflink flag on inode 937487
clearing reflink flag on inode 1613499583
clearing reflink flag on inode 1613503680
clearing reflink flag on inode 1201332
clearing reflink flag on inode 1401909
Phase 5 - rebuild AG headers and trees...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - reset superblock...
Phase 6 - check inode connectivity...
        - resetting contents of realtime bitmap and summary inodes
        - traversing filesystem ...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
Phase 7 - verify and correct link counts...
Note - stripe unit (0) and width (0) were copied from a backup superblock.
Please reset with mount -o sunit=<value>,swidth=<value> if necessary

        XFS_REPAIR Summary    Tue Apr 18 12:25:59 2023

Phase           Start           End             Duration
Phase 1:        04/18 12:25:56  04/18 12:25:56
Phase 2:        04/18 12:25:56  04/18 12:25:56
Phase 3:        04/18 12:25:56  04/18 12:25:57  1 second
Phase 4:        04/18 12:25:57  04/18 12:25:58  1 second
Phase 5:        04/18 12:25:58  04/18 12:25:58
Phase 6:        04/18 12:25:58  04/18 12:25:59  1 second
Phase 7:        04/18 12:25:59  04/18 12:25:59

Total run time: 3 seconds
done

Is this next?

mount -o sunit=0,swidth=0
JorgeB Posted April 18, 2023

Just start (or re-start) the array in normal mode and it should mount.
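Once it's started, a quick way to confirm the pool came back (read-only checks, just as a sanity sketch):

findmnt /mnt/cache    # should show /dev/sde1 mounted there with type xfs
df -h /mnt/cache      # size/used should look right for the pool
ls /mnt/cache         # whatever top-level folders live on your cache (appdata, system, etc.) should be back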
RS7588 Posted April 18, 2023

That did it! Thank you for your help @JorgeB!