miggity Posted March 26

In short, I had a single-drive NVMe pool ("cache") holding appdata, system, and downloads (currently empty). I added an HDD to a new pool ("cachepool") that I wanted to use for downloads. I also decided, stupidly and in haste, to rename the original "cache" pool to "systempool". Upon reboot, both drives are unmountable. I know the new "cachepool" just needs to be formatted; that's easy. But I would like to get the original "cache" back up and running rather than rebuilding all my dockers. I tried changing the name back to "cache" but it is still unmountable. Is there any way to fix this, or do I need to start rebuilding my dockers all over again? Thanks for any help!
JorgeB Posted March 26

Renaming a pool should not affect whether it mounts or not. Post the current diagnostics after array start.
miggity Posted March 26 Author

Diagnostics attached. I have restarted the array twice since renaming, but besides changing "systempool" back to "cache" I haven't done anything. Also, I originally added the HDD to the "cache" pool, but I don't believe I ever started the array before I moved it to its own pool, "cachepool". Not sure this matters, but I figured I'd mention it. Please let me know if anything more is needed, thanks again.

illmatic-diagnostics-20240326-1325.zip
JorgeB Posted March 26

24 minutes ago, miggity said:
    Also, I added the HDD to the "cache" pool originally

Please clarify: do you mean that instead of creating a new pool you added a device to the existing pool?
miggity Posted March 26 Author

Yes, originally I added the HDD to the existing "cache" pool alongside the NVMe drive, but I don't remember if I started the array before I moved the HDD into its own pool. I did all of this in a few minutes during lunch, so I was moving quickly and clearly not paying full attention, which I've learned always gets me in trouble with Unraid. Yet here I am again...
miggity Posted March 26 Author Share Posted March 26 (edited) /dev/sda1: LABEL_FATBOOT="UNRAID" LABEL="UNRAID" UUID="7013-DA05" BLOCK_SIZE="512" TYPE="vfat" /dev/loop1: TYPE="squashfs" /dev/nvme0n1p1: UUID="71d7119e-49bf-45a6-b170-d4d72162860c" BLOCK_SIZE="512" TYPE="xfs" /dev/sdd1: UUID="9312d6d1-8645-4fbe-934b-b32720144681" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="e851406a-cbd6-47c0-92a9-a5a3abf4a819" /dev/md2p1: UUID="9312d6d1-8645-4fbe-934b-b32720144681" BLOCK_SIZE="512" TYPE="xfs" /dev/sdb1: PARTUUID="85897617-0c65-4ac8-9127-e801a434c7c2" /dev/md1p1: UUID="cb1e31c1-9a7f-4e6d-91fe-62caeb5e4563" BLOCK_SIZE="512" TYPE="xfs" /dev/loop0: TYPE="squashfs" /dev/sdc1: UUID="cb1e31c1-9a7f-4e6d-91fe-62caeb5e4563" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="3b44a867-05dc-450e-a026-5e3021db7281" Output from blkid above. Edited March 26 by miggity Quote Link to comment
JorgeB Posted March 26

The NVMe device was the original cache, correct? If yes, post the output of:

xfs_repair -v /dev/nvme0n1p1
miggity Posted March 26 Author Share Posted March 26 Yes it was. Output below: Phase 1 - find and verify superblock... - block cache size set to 755024 entries Phase 2 - using internal log - zero log... zero_log: head block 367882 tail block 367882 - scan filesystem freespace and inode maps... - found root inode chunk Phase 3 - for each AG... - scan and clear agi unlinked lists... - process known inodes and perform inode discovery... - agno = 0 - agno = 1 - agno = 2 - agno = 3 - process newly discovered inodes... Phase 4 - check for duplicate blocks... - setting up duplicate extent list... - check for inodes claiming duplicate blocks... - agno = 0 - agno = 1 - agno = 2 - agno = 3 Phase 5 - rebuild AG headers and trees... - agno = 0 - agno = 1 - agno = 2 - agno = 3 - reset superblock... Phase 6 - check inode connectivity... - resetting contents of realtime bitmap and summary inodes - traversing filesystem ... - agno = 0 - agno = 1 - agno = 2 - agno = 3 - traversal finished ... - moving disconnected inodes to lost+found ... Phase 7 - verify and correct link counts... XFS_REPAIR Summary Tue Mar 26 14:52:46 2024 Phase Start End Duration Phase 1: 03/26 14:52:45 03/26 14:52:45 Phase 2: 03/26 14:52:45 03/26 14:52:45 Phase 3: 03/26 14:52:45 03/26 14:52:46 1 second Phase 4: 03/26 14:52:46 03/26 14:52:46 Phase 5: 03/26 14:52:46 03/26 14:52:46 Phase 6: 03/26 14:52:46 03/26 14:52:46 Phase 7: 03/26 14:52:46 03/26 14:52:46 Total run time: 1 second done Quote Link to comment
JorgeB Posted March 26 (Solution)

1. Unassign the cache pool device
2. Start the array, then stop it
3. Make sure the pool only has one slot
4. Re-assign the NVMe device
5. Start the array and post new diags
miggity Posted March 26 Author

That seems to have fixed it; the NVMe is mounted, and dockers and everything show up. Do you still want the diagnostics posted? Thanks for all the help. Any idea what happened, so I don't do it again by mistake in the future?
JorgeB Posted March 26

If it's working, no need for diags.

11 minutes ago, miggity said:
    any idea what happened so I don't do it again by mistake in the future?

Not sure what happened. Did the pool still have more than one slot after removing the other device?
miggity Posted March 26 Author Share Posted March 26 Yes, it was still set to 2 slots but I changed it back to 1 slot during that quick process. Either way it is working so thanks. I'm gonna try to rename these now and nothing else. Hopefully it still works like it should. Quote Link to comment
JorgeB Posted March 27

It should. Note that an XFS pool can only have one slot, or it won't mount.
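For future reference, a quick pre-flight check before touching pool slots (a sketch; the device path is the one from this thread and will differ on other systems):

# An XFS pool must stay at exactly one slot; multi-device pools need btrfs
# (or ZFS on recent Unraid releases), so check the filesystem type first
blkid -s TYPE -s LABEL /dev/nvme0n1p1
# Expected output here: /dev/nvme0n1p1: TYPE="xfs"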