Reznap Posted January 31, 2023

Hi,

I recently filled up my 6 TB pool and decided to get some new hard drives. I bought four new 18 TB drives to replace my three 3 TB drives.

First I took out the current parity drive, replaced it with a new 18 TB drive, and rebuilt parity. That finished and everything looked good. I shut down, then installed the remaining three 18 TB drives (unplugging the cache drive, as I only have six SATA ports). I booted Unraid and ran a clear ("Disk Clear") on the three new drives. That finished and everything looked good.

Now I want to move the data from the two 3 TB drives still in the system to two of the new drives. I went to Tools -> New Config, then on Main assigned the three 18 TB drives to slots 1, 2, 3, assigned the two 3 TB drives to slots 4, 5, and left parity unassigned. When I start the array, both 3 TB drives say "Unmountable: Unsupported or no file system".

After some basic googling/searching, I ran xfs_repair on them and it shows nothing wrong. I pulled one of the 18 TB drives out and reinstalled the cache drive; the cache drive mounts fine. Both 3 TB drives with all my data still say "Unmountable: Unsupported or no file system".

What can I do to recover this? Thank you!

skynet-diagnostics-20230130-1639.zip
trurl Posted January 31, 2023

13 minutes ago, Reznap said:
    Now I want to move the data from the 2 3 TB drives in the system to 2 of the new drives.

Normally you would rebuild each one, one at a time, to one of the new drives.

Will check the diagnostics later; dinnertime.
trurl Posted January 31, 2023

Nothing you did makes much sense. Why did you rebuild parity, then remove parity? I suppose you intended to add drives, copy data to them, then remove the source drives. Really wish you had asked before doing anything.

The correct procedure would be to replace parity with a larger disk and let it rebuild. Then replace one of the data disks with a larger disk and let it rebuild; repeat as necessary.

With what you did instead, the new disks (3, 4?) would be unmountable until you format them, but the other data disks that already had data on them (1, 2?) should not be unmountable. What filesystem was on disks 1 and 2? Are you sure you haven't left anything out of your description?
Reznap Posted January 31, 2023

Yeah, I got lazy and did not want to rebuild parity three times, so I thought I could use the "Faster" method in the wiki. My bad...
Reznap Posted January 31, 2023

1 hour ago, trurl said:
    What filesystem was on disks 1, 2?

XFS
trurl Posted January 31, 2023

4 hours ago, Reznap said:
    run xfs_repair on them

Did you do this from the webUI or the command line? It's easy to get the command wrong.
trurl Posted January 31, 2023

1 hour ago, Reznap said:
    Yeah, I got lazy and did not want to rebuild parity three times...

I don't get the "lazy" part. Whatever you had in mind would definitely have been more trouble and more prone to mistakes than the normal method of upsizing disks. And it probably wouldn't have been faster.
Reznap Posted January 31, 2023

1 hour ago, trurl said:
    Did you do this from the webUI or the command line? Easy to get the command wrong.

Command line. I ran:

    xfs_repair -n /dev/sde1

(and sdf1). I think I also ran:

    xfs_repair /dev/sde1
    xfs_repair -l /dev/sde1

(and sdf1 for each).
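A side note on those flags: in xfs_repair, lowercase -l expects an external log device as its argument; it is the capital -L flag that zeroes the log, so the third command above most likely just printed a usage error. Before re-running repairs, it can also help to confirm the partition actually carries an XFS superblock at all: XFS writes the magic bytes "XFSB" at offset 0. A minimal sketch, using the device name from this thread (substitute your own):

```shell
# Read the first 4 bytes of the partition and print them as characters.
# An XFS superblock begins with the magic string "XFSB"; if anything else
# comes back, blkid (and the Unraid GUI) cannot auto-detect XFS here.
DEV=/dev/sde1   # device name from the thread; substitute your own
dd if="$DEV" bs=4 count=1 2>/dev/null | od -An -c
```

If the magic is intact but the disk still shows as unmountable, the problem is more likely detection or configuration than filesystem damage.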
JorgeB Posted January 31, 2023

Post the output of:

    blkid

and

    xfs_repair -v /dev/sde1
Reznap Posted January 31, 2023

6 hours ago, JorgeB said:
    Post the output of: blkid and xfs_repair -v /dev/sde1

    /dev/sda1: LABEL_FATBOOT="UNRAID" LABEL="UNRAID" UUID="2732-64F5" BLOCK_SIZE="512" TYPE="vfat"
    /dev/loop1: TYPE="squashfs"
    /dev/loop0: TYPE="squashfs"
    /dev/sdc1: UUID="db429a5d-4243-4626-854d-a4fd8515757d" BLOCK_SIZE="512" TYPE="xfs"
    /dev/sdg1: PARTUUID="fc394ee4-7101-4b4f-b841-76cf1b1861a2"

    Phase 1 - find and verify superblock...
            - block cache size set to 741352 entries
    Phase 2 - using internal log
            - zero log...
    zero_log: head block 1027262 tail block 1027262
            - scan filesystem freespace and inode maps...
            - found root inode chunk
    Phase 3 - for each AG...
            - scan and clear agi unlinked lists...
            - process known inodes and perform inode discovery...
            - agno = 0
            - agno = 1
            - agno = 2
            - agno = 3
            - process newly discovered inodes...
    Phase 4 - check for duplicate blocks...
            - setting up duplicate extent list...
            - check for inodes claiming duplicate blocks...
            - agno = 0
            - agno = 2
            - agno = 1
            - agno = 3
    Phase 5 - rebuild AG headers and trees...
            - agno = 0
            - agno = 1
            - agno = 2
            - agno = 3
            - reset superblock...
    Phase 6 - check inode connectivity...
            - resetting contents of realtime bitmap and summary inodes
            - traversing filesystem ...
            - agno = 0
            - agno = 1
            - agno = 2
            - agno = 3
            - traversal finished ...
            - moving disconnected inodes to lost+found ...
    Phase 7 - verify and correct link counts...

            XFS_REPAIR Summary    Tue Jan 31 08:17:04 2023

    Phase           Start           End             Duration
    Phase 1:        01/31 08:16:52  01/31 08:16:52
    Phase 2:        01/31 08:16:52  01/31 08:16:53  1 second
    Phase 3:        01/31 08:16:53  01/31 08:16:58  5 seconds
    Phase 4:        01/31 08:16:58  01/31 08:16:58
    Phase 5:        01/31 08:16:58  01/31 08:16:58
    Phase 6:        01/31 08:16:58  01/31 08:17:03  5 seconds
    Phase 7:        01/31 08:17:03  01/31 08:17:03

    Total run time: 11 seconds
    done
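Note what the blkid listing shows: /dev/sdg1 (likely one of the affected data disks) reports only a PARTUUID and no TYPE= field, meaning libblkid sees the partition but recognizes no filesystem signature on it, which matches the "Unmountable: Unsupported or no file system" message. A quick sketch to flag such entries from saved output (the filename blkid.txt is hypothetical, standing in for a file holding pasted blkid output like the one above):

```shell
# Print every blkid entry that has no TYPE= field, i.e. a partition on
# which no filesystem signature was recognized. "blkid.txt" is a
# hypothetical file containing saved `blkid` output.
awk -F: '!/TYPE=/ {print $1 " -> no filesystem signature detected"}' blkid.txt
```

Against the listing above, only the /dev/sdg1 line would be flagged.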
JorgeB Posted January 31, 2023 (Solution)

Stop the array, click on disk1 and change the filesystem from "Auto" to "XFS", repeat for disk2, then start the array and post new diags.
Reznap Posted January 31, 2023

2 hours ago, JorgeB said:
    Stop array, click on disk1 and change the filesystem from "Auto" to "XFS", repeat for disk2, start array and post new diags.

That fixed it. Thank you! If you want the diags for anything, let me know.
JorgeB Posted January 31, 2023

If all is well, there's no need for new diags. Just keep in mind for the future that any new config will require you to specify the filesystem for those disks again, since for some reason the XFS signature is missing from them.