molesza Posted August 13, 2021

I think I have messed up in a big way!

Background: I am coming from a FreeNAS install. I installed the ZFS plugin to mount my pool in Unraid and copy to a temporary array on Unraid. This worked just fine. After doing that I added the drives from the ZFS pool to the Unraid array. They all formatted fine and my array was working fine.

I then decided to remove some of the drives from the Unraid array, so I moved all of the data from those drives to other drives. After doing that I stopped the array, created a new config and added back the drives I wanted to keep in the array, making sure to put parity in the correct slot. Once I started the array the parity build started, but 3 of my drives reported back as unmountable and needing to be formatted. One of the 3 drives has over 1TB of data on it that I don't want to lose.

I am starting to think that I should have destroyed the ZFS pool somehow and that the Unraid array is sitting on top of this ZFS pool? But surely when I added the entire pool to the Unraid array and formatted, that should have killed it. My other thought is that the ZFS plugin is still running and may be causing an issue? If I start a new config, a couple of the drives show as "zfs_member", but not all 8 drives that were part of the pool. Should I remove the ZFS plugin and reboot?

I don't want to do anything until someone can possibly point me in the right direction. Want to minimize the risk of losing family photos. Thanks chaps.

The 3 drives that were part of the ZFS pool are showing as unmountable. If I stop the array and remove the drives from the array so that they show up as unassigned, I see the following: no file system and can't mount. The partitions are there though.

unraid-diagnostics-20210813-0717.zip
molesza Posted August 13, 2021 Author

I temporarily removed all drives from the array whilst it was stopped so that Unassigned Devices would read them. It shows file systems all over the place.

I have also realised that the 2TB drive that has data on it and is unmountable was never part of the ZFS pool. It was one of the original drives in the Unraid array, used to move data from the ZFS pool to the array.

There were 8 drives total in the ZFS pool: one vdev of 4x2TB drives and another vdev of 4x4TB drives, all part of the same ZFS pool. The only drives showing up as "zfs_member" are the 4x4TB drives. One of those 4TB drives is the current parity drive, so it can't possibly be part of a ZFS pool.
JorgeB Posted August 13, 2021

Strange, it's like there's no valid filesystem on those disks. Please post the output of:

fdisk -l /dev/sdX

for the three unmountable 2TB disks.
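When several disks are suspect it can be easier to script the check than to retype the command. A small sketch that prints the `fdisk` invocations for review before running them (sdf, sdg and sdh are placeholders, not confirmed device letters at this point in the thread):

```shell
#!/bin/sh
# Dry run: print the fdisk command for each suspect disk so the device
# letters can be double-checked before anything is executed for real.
# sdf/sdg/sdh are placeholders; substitute your unmountable drives.
for d in sdf sdg sdh; do
    printf 'fdisk -l /dev/%s\n' "$d"
done
```

Once the letters are confirmed, the printed lines can be run directly (or the loop piped to `sh`).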
molesza Posted August 13, 2021 Author

Thanks for your reply! Below is the output.

I stopped the array and specified "xfs" for the filesystem on those drives instead of "auto", and the array starts up with all drives mounted! Which is great! However, I feel there is an underlying problem that needs sorting.

I have removed the ZFS plugin and the "ZFS Companion" plugin and rebooted. No reports of "zfs_member" on any drives now.
JorgeB Posted August 13, 2021

50 minutes ago, molesza said: I stopped the array and specified "xfs" for the filesystem on those drives instead of "auto" and the array starts up with all drives mounted! Which is great!

That's good, but it means there's a problem with the auto function, since it wasn't detecting any of the supported filesystems. It's not just Unraid, since UD also didn't detect a valid filesystem on those disks. Please also post the output of:

blkid

53 minutes ago, molesza said: I have removed the ZFS plugin and the "ZFS Companion" plugin and rebooted. No reports of "zfs_member" on drives now.

Yes, that can happen with previously used ZFS drives, since they use 2 partitions and Unraid only wipes one of them, but it's harmless.
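In `blkid` output, the tell-tale for this problem is a partition line with no `UUID=` field (a `PARTUUID=` alone is not a filesystem UUID). A sketch that filters for such lines, fed with made-up sample output shaped like this thread's situation; with real disks you would pipe `blkid` itself into the `awk`:

```shell
#!/bin/sh
# Sample blkid output (hypothetical values): sdd1 has a filesystem UUID,
# while sdf1/sdg1/sdh1 only have partition UUIDs, i.e. no filesystem found.
blkid_out='/dev/sdd1: UUID="0461-1234" TYPE="vfat"
/dev/sdf1: PARTUUID="000a-01"
/dev/sdg1: PARTUUID="000b-01"
/dev/sdh1: PARTUUID="000c-01"'

# Print every partition whose line lacks a " UUID=" field.
echo "$blkid_out" | awk '!/ UUID=/{print $1, "has no filesystem UUID"}'
```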
molesza Posted August 13, 2021 Author

Here are the results of blkid and my current array, which is rebuilding parity. There are four unassigned devices currently preclearing: sdb, sdk, sdm and sdn. Thanks
JorgeB Posted August 13, 2021

Yeah, those three disks (sdf, sdg and sdh) don't have a filesystem UUID. Was anything different done with them when they were added to the array and formatted?
molesza Posted August 13, 2021 Author

12 minutes ago, JorgeB said: Yeah, those three disks sdf, sdg and sdh don't have a filesystem UUID, was anything different done with them when they were added to the array and formatted?

I am not sure, unfortunately. Can I create a UUID on those drives now? If not, I could transfer all the data off to other drives and reformat? Or when they eventually get replaced, that would sort out the problem, I guess? It doesn't really bother me, as long as the integrity of the array is OK.
JorgeB Posted August 13, 2021

The data should be fine, but if you leave them like that you can run into trouble mounting them in the future, even if you just forget to set the fs or need to mount them with UD. Try this; it should be safe, but first do it on one of the empty disks just in case. Start the array in maintenance mode, then type:

xfs_admin -U generate /dev/mdX

Replace X with the disk number, e.g. md8, then start the array in normal mode and post the output of blkid again.
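As with the earlier fdisk step, when multiple disks need the fix it can help to print the commands first and run them only after checking the device numbers. A sketch assuming the three affected array disks are md5, md6 and md7 (placeholder numbers, not taken from the thread):

```shell
#!/bin/sh
# Dry run: print the xfs_admin invocation for each affected array disk.
# Disk numbers 5, 6 and 7 are placeholders; use your actual disk slots,
# and only execute the printed lines once they look right.
for n in 5 6 7; do
    echo "xfs_admin -U generate /dev/md$n"
done
```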
molesza Posted August 13, 2021 Author

Here is the output. Should I be specifying "sdf1"? Because it doesn't seem to do anything. Thanks
JorgeB Posted August 13, 2021

1 minute ago, molesza said: Should I be specifying "sdf1"

No, mdX as posted above, to keep parity valid.
JorgeB Posted August 13, 2021

Just noticed you're still syncing parity; in that case you can cancel and use sdX1.
molesza Posted August 13, 2021 Author

It came back with a new UUID when I ran the command, but it still didn't mount; the blkid output is attached. I have to go to the office now and I'm quite worried about losing data. I am going to mount forcing XFS on those three drives and let parity rebuild. I'm thinking once that is done I will:

1) transfer the data off all those drives
2) set the filesystem back to auto
3) stop and start the array
4) format the unmountable drives

This should do the trick, right? I'm just worried about having no parity and a drive failing. Really, really appreciate your help so far.
JorgeB Posted August 13, 2021

4 minutes ago, molesza said: This should do the trick right?

Yep. If it doesn't, clear the beginning of the disks with dd and re-format; this could be the result of garbage on those disks from previous IDs, raid signatures, etc.
molesza Posted August 13, 2021 Author

15 minutes ago, JorgeB said: Yep, if it doesn't clear the beginning of the disks with dd and re-format, this could be the result of garbage on those disks from previous IDs, raid signatures, etc.

OK cool. How would I do that with the "dd" command, please? Thanks
JorgeB Posted August 13, 2021

If just reformatting doesn't fix the problem you can type (with the array stopped):

dd if=/dev/zero of=/dev/sdX bs=4k count=1000

Replace X with the correct letter, and double-check you're doing it to the correct disks; any data on them will be lost. Then start the array and format.
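If you want to sanity-check what that dd line actually does before pointing it at a real disk, you can rehearse it against a scratch file: bs=4k count=1000 writes 1000 blocks of 4096 zero bytes, i.e. the first 4,096,000 bytes of the target, which is where stale partition and raid signatures live.

```shell
#!/bin/sh
# Rehearse the wipe against a temporary file instead of a real device.
scratch=$(mktemp)
dd if=/dev/zero of="$scratch" bs=4k count=1000 2>/dev/null

# Show how many bytes were written: 1000 blocks x 4096 bytes.
wc -c < "$scratch" | tr -d ' \t\n'   # prints 4096000
echo
rm -f "$scratch"
```

Only the device name changes for the real run, which is why double-checking the sdX letter matters so much.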
molesza Posted August 13, 2021 Author

Much appreciated! I will do so after the parity rebuild and report back.
molesza Posted August 15, 2021 Author

OK, all good! I let the array rebuild parity. Then I used Unbalance to move the data off the drive that still had data on it. After this I stopped the array and set all the disks back to "auto" for the file system. Started the array, and the 3 disks were waiting to be formatted. I went ahead and did this, and now the drives are all mounting as xfs automatically. Thank you JorgeB!
molesza Posted August 15, 2021 Author

Unraid did mention that it would rebuild parity when I did this, but it only formatted the drives. Maybe I should run a parity check?
JorgeB Posted August 15, 2021

16 minutes ago, molesza said: Unraid did mention that it would rebuild parity when I did this but it only formatted the drives.

Just setting the fs to auto and formatting the disks doesn't require a parity sync, but a non-correcting parity check won't hurt, to make sure all is good.
itimpi Posted August 15, 2021

1 hour ago, molesza said: Unraid did mention that it would rebuild parity when I did this but it only formatted the drives. Maybe I should run a parity check?

In principle the format operation would have updated parity, but as mentioned it would not do any harm to run a check to make sure it is valid.
molesza Posted August 15, 2021

Thanks, guys. I'll run a check for sanity.