ajeffco Posted March 15, 2017 (Author)
I put the 1TB drive back into the system. I didn't change the device configuration in unRAID. I can mount the 1TB drive manually with "mount /dev/sdn1 /aljx". Interestingly, the 1TB drive has the same contents as Disk 2 and Disk 3.
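A plausible first check when identical contents show up on multiple disks is duplicate btrfs filesystem UUIDs left over from a previous pool. A minimal sketch; the device name /dev/sdn1 is taken from the post above, and the UUID-clash explanation is an assumption, not a confirmed diagnosis:

```
# List the filesystem type and UUID of every partition; two btrfs
# partitions reporting the same UUID would explain the same contents
# appearing on several disks.
blkid | grep -i btrfs

# btrfs groups member devices by filesystem UUID, so one entry listing
# several devices means the kernel treats them as a single filesystem.
btrfs filesystem show
```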
JorgeB Posted March 15, 2017
Yes, there's something very weird going on here for sure. LT was made aware of this thread, so I'd wait to see what they say; Tom may also need you to run some extra commands to find out what's really going on. I believe whatever happened occurred before the first diags, and you'll possibly need to completely wipe and reformat one of those disks to fix it.
ajeffco Posted March 15, 2017 (Author)
Thanks johnnie.black for your help on this.
jbrodriguez Posted March 15, 2017
I remember reading that btrfs has some "issues" reporting space:
https://btrfs.wiki.kernel.org/index.php/Problem_FAQ#I_get_.22No_space_left_on_device.22_errors.2C_but_df_says_I.27ve_got_lots_of_space
https://btrfs.wiki.kernel.org/index.php/FAQ#Why_is_free_space_so_complicated.3F
https://unix.stackexchange.com/questions/37489/when-using-btrfs-why-size-used-and-avail-values-from-df-do-not-match (older)
Not sure if this is the case here, though.
ajeffco Posted March 15, 2017 (Author)
Just now, jbrodriguez said: "I remember reading that btrfs has some 'issues' reporting space: [...] Not sure if this is the case here, though."
Yeah, I'm running btrfs on another rig and have run into that very issue before. This is something much deeper and uglier.
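For reference, the usual way to see btrfs's real allocation picture, which df often misrepresents. A minimal sketch, assuming the filesystem is mounted at /mnt/disk2:

```
# df only sees a coarse summary of a btrfs volume.
df -h /mnt/disk2

# btrfs accounts for data, metadata, and system chunks separately,
# which is why its numbers and df's often disagree.
btrfs filesystem df /mnt/disk2

# Fuller picture, including unallocated space on each device.
btrfs filesystem usage /mnt/disk2
```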
ajeffco Posted March 15, 2017 (Author)
johnnie.black, do you think there's any issue with starting to convert btrfs to xfs, beginning with disk 10 and moving data? Disk 10 at the moment is empty. Or should I just wait on LT before doing anything further?
JorgeB Posted March 15, 2017
Don't see a problem with that; I would just not touch disks 2 and 3.
ajeffco Posted March 15, 2017 (Author)
1 minute ago, johnnie.black said: "Don't see a problem with that; I would just not touch disks 2 and 3."
Ok. I'll stay away from them and the original 1TB disk. Thanks again.
ajeffco Posted March 15, 2017 (Author)
I'll wait for LT to chime in. In reading the File System Conversion page on the wiki, it talks about parity rebuilds, which I'm assuming I don't want to do. Not sure parity is even valid at this point.
RobJ Posted March 15, 2017
Is there any possibility that one or more of these confused drives were part of a BTRFS pool before, either as a Cache pool in unRAID or a BTRFS pool outside of unRAID? Tom has recently said something relevant to this, that seems to indicate that BTRFS has functionality that preserves the BTRFS-ness of a drive, which might include its pool size. If your 1TB drive is still thinking it's part of a 4TB pool ... Which also may mean that to properly format a BTRFS drive to something else, you may need to take an extra step to un-BTRFS a drive before the format. It could be as easy as just zeroing the early sectors, but I don't know where BTRFS stores its registration info. Could be hidden in the MBR, or near it in the empty unused sectors at the beginning of the partition, or at the end of the drive, etc.
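On where btrfs stores its "registration info": the primary superblock sits 64 KiB into the partition, with backup copies at 64 MiB and 256 GiB on large devices. A hedged sketch of how one might check for and clear leftover signatures; /dev/sdX1 is a placeholder device, and whether this fully "un-BTRFSes" a former pool member is an assumption:

```
# Preview which filesystem signatures are present, writing nothing.
wipefs --no-act /dev/sdX1

# Erase the magic bytes of every detected signature (destructive!).
wipefs -a /dev/sdX1

# Manual alternative: zero the 4 KiB btrfs primary superblock,
# which sits 64 KiB into the partition (16 blocks of 4096 bytes).
dd if=/dev/zero of=/dev/sdX1 bs=4096 seek=16 count=1
```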
RobJ Posted March 15, 2017
9 minutes ago, ajeffco said: "I'll wait for LT to chime in. [...] Not sure parity is even valid at this point."
I strongly concur!
ajeffco Posted March 15, 2017 (Author)
Just now, RobJ said: "Is there any possibility that one or more of these confused drives were part of a BTRFS pool before [...]?"
They were part of an Ubuntu system running BTRFS. Before migrating to unRAID, I ran "wipefs -a /dev/..." on them. And when they got to unRAID, I ran preclear on every drive before running through the drive replacement procedure. Not sure if wipefs and preclear are enough to "clean" a disk. The odd thing is, the 1TB doesn't think it's part of a 4TB pool; it looks like it thinks it's a 4TB drive in the GUI. From the CLI, "btrfs fi" shows the device itself correctly. Al
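A quick way to compare what the device actually is against what its superblock claims — a sketch assuming the drive is still /dev/sdn1 as in the earlier post ("btrfs fi" is shorthand for "btrfs filesystem"):

```
# Raw size of the device in bytes, as the kernel sees it.
blockdev --getsize64 /dev/sdn1

# Size recorded in the btrfs superblock: the 'devid ... size ...' line
# should match the device; a larger figure here would explain a 1TB
# drive presenting itself as 4TB in the GUI.
btrfs filesystem show /dev/sdn1
```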
JorgeB Posted March 15, 2017
40 minutes ago, ajeffco said: "I'll wait for LT to chime in. [...] Not sure parity is even valid at this point."
Yes, better wait on that, but if disk10 is empty I see no problem formatting it to XFS; no need for a new config, but probably best to wait before starting to load more data.
ajeffco Posted March 15, 2017 (Author)
Is there an "official" process to open a support ticket with LT, if that's even possible? Or is a forum post acceptable?
Squid Posted March 15, 2017
Forum post / email [email protected] / click on the Limetech logo in the webUI and you can submit a report there as well.
JorgeB Posted March 15, 2017
22 minutes ago, ajeffco said: "Is there an "official" process to open a support ticket with LT, if that's even possible? Or is a forum post acceptable?"
Also keep in mind they are on PST, so it's too soon to get a reply here; maybe in the next few hours.
ajeffco Posted March 15, 2017 (Author)
8 minutes ago, johnnie.black said: "Also keep in mind they are on PST, so it's too soon to get a reply here; maybe in the next few hours."
Oh. I'm not in any rush, just didn't know if there was a "formal" process for them to keep track of these types of issues.
ajeffco Posted March 15, 2017 (Author)
I had a thought. Assuming Drive 3 has been in a bad state since it was brought in, does that mean any data written to the drive since then wasn't actually lost? Or would it be gone regardless? I used rsync to copy from Linux/BTRFS/SMB to unRAID, and rsync verified everything written. That however would go to /mnt/user/wherever... Just trying to figure out what I might've lost that might need to be recovered.
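One way to answer "what might I have lost" is to re-run the same rsync as a dry-run checksum comparison against the original source. A sketch with a placeholder source path; /mnt/user/wherever is taken from the post above:

```
# Dry run (-n) with full checksums (-c): lists every file that is
# missing or differs on the unRAID side, without copying anything.
rsync -rvnc /source/share/ /mnt/user/wherever/
```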
ajeffco Posted March 18, 2017 (Author)
I'm in the process of migrating data from the btrfs-formatted drives to xfs-formatted drives. When I get to the last 3 drives that are part of the original issue, any advice as to what I should do? Al
JorgeB Posted March 18, 2017
Disk1 can be converted normally; the problem is disk2 and disk3. Disk3 looks inaccessible, and it's unknown at this point whether it contains data or not. I would convert all remaining disks to xfs (including disk2), leaving only disk3 as btrfs, then reboot and see if disk3 mounts correctly.
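For reference, a rough sketch of the per-disk conversion flow implied here, assuming the data fits on an already-converted destination disk; the disk numbers follow this thread, and the GUI steps paraphrase the File System Conversion wiki page mentioned earlier:

```
# 1. Copy the btrfs disk's contents to an XFS disk, preserving
#    permissions, times, and extended attributes (-X).
rsync -avX /mnt/disk2/ /mnt/disk10/

# 2. Spot-check the copy before touching the source.
diff -r /mnt/disk2 /mnt/disk10

# 3. Then, in the unRAID GUI: stop the array, change disk2's filesystem
#    to XFS, start the array, and format the now-unmountable disk.
```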
ajeffco Posted March 18, 2017 (Author)
Ok. I've started migrating data off of disk 2. Will be interesting to see what happens to disk 3. Thanks johnnie.black
ajeffco Posted March 19, 2017 (Author)
Disk 2 and disk 3 are the same. Drive 3 is asleep.
ajeffco Posted March 21, 2017 (Author)
I've completed the process of reformatting my drives to xfs. When I emptied disk 2, as expected, disk 3 emptied also. Interestingly, when I mounted the original 1TB disk to look at it, it also shows empty, rather than a 1TB drive reporting a 4TB size. Not sure why, out of 24 total drives moved from the Linux/BTRFS machine, these 3 whacked out. Unfortunately, since I didn't capture a syslog, we'll never know. Lesson learned. Thanks again for the help. Al