jdiacobbo Posted September 20, 2024 (edited)

I recently added a new drive to a drive pool and changed the type to RAID 0. Initially it seemed to work fine, but once the rebalance finished the pool became read-only. Scrubs also abort instantly with error -30, and I'm seeing a bunch of btrfs-related errors in the logs as well. Any ideas what's going on?

Edited September 22, 2024 by jdiacobbo (typo in subject)
JorgeB Posted September 21, 2024 (Solution)

The balance never finished since it ran out of space. Note that raid0 with two disks of different capacities will only use the smallest disk's capacity on both; the single profile will fully use both disks.
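You can see this directly in btrfs's own reporting. For example (a quick check, assuming the pool is mounted at /mnt/poolname; substitute your actual pool name):

# Shows device size vs. allocated/unallocated space; with raid0 data chunks,
# allocation has to grow equally on both disks, so the larger disk's extra
# space can never be allocated
btrfs filesystem usage /mnt/poolname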
jdiacobbo Posted September 21, 2024 (Author)

4 hours ago, JorgeB said:
The balance never finished since it ran out of space...

Good to know. I was going by this link from the Storage Management docs, which made it seem like I would be able to use the full capacity of both drives. The info from the Balance section in the pool config also led me to believe I would end up with more storage than a single drive, so I guess I misunderstood what was happening. So are you saying I can have multiple drives in a pool and set it to single mode? And when the pool is set to single, how is the data distributed between the two disks?
JorgeB Posted September 21, 2024

1 hour ago, jdiacobbo said:
which made it seem like I would be able to use the full amount of storage on the drives.

This may have changed for newer kernels; I seem to remember reading something about that but have not tested it yet. With the kernel you are using, it won't fully use both disks.

1 hour ago, jdiacobbo said:
So are you saying I can have multiple drives in a pool and set it to single mode?

Correct.

1 hour ago, jdiacobbo said:
When setting the pool to single, how is the data distributed between the two disks?

It will start writing to the drive with the most free space; once both have the same amount, it will alternate between the two for any new chunks, which are 1GiB in size.
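If you want to watch that allocation behavior in practice, the per-device breakdown shows it (again assuming a /mnt/poolname mount point):

# Per-device chunk allocation; with the single profile, each new 1GiB data
# chunk lands on whichever device currently has the most unallocated space
watch -n 5 btrfs device usage /mnt/poolname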
jdiacobbo Posted September 21, 2024 (Author) (edited)

On 9/21/2024 at 9:27 AM, JorgeB said:
This may have changed for newer kernels... It will start writing to the drive with the most free space...

Thanks for the info. So at this point, how do I go about correcting this? I tried to use the balancer to change back to single mode, but it doesn't appear to be doing anything.

Edited September 22, 2024 by jdiacobbo
JorgeB Posted September 22, 2024

It may not be possible, since I'm seeing other issues with the filesystem, and you are using encryption, which complicates things by a lot, but try this:

-disable array auto-start if enabled
-reboot to clear the logs
-check that the pool device identifiers didn't change, and if they haven't, type the following in the CLI (you will need to enter the encryption passphrase when asked):

# Unlock the two pool members
cryptsetup luksOpen /dev/sdn1 sdn1 --allow-discards
cryptsetup luksOpen /dev/sdj1 sdj1 --allow-discards

# Mount the pool at a temporary mount point, skipping the stuck balance
mkdir /x
mount -t btrfs -o skip_balance /dev/mapper/sdj1 /x

# Cancel the interrupted balance, then unmount and re-lock the devices
btrfs balance cancel /x
umount /x
/usr/sbin/cryptsetup luksClose sdj1
/usr/sbin/cryptsetup luksClose sdn1

Then start the array normally and post new diags.
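If you're unsure whether the sdX identifiers changed across the reboot, one way to cross-check them against what the Main page shows (sdn/sdj above are simply the names the pool members had previously; substitute whatever yours are now):

# Match the pool disks by size/model/serial rather than by sdX letter,
# since sdX assignments can move between boots
lsblk -o NAME,SIZE,MODEL,SERIAL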
jdiacobbo Posted September 22, 2024 (Author)

Before I do anything further, I want to back up the data to the array. Curiously, everywhere I look the pool capacity is listed at a different value: the GUI shows 22TB, the btrfs Balance status 17.46TiB, and the unBalance plugin somewhere around 7.8TB. Is there any risk of data loss here, or should it all still be available, just read-only?
jdiacobbo Posted September 22, 2024 (Author) (edited)

On 9/22/2024 at 5:36 AM, JorgeB said:
It may not be possible, since I'm seeing other issues with the filesystem... Then start the array normally and post new diags.

I had some odd behavior, so I needed to reboot. I'm not 100% sure how to check if the pool identifiers changed, but everything came up as I would expect. Let me know if this looks right?

Edited September 28, 2024 by jdiacobbo
JorgeB Posted September 22, 2024

8 hours ago, jdiacobbo said:
the btrfs Balance status at 17.46TiB

That's only the allocated size, not the total capacity; the stats are clearer on v7. The pool is still using sdj and sdn.
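To see the distinction, the device listing separates the two numbers (assuming the pool is mounted, e.g. at /mnt/poolname):

# 'size' is each device's total capacity; 'used' is the space already
# allocated to chunks -- the 17.46TiB figure is allocation, not capacity
btrfs filesystem show /mnt/poolname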
jdiacobbo Posted September 23, 2024 (Author)

4 hours ago, JorgeB said:
That's only the allocated size, not the total capacity...

Any thoughts on reverting it back to single mode in place, or is a copy and reformat the only option?
JorgeB Posted September 23, 2024

After canceling the balance you can try, but free up some space first, or it may still run out of space.
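For reference, the GUI's Convert to single should come down to a data-profile balance, roughly like this (a sketch; /mnt/poolname stands in for the pool's actual mount point, and metadata is left at its existing profile):

# Rewrite all data chunks with the single profile, then check progress
btrfs balance start -dconvert=single /mnt/poolname
btrfs balance status /mnt/poolname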
jdiacobbo Posted September 23, 2024 (Author)

I tried canceling the balance, but it says there isn't a balance running, and since the pool is read-only I can't delete anything. At this point I'm just going to copy the data over to the array and reformat.
JorgeB Posted September 23, 2024

15 minutes ago, jdiacobbo said:
I tried canceling the balance, but it says there isn't a balance running

The balance needs to be canceled manually, using the commands I posted yesterday: https://forums.unraid.net/topic/175539-drive-pool-in-read-only-after-adding-drive-and-changing-to-raid-0/?do=findComment&comment=1467469
jdiacobbo Posted September 23, 2024 Author Posted September 23, 2024 Sorry I thought that was only if the device IDs changed. Once the copy finished I will try this. Quote
JorgeB Posted September 23, 2024

1 hour ago, jdiacobbo said:
Sorry, I thought that was only needed if the device IDs changed

Sorry, rereading what I wrote, it wasn't very clear.
jdiacobbo Posted September 28, 2024 (Author)

Rebooted and ran the commands you posted previously. The pool was writable, and I'm now able to select Convert to single in the Balance Status section; it's running now. We'll see if this works!

At this point this is just for educational purposes. I think I'll end up adding the drives to the array as encrypted drives and setting a share exclusive to those drives, which would accomplish the same end result. That would make more sense and probably be easier long term than using a pool. Thanks for all the help @JorgeB!