eagle470 Posted May 5, 2022 This is my chia server. I had the last four drives in a disk pool on another server, yanked them out, and stuffed them into my cable-spaghetti server. Obviously I didn't think through single drive mode and didn't change the metadata parity settings. It's just chia plots, but I'd rather not blow them away, since it costs money to replot.
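For anyone wanting to check the same thing, something like this should show which profile the data and metadata on one of the moved drives are actually using (assuming the array disks mount at /mnt/diskX; substitute the real slot number):

btrfs filesystem df /mnt/disk12

If the Metadata line still reports RAID1, the mirrored metadata came along with the drives.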
JonathanM Posted May 5, 2022 That's a new one to me. Maybe @JorgeB will have some ideas. Since you don't have a parity drive, I'm thinking you can just do a new config and leave out disks 12 - 15, then add them to a pool, so at least you get back to a "normal" config and go from there. As for whether you can split the 4-drive pool without moving data around, I don't think so, but like I said, maybe JorgeB will have some options.
eagle470 Posted May 5, 2022 Author 48 minutes ago, JonathanM said: That's a new one to me. Maybe @JorgeB will have some ideas. Since you don't have a parity drive, I'm thinking you can just do a new config and leave out disks 12 - 15, then add them to a pool, so at least you get back to a "normal" config and go from there. As for whether you can split the 4-drive pool without moving data around, I don't think so, but like I said, maybe JorgeB will have some options. That was my initial urge, but I'm holding off on the off chance someone has advice on this. I was thinking about this, and I'm pretty sure the only reason this worked is because I DON'T have a parity drive. I'm pretty sure things would have stopped if I had a parity drive, because the parity drive would not be able to keep up with the changes. I also think it would have corrupted parity, but that is a guess and nothing more.
eagle470 Posted May 5, 2022 Author 1 hour ago, JonathanM said: That's a new one to me. Also, it's good to know that even though I left the admin world, I can still find new and interesting ways to break things!
JonathanM Posted May 5, 2022 20 minutes ago, eagle470 said: I'm pretty sure the only reason this worked is because I DON'T have a parity drive. I'm pretty sure things would have stopped if I had a parity drive, because the parity drive would not be able to keep up with the changes. If you had valid parity, the added drives would have been cleared to keep parity valid before they were added, erasing any filesystem on them. I think it would be possible to have this config with parity if you added the parity after the drives were already committed to the data slots. Various Unraid functions may be broken as a result, but it would be interesting to see. Parity doesn't care about the data on the drives, so it probably would stay valid.
eagle470 Posted May 5, 2022 Author 47 minutes ago, JonathanM said: If you had valid parity, the added drives would have been cleared to keep parity valid before they were added, erasing any filesystem on them. I think it would be possible to have this config with parity if you added the parity after the drives were already committed to the data slots. Various Unraid functions may be broken as a result, but it would be interesting to see. Parity doesn't care about the data on the drives, so it probably would stay valid. The issue is the frequency of updates. After 30 minutes I already had 8+ million writes due to a balance kicking off immediately. There is ZERO way that a parity drive could keep up with that, unless it was an SSD, and I'm not that well off. So either the parity drive would slow the system to a near halt OR the system would start to drop parity writes as the queue grew past the buffer. I'm betting on the latter.
JonathanM Posted May 5, 2022 1 minute ago, eagle470 said: So either the parity drive would slow the system to a near halt OR the system would start to drop parity writes as the queue grew past the buffer. I'm betting on the latter. I'll take that bet. I think it would have slowed the writes to allow the parity to stay valid. Perhaps JorgeB could recreate this on one of his test rigs. 🙂
JorgeB Posted May 6, 2022 This has happened to me before, and it can happen with parity too; it's usually easily fixable. What's not clear is why a balance is running; if it started automatically, post the diagnostics so we can see why.
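In the meantime, these should show whether a balance is still running and let you stop it if you want (adjust the mount point to wherever the filesystem is actually mounted, e.g. /mnt/disk12 or /mnt/pool_name):

btrfs balance status /mnt/disk12
btrfs balance cancel /mnt/disk12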
eagle470 Posted May 6, 2022 Author 6 hours ago, JorgeB said: This has happened to me before, and it can happen with parity too; it's usually easily fixable. What's not clear is why a balance is running; if it started automatically, post the diagnostics so we can see why. I patched to rc6 last night while drunk, so no logs. Question though: is there a way to convert the metadata to single drive mode and remove the mirror?
JorgeB Posted May 6, 2022 1 minute ago, eagle470 said: Question though: is there a way to convert the metadata to single drive mode and remove the mirror? Yes, if they are in a pool now:

btrfs balance start -f -mconvert=single /mnt/pool_name
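One follow-up in case it helps: the -f (force) flag is there because btrfs refuses to reduce metadata redundancy without it. Afterwards, something like this should confirm the Metadata profile now reads single (assuming the same /mnt/pool_name mount point):

btrfs filesystem df /mnt/pool_name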