JunkoZane Posted March 21, 2023

Hello fellow unraiders, recently I did a stupid thing. Instead of following any of the guides on how to upgrade cache drives, I did it the way I imagined it should work...

Situation: I have 2x256GB cache SSDs in a raid1 configuration. Since I ran out of space, I purchased 2x1TB SSDs.

I stopped the array, unassigned one of the cache drives, and started the array. Added one of the new 1TB SSDs to the cache pool and started the array again. Left it overnight to do its thing. Then stopped the array, unassigned the second 256GB SSD from the cache pool, and started the array.

Now I have one 1TB cache SSD that shows the error "Unmountable: No file system" instead of its capacity. Since all my dockers live on cache, I can't do a thing with my unraid server now.

Tried running btrfs rescue zero-log /dev/sde (since the 1TB SSD is definitely sde), with no luck:

Quote
No valid Btrfs found on /dev/sde
ERROR: could not open ctree

I do have both 256GB SSDs untouched though, but I don't know how to re-add them (or at least one of them) just to get my server up and running...

Any help is much appreciated, thank you in advance!

the-beast-diagnostics-20230321-2028.zip
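[Editor's note: before running repair commands like zero-log, it is worth confirming whether btrfs can see a filesystem at all. Note that Unraid puts the filesystem on the first partition, so commands should normally target /dev/sde1 rather than the whole /dev/sde disk. A read-only diagnostic sequence might look like the sketch below; device names follow this thread and must be adjusted to your system.]

```shell
# List every btrfs filesystem the kernel can see
btrfs filesystem show

# Check what filesystem signature blkid finds on the partition (not the whole disk)
blkid /dev/sde1

# Look for intact backup superblocks; this tool asks for
# confirmation before writing anything to the device
btrfs rescue super-recover -v /dev/sde1

# Read-only consistency check -- makes no changes
btrfs check --readonly /dev/sde1
```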
JorgeB Posted March 22, 2023

Post the output of:

btrfs fi show
JunkoZane (Author) Posted March 22, 2023

Quote (JorgeB): Post the output of btrfs fi show

Label: none  uuid: ad3a33bd-dcf5-4a1a-bdd2-6d499aaecb25
	Total devices 2 FS bytes used 2.36TiB
	devid 1 size 1.82TiB used 1.73TiB path /dev/sdb1
	devid 2 size 1.82TiB used 1.73TiB path /dev/sdf1
JorgeB Posted March 22, 2023

That confirms they were wiped. Post the output of:

btrfs-select-super -s 1 /dev/sde1

and

btrfs-select-super -s 1 /dev/sdg1
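[Editor's note: for context, btrfs keeps redundant copies of its superblock: the primary at 64KiB, a first backup at 64MiB (bytenr 67108864), and another at 256GiB on large enough devices. btrfs-select-super -s 1 overwrites a damaged primary superblock with backup copy 1. A hedged sketch of inspecting the backup copy read-only before restoring it, using the partition name from this thread:]

```shell
# Dump backup superblock copy 1 (byte offset 67108864 = 64MiB) without writing
btrfs inspect-internal dump-super -s 1 /dev/sdg1

# If the copy looks sane, overwrite the primary superblock with it
# (this WRITES to the device -- only do this when the primary is bad)
btrfs-select-super -s 1 /dev/sdg1
```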
JunkoZane (Author) Posted March 22, 2023

btrfs-select-super -s 1 /dev/sde1

Quote
No valid Btrfs found on /dev/sde1
ERROR: open ctree failed

btrfs-select-super -s 1 /dev/sdg1

Quote
using SB copy 1, bytenr 67108864
JorgeB Posted March 22, 2023

Post the output of btrfs fi show again, please.
JunkoZane (Author) Posted March 22, 2023

Of course!

btrfs fi show

Quote
Label: none  uuid: ad3a33bd-dcf5-4a1a-bdd2-6d499aaecb25
	Total devices 2 FS bytes used 2.36TiB
	devid 1 size 1.82TiB used 1.73TiB path /dev/sdb1
	devid 2 size 1.82TiB used 1.73TiB path /dev/sdf1

Label: none  uuid: 630416da-0948-4846-a062-d6c0b6e30614
	Total devices 2 FS bytes used 140.57GiB
	devid 1 size 238.47GiB used 159.03GiB path /dev/sdg1
	*** Some devices missing
JorgeB Posted March 22, 2023

Stop array, unassign all devices from that pool, start array, stop array, re-assign the Micron SSD (sdg) as the only pool device, start array, post new diags.
JunkoZane (Author) Posted March 22, 2023

Quote (JorgeB): Stop array, unassign all devices from that pool, start array, stop array, re-assign the Micron SSD (sdg) as the only pool device, start array, post new diags.

I have my old cache pool working! Should I now set all shares not to use the cache pool and invoke the mover? Or is there another way to move everything over to my new set of SSDs?
JorgeB Posted March 22, 2023

Do you still want to upgrade using the 1TB devices?
JorgeB Posted March 22, 2023 (marked as Solution)

You can add (not replace) one of the 1TB devices and wait for the pool to balance; when done, you can replace the 256GB with the other 1TB.
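[Editor's note: on the command line, the add-then-replace sequence JorgeB describes corresponds roughly to the btrfs operations below. This is a sketch with hypothetical device names (sdX1, sdOLD1, sdNEW1), assuming the pool is mounted at /mnt/cache; in Unraid itself these steps are driven from the GUI.]

```shell
# Add the first 1TB device to the existing raid1 pool
btrfs device add /dev/sdX1 /mnt/cache

# Rebalance so data and metadata are mirrored across the new layout;
# wait for this to finish before touching the old devices
btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/cache

# Then swap the remaining 256GB device for the second 1TB one
btrfs replace start /dev/sdOLD1 /dev/sdNEW1 /mnt/cache
btrfs replace status /mnt/cache

# After replacing with a larger device, grow the fs to use the new capacity
# (2 here is the devid of the replaced device, as shown by btrfs fi show)
btrfs filesystem resize 2:max /mnt/cache
```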
JunkoZane (Author) Posted March 22, 2023

I'm confused. I did exactly that the first time...
JorgeB Posted March 22, 2023

And it should work. You can try again and, if there is a problem, post new diags before rebooting.
JunkoZane (Author) Posted April 1, 2023

Hello again. The same problem occurred when I tried to remove the 256GB drive (sdg): the 1TB drive shows the same error, "UNMOUNTABLE: NO FILE SYSTEM".

As you mentioned, I attach logs taken before rebooting.

the-beast-diagnostics-20230401-2216.zip
JorgeB Posted April 2, 2023

Don't touch the removed device, since you will likely need to re-use it. For now, post the output of:

btrfs fi show
JunkoZane (Author) Posted April 2, 2023

Quote (JorgeB): Don't touch the removed device, since you will likely need to re-use it.

I still have both 256GB drives untouched; maybe the first one that I took out would be more helpful? I stopped the array yesterday and started it today, but no reboot was performed since this error occurred (since I don't know what's happening, I figured I'd better mention it...).

btrfs fi show

Quote
Label: none  uuid: ad3a33bd-dcf5-4a1a-bdd2-6d499aaecb25
	Total devices 2 FS bytes used 2.36TiB
	devid 1 size 1.82TiB used 1.73TiB path /dev/sdb1
	devid 2 size 1.82TiB used 1.73TiB path /dev/sdf1
JorgeB Posted April 2, 2023

Quote (JunkoZane): Total devices 2 FS bytes used 2.36TiB

This is your other pool, meaning the current pool has no valid filesystem. The diags you posted only show you removing a device, but the problem was already there before that, so something went wrong earlier. Since you rebooted, check that the old cache device is still sdg, then post the output of:

btrfs-select-super -s 1 /dev/sdg1
JunkoZane (Author) Posted April 2, 2023

Quote (JorgeB): Since you rebooted...

I haven't, unless stopping and starting the array has the same effect.

btrfs-select-super -s 1 /dev/sdg1

Quote
warning, device 2 is missing
using SB copy 1, bytenr 67108864

I do suspect that the problems started at the very beginning, when I removed the first 256GB drive. Maybe I should try using it as my only cache pool device and later rebuild the mirror from it onto my new 1TB SSD?
JorgeB Posted April 3, 2023

That single device should be usable now. The old pool was a raid1 mirror, correct?
JunkoZane (Author) Posted April 3, 2023

Quote (JorgeB): That single device should be usable now. The old pool was a raid1 mirror, correct?

You're talking about sdg? Yes, my cache pool was raid1 mirrored and consisted of 2x256GB SSDs.
JorgeB Posted April 3, 2023

Quote (JunkoZane): You're talking about sdg?

Yes. Stop array, unassign all pool devices, start array, stop array, re-assign the old pool device (sdg) by itself, start array, post new diags.
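[Editor's note: for reference, what happens under the hood when a surviving raid1 member is mounted by itself is roughly a degraded mount followed by dropping the absent member, which the later diags in this thread describe as "the missing device was deleted". A sketch, assuming the surviving partition is /dev/sdg1 and a mountpoint of /mnt/cache; the exact steps Unraid performs may differ.]

```shell
# Mount the surviving raid1 member without its partner
mount -o degraded /dev/sdg1 /mnt/cache

# Convert to single-device profiles first; raid1 constraints
# otherwise block removing the missing member (-f is required
# because metadata redundancy is being reduced)
btrfs balance start -f -dconvert=single -mconvert=dup /mnt/cache

# Drop the absent device from the filesystem metadata;
# the pool then becomes a valid single-device filesystem
btrfs device delete missing /mnt/cache
```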
JunkoZane (Author) Posted April 3, 2023

Here are the diags with sdg:

the-beast-diagnostics-20230403-2229.zip
JorgeB Posted April 4, 2023

Post new diags to confirm the missing device was deleted, since that was still going on in the previous ones.