Timbiotic Posted September 20, 2018

I have an SSD with btrfs for my cache. Lately it keeps going read-only and I have to rebuild the docker image. I added a second SSD thinking I was going to need to dump all the files somewhere while the cache was read-only, but once I was able to fix it by rebuilding the docker image I decided to make a pool instead. As soon as I put it back online the docker image corrupted again. What is causing the docker image to go corrupt almost every time I stop it? Should I switch to XFS?

On another note, and possibly related, I cannot scrub the SSD without it immediately aborting. Same when I click balance: it doesn't say aborted, but it doesn't seem to do anything either. When I created the pool I got the odd "too many profiles" message. Any suggestions or help would be appreciated. For now I will rebuild the image again to get normal operations going.
JorgeB Posted September 20, 2018

Grabbing and posting the diagnostics after it goes read-only might help.
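You can get them from Tools > Diagnostics in the GUI, or from a terminal with the diagnostics command, which should write the same zip to /boot/logs on the flash drive:

# Generate a diagnostics zip from the console/SSH
# (equivalent to Tools > Diagnostics in the web GUI)
diagnostics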
Timbiotic Posted September 20, 2018

3 hours ago, johnnie.black said: Grabbing and posting the diagnostics after it goes read-only might help.

lillis.69.mu-diagnostics-20180920-1545.zip
JorgeB Posted September 20, 2018

The cache is trying to balance and failing because there's filesystem corruption; you need to back up the data, re-format, and restore it.
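If you want to see it yourself before wiping, these read-only commands (assuming the pool is at Unraid's default /mnt/cache mount point) show the stuck balance and the underlying errors:

# Show whether a balance is running or stuck on the pool
btrfs balance status /mnt/cache

# btrfs logs the corruption it hits to the kernel log
dmesg | grep -i btrfs | tail -n 20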
Timbiotic Posted September 21, 2018

If I have another SSD already installed, is there an easy way to move everything to it and make it the cache drive? Or will I need to back up the data to it, format the old drive, and move everything back?
Timbiotic Posted September 21, 2018

Getting a little panicky because now my cache drive will not mount. I had added the second SSD to the pool and then removed it and formatted it. Not sure if that did something, but now my main cache drive will not mount. Wondering if I should do a scrub repair?

lillis.69.mu-diagnostics-20180921-0923.zip
JorgeB Posted September 21, 2018

You needed to move the data off the cache pool before formatting; if it's now unmountable, see here and use that to try to recover the data.
testdasi Posted September 21, 2018

Your description of what you did is pretty confusing. Can you describe, step by step, what you did?
Timbiotic Posted September 21, 2018

5 minutes ago, johnnie.black said: You needed to move the data off the cache pool before formatting; if it's now unmountable, see here and use that to try to recover the data.

Thanks, I have mounted it using the mount -o degraded,recovery,ro /dev/sdX1 /x command. Just MC it now, or should I create a share? Also, is it possible to copy to the new SSD in btrfs and make that the new pool, and then format the old SSD and add it to that pool?
JorgeB Posted September 21, 2018

5 minutes ago, Timbiotic said: Just MC it now, or should I create a share?

You need to copy to the array or an unassigned disk.

5 minutes ago, Timbiotic said: Also, is it possible to copy to the new SSD in btrfs and make that the new pool, and then format the old SSD and add it to that pool?

Don't understand the question.
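Roughly, picking up from the degraded mount you already have, something like this copies everything out (the destination share name here is just an example; "recovery" was the mount option on older kernels, newer ones spell it "usebackuproot"):

# Pool is already mounted read-only at /x from the earlier command:
# mount -o degraded,recovery,ro /dev/sdX1 /x

# Copy everything to a share on the array, preserving attributes
rsync -avh /x/ /mnt/user/cache_backup/

# Unmount once the copy is verified
umount /x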
Timbiotic Posted September 21, 2018

Just now, johnnie.black said: You need to copy to the array or an unassigned disk. Don't understand the question.

I have a second SSD I wanted to pool with the first one. Instead of copying to the array, is it possible to copy to the unassigned disk and then make it the new cache drive with all the files in place? Then format the old drive and add it to the pool, ending up with two SSDs in a new cache pool? Or should I just back up to the array, format, create the new pool, and move everything back (which seems longer)?
JorgeB Posted September 21, 2018

Yes, use Unassigned Devices to format the SSD with btrfs. When the copy is done, make sure you start the array once with no cache devices assigned so Unraid "forgets" them, then stop the array and assign just that device. After that you can add the other one, but it's best to clear it first; you can use:

blkdiscard /dev/sdX

Replace X with the correct letter.
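For example, to double-check which device letter belongs to the old SSD before wiping it (sdX is a placeholder):

# Confirm which device is the SSD to be cleared
lsblk -o NAME,SIZE,MODEL,SERIAL

# WARNING: irreversibly discards every block on the device
blkdiscard /dev/sdX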
Timbiotic Posted September 21, 2018

1 hour ago, johnnie.black said: Yes, use Unassigned Devices to format the SSD with btrfs. When the copy is done, make sure you start the array once with no cache devices assigned so Unraid "forgets" them, then stop the array and assign just that device. After that you can add the other one, but it's best to clear it first; you can use: blkdiscard /dev/sdX. Replace X with the correct letter.

I get:

root@lillis:/x# blkdiscard /dev/sdm
-bash: blkdiscard: command not found
root@lillis:/x#
Timbiotic Posted September 21, 2018

But the rest worked wonderfully! Is that an outdated command for clearing?
JorgeB Posted September 21, 2018

1 minute ago, Timbiotic said: Is that an outdated command for clearing?

No, try typing it instead of copy/paste; sometimes copy/paste from the forum adds extra characters.
Timbiotic Posted September 21, 2018

That worked typing it, thanks so much. I have a fully functional cache pool and docker setup again! That being said, is it normal not to see the pool space? Is an Unraid cache pool like RAID 0, where it uses both disks' space, or is it mirrored like RAID 1? Never mind, I found this in the FAQ.
JorgeB Posted September 21, 2018

Default is raid1, though it can be changed. Also be aware that if you use two different-size devices, total space will be incorrectly reported in all but the single profile.
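You can always check the real profiles and allocation from the command line, assuming the default /mnt/cache mount point:

# Shows the data/metadata profiles (raid1, single, ...) and estimated free space
btrfs filesystem usage /mnt/cache

# Per-device allocation for the pool
btrfs filesystem show /mnt/cache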
Timbiotic Posted September 21, 2018

So if I change it, will it wipe the disks in the process? Will I need to back them up first? I don't care about the performance gain, so would I do the JBOD one for redundancy? This one: -dconvert=single -mconvert=raid1
JorgeB Posted September 21, 2018

1 minute ago, Timbiotic said: So if I change it, will it wipe the disks in the process?

No, conversion is done online and no data is touched, and while it's a low-risk operation, a backup of anything important is always a good idea.

2 minutes ago, Timbiotic said: I don't care about the performance gain, so would I do the JBOD one for redundancy?

For redundancy use the default raid1 profile; there's no redundancy with the single profile (except for metadata), so if one device fails the whole pool is lost.
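If you do decide to convert, it's a single balance run against the mounted pool, assuming the default /mnt/cache mount point:

# Keep the default: data and metadata mirrored on both SSDs
btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/cache

# The variant you quoted: data without duplication ("single"), metadata
# still mirrored; losing one device loses the pool's data
# btrfs balance start -dconvert=single -mconvert=raid1 /mnt/cache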
Timbiotic Posted September 21, 2018

Thanks so much for all of your help, I am exactly where I want to be again.
martinf Posted March 31, 2020

I'm on 6.8.3 and have the same problem with btrfs. The docker image gets corrupted, btrfs gets corrupted, and I have to wipe and rebuild docker. (Thankfully, I found the template function, so reinstalling is quick.) I have tried three different controllers (motherboard SATA, a SATA controller, and a Dell LSI SAS/SATA controller). I have tried three different SSDs, in mirror and single, with and without encryption. It doesn't matter: after a couple of weeks or so it gets corrupted and the SSD (btrfs) goes read-only. Last time, I started the array in maintenance mode, was able to run a filesystem check on the disk with the recover/fix option, then remounted the disks and scrubbed. Now it is working again. I don't really care about the corruption part, I can always rebuild. But why is it happening in the first place? A RAM or motherboard issue?

Cheers
Martin
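From memory, the recovery sequence was something like this, with the device unmounted in maintenance mode (sdX1 is a placeholder, and btrfs check --repair is widely considered a last resort that can make a badly damaged filesystem worse):

# Read-only check first, on the unmounted device
btrfs check --readonly /dev/sdX1

# The repair pass (last resort; only because the read-only check found fixable damage)
btrfs check --repair /dev/sdX1

# After starting the array normally again, scrub the mounted pool
btrfs scrub start -B /mnt/cache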
JorgeB Posted April 1, 2020

10 hours ago, martinf said: RAM

This would be the #1 suspect; btrfs is very susceptible to bad RAM and gets corrupted very quickly with it.
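Memtest86+ on the Unraid boot menu is the usual way to test the RAM itself; you can also watch btrfs's per-device error counters to see whether the corruption keeps coming back:

# Per-device corruption/IO error counters (they persist across reboots)
btrfs device stats /mnt/cache

# After swapping or removing RAM, zero the counters and watch for new errors
btrfs device stats -z /mnt/cache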
martinf Posted April 2, 2020

Thanks johnnie.black, I'll start pulling out some of the RAM then, and buy a new server when New Zealand is back in business...

Cheers
Martin