jeffreywhunter (Author) — Posted July 8, 2015

Ha! Yep, everything is on the cache drive...

root@HunterNAS:/mnt# ls
disk1/   disk2/  disk4/  disk6/  disk8/  test/  user0/
disk10/  disk3/  disk5/  disk7/  disk9/  user/
root@HunterNAS:/mnt# cd test
root@HunterNAS:/mnt/test# ls
appdata/  docker.img

So what's the next step? Just reboot?
jeffreywhunter (Author) — Posted July 8, 2015

So I copied all the files from the /mnt/test directory where we mounted the cache drive (hoping that if we have to start over I can just copy the files back and be done). I just did a straight `cp -R`... Should I have copied with other options (i.e., are there attributes that a straight `cp` would not capture)? What would be the next step to recover the unmountable cache drive?
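To the question above: a plain `cp -R` copies file contents but does not necessarily preserve ownership, permissions, timestamps, or symlinks. A minimal sketch of an archive-style copy, with illustrative paths matching the thread:

```shell
# -a (archive) is equivalent to -dR --preserve=all on GNU cp:
# it keeps permissions, ownership, timestamps, and copies
# symlinks as symlinks instead of following them.
# Paths are examples only; adjust to your mount points.
cp -a /mnt/test/. /mnt/disk1/test/
```

The trailing `/.` on the source copies the directory's contents rather than nesting another `test/` directory inside the destination.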
JonathanM — Posted July 8, 2015

Well, at this point, since you have the files, I'd just reassign the drive back to the cache slot, let unRAID format it again, and copy the files back. If you want to experiment a little more, perhaps we could try another command or two to get the 750 back as a single drive. It's a little curious that it was unable to break the RAID1 with the `delete missing` command. Perhaps the correct command would be `btrfs device delete /dev/sde1 /mnt/test`, since the other part of the RAID wasn't actually missing from the system. Now that I think about it, that makes more sense: the btrfs mount is supposed to find all of its pieces automatically, and you didn't actually remove the other SSD.
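Before attempting the delete, it can help to see what btrfs thinks the pool contains. A sketch, assuming the degraded pool is mounted at /mnt/test as in the thread:

```shell
# List the devices btrfs has registered for this filesystem
btrfs filesystem show /mnt/test

# Show how data and metadata chunks are allocated
# (e.g. Data, RAID1 vs Data, single)
btrfs filesystem df /mnt/test

# Then attempt to drop the still-present second device
btrfs device delete /dev/sde1 /mnt/test
```

If `filesystem df` still reports RAID1 profiles, the delete will be refused, which is exactly what happens in the next post.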
jeffreywhunter (Author) — Posted July 8, 2015

No joy in Mudville...

btrfs device delete /dev/sde1 /mnt/test
ERROR: error removing the device '/dev/sde1' - unable to go below two devices on raid1

Happy to try some other ideas. Since I have the files backed up (any specific considerations for backing them up? Just a simple copy?), perhaps the easiest path is to take the array offline, format the cache, and move on?
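For reference, that error is btrfs enforcing the RAID1 minimum of two devices. The usual way around it is to convert the data and metadata profiles to `single` with a balance first, and only then remove the device; whether this worked reliably on the btrfs tools shipped in 2015 is another question. A hedged sketch, using the device and mount point from the thread:

```shell
# Convert data (-d) and metadata (-m) chunks from raid1 to single.
# -f forces the conversion even though it reduces redundancy.
btrfs balance start -f -dconvert=single -mconvert=single /mnt/test

# With no raid1 chunks left, the pool can drop below two devices
btrfs device delete /dev/sde1 /mnt/test
```

After this, the remaining device holds a complete single-profile filesystem.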
JonathanM — Posted July 8, 2015

After a little more research, I can't find a way to convert a btrfs RAID1 back to single-drive status. It's apparently assumed you will always want to add protection, never remove it. To back up the files, `cp` should be fine; rsync with checksum verification would be better, though. After you are sure you have everything you need backed up, I think it's time to format and copy the data back on.

This should probably serve as a warning for others: perhaps a strongly worded note that once you assign multiple devices to a btrfs cache pool, you can't go back to a single device without reformatting. It would be nice if a moderator or someone on the paid staff at limetech would confirm my findings, though.
jeffreywhunter (Author) — Posted July 9, 2015

So I was able to recover. Followed these steps...

1. Mounted the "unmountable" drive in another directory (mount -o degraded /dev/sdf /mnt/test/)
2. Copied the files using rsync to an existing disk (rsync -avHW --no-compress --progress /mnt/test/ /mnt/disk1/test)
3. Stopped the array and selected NO DISK for the cache disk.
4. Started the array, stopped the array, and rebooted.
5. After the reboot, stopped the array, selected the SSD for the cache drive, and started the array.
6. Copied the files back to the cache disk.
7. Rebooted.

When it came back up, all was correct! Thanks for your help!
Archived
This topic is now archived and is closed to further replies.