PicoCreator

Everything posted by PicoCreator

  1. Ahh I see - that was a quick "Just In Time" save you did there. It worked: through whatever magic removing and adding the device back in does, the cache is back online. Because I was worried about starting up the array again, I was trying to find a spare external HDD to transfer the data out 😅 so this really saved me a lot of time. Thanks @JorgeB
  2. Faced a similar experience when a reboot with a bad SATA card occurred, knocking out my entire cache array. Because no actual drives were lost, after rewiring to another SATA port (SATA 2, sadly) I am able to see the entire "btrfs filesystem", even if I am unable to add the drives back to unraid (they show up as unassigned, with a warning that all data will be formatted when reassigning).

     $ btrfs filesystem show
     Label: none  uuid: 0bfdf8d7-1073-454b-8dec-5a03146de885
         Total devices 6 FS bytes used 1.37TiB
         devid  2 size 111.79GiB used 37.00GiB path /dev/sdo1
         devid  3 size 223.57GiB used 138.00GiB path /dev/sdm1
         devid  4 size 223.57GiB used 138.00GiB path /dev/sdi1
         devid  5 size 1.82TiB used 1.60TiB path /dev/sdd1
         devid  6 size 1.82TiB used 1.60TiB path /dev/sde1
         devid  7 size 111.79GiB used 37.00GiB path /dev/sdp1
     ... there are probably other btrfs disk drives listed here if you have them as well ...

     While attempting to remount this cache pool using the steps found at

     I was unfortunately faced with an error of

     $ mount -o degraded,usebackuproot,ro /dev/sdo1 /dev/sdm1 /dev/sdi1 /dev/sdd1 /dev/sde1 /dev/sdp1 /recovery/cache-pool
     mount: bad usage

     So alternatively I mounted using the UUID (with /recovery/cache-pool being the recovery folder I created):

     $ mount -o degraded,usebackuproot,ro --uuid 0bfdf8d7-1073-454b-8dec-5a03146de885 /recovery/cache-pool

     With that I presume I can then safely remove the drives from the cache pool (for the last 2 disks that were left), and slowly manually reorganize and recover the data.
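     For anyone hitting the same "mount: bad usage" error: as far as I can tell, mount only accepts a single source, so a multi-device btrfs pool is mounted by pointing at any one member device (or at the filesystem UUID) after the kernel has scanned for the others. A minimal sketch of that flow, reusing the pool UUID and /recovery/cache-pool mount point from above - adjust the member device to whatever your own pool shows:

     # Let the kernel discover all members of multi-device btrfs filesystems
     $ btrfs device scan
     # Create the recovery mount point if it does not exist yet
     $ mkdir -p /recovery/cache-pool
     # Mount read-only and degraded via any single member device...
     $ mount -o degraded,usebackuproot,ro /dev/sdo1 /recovery/cache-pool
     # ...or equivalently via the filesystem UUID
     $ mount -o degraded,usebackuproot,ro UUID=0bfdf8d7-1073-454b-8dec-5a03146de885 /recovery/cache-pool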
  3. Chiming in here - my situation was a SATA card failure, which was replaced - however between the reboots the auto-start pretty much killed the cache the same way as https://forums.unraid.net/topic/94233-solved-rebuild-cache-pool/ So despite having no disk failure itself, I'm now trying to figure out how to rebuild the whole btrfs array to migrate the approximately 2TB of VM data that I have pinned to be exclusively on cache 😅 My guess is that, because no drives were really lost, I should be able to perform the recovery in the next 24 hours - but it isn't exactly a pleasant experience, needing to sink time into the recovery process due to such an issue. My array now has auto-start disabled; auto-start really should not be the default behaviour if there is a risk of permanent data loss - which we would normally block from the UI and warn about anyway.
  4. Personally, I create docker containers as part of my work, via automated builds.
     1. You will need to first understand the docker toolchain and structure.
     2. Understand the "Dockerfile" and how it is used to build a docker image (https://docs.docker.com/engine/reference/builder/).
     3. Lots of linux command line, for whatever specifically you want to do.
     For number 2, the Dockerfile reference is what I am constantly referring to for all its commands. It will also greatly help if you have the docker toolchain installed on your local machine, so you can rapidly iterate on modifying / testing the Dockerfile build process (see the sketch below). Note however that "rapid" is subjective here, due to how long some of these Dockerfile images take to build. Rapid, in this case, is similar to programming 10+ years ago, where you sometimes sit for 2 minutes and wait for your "Dockerfile code to compile". Still, it is generally much faster than uploading the Dockerfile to Docker Hub and triggering the automatic build.
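     To make that local iteration loop concrete, here is a minimal, hypothetical example - the base image, the installed package and the image name are placeholders picked for illustration, not anything from this thread:

     # A minimal Dockerfile (see the Dockerfile reference linked above for each instruction)
     FROM alpine:3.19
     RUN apk add --no-cache curl
     CMD ["curl", "--version"]

     # Local iteration loop: rebuild and run after each Dockerfile edit
     $ docker build -t my-test-image .
     $ docker run --rm my-test-image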
  5. @gundamguy How about without the GUI? I would be fine with the command line. Specifically question 3, because I am contemplating between n-copies >= 3 vs parity as well.
  6. Hey all, I am planning to replace my existing off-the-shelf NAS (2x3TB RAID 1) with a new custom build (RPC-4224 with 6 drives salvaged from various computers), and am waiting for my unraid thumb-drive and the hardware parts to arrive. While I am familiar with linux and LVM due to their heavy usage at work, I am currently exploring the use of BTRFS for my new NAS replacement, and plan to fiddle around with it once the parts arrive. My questions are broken into the following 3 parts (a rough command-line sketch for question 2 follows the list):
     1. Does the unraid btrfs UI allow me to set up with n-copies > 2?
     2. Can I rebalance btrfs n-copies from 2 to higher when I add in new drives in the future?
     3. Since every file already includes a CRC checksum, if I set up with n-copies of 3 or larger, is there a use for a parity drive? CRC already detects bitrot, and when n-copies >= 3, I can recover from any one failure / rot using the remaining copies.
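     For question 2, my understanding (worth double-checking against your kernel and btrfs-progs versions) is that btrfs exposes 3- and 4-copy mirroring as the raid1c3 / raid1c4 profiles (kernel 5.5+), and you change the number of copies on an existing pool with a balance convert rather than a rebuild. A rough command-line sketch, with /mnt/cache and /dev/sdX1 as stand-in paths:

     # Add the new drive to the existing pool first
     $ btrfs device add /dev/sdX1 /mnt/cache
     # Convert data and metadata to 3-copy mirroring (raid1c3)
     $ btrfs balance start -dconvert=raid1c3 -mconvert=raid1c3 /mnt/cache
     # Confirm the new profiles and space usage
     $ btrfs filesystem usage /mnt/cache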