
PicoCreator


Posts posted by PicoCreator

  1. Ahh, I see - that was some quick "Just In Time" save you did there.

    It worked; through whatever magic removing and adding back in does, the cache is back online.

    Because I was worried about starting up the array again, I was trying to find a spare external HDD to transfer the data out 😅 so this really saved me a lot of time.

    Thanks @JorgeB

  2. Faced a similar experience when a reboot with a bad SATA card occurred, knocking out my entire cache array.

    Because no actual drives were lost, after rewiring to another SATA port (SATA 2, sadly) I am able to see the entire btrfs filesystem, even though I am unable to add the drives back to Unraid (they show up as unassigned, with a warning that all data will be formatted when reassigning).

     

    $ btrfs filesystem show
    Label: none  uuid: 0bfdf8d7-1073-454b-8dec-5a03146de885
            Total devices 6 FS bytes used 1.37TiB
            devid    2 size 111.79GiB used 37.00GiB path /dev/sdo1
            devid    3 size 223.57GiB used 138.00GiB path /dev/sdm1
            devid    4 size 223.57GiB used 138.00GiB path /dev/sdi1
            devid    5 size 1.82TiB used 1.60TiB path /dev/sdd1
            devid    6 size 1.82TiB used 1.60TiB path /dev/sde1
            devid    7 size 111.79GiB used 37.00GiB path /dev/sdp1
            
     ... there will probably be other btrfs disks listed here as well, if you have them ...

     

    While attempting to remount this cache pool using the steps found at 

    I was unfortunately faced with the following error:

    $ mount -o degraded,usebackuproot,ro /dev/sdo1 /dev/sdm1 /dev/sdi1 /dev/sdd1 /dev/sde1 /dev/sdp1  /recovery/cache-pool
    mount: bad usage
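
    (As far as I can tell, the "bad usage" is because mount only accepts a single device followed by the mount point, so listing every pool member as an argument isn't valid syntax. A rough sketch of the usual multi-device approach, reusing the device names from the output above, would be to let the kernel scan for btrfs members first and then mount any single one of them:)

    $ btrfs device scan                                                  # register all btrfs member devices with the kernel
    $ mount -o degraded,usebackuproot,ro /dev/sdo1 /recovery/cache-pool  # mounting any one member brings in the whole pool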

     

     

    So alternatively I mounted using the UUID (with /recovery/cache-pool being the recovery folder I created):

     

    $ mount -o degraded,usebackuproot,ro --uuid 0bfdf8d7-1073-454b-8dec-5a03146de885  /recovery/cache-pool

     

    With that, I presume I can then safely remove the drives from the cache pool (for the last two disks that were left) and slowly, manually reorganize and recover the data (rough sketch below).
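
    (A minimal sketch of what I mean by recovering the data, assuming the degraded read-only mount above succeeded; the /mnt/user/recovered destination is just a placeholder for wherever you have enough free space:)

    $ btrfs filesystem df /recovery/cache-pool                            # sanity check: confirm the pool is mounted and see how much is on it
    $ rsync -avh --progress /recovery/cache-pool/ /mnt/user/recovered/    # copy everything off; destination path is hypothetical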

     

  3. Chiming in here - my situation was a SATA card failure, which was replaced - however, between the reboots the auto-start pretty much killed the cache in the same way as

     

    https://forums.unraid.net/topic/94233-solved-rebuild-cache-pool/

     

    So despite having no disk failure as such, I'm now trying to figure out how to rebuild the whole btrfs array and migrate the roughly 2TB of VM data that I have pinned to live exclusively on the cache 😅

     

    My guess is that, because no drives were actually lost, I should be able to complete the recovery in the next 24 hours - but it isn't exactly a pleasant experience having to sink time into the recovery process over an issue like this.

    My array now has auto-start disabled. Auto-start really should not be the default behaviour when there is a risk of permanent data loss - something the UI would normally block and warn about anyway.
