snowboardjoe

Community Answers

  1. snowboardjoe's post in unRAID unstable a few hours after upgrading to 6.12.6, BTRFS error with cache pool was marked as the answer   
    Went through several other steps, but did not make any progress. Ran a restore to an external drive, and that grabbed everything just fine. Then, as a last-ditch effort, I ran the repair command:
     
    root@laffy:/etc# btrfs check --repair --force /dev/sdg1
    enabling repair mode
    Opening filesystem to check...
    WARNING: filesystem mounted, continuing because of --force
    Checking filesystem on /dev/sdg1
    UUID: 8bdd3d07-cbb0-4d53-a9f2-da67099186ea
    [1/7] checking root items
    Fixed 0 roots.
    [2/7] checking extents
    parent transid verify failed on 2151677952 wanted 14578466 found 14578474
    parent transid verify failed on 2151677952 wanted 14578466 found 14578474
    parent transid verify failed on 2151677952 wanted 14578466 found 14578474
    Ignoring transid failure
    super bytes used 258022117376 mismatches actual used 258022100992
    parent transid verify failed on 2151677952 wanted 14578466 found 14578474
    Ignoring transid failure
    No device size related problem found
    [3/7] checking free space tree
    [4/7] checking fs roots
    parent transid verify failed on 2151677952 wanted 14578466 found 14578474
    Ignoring transid failure
    ...identical responses deleted for brevity...
    [5/7] checking only csums items (without verifying data)
    parent transid verify failed on 2151677952 wanted 14578466 found 14578474
    Ignoring transid failure
    [6/7] checking root refs
    Recowing metadata block 2151677952
    parent transid verify failed on 2151677952 wanted 14578466 found 14578474
    Ignoring transid failure
    [7/7] checking quota groups skipped (not enabled on this FS)
    found 516044218368 bytes used, no error found
    total csum bytes: 415007296
    total tree bytes: 1490927616
    total fs tree bytes: 743276544
    total extent tree bytes: 180305920
    btree space waste bytes: 374203586
    file data blocks allocated: 943655108608
     referenced 490961231872
    Ran the check one more time and it came back clean. Remounted the filesystem read-write and it remained stable (previously it would go read-only within about 30 seconds). Did a full reboot and all services came back online, including all containers. Watching for stability at this point.
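    The sequence above (copy everything off first, then attempt the in-place repair, then re-check) can be sketched as a short script. The device path `/dev/sdg1` is from the output above; the backup path and the dry-run `run` wrapper are my additions so the destructive steps are printed for review rather than executed blindly.

    ```shell
    #!/bin/sh
    # Sketch of the recovery sequence, not a drop-in tool.
    DEV=/dev/sdg1                     # cache-pool device from the post
    BACKUP=/mnt/external/cache-restore # hypothetical external-drive path

    # Dry-run guard: prints each command instead of running it.
    # Drop the "echo" only after you have reviewed the plan.
    run() { echo "$@"; }

    # 1. Pull files off the damaged filesystem first (read-only, safe).
    run btrfs restore -v "$DEV" "$BACKUP"

    # 2. Last-ditch repair. btrfs check --repair can make damage worse,
    #    so only run it once the restore above has succeeded.
    run btrfs check --repair --force "$DEV"

    # 3. Re-check: this should come back clean before remounting RW.
    run btrfs check "$DEV"
    ```

    Doing the restore before `--repair` matters: the repair rewrites metadata in place, so the external copy is the only safety net if it goes wrong.
    
    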
  2. snowboardjoe's post in 6.12.2 - Dashboard is no longer accessible was marked as the answer   
    The Disk Location plugin is installed. I just updated that plugin and all Docker containers, and the Dashboard came back. Not entirely sure which change resolved the problem, but the container update likely spun up all the disks, matching the issue you found. I will keep an eye on this.