• [6.8.3 -> 6.9.0] After upgrading from 6.8.3 to 6.9.0 I have no Docker Containers or VMs


    Boomháuer
    • Solved

    I'm not as worried about the VMs as I am about the Docker Containers; all of my Plugins are still installed.

     

    I have attached the diagnostics file below.  I did try downgrading to 6.8.3 and that didn't fix the problem... still no containers, so I'm back on 6.9.0.

     

    Any help would be great, thank you.

    unraid-diagnostics-20210301-2215.zip





    Recommended Comments

    Well, I may have found my problem.  My cache drive is saying that it's unmountable and that's where my appdata and VMs were hanging out.

     

    Quote

    Mar 1 23:08:16 unRAID kernel: sd 7:0:2:0: [sdd] 976773168 512-byte logical blocks: (500 GB/466 GiB)
    Mar 1 23:08:16 unRAID kernel: sd 7:0:2:0: [sdd] Write Protect is off
    Mar 1 23:08:16 unRAID kernel: sd 7:0:2:0: [sdd] Mode Sense: 73 00 00 08
    Mar 1 23:08:16 unRAID kernel: sd 7:0:2:0: [sdd] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
    Mar 1 23:08:16 unRAID kernel: sdd: sdd1
    Mar 1 23:08:16 unRAID kernel: sd 7:0:2:0: [sdd] Attached SCSI disk
    Mar 1 23:08:16 unRAID kernel: BTRFS: device fsid 6246aab7-33c0-464d-8d19-a8cf3bd9be4b devid 1 transid 40519520 /dev/sdd1 scanned by udevd (1377)
    Mar 1 23:08:34 unRAID emhttpd: WDC_WDS500G2B0A-00SM50_191470802264 (sdd) 512 976773168
    Mar 1 23:08:34 unRAID emhttpd: import 30 cache device: (sdd) WDC_WDS500G2B0A-00SM50_191470802264
    Mar 1 23:08:34 unRAID emhttpd: read SMART /dev/sdd
    Mar 1 23:08:36 unRAID root: /usr/sbin/wsdd
    Mar 1 23:08:39 unRAID emhttpd: shcmd (37): mount -t btrfs -o noatime,space_cache=v2 /dev/sdd1 /mnt/cache
    Mar 1 23:08:39 unRAID kernel: BTRFS info (device sdd1): enabling free space tree
    Mar 1 23:08:39 unRAID kernel: BTRFS info (device sdd1): using free space tree
    Mar 1 23:08:39 unRAID kernel: BTRFS info (device sdd1): has skinny extents
    Mar 1 23:08:39 unRAID kernel: BTRFS info (device sdd1): enabling ssd optimizations
    Mar 1 23:08:39 unRAID kernel: BTRFS info (device sdd1): creating free space tree
    Mar 1 23:08:39 unRAID kernel: BTRFS critical (device sdd1): corrupt leaf: block=862728765440 slot=54 extent bytenr=1435058176 len=4096 invalid generation, have 9837500584236705385 expect (0, 40519521]
    Mar 1 23:08:39 unRAID kernel: BTRFS error (device sdd1): block=862728765440 read time tree block corruption detected
    Mar 1 23:08:39 unRAID kernel: BTRFS: error (device sdd1) in btrfs_create_free_space_tree:1189: errno=-5 IO failure
    Mar 1 23:08:39 unRAID kernel: BTRFS warning (device sdd1): failed to create free space tree: -5
    Mar 1 23:08:39 unRAID kernel: BTRFS error (device sdd1): commit super ret -30
    Mar 1 23:08:39 unRAID kernel: BTRFS error (device sdd1): open_ctree failed
    Mar 1 23:08:39 unRAID root: mount: /mnt/cache: can't read superblock on /dev/sdd1.
    Mar 1 23:08:42 unRAID root: /usr/sbin/wsdd

     

    SMART says that the drive is fine; am I screwed at this point?  I do have a backup of the appdata folder from about three weeks ago, so it's not the end of the world, I guess, but if I could get the cache working again that would be preferable.

     

    Thanks
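
    For reference, a non-destructive check can help confirm that this is filesystem metadata corruption rather than a failing drive, which would match SMART reporting the disk as healthy. A minimal sketch, assuming the device is still /dev/sdd1 as in the log above and the pool is not mounted:

        # read-only check; reports btrfs errors without modifying anything on disk
        btrfs check --readonly /dev/sdd1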

    Link to comment
    16 hours ago, JorgeB said:

    That means the tree block is corrupt; there are some recovery options here.

    @JorgeB thank you for the suggestion.  I was able to use option 1) Mount filesystem read only (safe to use) and copy the files off of the cache drive, except for one Ubuntu VM.  You saved me a bunch of time and I appreciate it greatly.

     

    Now to shut the server down and install the new cache drive.
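
    For anyone hitting the same thing, a minimal sketch of the read-only mount approach described above; the device name, mount point, and destination path are assumptions, and the exact commands in the linked recovery post may differ:

        mkdir -p /x
        # mount read-only, falling back to an older tree root if the current one is damaged
        mount -o ro,usebackuproot /dev/sdd1 /x
        # copy everything that is still readable to an array disk (assumed destination)
        rsync -av /x/ /mnt/disk1/cache_rescue/
        umount /x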

    Link to comment

    Want to add, I have this same issue. If I restore back to 6.8.3, I have to mount the cache drive, then stop the array and add it back in as a cache drive; then it starts correctly and works fine.

    If copying everything off and then formatting/restoring the data is the only option, that is fine, just worrisome.

    Link to comment
    1 hour ago, robry said:

    If I restore back to 6.8.3, I have to mount the cache drive then stop the array and add it back in as a cache drive then it starts correctly and works fine. 

    That's a different issue; please post the diagnostics after upgrading to v6.9.

     

    Link to comment
    12 hours ago, hamad said:

    I have the same issue... no more Docker or VMs after upgrading.

    The cache filesystem is corrupt. You can try going back to v6.8 to see if it still mounts there; if it does, back it up. Note that after downgrading you'll need to re-assign the pool devices. If it still doesn't mount, see my first post above for some recovery options.
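
    If the pool does mount again under v6.8, a minimal sketch of backing it up to an array disk before re-formatting (the destination path is an assumption; any location with enough free space works):

        # stop the Docker and VM services first so nothing is writing to the cache
        rsync -avh /mnt/cache/ /mnt/disk1/cache_backup/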

    Link to comment
    2 hours ago, JorgeB said:

    The cache filesystem is corrupt. You can try going back to v6.8 to see if it still mounts there; if it does, back it up. Note that after downgrading you'll need to re-assign the pool devices. If it still doesn't mount, see my first post above for some recovery options.

    Ok, I downgraded to 6.8.3 and everything is back to normal. So what is wrong with my cache in 6.9?

    How can I upgrade to 6.9 then?

    thanks

    Link to comment
    31 minutes ago, hamad said:

    So what is wrong with my cache in 6.9?

    The newer kernel can detect previously undetected corruption. You should back up your pool, re-format it, then restore the data, before or after upgrading back to v6.9.
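
    A minimal sketch of the restore side, assuming the backup was made as sketched above and the pool has been re-formatted from the webGUI (the re-format itself is done through the GUI, not the command line):

        # after the re-formatted pool is mounted again at /mnt/cache
        rsync -avh /mnt/disk1/cache_backup/ /mnt/cache/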

    Link to comment



