  • [6.11.3] Cache disk becomes read-only


    Tudisimo
    • Closed

    Hello, just reporting that after upgrading from 6.11.1 to 6.11.3, one of my three single-disk cache pools became read-only. I noticed because this is where I host my Docker containers; after a while they start crashing/malfunctioning because they can no longer write data.

     

    I restarted the server, restarted the Docker service, and unmounted/remounted the pool; nothing fixed it except rolling back. The Fix Common Problems plugin reports docker.img as being full (not the case), and above that the cache drive reports as read-only.

     

    Attached my diagnostics file in case it helps. I have no need to upgrade besides the CVEs, but thought I would let the team know. The affected pool was DLappdata.

     

    tower-diagnostics-20221111-1128.zip




    User Feedback

    Recommended Comments

    Diags are from v6.11.1, but btrfs is detecting data corruption on one of the cache devices:

     

    Nov 11 11:20:45 Tower kernel: BTRFS info (device nvme2n1p1): bdev /dev/nvme2n1p1 errs: wr 0, rd 0, flush 0, corrupt 9, gen 0

     

    Start by running a scrub.
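
    For reference, the scrub can also be started and monitored from the command line, something like this (assuming the pool is mounted at /mnt/dlappdata, adjust to the pool's actual mount point):

     

    btrfs scrub start /mnt/dlappdata

    btrfs scrub status /mnt/dlappdata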

    Link to comment

    Thanks Jorge,

     

    As mentioned, I had to roll back. I ran a scrub on 6.11.3 and rebooted afterwards, but it made no difference; the only solution was rolling back.

     

    Link to comment

    Difficult to say more without the v6.11.3 diags, but the pool is also showing issues on v6.11.1; besides the corruption mentioned above there's also this:

     

    Nov 11 11:20:46 Tower kernel: BTRFS error (device nvme2n1p1): incorrect extent count for 6093506871296; counted 8236, expected 8224

     

    Looks like it's just a log tree problem, so it might be fixable by zeroing the log. Make sure the pool is backed up before trying, then, with the array stopped:

     

    btrfs rescue zero-log /dev/nvme2n1p1

     

    Then start the array; if the pool mounts, run a scrub, and when it finishes post new diags.
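
    To confirm the pool is clean afterwards, the per-device error counters from the log above can also be checked (and reset once everything looks healthy) with btrfs device stats, for example (the mount point is an assumption, adjust as needed):

     

    btrfs device stats /mnt/dlappdata

    btrfs device stats -z /mnt/dlappdata

     

    The -z option resets the counters to zero after printing them, so any new errors that appear later will stand out.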

    Link to comment

    Thanks Jorge, just to close this one out: I decided to reformat the cache drive and reassign it to the pool. I updated to 6.11.3 afterwards and it has been rock solid since (5+ days).

    Looks like 6.11.3 and 6.11.1 somehow handled this corruption differently (my guess is that 6.11.3 is less tolerant and would only mount the pool read-only).

    Cheers,

    Luis

    • Like 1
    Link to comment



