[6.12.4] [ZFS] Lost data while moving across shares/datasets.


    sabertooth
    • Urgent

      Finally took the plunge and did an upgrade to 6.12.4 from 6.11.5.

      6.11.5:

    • ZFS(RAIDZ1)
    • User shares with the same names as the ZFS datasets, each with a link (inside the share) pointing to the corresponding dataset (roughly as sketched below).
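
      For illustration, the layout was along these lines (exact paths from memory and illustrative only):

        # a symlink inside the user share pointing at the ZFS dataset
        ln -s /mnt/zfs/downloads /mnt/user/downloads/data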


      6.12.4:

    • Create a new pool, and add all HDDs.
    • Change primary storage for all shares to zfs pool.
    • Global Share Settings -> Permit exclusive shares.
       

      So far so good; however, some shares (within the zfs pool) show Exclusive access: No.
      After a reboot, things remain the same.
     
      In order to enable exclusive shares, I did the following (see the command sketch after this list):

    • Create a new dataset on the zfs pool.
    • Move the data from the old share to the new share (mv source destination).
    • Delete the old share.
    • Rename the new share.
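
      In command terms, and assuming the old share is itself a dataset, the intent was roughly this (share and dataset names are placeholders, not my real ones):

        # create the new dataset on the pool
        zfs create zdata/myshare_new
        # move the contents of the old share into it
        mv /mnt/zdata/myshare/* /mnt/zdata/myshare_new/
        # remove the now-empty old dataset
        zfs destroy zdata/myshare
        # rename the new dataset to the original share name
        zfs rename zdata/myshare_new zdata/myshare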

     
      I tested this with a smaller dataset/share first, before moving on to the dataset(s)/share(s) with a hefty amount of data.

     
      And this is when I lost data after the move.
      I should have paid attention to the ZFS Master plugin, which showed no increase in dataset size even though close to 800 GB of data was moved.
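
      In hindsight, a quick sanity check after each move would have caught this early, e.g. (names again placeholders):

        # verify the destination dataset actually grew after the move
        zfs get used zdata/myshare
        # or compare the two directory trees directly
        du -sh /mnt/zdata/myshare_old /mnt/zdata/myshare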

     
      Any ideas as to what went wrong? For the first time, I am now worried that other datasets might turn up empty.
      Also, how can one enable Exclusive access without having to go through this circus?

     





    Recommended Comments



    Quote

    So far so good; however, some shares (within the zfs pool) show Exclusive access: No.
      After a reboot, things remain the same.

     

    Most likely the data for that share existed in more than one pool; we'd need the diagnostics from that time to confirm.

    Link to comment

    One of your shares is set to cache only but has data on disk1

     

    Quote

    p------t                          shareUseCache="only"    # Share exists on zdata, disk1

     

    Was that one involved? 

    Exclusive access requires primary storage only and the share must not exist on any other disk/pool
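
    A quick way to check is to look for the share's folder on the array disks and on each pool; it should show up in exactly one place (share name below is a placeholder):

        # any hit outside the intended pool disqualifies the share
        ls -d /mnt/disk*/myshare /mnt/zdata/myshare 2>/dev/null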

    Link to comment
    21 minutes ago, Kilrah said:

    Exclusive access requires primary storage only and the share must not exist on any other disk/pool

    Correct, you must fix that or exclusive mode won't work.

    Link to comment
    29 minutes ago, Kilrah said:

    One of your shares is set to cache only but has data on disk1

     

     

    Was that one involved? 

    Exclusive access requires primary storage only and the share must not exist on any other disk/pool

    No, this share has 2 TB+ of data and is from ZFS on 6.11.5.

    Link to comment
    22 minutes ago, JorgeB said:

    Correct, you must fix that or exclusive mode won't work.

    That share is thankfully intact; it exists only on the ZFS pool.
    How does one FIX this in the first place?

    Why close the BUG? Is there a way for me to recover deleted data?

    Edited by sabertooth
    Link to comment
    14 minutes ago, sabertooth said:

    How does one FIX this in the first place?

    You need to move or delete that share from disk1, you can do that manually or by using the mover with the correct settings, see the GUI help.
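
    Done manually, that would be something along these lines (share name is a placeholder; verify the copy before deleting anything):

        # copy the stray files from disk1 to the pool, preserving attributes
        rsync -avX /mnt/disk1/myshare/ /mnt/zdata/myshare/
        # only after checking the copy, remove the leftovers from disk1
        rm -rf /mnt/disk1/myshare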

     

    15 minutes ago, sabertooth said:

    Why close the BUG? Is there a way for me to recover deleted data?

    Can you explain step by step what you did that resulted in lost data, so we can see if it can be reproduced? It may not be a bug but user error.

     

     

     

    Link to comment
    2 minutes ago, JorgeB said:

    You need to move or delete that share from disk1, you can do that manually or by using the mover with the correct settings, see the GUI help.

    The share in question is NOT on disk1 but on the pool (zdata). As clearly mentioned in the first post, the share was created with a link to the ZFS dataset in 6.11.5.

     

    2 minutes ago, JorgeB said:

    Can you explain step by step what you did that resulted in lost data, so we can see if it can be reproduced? It may not be a bug but user error.

    Please see the first post.

    Edited by sabertooth
    Link to comment
    6 minutes ago, sabertooth said:

    The share in question is NOT on disk1 but on the pool (zdata). As clearly mentioned in the first post, the share was created with a link to the ZFS dataset in 6.11.5.

    That's not what this shows:

    57 minutes ago, Kilrah said:

    p------t                          shareUseCache="only"    # Share exists on zdata, disk1

     

    7 minutes ago, sabertooth said:

    Please see the first post.

    I cannot replicate based on that.

    Link to comment
    13 minutes ago, JorgeB said:

    That's not what this shows:


    If that is the case, why are the contents mapped to the corresponding dataset on the ZFS pool?
    So, did the upgrade mess it up? I.e., after changing the primary storage (6.12.4) to a dataset on the zfs pool, the original share (6.11.5) was left on disk1.
     

    13 minutes ago, JorgeB said:

    I cannot replicate based on that.

    I have lost close to 800 GB of data; I won't be trying to lose more.

    Edited by sabertooth
    Link to comment
    4 minutes ago, sabertooth said:

    If that is the case, why are the contents mapped to the corresponding dataset on the ZFS pool?

    That's irrelevant; currently that share also exists on disk1. Please read what I wrote:

     

    24 minutes ago, JorgeB said:

    You need to move or delete that share from disk1, you can do that manually or by using the mover with the correct settings, see the GUI help.

     

    5 minutes ago, sabertooth said:

    I have lost close to 800 GB of data; I won't be trying to lose more.

    Without a detailed way to reproduce, we cannot see what the problem was or confirm whether it is a bug.

    Link to comment
    1 minute ago, JorgeB said:

    That's irrelevant; currently that share also exists on disk1. Please read what I wrote:

    OP said this isn't the share that was involved

     

    45 minutes ago, sabertooth said:

    No, this share has 2 TB+ of data and is from ZFS on 6.11.5.

    But which one was it then?

    Link to comment
    Just now, Kilrah said:

    OP said this isn't the share that was involved

    Apparently reading is hard.
     

    1 minute ago, Kilrah said:

    But which one was it then?

    downloads :( I am still searching for an old copy.

    Link to comment
    8 minutes ago, Kilrah said:

    OP said this isn't the share that was involved

    OK, sorry about that, but as mentioned we'd need the diags showing the problem, i.e. from when the share was not showing exclusive.

    Link to comment

    So, let me try to explain the problem again.


    6.11.5:

    • ZFS (RAIDZ1) with a zpool called zfs.
    • User share, e.g. downloads, with a link (inside the share) to a dataset called downloads in the zpool called zfs.

     
     Upgrade to 6.12.4
     6.12.4:

    • Create a new pool - zdata, and add all HDDs.
    • Global Share Settings -> Permit exclusive shares.
    • Change primary storage for downloads to zfs pool zdata.

     
    Now, since the exclusive access is shown as NO

    • renamed old share to downloads_old.
    • created new share called downloads with primary storage as zdata.
    • move data from downloads_old to downloads.
    • Remove old share downloads_old.
    • ^^^^^^^^^^ This is when I lost data ^^^^^^^^^^^^^^

    NOTE: Before trying on downloads, I had tried with other shares.


    After this disaster, I left my last share as-is to avoid more data loss. This is the share you see with Exclusive access marked as NO.

    I had TWO issues:
    a) Loss of data, and
    b) How to enable exclusive access without going through the arduous process of creating a new share and copying close to 3 TB of data.



     

    Edited by sabertooth
    Link to comment

     

    11 minutes ago, sabertooth said:

    b) How to enable exclusive access without going through the arduous process of creating a new share and copying close to 3 TB of data.


    You should not have to go through anything complicated. You just need to make sure that there are no files (or folders) for that share on the array or on any other pool, and that the share has no secondary storage set. If any of these conditions is not met, the Exclusive Share setting is automatically set to NO.
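
    The shareUseCache line quoted from the diagnostics earlier comes from the share's config file; on a stock system these live under /boot/config/shares, and the key names below are my assumption based on that line:

        # inspect the share's storage settings (share name is a placeholder)
        grep -E 'shareUseCache|shareCachePool' /boot/config/shares/myshare.cfg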

    Link to comment
    43 minutes ago, sabertooth said:

    Now, since the exclusive access is shown as NO

    You should have saved the diags at that time and asked for help; most likely the share existed somewhere else besides that pool. It would have been an easy fix, with no need to rename the share.

    Link to comment
    49 minutes ago, sabertooth said:

    renamed old share to downloads_old.

    How did you rename the share? If it was a zfs dataset, mv wouldn't work; at most it would have renamed the mount point, leaving the dataset still there.
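
    For reference, renaming an actual dataset is done with zfs rename rather than mv (names illustrative):

        # renames the dataset itself, and with it the default mount point
        zfs rename zdata/downloads zdata/downloads_old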

     

    What name was that share? Also post the output of:

    zfs list

    and

    zfs mount

     

    Link to comment
    8 minutes ago, JorgeB said:

    You should have saved the diags at that time and asked for help; most likely the share existed somewhere else besides that pool. It would have been an easy fix, with no need to rename the share.

    Hindsight is 20/20 😭, expensive mistake to say the least.

    Link to comment
    3 minutes ago, JorgeB said:

    How did you rename the share? If it was a zfs dataset, mv wouldn't work; at most it would have renamed the mount point, leaving the dataset still there.

     

    From the Unraid GUI. By move, I meant the contents, i.e. mv source_dataset/* destination_dataset/. The dataset was renamed as well, as per the ZFS Master plugin and zfs list.

     

    zfs list
    NAME                USED  AVAIL     REFER  MOUNTPOINT
    zdata              5.47T  12.6T      805G  /mnt/zdata
    zdata/cache         232G  12.6T      231G  /mnt/zdata/cache
    zdata/downloads     153G  12.6T      153G  /mnt/zdata/downloads
    zdata/isos          469G  12.6T      469G  /mnt/zdata/isos
    zdata/media         128K  12.6T      128K  /mnt/zdata/media
    zdata/prashant     2.32T  12.6T     2.32T  /mnt/zdata/p-------


     

    zfs mount
    zdata                           /mnt/zdata
    zdata/cache                     /mnt/zdata/cache
    zdata/downloads                 /mnt/zdata/downloads
    zdata/isos                      /mnt/zdata/isos
    zdata/media                     /mnt/zdata/media
    zdata/prashant                  /mnt/zdata/p-------

     

    Link to comment
    57 minutes ago, sabertooth said:

    From the Unraid GUI. By move, I meant the contents, i.e. mv source_dataset/* destination_dataset/.

    Need more details please:

     

    1 hour ago, sabertooth said:

    Now, since the exclusive access is shown as NO

    • renamed old share to downloads_old. - using the GUI?
    • created new share called downloads with primary storage as zdata. - I assume with the GUI?
    • move data from downloads_old to downloads. - complete command used?
    • Remove old share downloads_old. - complete command used?

     

    Link to comment

     

    39 minutes ago, JorgeB said:

    Now, since the exclusive access is shown as NO

    • renamed old share to downloads_old. - using the GUI? Yes.
    • created new share called downloads with primary storage as zdata. - I assume with the GUI? Yes.
    • move data from downloads_old to downloads. - complete command used? From the command line: mv downloads_old/* downloads/.
    • Remove old share downloads_old. - complete command used? From the command line: rm -rf downloads_old/

     

    Link to comment
    35 minutes ago, sabertooth said:

    rm -rf downloads_old/

    This command would not remove a dataset; something is missing from your description.
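
    A minimal illustration of the difference (names are placeholders):

        # rm -rf deletes the files under the mount point, but the dataset survives
        rm -rf /mnt/zdata/downloads_old/*
        zfs list -r zdata                # downloads_old is still listed
        # removing the dataset itself requires a destroy
        zfs destroy zdata/downloads_old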

    Link to comment





