Comments posted by sabertooth


  1.   Afraid it indeed was a zfs dataset, since all my shares were on zfs in 6.11.5. Please remember that the share was initially created with Primary Storage set to Array, containing a link to a dataset with the same name as the share. After the migration, I changed the Primary Storage to the zpool, which probably explains the yellow warning symbol next to it. I suggest you try out the steps listed above.
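     For anyone who wants to verify this on their own system: assuming the share is called downloads and the pool is zdata (substitute your own names), something like the following shows whether the share is a real dataset on the pool and whether a plain copy of it also exists on the array disks:

     # does the share exist as a dataset on the pool?
     zfs list -r zdata | grep downloads
     # is there also a plain directory for it on any array disk?
     ls -ld /mnt/disk*/downloads 2>/dev/null

     If the second command returns a directory, the user share spans both the array and the pool, which is typically what blocks exclusive access.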

     

     

  2. Please refer to the attached screenshot; it clearly shows the dummy_old share as non-exclusive and dummy as exclusive (on zdata).

    All steps are listed below:

    root@UnraidZFS:/mnt/user# mv dummy_old/* dummy/.

    root@UnraidZFS:/mnt/user# rm -rf dummy_old/

    root@UnraidZFS:/mnt/user# zfs list
    NAME                USED  AVAIL     REFER  MOUNTPOINT
    zdata              5.47T  12.6T      805G  /mnt/zdata
    zdata/dummy        1.04M  12.6T     1.04M  /mnt/zdata/dummy

    Screenshot 2023-09-05 at 09.24.55.png
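
    With hindsight, a safer version of that same move would verify the copy before deleting anything; a rough sketch with the same dummy names:

    root@UnraidZFS:/mnt/user# rsync -a dummy_old/ dummy/
    root@UnraidZFS:/mnt/user# diff -r dummy_old/ dummy/    # no output means the trees match
    root@UnraidZFS:/mnt/user# rm -rf dummy_old/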

  3.  

    39 minutes ago, JorgeB said:

    Now, since the exclusive access is shown as NO

    • renamed old share to downloads_old. - using the GUI?: Yes
    • created new share called downloads with primary storage as zdata. - I assume with the GUI?: Yes
    • move data from downloads_old to downloads. - complete command used?: From the command line: mv downloads_old/* downloads/.
    • Remove old share downloads_old. - complete command used?: From the command line: rm -rf downloads_old/

     

  4. 3 minutes ago, JorgeB said:

    How did you rename the share? If it was a zfs dataset, mv wouldn't work; at most it would have renamed the mount point, leaving the dataset still there.

     

    From the Unraid GUI. By move, I meant the contents, i.e. mv source_dataset/* destination_dataset/. The dataset was renamed as well, per the ZFS Master plugin and zfs list.

     

    zfs list
    NAME                USED  AVAIL     REFER  MOUNTPOINT
    zdata              5.47T  12.6T      805G  /mnt/zdata
    zdata/cache         232G  12.6T      231G  /mnt/zdata/cache
    zdata/downloads     153G  12.6T      153G  /mnt/zdata/downloads
    zdata/isos          469G  12.6T      469G  /mnt/zdata/isos
    zdata/media         128K  12.6T      128K  /mnt/zdata/media
    zdata/prashant     2.32T  12.6T     2.32T  /mnt/zdata/p-------


     

    zfs mount
    zdata                           /mnt/zdata
    zdata/cache                     /mnt/zdata/cache
    zdata/downloads                 /mnt/zdata/downloads
    zdata/isos                      /mnt/zdata/isos
    zdata/media                     /mnt/zdata/media
    zdata/prashant                  /mnt/zdata/p-------
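
    To double-check that downloads really is its own dataset, and not just a directory inside the parent, df reports the backing filesystem for a path (treat this as a sketch from memory):

    df -h /mnt/zdata/downloads
    # the Filesystem column should show zdata/downloads if it is a separate dataset,
    # or just zdata if it is only a directory inside the parent dataset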

     

  5. So, let me try to explain the problem again.


    6.11.5:

    • ZFS (RAIDZ1) with a zpool called zfs.
    • User share, e.g. downloads, with a link to a dataset called downloads in the zpool called zfs.

     
     Upgrade to 6.12.4
     6.12.4:

    • Create a new pool - zdata, and add all HDDs.
    • Global Share Settings -> Permit exclusive shares.
    • Change primary storage for downloads to zfs pool zdata.

     
    Now, since the exclusive access is shown as NO

    • renamed old share to downloads_old.
    • created new share called downloads with primary storage as zdata.
    • move data from downloads_old to downloads.
    • Remove old share downloads_old.
    • ^^^^^^^^^^ This is when I lost data ^^^^^^^^^^^^^^

    NOTE: Before trying on downloads, I had tried with other shares.


    After this disaster, I left my last share as is to avoid more data loss. This is the share you see with exclusive access marked as NO.

    I had TWO issues:
    a) loss of data, and
    b) how to enable exclusive access without going through the arduous process of creating a new share and copying close to 3 TB of data.
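
    For (b), if the data is sitting in a single dataset on the pool, the copy step itself can in principle be avoided, because a dataset can be renamed in place. A sketch only, untested on my side, with placeholder dataset names, and assuming nothing is writing to the share while it runs:

    # rename the dataset itself instead of copying its contents
    zfs rename zdata/share_old zdata/share
    # confirm the new name and mountpoint
    zfs list -o name,mountpoint zdata/share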



     

  6. 13 minutes ago, JorgeB said:

    That's not what this shows:


    If that is the case, why are the contents mapped to the corresponding dataset on the ZFS pool?
    So, did the upgrade mess it up? i.e., after changing the primary storage (6.12.4) to a dataset on the zfs pool, was the original share (6.11.5) left on disk1?
     

    13 minutes ago, JorgeB said:

    I cannot replicate based on that.

    I have lost close to 800 GB of data; I won't be trying to lose more.

  7. 2 minutes ago, JorgeB said:

    You need to move or delete that share from disk1, you can do that manually or by using the mover with the correct settings, see the GUI help.

    The share in question is NOT on disk1 but on the pool (zdata). As clearly mentioned in the first post, the share was created with a link to a ZFS dataset in 6.11.5.

     

    2 minutes ago, JorgeB said:

    Can you explain step by step what you did to result in lost data to see if it can be reproduced? It may not be a bug but user error.

    Please see the first post.
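
    For convenience, the steps from that post boil down to roughly the following (the rename and the new share were done in the GUI, not on the command line):

    # GUI: rename share downloads -> downloads_old
    # GUI: create new share downloads with primary storage zdata
    cd /mnt/user
    mv downloads_old/* downloads/.
    rm -rf downloads_old/    # this is the step after which the data was lost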

  8. 21 hours ago, dlandon said:

    Correct, the smb-fruit.conf file contains all the fruit settings.

     

    My wording may not have been the best.  In this situation, there is a log message not seen in internal testing or rc releases, and we are trying to understand where it is coming from.

     

    The smb-fruit.conf file contains the most common settings used for fruit settings.  The uncommented settings are the default settings.  The commented settings are optional.  The idea is to not have to change the smb-extras.conf file to make any adjustments you might need for your particular system.  Changes in the extras file are global and the settings in the fruit file are applied per share.

    I have uploaded the diagnostics; could you please suggest the exact changes I should make in smb-extra.conf?

    Also, I am seeing these errors again while accessing these shares from within Windows; the earlier problems were from macOS.
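
    To make sure I remove the right things, I assume the lines to pull out of the extras file are whatever mentions fruit; something like this should list them (the path below is my guess for where the extras file lives on the flash):

    grep -in fruit /boot/config/smb-extra.conf
    # typical matches would be lines such as:
    #   vfs objects = catia fruit streams_xattr
    #   fruit:metadata = stream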

  9. 6 hours ago, dlandon said:

    Yes, try to avoid that for fruit settings.

     

    The idea is to make a copy on the flash device.  When Unraid sees the smb-fruit.conf on the flash, those settings are applied for fruit settings.  If the flash copy is not available, Unraid uses the default settings at /etc/samba/smb-fruit.conf.  The idea is that the flash settings get applied on each reboot and you customize for your use case.

     

    Settings in that file are commented out.  You uncomment the settings you think will apply to your situation and restart samba so you can do some testing.  We want people to test and give feedback so we can create a set of generic settings as a default.

     

    cat /etc/samba/smb-shares.conf

     

    No.  Make your changes and restart samba.  They will be applied.

     

    Remove any settings related to fruit.  Other settings in smb-extras.conf can stay.

     

    This scheme ensures that the fruit settings are applied correctly per share.  There is a special case with fat and exfat file systems.  Special settings are applied for these devices because those file systems do not support extended attributes and writes can fail.  You can't apply fruit settings globally for all shares.


    /etc/samba/smb-shares.conf contains the following:
       vfs objects = catia fruit streams_xattr
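
    So, if I understand the workflow correctly, it amounts to something like this (the flash path and the restart command below are my guesses from the description, please correct me if they are wrong):

    # copy the defaults to the flash so they survive a reboot, then edit the flash copy
    cp /etc/samba/smb-fruit.conf /boot/config/smb-fruit.conf
    # after uncommenting the settings to test, restart samba so they are applied
    /etc/rc.d/rc.samba restart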

     

    Quote

    Settings in that file are commented out.  You uncomment the settings you think will apply to your situation and restart samba so you can do some testing.  We want people to test and give feedback so we can create a set of generic settings as a default.

     

    This is a released product; why should customers test this on production systems?

    Why can't you replicate this in-house? RC releases are a completely different story.

  10. On 3/12/2022 at 6:08 PM, AndrewZ said:

    Remove the symlink linking the user shares back to your ZFS pool and directly reference the zfs pool instead

     

    Typing the whole path (i.e. the symbolic link) in the VM Add GUI works as expected, i.e. I am able to install Windows using the iso file through the symbolic link.
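
    In case it helps anyone else, the path can be checked before typing it into the VM Add GUI, to see what the symlink actually resolves to (using the isos share as an example):

    # show the symlink and the real path it resolves to
    ls -l /mnt/user/isos
    readlink -f /mnt/user/isos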
