Report Comments posted by limetech

  1. 34 minutes ago, Taddeusz said:

    I changed the AutoMounter host from unraid.bean.local to 192.168.22.90 and that worked. Why would it not work using the fq host name?

    'avahi' is the name of the Linux package that implements the mDNS protocol, and 'Bonjour' is the name of the macOS package that does the same.  These packages are responsible for resolving network hostnames that end in ".local".  If 'avahi' is not configured properly then name resolution won't work, but you can always refer to a host by its IP address.  If disabling IPv6 does not work for you, please post the contents of this file from your server:

     

    /etc/avahi/avahi-daemon.conf
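
    For reference, a stock avahi-daemon.conf normally carries a [server] section along these lines (a sketch only; exact defaults vary by build, and 'tower' is just an example hostname):

      [server]
      #host-name=tower
      #domain-name=local
      use-ipv4=yes
      # If you are disabling IPv6 for mDNS, this is the relevant key:
      use-ipv6=no
      #allow-interfaces=eth0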

  2. On 5/17/2023 at 8:40 PM, Misty said:

    Please add CONFIG_FANOTIFY support to the Linux kernel. I'm noticing some strange disk spin-ups in recent builds, but without fanotify it's painful to troubleshoot (I can only use inotify recursively, which keeps disks from ever spinning down).

    Actually, most new kernels have this config enabled (Ubuntu, Debian, recent versions of Red Hat). As UnRAID is mainly oriented towards storage, there's no reason to keep this option off.

    This is added in the next release (rc7). Please let me know how you're using this to track down disk spin-up issues 👍
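
    For anyone who wants to experiment once rc7 is out, here is a minimal, hypothetical C sketch of the kind of tool fanotify enables: it logs every file open on one mount point so you can see what is waking a disk (the fatrace utility wraps the same API). The mount point and program name are just examples; run it as root.

      /* Minimal fanotify sketch: report every file open on one mount point.
       * Hypothetical example - build with: gcc -o spinwatch spinwatch.c
       * Run as root, e.g.: ./spinwatch /mnt/disk1                         */
      #define _GNU_SOURCE
      #include <fcntl.h>
      #include <limits.h>
      #include <stdio.h>
      #include <sys/fanotify.h>
      #include <unistd.h>

      int main(int argc, char *argv[])
      {
          if (argc != 2) {
              fprintf(stderr, "usage: %s <mountpoint>\n", argv[0]);
              return 1;
          }

          /* Notification-only group; we never block the accessing process. */
          int fan = fanotify_init(FAN_CLASS_NOTIF, O_RDONLY | O_LARGEFILE);
          if (fan < 0) { perror("fanotify_init"); return 1; }

          /* Watch open and access events across the whole mount. */
          if (fanotify_mark(fan, FAN_MARK_ADD | FAN_MARK_MOUNT,
                            FAN_OPEN | FAN_ACCESS, AT_FDCWD, argv[1]) < 0) {
              perror("fanotify_mark"); return 1;
          }

          struct fanotify_event_metadata buf[256];
          for (;;) {
              ssize_t len = read(fan, buf, sizeof(buf));
              if (len <= 0) break;

              struct fanotify_event_metadata *md = buf;
              while (FAN_EVENT_OK(md, len)) {
                  if (md->fd >= 0) {
                      /* Map the event's fd back to a pathname via /proc. */
                      char link[64], path[PATH_MAX];
                      snprintf(link, sizeof(link), "/proc/self/fd/%d", md->fd);
                      ssize_t n = readlink(link, path, sizeof(path) - 1);
                      if (n > 0) { path[n] = '\0'; printf("pid %d: %s\n", (int)md->pid, path); }
                      close(md->fd);
                  }
                  md = FAN_EVENT_NEXT(md, len);
              }
          }
          close(fan);
          return 0;
      }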

    • Like 1
  3. 5 hours ago, primeval_god said:

    That is not ideal. The reason is that I have no need for IO performance beyond what shfs already provides. I would much rather have the functionality that is mentioned under the exclusive share restrictions (in particular the ability to make changes and add folders to other disks without having to stop and start the array). 

     

    Actually, I neglected to update the 'Restrictions' under 'Exclusive shares'.  This is the only restriction that still applies:

    • Both the share Min Free Space and pool Min Free Space settings are ignored when creating new files on an exclusive share.

    The symlink is 'dynamic', meaning that when a share path is traversed, it immediately checks whether the share exists on only one volume and, if so, returns a symlink, otherwise a normal directory.
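
    As an aside (an illustrative sketch only, not part of shfs), one way to observe that behavior is to lstat() the user-share path and see whether it currently comes back as a symlink or a plain directory; the default share name here is made up:

      /* Hypothetical check: is /mnt/user/<share> currently a symlink or a directory? */
      #include <limits.h>
      #include <stdio.h>
      #include <sys/stat.h>
      #include <unistd.h>

      int main(int argc, char *argv[])
      {
          const char *share = (argc > 1) ? argv[1] : "appdata";   /* example share name */
          char path[PATH_MAX];
          snprintf(path, sizeof(path), "/mnt/user/%s", share);

          struct stat st;
          if (lstat(path, &st) != 0) { perror(path); return 1; }

          if (S_ISLNK(st.st_mode)) {
              char target[PATH_MAX];
              ssize_t n = readlink(path, target, sizeof(target) - 1);
              if (n > 0) {
                  target[n] = '\0';
                  printf("%s -> %s (exclusive)\n", path, target);
              }
          } else if (S_ISDIR(st.st_mode)) {
              printf("%s is a plain directory (served by shfs)\n", path);
          }
          return 0;
      }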

    • Like 2
  4. 18 minutes ago, Nogami said:

    Just wondering why it's recommended to erase and re-create zpools?  My RC5 ZFS pools seemed to come in OK.  Am I overlooking something basic (bit of a ZFS newb).

     

    This is only for zpools created with 6.12.0-beta5 (not -rc5), which was the first "beta" that had ZFS pool support.

  5. 3 hours ago, apandey said:

    Great, appreciate the speed of new RCs turning up

    For those of us who have ZFS via plugin on 6.11.5, is there a way to pre-check whether those pools will be importable into 6.12? I understand that datasets get created when defining shares, but if I already have vdevs/datasets, how will they be interpreted by 6.12 initially, and will there be cases where they could be rejected and require recreating pools?

     

    Importing an existing pool does not change anything in the pool, except possibly compression on/off and autotrim on/off.  But note that all top-level directories will be interpreted as shares, whether they are actual directories or datasets.  When/if you later create a share in that pool, it will be created as a dataset.

     

  6. 2 hours ago, apandey said:

    Right now, for a cache=only share, we have /mnt/user/share shfs path and /mnt/poolname direct mount.

    The direct-mount path is /mnt/poolname/share.

     

    2 hours ago, apandey said:

    Am I correct to understand that the new exclusive share feature will turn /mnt/user/share to be a direct mount?

    Yes.  It's a bind mount - see the sketch at the end of this exchange.

     

    2 hours ago, apandey said:

    Will the other path still be available?

    Yes.

     

    2 hours ago, apandey said:

    Does the user need to make any path adjustments?

    No.

     

    2 hours ago, apandey said:

    will the upgrade do something about it?

    No need.
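
    To illustrate what the bind mount amounts to (a rough sketch only, not the actual Unraid code, and the pool/share names are made up), it is essentially the equivalent of a mount --bind done in C: the pool directory is grafted directly onto the user-share path, with no FUSE layer in between.

      /* Illustrative only: expose /mnt/poolname/share at /mnt/user/share via a bind mount.
       * Unraid sets this up automatically for exclusive shares; paths here are made up.
       * Must be run as root.                                                            */
      #include <stdio.h>
      #include <sys/mount.h>

      int main(void)
      {
          /* MS_BIND makes the same directory tree visible at a second path;
           * nothing is copied and I/O bypasses the shfs/FUSE layer. */
          if (mount("/mnt/poolname/share", "/mnt/user/share", NULL, MS_BIND, NULL) != 0) {
              perror("mount");
              return 1;
          }
          printf("/mnt/user/share now refers directly to /mnt/poolname/share\n");
          return 0;
      }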

    • Thanks 2
  7. 23 hours ago, aim60 said:

    Sometime in the future, might shares on array disks that have been confined to one disk (via Included Disks) be considered Exclusive, and be bind-mounted?

     

    We considered that, but writes to Unraid array disks are greatly throttled by read/modify/write parity updates (each write first has to read the old data and parity before the new data and parity can be written).  It might benefit reads, but you can enable 'disk shares' and get the same benefit.

    • Like 1
  8. 4 hours ago, JonathanM said:

    Is the quoted text also in the inline help system next to the exclusive setting?

    For "Exclusive access" the help text reads:

     

    When set to "yes" indicates a bind-mount directly to a pool has been set up for the share in the /mnt/user tree, provided the following conditions are met:

    • Primary storage is a pool.
    • Secondary storage is set to none.
    • The share does not exist on any other volumes.
  9. 25 minutes ago, JonathanM said:

    That's fine, I just wanted to confirm the behaviour so we can help people who will inevitably have a container writing to the wrong location.

     

    Note this is documented in the Release Notes:

     

    "If the share directory is manually created on another volume, files are not visible in the share until after array restart, upon which the share is no longer exclusive."

     

    Should we add anything to that description?

    • Like 1
  10. 1 hour ago, mdrjr said:

    I've been an Unraid customer since v5, love it and every time it gets better!

    Thank you!

     

    1 hour ago, mdrjr said:

    1. Able to have Primary Storage as Cache drive and Secondary storage as a Pool (zfs)

    That's coming and will be implemented at the same time we implement multiple unRAID pools - then everything is a pool :)

     

    1 hour ago, mdrjr said:

    2. Allow me to change the ZFS compression from lz4 to zstd, as it resets every time the array stops

    Everything I've read says 'lz4' is "better"... but sure, we will implement a way to specify the algorithm.  Curious, what do you mean by "it resets every time the array stops"?

  11. 5 hours ago, kubed_zero said:

    I've got zero extra Samba custom configurations/files, so this is running with vanilla Unraid as far as I'm aware. 

    Thank you for your report.  I have TM backups running OK with Monterey (12.6).  At first I thought it was because the share is marked 'public'.  I changed it to 'private' and there were some connectivity issues, but I eventually poked around the TM preferences and got it to work again... so the investigation continues...