Report Comments posted by itimpi

  1. 7 hours ago, steveBBB said:

    Have noticed on this release that when enabling vm manager and it creates the iso folder it creates it on the array not cache and when you set it to cache it just deletes the whole share, when the system is rebooted it re creates it again on the array

    What share have you set to hold the ISO files (the normal default is ‘isos’)?   What is the Use Cache setting for the share holding the ISO files?

     

    We may be able to give better informed feedback if you attach your system’s diagnostics zip file to your next post in this thread.

  2. You would not have to make any changes when you migrate to 6.12.

     

    The basic functionality around user shares has not changed - it is just the presentation that has changed in preparation for new features in future releases.   Your existing settings will automatically adopt the new presentation when you upgrade.

     

    You can largely ignore the bind-mount feature.   It is just a performance optimisation that is automatically applied when the 6.12 release detects that all the files for a share are on the same pool.

     

    A small downside to having an SSD-only array is that at the moment the trim operation is not supported, so with some brands of SSD this can lead to performance degradation over time.   I think it is likely that this restriction will be removed in a future release, but as not all HBA cards support trim, some people will find this still applies to them.
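
    As a quick check, `lsblk` from util-linux can report whether a device (and the controller path to it) exposes discard/TRIM support - non-zero DISC-GRAN and DISC-MAX columns mean discard requests can reach the device:

    ```shell
    # Show discard (TRIM) capabilities for all block devices.
    # Non-zero DISC-GRAN / DISC-MAX columns mean the device, and the
    # controller it sits behind, accept discard requests.
    lsblk --discard
    ```

    If both columns show 0 for an SSD, either the drive or the HBA it is attached to is not passing trim through.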

  3. Not sure if it is related to your error, but there are a lot of 

    May  6 00:21:58 vs-tower  shfs: share cache full

    in the syslog.   You do not have a Minimum Free Space value set for your ‘cache’ pool - it should be set to more than the largest file you expect to write, as file systems that get too full can have unpredictable effects.

     

    There are also a lot of lines of the form:

    May  6 03:05:19 vs-tower kernel: CIFS: __readahead_batch() returned 870/1024

    in the syslog.   Not sure what they mean, but they are not normal.

     

    Neither of these, though, explains why there should be corruption :)   If you run a scrub, is the corruption detected?  Since the pool is formatted as btrfs it should automatically have checksums associated with it to check integrity.
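
    As a rough illustration (hypothetical helper, not anything built into Unraid), counting how often a given message appears in the syslog can be scripted like this:

    ```shell
    #!/bin/sh
    # Count how many times a pattern appears in a log file.
    # Prints 0 if the file is missing or the pattern never occurs.
    count_in_log() {
        log=$1
        pattern=$2
        n=$(grep -c -- "$pattern" "$log" 2>/dev/null) || n=${n:-0}
        echo "${n:-0}"
    }

    # Example usage (Unraid keeps the live syslog at /var/log/syslog):
    # count_in_log /var/log/syslog "shfs: share cache full"
    ```

    For the corruption question itself, a scrub can be started with `btrfs scrub start /mnt/cache` and checked with `btrfs scrub status /mnt/cache` (mount point assumed); `btrfs device stats /mnt/cache` shows the per-device error counters.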

     

  4. 4 hours ago, isvein said:

    But what I still dont get is what it does in practice. I think it has something to do how the share gets mounted, but that is as far as I get

    Think of Exclusive as being equivalent to what used to be Use Cache=Only, but with a new performance optimisation that allows access to bypass the overhead of the FUSE layer that implements User Shares for the other modes.
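
    The effect can be sketched with a hypothetical helper (illustration only, not Unraid's code): for an exclusive share, `/mnt/user/<share>` becomes a bind mount of the pool path, so accesses resolve as if the path had been rewritten to point at the pool directly:

    ```shell
    #!/bin/sh
    # Illustration only: map a user-share path to the direct pool path
    # an exclusive share's bind mount would serve it from.
    direct_path() {
        pool=$1       # pool name, e.g. "cache"
        user_path=$2  # path under /mnt/user/
        echo "$user_path" | sed "s|^/mnt/user/|/mnt/$pool/|"
    }

    # direct_path cache /mnt/user/isos/win11.iso -> /mnt/cache/isos/win11.iso
    ```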

  5. 16 minutes ago, jaimbo said:

    Is there a current estimate of when we might see that be availably publically (either full release or RC?) :) 

    Limetech never gives estimates other than 'when it is ready'.   Since 6.12 has not yet had a stable release, I would expect we are talking about quite a few months at best.

  6. 5 minutes ago, binaryrefinery said:

    @itimpi - thanks. I was referring to/quoted the inline documentation / help in the Web UI, rather than the online docs.

    Documentation aside, it's a little non-intuitive for the free space setting on the share to affect the behavior of the cache. The share itself is nowhere near being full so I wouldn't have thought about the minimum free space as a fix. I'm also a bit curious to know if the split level would affect this too.


    Problem solved though... thanks @JorgeB and @itimpi

    The pool (cache) also has a Minimum Free Space setting.

     

    At the moment, if the setting on a share is higher than that on the cache pool, the share's setting takes precedence.   I personally think this is wrong: only the setting on the cache pool should be taken into account when writing to the cache, and the share's setting should apply when writing to the array drives.
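
    In other words, the current behaviour amounts to taking the higher of the two values when writing to the cache (a sketch of the behaviour described above, not Unraid's code):

    ```shell
    #!/bin/sh
    # Illustration: the effective minimum free space for writes to the
    # cache is whichever of the two settings is higher.
    effective_min_free() {
        share_min=$1  # share's Minimum Free Space (KiB)
        pool_min=$2   # pool's Minimum Free Space (KiB)
        if [ "$share_min" -gt "$pool_min" ]; then
            echo "$share_min"
        else
            echo "$pool_min"
        fi
    }
    ```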

  7. @binaryrefinery

    Just an FYI, the documentation says:


    Yes: Write new files to the cache as long as the free space on the cache is above the Minimum free space value. If the free space is below that then by-pass the cache and write the files directly to the main array.

    When mover runs it will attempt to move files to the main array as long as they are not currently open. Which array drive will get the file is controlled by the combination of the Allocation method, Split level, and Minimum Free Space setting for the share.

    which DOES mention the need to set the Minimum Free Space value.   Maybe the built-in Help needs that extra bit added as well.
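
    The quoted rule can be sketched as follows (an illustration of the documented behaviour, not Unraid's code):

    ```shell
    #!/bin/sh
    # Use Cache=Yes, as documented: new files go to the cache while its
    # free space stays above Minimum Free Space; otherwise they bypass
    # the cache and are written directly to the array.
    write_target() {
        cache_free=$1  # current free space on the cache pool (KiB)
        min_free=$2    # configured Minimum Free Space (KiB)
        if [ "$cache_free" -gt "$min_free" ]; then
            echo "cache"
        else
            echo "array"
        fi
    }
    ```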

  8. 18 minutes ago, hydkrash said:

    But I didn't dawn on me that the ZFS folder would be deemed as the cache pool.

    When you use the Use Cache=Only setting you are not really using the pool as a cache.

     

    Perhaps it would be clearer if the text was redone to be something like "Use Cache/Pool", or maybe simply "Use Pool"?   The current text dates from the days when only a single pool was possible and the option was whether to use it as a cache or not.

  9. 23 minutes ago, enJOyIT said:

    Is it now possible to use zfs for an array drive?

    Yes, if you are running Unraid 6.12 rc2.

    24 minutes ago, enJOyIT said:

    Can I have multiple filesystems mixed up within one array? Lets say... two disks with xfs and three disks with zfs?

    Yes.  Each disk is a self-contained file system and can be any of the types supported by Unraid.

    24 minutes ago, enJOyIT said:

    Is there any benefit for using zfs for an array drive instead of xfs?

    I would think that the main benefit is data corruption being detected in real time.   You get similar detection if using btrfs.

  10. I see where you are coming from, but I was assuming that only the top part of the Release Notes would initially be visible in the dialog box?   My concern is that the moment anyone has to click elsewhere to get the detail they need, they are less likely to do it - but I guess the alert could contain a link to them?

     

    I did not get an Alert when I installed 6.12 rc1 - should I have?  This would at least have given a feel for how it might work.

     

    BTW:  Any idea about the other part - whether vfio ids change with the new kernel?

  11. 13 minutes ago, bonienl said:

    I don't think showing the complete release notes upfront will bring anything.

    It is like these disclaimer notices, people just click 'accept' and continue.

     

    I guess we disagree on this.  I think that users would at least read the initial stages of the release notes although they would probably not read the later sections on detailed fixes or package updates.

     

    I wonder if it is worth running a poll on this to see what other people think would happen?

  12. OK - I guess I never really looked at this feature in enough detail.

     

    It would be nice if Unraid could somehow insert the text of the Release Notes into the dialog following that preamble, to avoid the user having to look elsewhere (or is this already planned?).

  13. 2 hours ago, bonienl said:

     

    There is a "ALERT" system in place since Unraid version 6.11, which allows to display any warning or other information before upgrading. The user has to explicitly acknowledge this message before proceeding.

     

    None of the Unraid releases have yet make use of this system.

     

    Ps. This ALERT system can also be used for plugins.

     

    I know about the alert system - I make use of this.

     

    I still think displaying the release notes before allowing the OS upgrade to proceed is a good idea.   I think this is more than the Alert system can provide?