Report Comments posted by itimpi

  1. 2 hours ago, trurl said:

Even if you retain all, you can still make any changes, including changing slots, so some default settings are probably the only thing that makes sense.

    Not sure I agree here.   Changing slots still does not necessarily mean you want the shares to be set up any differently.  
     

    Perhaps the best way forward would be to add a new check box to the New Config dialog as to whether share settings should be left unchanged or reset to defaults?    At least that would give visibility to the fact that share settings might be affected.   Making the current behaviour the default would mean users only get the share settings retained if they explicitly ask for them.

     

    Do you think this should explicitly be raised as a feature request?

  2. I suspect that technically this is not a bug in that the New Config is working as designed.

     

    Having said that, I agree it might make a lot of sense to leave all share settings unchanged when using New Config - especially if using the option to retain current disk assignments.   I personally would find that more convenient than the current behaviour.
     

    I do not like your second option as that would cause problems for users who have their shares exported and set to Public.

Do you have any docker containers that show as ‘healthy’ if you look at the Docker tab and switch on the Advanced view?    It has been noticed that such containers have been set up by their authors to write to the docker image every few seconds as part of a health check, and although the writes are small the write amplification inherent in using BTRFS means this can add up.   I believe that an issue has been raised against docker in case this is a bug in the Docker engine rather than the container authors simply mis-using the health check capability.
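
    If you prefer to check from the command line, something along these lines should list each running container's health check interval (a rough sketch using the standard docker CLI; the Healthcheck fields are part of docker's normal container metadata):

        for c in $(docker ps --format '{{.Names}}'); do
            docker inspect --format '{{.Name}}: {{if .Config.Healthcheck}}checks every {{.Config.Healthcheck.Interval}}{{else}}no healthcheck{{end}}' "$c"
        done

    Any container reporting an interval of only a few seconds is a likely candidate for the writes.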

     

    In addition there are other options available in the Docker settings that can reduce this load, such as using an XFS formatted image instead of BTRFS, or not using an image at all and storing the files directly in the file system.  The array has to be stopped to see these options.    Have you tried any of these?

  4. 9 hours ago, jdiggity81 said:

    I take it that means that, with the workaround you built in, the LSI 9200-16e will work until they make the corrections?

    Strange that the 'e' model needs this whereas the 'i' model (which I have) does not.   You would have thought they would be identical except for the connector being external rather than internal.

  5. 6 minutes ago, mgutt said:

    Besides that, are these JSON files really part of a usual docker installation, or is this a special Unraid thing? I wonder why only Unraid users are complaining about these permanent writes. OK... maybe it has the most transparent traffic monitoring ;)

     

    This has nothing to do with Unraid - it is internal to the docker container.   It may have been more obvious to Unraid users because a docker image is used, and there was an issue in 6.8.3 and earlier that could cause significant write amplification when writing to that image stored on an SSD.   If the docker container files were stored directly in the file system (as the latest Unraid betas allow) then this would probably be far less noticeable, particularly if using a HDD with a file system like XFS that has far less inherent write amplification than BTRFS does.
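
    If anyone wants to see this for themselves, the total sectors written to each device are exposed in /proc/diskstats, so a simple loop like the following (just a sketch - replace sdX with your actual cache device) will show the counter climbing even when the system looks idle:

        # field 10 of /proc/diskstats is total sectors written (512 bytes each)
        while true; do
            awk '$3 == "sdX" {printf "%.1f MiB written so far\n", $10 * 512 / 1048576}' /proc/diskstats
            sleep 10
        done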

     

    11 minutes ago, mgutt said:

    If these writes are related to Docker only, we should open an issue here, because merely updating a timestamp (or nothing at all) inside a config file does not really sound like it is working as it is meant to.

    This could not do any harm, although they may simply reply that the feature is being mis-used and that the fix should come from the container maintainer.

     

It is not obvious what the best way to handle this is, as technically it is an issue with specific docker containers rather than an Unraid issue.   There may be good reasons why the maintainer of a particular container has set this up, so that over-riding it becomes a bad idea.    I wonder if the Fix Common Problems plugin could be enhanced to detect this and suggest the fix?  Alternatively, make it a configuration option in the docker templates?
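
    In the meantime anyone affected can override this on a per-container basis, as docker already has standard run-time flags for it.  As a sketch (these are normal docker run options, which on Unraid would go in the Extra Parameters field of the container template):

        --no-healthcheck        # disable the container's built-in health check entirely
        --health-interval=5m    # ...or just run it much less often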

What do you have for the Disk Shares option under Settings->Global Share Settings?  (If you had posted the system's diagnostics zip file I could have seen this for myself.)

     

    If you click on the Shares tab, does it show the cache under the Disk Shares section?   If so you should be able to set the security level from there.   However, whether it should be showing up under the Disk Shares section by default is an interesting question.

  8. 4 minutes ago, jowi said:

    Depends on how many dockers you have listed. If they don’t fit the screen, the interface gets stuck. It won’t scroll. Display font size also plays a role: the bigger the font, the fewer dockers you can list before the GUI gets stuck and won’t scroll.

     

    But then again, this is not specific to this version; it has been an issue for as long as there have been dockers in Unraid. It won’t get fixed for some reason.

     

    I have far more Dockers than fit on the screen and they scroll OK for me on my iPad.

     

    I think the root cause has to be some sort of bug at the WebKit level, which can therefore affect all iOS/iPadOS browsers as Apple mandates they have to use WebKit for rendering.  I would be interested to know if anyone using Safari on macOS ever experiences such problems.

  9. 12 minutes ago, MothyTim said:

    It's only the docker page, everything else is fine! It's the same in Safari and Chrome.

    As I said, it is working fine for me.   I have had problems in the past but it is OK now.   It may be relevant that I am using the iOS 14 beta, so possibly a web engine (which is used by both those browsers) problem has been fixed.

  10. 30 minutes ago, Dephcon said:

     

    That's very interesting.  Say for example I have a share that's 'cache only' and I change which pool device I want it to use - will Unraid move the data from one pool device to the other?  That would be highly useful for me in my IO testing.

    No, Unraid will not move the data to the new pool.      It will just start using that pool for any new files belonging to the share.      Note that for read purposes ALL drives are checked for files under a top-level folder named for the share (and thus logically belonging to the share).   The files on the previous pool will therefore still be visible under that share even though all new files are going to the new pool.
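
    To make that concrete (a sketch with a hypothetical share called 'media' and pools called 'oldpool' and 'newpool'): Unraid builds the user share by merging the top-level folder of that name from every array drive and pool, so:

        ls /mnt/oldpool/media   # files written before the change stay here
        ls /mnt/newpool/media   # new files for the share land here
        ls /mnt/user/media      # the user share shows the union of both (plus any array disks)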

  11. 1 hour ago, bubbl3 said:

    I have it enabled there for appdata, so why doesn't the mover have it enabled? Also, how does one move all the existing data to the cache drive without the mover? Hopefully not manually...

    If you want mover to move files from the array to the cache then the Use Cache setting needs to be set to Prefer.   The GUI built-in help describes how the various settings affect mover.   Also, mover will not move any files that are open, so you may need to disable the docker and/or VM services while such a move is in progress.
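
    If it helps, the share settings are just small text files on the flash drive, so you can confirm what mover will do for a share by looking there (the path is the standard Unraid one; 'appdata' is just an example share name):

        cat /boot/config/shares/appdata.cfg
        # shareUseCache="prefer"    -> mover moves files from array to cache
        # shareUseCache="yes"       -> mover moves files from cache to array
        # shareUseCache="only"/"no" -> mover does not touch the share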

  12. 12 hours ago, Lignumaqua said:

    The link below is to an interesting paper from 2017 that compares the write amplification of different file systems. BTRFS is by far the worst, with a factor of 32x for small-file overwrite and append when COW is enabled. With COW disabled this dropped to 18.6x, which is still pretty significant. This is three years ago, so things may have changed. In particular, space_cache v2 could be a reaction to this? BTRFS + writing or amending small files = very high write amplification.

     

    https://arxiv.org/abs/1707.08514

    This suggests that BTRFS is a great system for secure storage of data files, but not necessarily a good choice for writing multiple small temporary files, or for log files that are continually being amended.  Looking at common uses of the cache in Unraid might lead to the following suppositions. A BTRFS cache using Raid 1 is a good place for downloaded files before they are moved into the array. It's also good for any static data files. However, it's likely not to be the best place for a Docker img file or any kind of temporary storage. Particularly if redundant storage isn't needed. XFS might be a better choice there.

    I found this research article to be of great interest as it indicates that a large amount of write amplification is inherent in using the BTRFS file system.

     

    I guess this raises a few questions worth thinking about:

    • Is there a specific advantage to having the docker image file formatted internally as BTRFS, or could an alternative such as XFS help reduce the write amplification without any noticeable change in capabilities?
    • This amplification is not specific to SSDs.
    • The amplification is worse for small files (as are typically found in the appdata share).
    • Are there any BTRFS settings that can be applied at the folder level to reduce write amplification?  I am thinking here of the 'system' and 'appdata' folders (see the sketch after this list).
    • If you have the CA Backup plugin to provide periodic automated backup of the appdata folder, is it worth having that share on a single drive pool formatted as XFS to keep amplification to a minimum?  The 6.9.0 support for multiple cache pools will help if you need to segregate by file system format.
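
    On the folder-level question in the list above: BTRFS does let you mark a directory NOCOW so that files created in it afterwards bypass copy-on-write.  A sketch (note this only applies to newly created files, and NOCOW also disables checksumming for them):

        chattr +C /mnt/cache/appdata    # new files in here are created NOCOW
        lsattr -d /mnt/cache/appdata    # verify - the 'C' attribute should show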

     

  13. That is a standard Linux utility, not something Limetech are involved in producing.

     

    The bit you highlight is not a typo.    It is standard Linux-speak telling you to use the Linux built-in ‘man’ command (which gives details of any Linux command) to get more detail on the options.
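
    For example, on any Linux system with the man pages installed:

        man ls    # shows the full description and every option of the 'ls' command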

     
