Report Comments posted by itimpi


  1. 4 minutes ago, jowi said:

    Depends on how many dockers you have listed. If they don’t fit the screen, the interface gets stuck. It won’t scroll. Display font size also plays a role: the bigger the font, the fewer dockers you can list, and the GUI gets stuck and won’t scroll.

     

    But then again, this is not specific to this version; it’s been an issue for as long as there have been dockers in Unraid. It won’t get fixed for some reason.

     

    I have far more Dockers than fit on the screen and they scroll OK for me on my iPad.

     

    I think the root cause has to be some sort of bug at the WebKit level, which can therefore affect all iOS/iPadOS browsers since Apple mandates they use WebKit for rendering.  I would be interested to know if anyone using Safari on macOS ever experiences such problems.


  2. 12 minutes ago, MothyTim said:

    It's only the docker page, everything else is fine! It's the same in Safari and Chrome.

    As I said, it is working fine for me.   I have had problems in the past but it is OK now.   It may be relevant that I am using the iOS 14 beta, so possibly a WebKit engine problem (the engine used by both those browsers) has been fixed.


  3. 30 minutes ago, Dephcon said:

     

    That's very interesting.  Say for example I have a share that's 'cache only' and I change which pool device I want it to use, Unraid will move the data from one pool device to the other?  That would be highly useful for me in my IO testing.

    No, Unraid will not move the data to the new pool.      It will just start using that pool for any new files belonging to the share.      Note that for read purposes ALL drives are checked for files under a top-level folder named for the share (and thus logically belonging to the share).   The files on the previous pool will therefore still be visible under that share even though all new files are going to the new pool.
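    To make that lookup concrete, here is a minimal sketch assuming the standard Unraid layout, where each array disk or pool is mounted under /mnt and a share is simply a top-level folder of the same name on each of them. The helper name and the example paths are hypothetical, not part of Unraid itself:

```shell
# share_locations: print every mount root that contains a top-level folder
# named after the share. Unraid's /mnt/user/<share> view is the union of
# exactly these folders. (Helper name and paths are illustrative only.)
share_locations() {
    share="$1"; shift
    for root in "$@"; do
        [ -d "$root/$share" ] && echo "$root"
    done
    return 0
}

# On a real Unraid system you might call:
#   share_locations media /mnt/disk1 /mnt/disk2 /mnt/cache
```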

    • Like 1

  4. 1 hour ago, bubbl3 said:

    I have it enabled there for appdata, so why doesn't the mover move it? Also, how does one move all the existing data to the cache drive without the mover? Hopefully not manually...

    If you want mover to move files from the array to the cache then the Use Cache setting needs to be set to Prefer.   The GUI built-in help describes how the various settings affect mover. Also, mover will not move any files that are open, so you may need to disable the Docker and/or VM services while such a move is in progress.
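    As a sketch of how you might check for open files before such a move (pure POSIX shell scanning Linux's /proc; the helper name and the share path shown are hypothetical examples, not fixed Unraid locations):

```shell
# open_files_under: count file descriptors, across all processes, that
# point at files below the given directory. A non-zero count means mover
# would skip those files. (Helper name is illustrative only.)
open_files_under() {
    dir="$1"; count=0
    for fd in /proc/[0-9]*/fd/*; do
        tgt=$(readlink "$fd" 2>/dev/null) || continue
        case "$tgt" in
            "$dir"/*) count=$((count + 1)) ;;
        esac
    done
    echo "$count"
    return 0
}

# e.g. on Unraid (hypothetical share path):
#   [ "$(open_files_under /mnt/cache/appdata)" -eq 0 ] && echo "safe to run mover"
```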


  5. 12 hours ago, Lignumaqua said:

    The link below is to an interesting paper from 2017 that compares the write amplification of different file systems. BTRFS is by far the worst, with a factor of 32x for small-file overwrite and append when COW is enabled. With COW disabled this dropped to 18.6x, which is still pretty significant. This is three years ago, so things may have changed; in particular, space_cache V2 could be a reaction to this? BTRFS + writing or amending small files = very high write amplification.

     

    https://arxiv.org/abs/1707.08514

    This suggests that BTRFS is a great system for secure storage of data files, but not necessarily a good choice for writing multiple small temporary files, or for log files that are continually being amended.  Looking at common uses of the cache in Unraid might lead to the following suppositions. A BTRFS cache using Raid 1 is a good place for downloaded files before they are moved into the array. It's also good for any static data files. However, it's likely not to be the best place for a Docker img file or any kind of temporary storage. Particularly if redundant storage isn't needed. XFS might be a better choice there.

    I found this research article to be of great interest as it indicates that a large amount of write amplification is inherent in using the BTRFS file system.

     

    I guess this raises a few questions worth thinking about:

    • Is there a specific advantage to having the docker image file formatted internally as BTRFS, or could an alternative such as XFS help reduce the write amplification without any noticeable change in capabilities?
    • This amplification is not specific to SSDs.
    • The amplification is worse for small files (as are typically found in the appdata share).
    • Are there any BTRFS settings that can be applied at the folder level to reduce write amplification?  I am thinking here of the 'system' and 'appdata' folders.
    • If you use the CA Backup plugin to provide periodic automated backups of the appdata share, is it worth having that share on a single-drive pool formatted as XFS to keep amplification to a minimum?  The 6.9.0 support for multiple cache pools will help if you need to segregate by file system.
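    On the folder-level settings question, one concrete knob is the NOCOW attribute: BTRFS lets you disable copy-on-write per directory with the standard chattr utility, and files created there afterwards are rewritten in place (at the cost of losing checksumming for those files). A hedged sketch; the helper name and path are hypothetical, and the attribute only affects files created after it is set:

```shell
# set_nocow: mark a directory so BTRFS writes new files in place instead of
# copy-on-write, reducing write amplification for frequently rewritten
# files. Fails harmlessly on non-BTRFS paths. (Helper name illustrative.)
set_nocow() {
    if chattr +C "$1" 2>/dev/null; then
        nocow_msg="NOCOW set on $1"
    else
        nocow_msg="could not set NOCOW on $1 (not a BTRFS path?)"
    fi
    echo "$nocow_msg"
    return 0
}

# e.g. set_nocow /mnt/cache/appdata   # hypothetical appdata location;
# restart containers afterwards so their new files pick up the attribute
```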

     

    • Thanks 1

  6. That is a standard Linux utility, not something Limetech are involved in producing.

     

    The bit you highlight is not a typo.    It is standard Linux-speak telling you to use the Linux built-in ‘man’ command (which gives details of any Linux command) to get more detail on the options.
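    For example (assuming a typical Linux install where the man-db package is present):

```shell
# 'man <command>' shows the full reference page for a command;
# 'man -k <keyword>' (apropos) searches page descriptions by keyword.
if command -v man >/dev/null 2>&1; then
    man_status="available"
    man -k filesystem 2>/dev/null | head -3   # sample keyword search
else
    man_status="missing"   # some minimal systems omit the man pages
fi
```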

     


  7. 27 minutes ago, S1dney said:

    Changed Priority to Urgent

     


     

    Since I noticed this thread getting more and more attention lately, and more and more people urging it to be urgent instead of minor, I'll raise priority on this one.

     

    Just an FYI, I made/kept it minor initially because I had a workable workaround that I felt satisfied with. If the command line interface isn't really your thing, or you have any other reason not to tweak the OS in an unsupported way, I can fully understand the frustration.

     

    In the end... The community decides priority.

     

    Also updated the title to version 6.8.3 as requested.

     

    Cheers

    This actually points out that we could do with another intermediate category called something like “Major”, meaning it is very important but is not actually stopping the server from working or directly causing data loss.    I would then put this into the “Major” category rather than “Urgent”.   I certainly agree it needs to be more than “Minor”.


    This is an inherent quirk of the underlying Linux system: if it thinks the source and target are on the same mount point it implements a move by first attempting a simple rename (for speed), and only if that fails does a copy/delete get run.   Since this is fundamental Linux behaviour I am not sure there is anything Unraid can do about it.   Mover is specifically written to leave alone the files for a share with Use Cache = No, even if they exist on the cache, as there is a use case for that behaviour.   A possible solution might be to introduce yet another option for the Use Cache setting, but users already have problems with the number of options allowed for.

     

    The ‘workaround’ is either to use a copy/delete operation yourself or to give the target folder a Use Cache = Yes setting so that mover will later transfer the file to the array.

     

    You often see this behaviour when using a Docker container to automate downloads.   If you set the drive mappings for the container to have different internal mount points, then the version of Linux inside the container will implement its own ‘move’ as a copy/delete, so the issue does not arise at the Unraid level.   However it is often more convenient to simply set the target to a location with Use Cache = Yes.
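    The same-filesystem test the kernel effectively applies can be sketched in shell: rename() only succeeds when source and target report the same device ID. The paths below are temporary directories created purely for illustration:

```shell
# Compare device IDs of two directories; 'mv' between them is an instant
# rename when they match, and a copy + delete when they differ.
src=$(mktemp -d); dst=$(mktemp -d)
dev_src=$(stat -c %d "$src")   # GNU stat; %d = device ID of the filesystem
dev_dst=$(stat -c %d "$dst")
if [ "$dev_src" = "$dev_dst" ]; then
    echo "same filesystem: mv is a rename, no data copied"
else
    echo "different filesystems: mv falls back to copy + delete"
fi
rm -rf "$src" "$dst"
```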
     


  9. 1 hour ago, Videodr0me said:

    No improvement with 6.9 beta1. Parity speeds are still about 55-65MB/s instead of approx 90MB/s. I rechecked very early logs (pre 6.x) and then I had over 100MB/s. I hope this gets addressed soon, as a parity check now takes 36-48 hours.

    Not really an answer to your question, but have you looked into using the Parity Check Tuning plugin to avoid the parity check running during prime time, with its adverse effect on performance?


  10. 1 minute ago, PeteAsking said:

    I can always clone the USB key to a new one and upgrade the cloned key. Would this break my license key if I did that? At the end of the day some testing would be good, and I can test on a simple home setup if it's semi-safe to do that.

    A much better (and easier) solution is to make a backup of your current USB drive at any point you might want to revert to.    You can then revert by copying the files back to the USB stick, overwriting the files already there.

     

    You can make the backup either by plugging the USB stick into another machine and copying its contents, or by clicking on the flash drive on the Main tab in the Unraid GUI and selecting the option to back it up as a downloaded zip file.
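    A scripted sketch of making such a backup (run on the server itself; /boot is where Unraid mounts the flash drive, while the helper name and the destination share are hypothetical examples):

```shell
# backup_flash: copy the flash drive's contents into a dated folder under
# the given destination, and print the folder path. (Helper name and the
# example destination are illustrative only.)
backup_flash() {
    src="$1"
    dest="$2/flash-$(date +%Y%m%d)"
    mkdir -p "$dest" && cp -a "$src"/. "$dest"/ && echo "$dest"
}

# e.g. on Unraid:  backup_flash /boot /mnt/user/backups
```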


  11. The two actions are not equivalent and you should expect writing to the array to always be slower than a parity check.    You might want to read this section from the online documentation to get an insight into why this is the case.


    The speeds you quote are not atypical for writing to the parity-protected array :)  Do you have the “Turbo Write” mode active? This may well give faster write speeds (albeit at the expense of keeping all drives spinning).

     


  12. 25 minutes ago, boomam said:

    Your guess is incorrect ;-)

    I have not edited any XML in the setup of this, or any other, container as I've not had a reason to edit.

    Users do not normally access the XML file directly.  You could have set that IP address via the option to edit the container settings in the GUI, as that is what generates the XML file.

     


  13. 17 minutes ago, SliMat said:

    but I have changed to "annoyance" if its not deemed important that peoples machines can be left unusable

    I think the 'Urgent' category is meant to be used for 'drop everything else until this is fixed' type errors and ones that can cause data loss.  I would only have downgraded this one to 'Minor' rather than all the way to 'Annoyance', but that is just my view.


  14. 23 minutes ago, RokleM said:

    I will however it is nearly 100% unlikely that a USB stick that has been 100% reliable for over 5 years somehow had an issue within seconds of clicking the upgrade button...The statistical improbability... 

     

    There is something wrong with the update process, but hopefully others don't run into it.

    The upgrade process is one time that a lot of writes occur to the USB drive, so maybe not that unlikely.

     


    I suspect you are going to need to reformat and rewrite the flash device, as it is definitely getting read errors on the Unraid server at the moment.   Doing an upgrade is the one time you do a significant amount of writing to the flash drive in the normal running of Unraid.

     

    Before doing so, make sure you have a copy of the 'config' folder, taken from either the current flash drive or a recent backup.   Putting that back after recreating the USB drive will restore your configuration.