Report Comments posted by mgutt

  1. 18 hours ago, TexasUnraid said:

    Is there any downside to the no-healthcheck?

    Some people use external tools to auto-check the running state of docker containers. A healthcheck usually runs a simple "heartbeat" shell script to verify that the container is not only in the running state, but actually working. In Unraid you can see this state in the docker container overview as well. But ultimately I think this is only important for business usage, where you really need to know that everything works as expected.
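
    For illustration, an image typically defines such a check in its Dockerfile roughly like this (the curl command is only an assumed example, every image ships its own check):

    HEALTHCHECK --interval=30s --timeout=5s CMD curl -f http://localhost/ || exit 1

    Adding --no-healthcheck to the container's Extra Parameters disables exactly this check.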

  2. 2 hours ago, TexasUnraid said:

    The way I understood it is that it is simply a buffer for when the logging driver is overloaded so that it can catch back up.

     

    Yes, I think you are right. What about this:

    --log-opt mode=non-blocking --log-opt max-buffer-size=1K

     

    Maybe it "loses" the logs because of this 😅

    When the buffer is full and a new message is enqueued, the oldest message in memory is dropped. Dropping messages is often preferred to blocking the log-writing process of an application.

     

    Or maybe it's possible to create empty log files?!

    --log-opt max-size=0k

     

    2 hours ago, TexasUnraid said:

    via tcp connection.

    Sounds like a lot of overhead.

     

     

  3. 4 hours ago, TexasUnraid said:

    I have no idea what these json.log files are though,

    You need to be more clear. Are you talking about these files?

    https://forums.unraid.net/bug-reports/stable-releases/683-unnecessary-overwriting-of-json-files-in-dockerimg-every-5-seconds-r1079/

    /var/lib/docker/containers/*/hostconfig.json is updated every 5 seconds with the same content
    /var/lib/docker/containers/*/config.v2.json is updated every 5 seconds with the same content except for some timestamps (which shouldn't be part of a config file, I think)

     

    Writes to them can be disabled through --no-healthcheck

     

    If not, what is the content of the files you mean?

     

    EDIT: Ok, it seems these are the general logs:

    https://stackoverflow.com/questions/31829587/docker-container-logs-taking-all-my-disk-space

     

    Docker offers different options to influence the logs:

    https://docs.docker.com/config/containers/logging/configure/

     

    As an example this should disable the logs:

    --log-driver none
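
    To verify which driver a container ends up with, this should work (the container name is a placeholder):

    docker inspect -f '{{.HostConfig.LogConfig.Type}}' my-container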

     

    Another idea could be to raise the buffer, so it collects a huge amount of logs before writing them to the SSD:

    --log-opt max-buffer-size=32m
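
    Note that according to the Docker docs the buffer size only takes effect in non-blocking mode, so both options would probably need to be combined:

    --log-opt mode=non-blocking --log-opt max-buffer-size=32m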

     

    Another interesting thing is this description:

    local	    Logs are stored in a custom format designed for minimal overhead.
    json-file	The logs are formatted as JSON. The default logging driver for Docker.

     

    So "local" seems to produce smaller log files?!

     

    Another possible option is maybe "syslog", so it writes the logs to the host's syslog (which is located in RAM) and not into a json file:

    syslog	Writes logging messages to the syslog facility. The syslog daemon must be running on the host machine.
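
    A minimal sketch (the syslog daemon is already running on the Unraid host, so no further options should be needed):

    --log-driver syslog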

     

    Happy testing ;)

  4. 1 hour ago, TexasUnraid said:

    /var/lib/docker

    If you are using the docker.img, but not if you are using the folder.

     

    1 hour ago, TexasUnraid said:

    container-id-json.log files

    You could add a path to the container, so that for example /log (container) is written to /tmp (host). And /tmp is located in RAM, so it does not touch your SSD. This would be a similar trick to Plex RAM transcoding.
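
    As a sketch, that mapping is nothing more than this (container path and host path are only examples):

    -v /tmp/container-logs:/log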

     

    Another method would be to define the /log path inside the container as a RAM Disk:

    https://forums.unraid.net/topic/35878-plex-guide-to-moving-transcoding-to-ram/page/12/?tab=comments#comment-894460

     

    Of course "/log" is only an example. You need to check the path where the log files are written to.
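
    As a sketch, the RAM disk variant would go into the container's Extra Parameters roughly like this (path and size are only examples; the size is 100 MB in bytes):

    --mount type=tmpfs,destination=/log,tmpfs-size=104857600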

     

    PS: It could be necessary to rebuild the container (delete / re-add through Apps > Previous Apps), so the content of /log gets deleted (you can't map paths that already exist inside a container).

  5. @ChatNoir

    You posted a screenshot showing that your WD180EDFZ spins down. Now I'm on 6.9.2, too, but nothing works except for the original Ultrastar DC HC550 18TB, which I'm using as my parity disk.

     

    18:38
    [screenshot: disk overview]

    23:06
    [screenshot: disk overview]

     

    As you can see there was no activity on most of the disks, so why isn't Unraid executing the spin-down command?!

     

    Logs (yes, that's all):

    Jun 20 18:27:00 thoth root: Fix Common Problems Version 2021.05.03
    Jun 20 18:27:08 thoth root: Fix Common Problems: Warning: Syslog mirrored to flash ** Ignored
    Jun 20 19:07:04 thoth emhttpd: spinning down /dev/sdg

     

    If I click the spin-down icon it creates a new entry in the logs, and it does so as well if I execute the following command:

    /usr/local/sbin/emcmd cmdSpindown=disk2

     

    So the command itself works flawlessly, but it isn't executed by Unraid.

     

    @limetech What are the conditions before this command gets executed? Does Unraid check the power state before going further? Because these disks have the power state "IDLE_B" all the time. Maybe you'd like to send me the source code, so I can investigate it?
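
    For reference, the current power state can be queried from the terminal like this (the device name is only an example):

    hdparm -C /dev/sdg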

  6. 4 hours ago, TexasUnraid said:

    after a few hours

    Was the terminal open during this time? After closing the terminal, the monitoring process is killed as well.

     

    If you want long-term monitoring, you could add " &" at the end of the command to run it permanently in the background; later you could kill the process with the following command:

    pkill -xc inotifywait

     

    Are you using the docker.img? The command can't monitor file changes inside the docker.img. If you want to monitor them, you need to change the path to "/var/lib/docker".
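
    Putting that together, long-term monitoring of the docker.img contents would look roughly like this (the output file in /tmp is only an example):

    inotifywait -mr /var/lib/docker > /tmp/docker_writes.txt &
    # later, to stop the monitoring:
    pkill -xc inotifywait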

  7. 23 hours ago, TexasUnraid said:

    is there a way to tell which files are being written to by docker?

     

    You could start with this, which returns the 100 most recent files of the docker directory:

    find /mnt/user/system/docker -type f -print0 | xargs -0 stat --format '%Y :%y %n' | sort -nr | cut -d: -f2- | head -n100

     

    Another method would be to log all file changes:

    inotifywait -e create,modify,attrib,moved_from,moved_to --timefmt %c --format '%T %_e %w %f' -mr /mnt/user/system/docker > /mnt/user/system/recent_modified_files_$(date +"%Y%m%d_%H%M%S").txt
    

     

    More about --no-healthcheck and these commands:

    https://forums.unraid.net/bug-reports/stable-releases/683-unnecessary-overwriting-of-json-files-in-dockerimg-every-5-seconds-r1079/?tab=comments#comment-10983

     

  8. Maybe some of you would like to test my script:

    https://forums.unraid.net/topic/106508-force-spindown-script/

     

    It solved multiple issues for me:

    - some disks randomly spin up without any I/O change, which means Unraid does not know that they are spinning; because of that they stay in the IDLE_A state indefinitely and never spin down.

    - some disks randomly return the STANDBY state although they are spinning. This is really crazy.

    - some disks like to be spun down twice to save even more power. I think the second spindown triggers the SATA port's standby state.

     

    Feedback is welcome!
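
    The core idea is a simple sketch like this (disk2 and /dev/sdg are only example names; the script itself covers more cases):

    # spin the disk down again if it does not report standby
    if ! hdparm -C /dev/sdg | grep -qi standby; then
        /usr/local/sbin/emcmd cmdSpindown=disk2
    fi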

  9. 27 minutes ago, boomam said:

    I get that, but considering it was listed as 'resolved' in the 6.9 update - if its still an issue

    The part that is related to Unraid was solved. Nobody can solve the write amplification of BTRFS, and Unraid can't influence how Docker stores status updates. Docker decided to save this data in a file instead of in RAM. This causes writes. Feel free to like / comment on the issue. Maybe it will be solved sooner if the devs see how many people are suffering from their SSDs wearing out.

  10. 13 minutes ago, TexasUnraid said:

    can you explain the HEALTHCHECK?

    https://forums.unraid.net/bug-reports/stable-releases/683-unnecessary-overwriting-of-json-files-in-dockerimg-every-5-seconds-r1079/?tab=comments#comment-10980

     

    13 minutes ago, TexasUnraid said:

    use tips and tweaks

    You can set those values in your go file as well. No plugin necessary. I set only vm.dirty_ratio to 50%:

    [screenshot: vm.dirty_ratio setting in the go file]
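
    In the go file that is a single line (a sketch; 50 is the percentage mentioned above):

    sysctl -w vm.dirty_ratio=50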

     

    30 seconds until writing is okay for me.

  11. 1 hour ago, TexasUnraid said:

    Yeah, thats the theory but when I tested it in the past I didn't see much of a difference

    Ok, the best method is to avoid the writes altogether. That's why I disabled HEALTHCHECK for all my containers.

     

    1 hour ago, TexasUnraid said:

    increased my dirty writes

    You mean the time (vm.dirty_expire_centisecs) until the dirty writes are written to the disk, and not the size?

  12. 7 hours ago, Squid said:

    so that it doesn't cause issues like this

    Is this a "new" feature? Maybe the user had installed a container in the past with a /mnt/cache path, and by that the template was already part of his "Previous Apps" (which bypasses the auto-adjustment of CA). The user said he never had a cache pool, and this problem did not occur until upgrading to Unraid 6.9.

     

    Finally, I suggested that the user edit /shares/sharename.cfg and disable the cache through the "shareUseCache" variable.
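
    The relevant line would then look roughly like this (assuming "no" is the value that disables the cache for the share):

    shareUseCache="no"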