Report Comments posted by mgutt
-
-
24 minutes ago, TexasUnraid said:
The constant updates and logging that can not be disabled, adjusted or tweaked in any way causes quite a bit of writes unnecessarily
Are they writing to random folders, or why can't this be tweaked?
-
docker ps --no-trunc
-
18 hours ago, TexasUnraid said:
Is there any downside to the no-healthcheck?
Some people use external tools to auto-check the running state of Docker containers. A healthcheck usually runs a simple "heartbeat" shell script to verify the container is not only in the running state, but actually working. In Unraid you can see this state in the Docker container overview as well. But ultimately I think this is only important for business use, where you really need to know that everything works as expected.
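As a rough sketch, such a "heartbeat" is nothing more than a tiny probe command whose exit code gets mapped to healthy/unhealthy, the same way Docker interprets a HEALTHCHECK (the probe command itself is a placeholder here):

```shell
# Minimal sketch of a healthcheck heartbeat: run a probe command and map
# its exit code the way Docker does (0 = healthy, non-zero = unhealthy).
# A real probe would be e.g. a curl against the app's port (an assumption).
probe() {
  if "$@" >/dev/null 2>&1; then
    echo healthy
  else
    echo unhealthy
  fi
}

probe true    # prints "healthy"
probe false   # prints "unhealthy"
```

Disabling the healthcheck simply skips running such a probe every interval, which is where the periodic writes come from.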
-
2 hours ago, TexasUnraid said:
The way I understood it is that it is simply a buffer for when the logging driver is overloaded so that it can catch back up.
Yes, I think you are right. What about this:
--log-opt mode=non-blocking --log-opt max-buffer-size=1K
Maybe it "loses" the logs because of this 😅
When the buffer is full and a new message is enqueued, the oldest message in memory is dropped. Dropping messages is often preferred to blocking the log-writing process of an application.
Or maybe it's possible to create empty log files?!
--log-opt max-size=0k
2 hours ago, TexasUnraid said: via TCP connection.
Sounds like a lot of overhead.
-
4 hours ago, TexasUnraid said:
I have no idea what these json.log files are though,
You need to be more specific. Are you talking about these files?
/var/lib/docker/containers/*/hostconfig.json is updated every 5 seconds with the same content.
/var/lib/docker/containers/*/config.v2.json is updated every 5 seconds with the same content except for some timestamps (which shouldn't be part of a config file, I think).
Writes to them can be disabled through --no-healthcheck
If not, what is the content of the files you mean?
EDIT: Ok, it seems these are the general logs:
https://stackoverflow.com/questions/31829587/docker-container-logs-taking-all-my-disk-space
Docker offers different options to influence the logs:
https://docs.docker.com/config/containers/logging/configure/
As an example this should disable the logs:
--log-driver none
Another idea could be to raise the buffer so it collects a large amount of logs before writing them to the SSD:
--log-opt max-buffer-size=32m
Another interesting thing is this description:
local: Logs are stored in a custom format designed for minimal overhead.
json-file: The logs are formatted as JSON. This is the default logging driver for Docker.
So "local" seems to produce smaller log files?!
Another possible option is "syslog", which writes the logs to the host's syslog (located in RAM) instead of a JSON file:
syslog: Writes logging messages to the syslog facility. The syslog daemon must be running on the host machine.
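Put together, these options would look roughly like the following as "Extra Parameters" in the Unraid template (or as docker run flags); IMAGE is a placeholder for any container image:

```shell
# Hypothetical examples, one option per run:
docker run --log-driver none IMAGE                    # no container logs at all
docker run --log-driver local \
  --log-opt max-size=1m --log-opt max-file=2 IMAGE    # compact format, rotated
docker run --log-driver syslog IMAGE                  # hand logs to the host's syslog daemon
```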
Happy testing
-
1 hour ago, TexasUnraid said:
/var/lib/docker
If you are using the docker.img, but not if you are using the folder.
1 hour ago, TexasUnraid said: container-id-json.log files
You could add a path to the container so that, for example, /log (container) is written to /tmp (host). /tmp is located in RAM, so it does not touch your SSD. This would be a similar trick to Plex RAM transcoding.
Another method would be to define the /log path inside the container as a RAM Disk:
Of course "/log" is only an example. You need to check the path where the log files are written to.
PS: It could be necessary to rebuild the container (delete / add through Apps > Previous Apps) so the content of /log is deleted (you can't add paths that already exist in a container).
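The RAM disk variant can be sketched like this ("/log" and the 64m size limit are assumptions; you need to check where your container actually writes its logs):

```shell
# Hypothetical: mount a RAM-backed tmpfs at the container's log path, so
# those writes never reach the SSD. IMAGE is a placeholder.
docker run --mount type=tmpfs,destination=/log,tmpfs-size=64m IMAGE
```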
-
-
1 hour ago, ChatNoir said:
I can confirm that all my HDDs do spin down and stay down properly
Are you using an HBA controller? I'm using onboard SATA ports.
PS Re-booted in Safe Mode. Same behaviour. 😭
-
You posted a screenshot showing that your WD180EDFZ spins down. Now I'm on 6.9.2, too, but nothing works except for the original Ultrastar DC HC550 18TB which I'm using as my parity disk.
[Screenshots of the disk overview at 18:38 and 23:06]
As you can see there was no activity on most of the disks, so why isn't Unraid executing the spin-down command?!
Logs (yes, that's all):
Jun 20 18:27:00 thoth root: Fix Common Problems Version 2021.05.03
Jun 20 18:27:08 thoth root: Fix Common Problems: Warning: Syslog mirrored to flash ** Ignored
Jun 20 19:07:04 thoth emhttpd: spinning down /dev/sdg
If I click the spindown icon it creates a new entry in the logs. And it creates it as well if I execute the following command:
/usr/local/sbin/emcmd cmdSpindown=disk2
So the command itself works flawlessly, but it isn't executed by Unraid.
@limetech What are the conditions before this command gets executed? Does Unraid check the power state before going further? These disks have the power state "IDLE_B" all the time. Maybe you'd like to send me the source code so I can investigate?
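For anyone who wants to reproduce this, the manual checks look roughly like this (device and disk names are examples from this thread):

```shell
# Query the drive's reported power state, then force the spin-down the
# same way the WebGUI icon does:
hdparm -C /dev/sdg                        # reports "active/idle" or "standby"
smartctl -n standby -i /dev/sdg           # query SMART without waking a sleeping disk
/usr/local/sbin/emcmd cmdSpindown=disk2   # ask emhttpd to spin down disk2
```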
-
4 hours ago, TexasUnraid said:
after a few hours
Was the terminal open during this time? After closing the terminal, the watch process is killed as well.
If you want long-term monitoring, you could add " &" at the end of the command to keep it running in the background, and later you could kill the process with the following command:
pkill -xc inotifywait
Are you using the docker.img? The command can't monitor file changes inside the docker.img. If you want to monitor them, you need to change the path to "/var/lib/docker".
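The background pattern looks roughly like this (paths and event list are examples; nohup additionally detaches the watcher from the terminal session):

```shell
# Run the watcher detached so it survives closing the terminal, then stop
# it later by exact process name:
nohup inotifywait -mr -e modify /var/lib/docker >> /tmp/docker_writes.txt 2>/dev/null &
pkill -xc inotifywait   # -x matches the exact process name, -c prints how many were killed
```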
-
23 hours ago, TexasUnraid said:
is there a way to tell which files are being written to by docker?
You could start with this, which returns the 100 most recent files of the docker directory:
find /mnt/user/system/docker -type f -print0 | xargs -0 stat --format '%Y :%y %n' | sort -nr | cut -d: -f2- | head -n100
Another method would be to log all file changes:
inotifywait -e create,modify,attrib,moved_from,moved_to --timefmt %c --format '%T %_e %w %f' -mr /mnt/user/system/docker > /mnt/user/system/recent_modified_files_$(date +"%Y%m%d_%H%M%S").txt
More about --no-healthcheck and these commands:
-
14 hours ago, TexasUnraid said:
XFS = ~20-25gb/day
BTRFS single drive = 75-85gb/day
Both folder or docker.img?
-
Maybe some of you like to test my script:
https://forums.unraid.net/topic/106508-force-spindown-script/
It solved multiple issues for me:
- Some disks randomly spin up without any I/O change, which means Unraid does not know they are spinning; as a result they stay in the IDLE_A state indefinitely and never spin down.
- Some disks randomly return the STANDBY state although they are spinning. This is really crazy.
- Some disks like to spin down twice to save even more power; I think the second spin-down triggers the SATA port's standby state.
Feedback is welcome!
-
27 minutes ago, boomam said:
I get that, but considering it was listed as 'resolved' in the 6.9 update - if its still an issue
The part that is related to Unraid was solved. Nobody can solve the write amplification of BTRFS, and Unraid can't influence how Docker stores status updates. Docker decided to save this data in a file instead of RAM, and this causes writes. Feel free to like / comment on the issue. Maybe it will be solved sooner if the devs see how many people are suffering from worn-out SSDs.
-
35 minutes ago, TexasUnraid said:
Well when you have almost 100gb of dockers is might not be so fast lol.
Which container has a size of more than 1GB?!
And most of them re-use the same packages. 100GB is really crazy ^^
-
-
25 minutes ago, TexasUnraid said:
without having to rebuild it correct?
I don't know, but who cares? It contains only docker related files. Rebuilding is done fast.
-
-
6 hours ago, TexasUnraid said:
appears to be higher writes then xfs
Which is absolutely normal as BTRFS is a copy-on-write filesystem with huge write amplification.
-
3 hours ago, boomam said:
I'm amazed that after being 'fixed' in 6.9, that this is still an issue.
So is the new advice now to 'fix' by using Docker folder paths instead of IMG?- at least until the next 'fix' comes around?
This is something which needs to be solved by docker:
-
-
13 minutes ago, TexasUnraid said:
can you explain the HEALTHCHECK?
13 minutes ago, TexasUnraid said: use tips and tweaks
You can set those values with your go file as well; no plugin necessary. I only set vm.dirty_ratio to 50%:
30 seconds until writing is okay for me.
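A sketch of the relevant lines in the go file (the 50% value follows the post; vm.dirty_expire_centisecs=3000 is the 30-second default being referred to, shown here only for illustration):

```shell
# Added to /boot/config/go, executed at boot:
sysctl vm.dirty_ratio=50                # allow up to 50% of RAM to hold dirty pages
sysctl vm.dirty_expire_centisecs=3000   # flush dirty pages after 30 seconds
```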
-
1 hour ago, TexasUnraid said:
Yeah, thats the theory but when I tested it in the past I didn't see much of a difference
Ok, the best method is to avoid the writes entirely. That's why I disabled HEALTHCHECK for all my containers.
1 hour ago, TexasUnraid said: increased my dirty writes
You mean the time (vm.dirty_expire_centisecs) until the dirty writes are written to the disk and not the size?
-
Just now, TexasUnraid said:
Some had success changing from a docker image to a docker folder,
This will "solve it" as only the individual (and tiny) files are updated and not "huge" parts of the docker.img file.
-
On 4/12/2021 at 5:27 PM, ds679 said:
After looking at the logs - I would never see spindown requests?
Funnily enough, I have exactly the same issue after upgrading from a 12TB Ultrastar to an 18TB White Label, which is why I wrote this script:
https://forums.unraid.net/topic/106508-force-spindown-script/
But I haven't been able to test 6.9.x yet. Maybe it does not happen there?!
-
-
On 4/13/2021 at 12:26 AM, Xav said:
A few days later, obviously the same problem again 😞
On 4/16/2021 at 10:55 PM, ben2000de said: The problem is still present for me in 6.9.2
Do the disks spin down after they have woken up, or do they stay spinning?
-
7 hours ago, Squid said:
so that it doesn't cause issues like this
Is this a "new" feature? Maybe the user installed a container in the past with a /mnt/cache path, so the template was already part of his "previous apps" (which bypasses the auto-adjustment of CA). The user said he never had a cache pool, and this problem did not occur until upgrading to Unraid 6.9.
In the end I suggested that the user edit the /shares/sharename.cfg file and disable the cache through the "shareUseCache" variable.
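That edit can be sketched as a one-liner, assuming the share config is a plain key=value file with a shareUseCache="yes|no" line (the exact file location depends on your Unraid install):

```shell
# Sketch: disable the cache for a share by rewriting its shareUseCache line.
disable_share_cache() {
  # $1 = path to the share's .cfg file (location is an assumption)
  sed -i 's/^shareUseCache=.*/shareUseCache="no"/' "$1"
}
```

Usage would be something like `disable_share_cache /boot/config/shares/appdata.cfg` ("appdata" being a hypothetical share name).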
[6.8.3] docker image huge amount of unnecessary writes on cache
in Stable Releases
Posted
And why isn't it possible to move them to the RAM with --mount type=tmpfs,destination=/logs or similar?