Report Comments posted by szymon
-
Hi, I managed to isolate my issue to a faulty container, as described here.
It seems my problem is not caused by the image file mounted as a loop device. Thank you to everyone who responded, and for your suggestions!
-
26 minutes ago, jonp said:
Hi guys,
Just to confirm @szymon that you too are running in an encrypted cache pool with btrfs, right?
We will make an effort to recreate this in the lab to see what's going on.
No, I'm not running an encrypted SSD cache pool; it's unencrypted. I read that some people have problems running an encrypted SSD pool, so I left it alone.
What is weird, though, is that I had literally zero problems running 6.7.2 for a long time. I only recently decided to encrypt the data array, and this is when the problems started. I just finished encrypting the last disk, and once the parity rebuild is done I will restart the machine to see if that fixes the issue.
For now, I deleted the Docker image and recreated it. I turned all the containers on again, and after a few minutes both CPU and RAM went up to 100% and the read rate from one of the two SSDs went over 200 MB/s until I shut down the Docker service; then everything went back to normal. I am now turning the containers on one by one to see when it crashes. If that doesn't help, I will also try 6.8.0.
-
These Reddit threads are not mine, but they are all recent and all related to "loop2".
I will recreate the Docker image when I'm back at home. I had to disable the reverse proxy and Guacamole, so I cannot access the VM right now. Thanks for the tip!
-
Hi, I have the same problem. RAM gets 100% consumed and all CPUs go to 100% according to the GUI graph. What is weird, though, is that htop does not show full CPU utilization.
One of the two cache SSDs is being read constantly at 200+ MB/s, and unRAID reports a hot-drive warning.
I have an unencrypted SSD cache pool, two 500GB Samsung drives. Running 6.7.2.
iostat is showing that loop2 is responsible for the massive disk read.
The problem goes away once I disable the Docker service entirely. Then, after I start the Docker service with even a single container, it goes back to 100% RAM, 100% CPU, and full disk read speed within a few hours.
I tried pinning all the Docker containers to a single core, but that did not stop the issue; all cores go to 100%.
I don't think this is an isolated case; see the two topics below, which could be linked to the loop2 issue.
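For anyone wanting to reproduce the diagnosis above: the loop2 reads can be inspected with two standard tools. This is a minimal sketch, not an official unRAID procedure; the docker.img path in the comment is an assumption and varies per setup, and iostat requires the sysstat package.

```shell
# Which file backs /dev/loop2? On unRAID this is normally the Docker
# image (e.g. /mnt/user/system/docker/docker.img -- path varies).
losetup -l /dev/loop2 2>/dev/null || echo "loop2 not attached on this host"

# Extended per-device I/O stats for loop2, 3 samples 2 s apart.
# iostat comes from the sysstat package; skip if it is not installed.
if command -v iostat >/dev/null 2>&1; then
  iostat -x loop2 2 3
fi
```

If the r/s and rkB/s columns for loop2 stay pegged while the containers are running, that matches the behaviour described in this thread.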
-
I can confirm the same behaviour on RC4. I do indeed have VMs and Docker containers with assigned IP addresses.
Update: my log fills up to 100% after one day with a flood of these errors. Is there a way to pinpoint which container/VM is responsible?
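One rough way to narrow down which container or VM is doing the heavy reading is to rank host processes by their cumulative read bytes. This is just a sketch under the assumption of a Linux host with /proc mounted (root is needed to see other users' tasks), not a method from this thread:

```shell
# Rank processes by cumulative bytes read from storage, using the
# read_bytes counter in /proc/<pid>/io. Container processes and qemu
# VM processes appear on the host, so the top reader usually points
# at the culprit container or VM.
for p in /proc/[0-9]*; do
  [ -r "$p/io" ] || continue
  rb=$(awk '/^read_bytes/ {print $2}' "$p/io" 2>/dev/null)
  [ -n "$rb" ] || continue
  printf '%s %s (pid %s)\n' "$rb" "$(cat "$p/comm" 2>/dev/null)" "${p#/proc/}"
done | sort -rn | head -10
```

The numbers are cumulative since process start, so run it twice a minute apart and compare to see who is reading right now.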
[6.8.3] docker image huge amount of unnecessary writes on cache
in Stable Releases
Posted · Edited by szymon
I observed the following:
Now testing encrypted XFS.
Update: