Report Comments posted by szymon

  1. I observed the following (a measurement sketch follows the list):

    • BTRFS pool, unencrypted, 2 x 500GB SSD. All my containers started, including Plex, Nextcloud, MariaDB, a few -rr containers (sonarr, etc), a few torrent containers, UniFi controller. Cache writes: 47GB / 12h.
    • BTRFS pool, unencrypted, 2 x 500GB SSD. Most of my containers stopped, including Plex, Nextcloud, MariaDB, a few -rr containers (sonarr, etc), a few torrent containers, UniFi controller. Cache writes: 38GB / 12h.
    • XFS single 500GB SSD, unencrypted. All my containers started, including Plex, Nextcloud, MariaDB, a few -rr containers (sonarr, etc), a few torrent containers, UniFi controller. Cache writes: 7GB / 12h.

    Now testing encrypted XFS.

     

    Update:

    • XFS single 500GB SSD, encrypted. All my containers started, including Plex, Nextcloud, MariaDB, a few -rr containers (sonarr, etc), a few torrent containers, UniFi controller. Cache writes: 6.6GB / 12h.
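
    For anyone who wants to reproduce these numbers, here is a minimal sketch of one way to measure a 12h write total from /proc/diskstats (field 10 is sectors written, 512 bytes each; sdb is just a placeholder for your cache device):

        # Snapshot sectors written, wait 12 hours, then diff
        before=$(awk '$3=="sdb" {print $10}' /proc/diskstats)
        sleep 43200   # 12 hours
        after=$(awk '$3=="sdb" {print $10}' /proc/diskstats)
        echo "written: $(( (after - before) * 512 / 1024 / 1024 )) MiB"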
  2. 26 minutes ago, jonp said:

    Hi guys,

     

    Just to confirm @szymon that you too are running in an encrypted cache pool with btrfs, right?

     

    We will make an effort to recreate this in the lab to see what's going on.

    No, I'm not running an encrypted SSD cache pool; it's unencrypted. I read that some people have problems running an encrypted SSD pool, so I left it alone.

    What is weird, though, is that I had literally zero problems running 6.7.2 for a long time. I recently decided to encrypt the data array, and that is when the problems started. I just finished encrypting the last disk, and once the parity rebuild is done I will restart the machine to see if that fixes the issue.

    For now I have deleted the docker image and recreated it. I turned all the dockers on again, and after a few minutes both CPU and RAM went up to 100% and the read rate from one of the two SSDs went over 200MB/s until I shut down the docker service. Then it went back to normal. I am now turning the containers on one by one to see when it crashes, as sketched below. If that doesn't work, I will also try 6.8.0.
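
    One way to watch the containers while re-enabling them one by one: docker stats reports live per-container block I/O, which should point at whichever container is driving the SSD activity.

        # Live per-container CPU, memory and block I/O
        docker stats --format "table {{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}\t{{.BlockIO}}"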

  3. Hi, I have the same problem. RAM gets 100% consumed and all CPUs go to 100% according to the GUI graph. What is weird, though, is that htop does not show full CPU utilization.

    One of the two cache SSDs is being read constantly at 200+ MB/s, and unRAID reports a hot drive error.

    I have an unencrypted SSD cache pool, two 500GB Samsung drives. Running 6.7.2.

     

    iostat shows that loop2 is responsible for the massive disk reads.
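
    Roughly the commands involved (losetup maps a loop device back to the file that backs it, which on my system turns out to be the docker image):

        # Per-device throughput in MB/s, refreshed every 5 seconds
        iostat -dxm 5

        # Show which file backs loop2
        losetup -l /dev/loop2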

     

    The problem goes away once I disable docker for good. But after I start the docker service with just one container, within a few hours it goes back to 100% RAM, 100% CPU, and full disk read speed.

     

    I tried isolating all the dockers to a single core, but it did not stop the issue; all cores still hit 100%.

     

    I don't think this is an isolated case; see the two topics below, which could be linked to the same loop2 issue.

  4. I can confirm the same behaviour on RC4. I do indeed have VMs and dockers with assigned IP addresses.

     

    Update: my log fills up to 100% after one day with a flood of these errors. Is there a way to pinpoint which container/VM is responsible?
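
    Not sure if there is a built-in way, but assuming the flooded log lines mention an IP or MAC address, a sketch like this could map addresses back to running containers and VMs (the virsh part assumes the VMs run under libvirt, as on stock unRAID):

        # List each container with its IP and MAC addresses
        docker ps -q | xargs -r docker inspect \
          --format '{{.Name}}: {{range .NetworkSettings.Networks}}{{.IPAddress}} {{.MacAddress}}{{end}}'

        # List each running VM's network interfaces and MACs
        virsh list --name | xargs -r -n1 virsh domiflist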