Slow MySQL / MariaDB writes in Docker/VM



Hi

 

I've had several issues with my databases for about a month now, especially with a 2 GB one. I can't say whether the issue applies to all my Docker containers, since this is the only one that needs real performance, but I would like to figure it out!

 

I first thought it was related to the Unraid version, as I had just updated it, but after a rollback the problem is still there.

 

Write performance on my database is really slow.

 

I first tried with the mysql Docker image,

... then installed MariaDB to test, with no real difference,

... and tuned my MySQL/MariaDB config a little, with no real progress.

 

I benchmarked the same process on my laptop (untuned config, same versions) and on the Unraid server to compare performance.

The process used to run faster on Unraid than on my laptop, but now it is really the opposite.

 

For a fairly long process, the full batch of updates finishes in 11 minutes on my laptop, while on Unraid only 25% is done after 1h30.

Looking at phpMyAdmin stats, the database on Unraid doesn't exceed 6,000 rows updated/hour, whereas my laptop exceeds 100,000 rows/hour.

 

The Unraid CPU barely moves, staying at roughly 50%, and RAM is about 50% free too. MySQL/MariaDB has 4 GB of memory for a 2 GB database. htop shows that the database consumes almost no CPU during the process.
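One thing I plan to check while the slow job is running is whether the cache devices themselves are saturated, since an idle CPU with slow writes usually points at storage latency. A rough sketch, assuming iostat from the sysstat package is available on the server (the device names are placeholders):

# watch the cache pool members every 5 seconds during the slow updates;
# high %util and w_await with an idle CPU would point at the storage, not at MySQL
iostat -x 5 /dev/sdX /dev/sdY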

 

So I would really appreciate some help.

 

I can share diagnostics and whatever else you would need.

My Docker data is on SSD only.

The cache is made of two 4 TB SSDs in a btrfs pool with replication.

The issue is the same on Unraid 6.12.4, 6.12.5 and 6.12.6.
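A quick raw test I can also run is sync-write latency directly on the cache pool, since database commits are mostly small fsync'd writes. A minimal sketch (path, block size and count are just assumptions for illustration):

# 500 x 16 KB writes, forcing a sync after each block, straight onto the cache pool
dd if=/dev/zero of=/mnt/cache/dd-sync-test bs=16k count=500 oflag=dsync
rm /mnt/cache/dd-sync-test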

 

Many thanks

 

 

Tuned MariaDB config

[mysqld]
# logging
log_error=/var/lib/mysql/sn-1f0ce8-error.log
slow_query_log=on
long_query_time=2

# buffer pool sized above the ~2 GB database; redo log size
innodb_buffer_pool_size=4G
innodb_log_file_size=150M

# file handle and table cache limits
open_files_limit=40960
table_open_cache=12000
table_open_cache_instances=64

# aggressive dirty-page flushing
innodb_lru_scan_depth=100
innodb_flush_neighbors=2
innodb_max_dirty_pages_pct_lwm=1
innodb_max_dirty_pages_pct=1

# I/O capacity and thread counts
innodb_io_capacity = 2000
innodb_read_io_threads = 64
innodb_thread_concurrency = 0
innodb_write_io_threads = 64
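For what it's worth, I also want to confirm the container really picked up this tuned config rather than the image defaults. A minimal sketch, assuming the container is simply named mariadb (the name is an assumption):

# check the live values the server is actually using
docker exec -it mariadb mariadb -p -e "SHOW VARIABLES LIKE 'innodb_buffer_pool_size'; SHOW VARIABLES LIKE 'innodb_flush_log_at_trx_commit';"

innodb_flush_log_at_trx_commit defaults to 1 (an fsync on every commit), so slow fsyncs on the pool would hit exactly this kind of row-by-row update workload.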

 

 

 

htop.png

unraid-diagnostics-20240123-1711.zip


Thank you for your advice.

My SSDs have 72 GB of SLC cache, which looks like enough for my usage. There isn't a lot of data exchange apart from this 2 GB database. If I run a copy test from the array to the cache, the speed is there, and virtual machines don't seem abnormally slow when running.

 

You made me discover exclusive shares, which I'm currently trying, and I will change the Docker image & appdata paths to /mnt/cache instead of /mnt/user, even though system & appdata are already on the cache only.
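For reference, the change itself is only in the container's volume mapping; a minimal sketch with a hypothetical container name, image tag and paths:

# bind appdata straight from the cache pool instead of going through the /mnt/user FUSE layer
docker run -d --name mariadb \
  -v /mnt/cache/appdata/mariadb:/var/lib/mysql \
  -e MARIADB_ROOT_PASSWORD=changeme \
  mariadb:10.11

The point is simply /mnt/cache/... instead of /mnt/user/..., which bypasses the user-share (shfs) overhead for the database files.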

 


Working on it.

I just forgot that I have Grafana & Node Exporter, so I can give some more data on what is happening.

Just look at this.

https://snapshots.raintank.io/dashboard/snapshot/hBk9MFi1HPVOY6cIDXRJJl10LOIE7Zo5

 

The long database process ran most of the time, but from 17h37m59s to 17h41m37s there was a period of short calculations and fast writes.

Each row took less than 0.2 s to process instead of 6 to 10 seconds, and during that window the CPU was well utilised.

Right after, everything went back to how it has been these last days, with long writes.
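For anyone who wants to look at the same thing from Node Exporter data, a rough Grafana query for average write latency per device would be something like this (standard node_exporter metric names; this is an assumption about the panel, not necessarily the exact one in the snapshot):

# seconds spent per completed write, averaged over 5 minutes
rate(node_disk_write_time_seconds_total[5m]) / rate(node_disk_writes_completed_total[5m])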

 

If you see something relevant there...


Capture d'écran 2024-01-25 110215.png

 

Capture d'écran 2024-01-25 110346.png

Capture d'écran 2024-01-25 110944.png


Hi,

So I ran the benchmark this weekend.

 

I kept the current Dockerized configuration (website in a Docker container & SQL in a Docker container, referred to as DB/Docker below) to compare with two others:

- website in a Docker container & SQL in an Ubuntu VM, named DB/VM

- website in a Docker container & SQL on a dedicated, non-virtualized Windows machine separate from Unraid (1.5 GHz, 8 GB RAM, nothing powerful, with an untuned, most basic SQL config), named DB/Dedicated

 

db-bench.png

 

The results are poor for DB/Docker, of course, but they are poor for DB/VM too.

They are great for DB/Dedicated, apparently as they were before December.

 

I had planned to rebuild my docker.img before this benchmark, but I'm not sure that would be a fix since the VM is as slow as Docker...

What do you think?

 

It actually looks like a cache pool performance issue, maybe like this one:

 

I should maybe work on my cache pool. I'm trying to drop the pool replication.
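From the command line, dropping the replication would be a btrfs balance with converted profiles; a rough sketch for illustration only (on Unraid this is normally done from the pool's balance settings in the GUI):

# convert data and metadata from raid1 to single, i.e. drop the redundancy
btrfs balance start -dconvert=single -mconvert=single /mnt/cache
btrfs filesystem df /mnt/cache    # verify the new profiles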

 

Thanks

 

 

  • Solution

So, I finally changed my SSDs, replacing my two Samsung QVOs (btrfs pool) with a single Samsung EVO (xfs), and everything is back to normal.

I first tried changing the cache pool to RAID0, RAID1 and single, with no real difference.

I then moved the whole cache pool onto the array and observed a really slight improvement.

 

After replacing the drives, all the job times returned to normal values.

In conclusion, I don't know if it was a hardware issue or a file system issue; I will run the SSDs through Samsung Magician to find out.
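If anyone wants to reproduce the comparison, the idea would be a sustained small-block fsync'd write run, long enough to blow past a QVO's SLC cache; a minimal sketch, assuming fio is available (path, size and duration are placeholders):

# simulate database-style writes: 16 KB random writes with an fsync after every write, for 5 minutes
fio --name=dbsim --filename=/mnt/cache/fio-test --size=8G --rw=randwrite \
    --bs=16k --fsync=1 --runtime=300 --time_based --ioengine=psync
rm /mnt/cache/fio-test

If throughput collapses partway through the run, it points at the drive (SLC cache exhaustion); if it is slow from the very start only on btrfs, the file system is the better suspect.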

 

Thanks

db-bench2.png
