Hi everybody,
We have the same memory problem, with MySQL eating up all the memory and beginning to swap until the OOM killer kicks in.
We have 3 nodes running Percona XtraDB Cluster 5.7.17-29.20.3 on CentOS 7.3.1611, each with 4 GB RAM (VMs on vSphere 6). The VMs are dedicated to MySQL, with the only exceptions being the PMM agent processes and OS overhead.
Each VM has:
8 vCPUs, 4 GB RAM and 1.5 GB of swap;
Transparent Huge Pages disabled;
jemalloc instead of glibc malloc (see the verification sketch just below).
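In case it helps anyone compare setups, here is a minimal per-node sanity check (Python 3; the paths assume a stock CentOS 7 layout and that `pidof` is available, adjust if yours differs) that confirms THP is really disabled and that mysqld actually picked up jemalloc:

```python
#!/usr/bin/env python3
"""Per-node sanity check: is THP really disabled, and did mysqld actually
pick up jemalloc? Paths assume a stock CentOS 7 layout and that `pidof`
is available; adjust if your install differs."""

import subprocess

def thp_status():
    # The active setting is shown in brackets, e.g. "always madvise [never]"
    with open("/sys/kernel/mm/transparent_hugepage/enabled") as f:
        return f.read().strip()

def mysqld_pids():
    # pidof prints space-separated PIDs and exits non-zero when none are found
    out = subprocess.run(["pidof", "mysqld"], capture_output=True, text=True)
    return out.stdout.split()

def uses_jemalloc(pid):
    # If jemalloc was preloaded, libjemalloc shows up in the process memory maps
    # (reading /proc/<pid>/maps needs root or the mysql user)
    with open("/proc/{}/maps".format(pid)) as f:
        return "libjemalloc" in f.read()

if __name__ == "__main__":
    print("THP:", thp_status())
    for pid in mysqld_pids():
        print("mysqld pid {}: jemalloc loaded = {}".format(pid, uses_jemalloc(pid)))
```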
Here are:
the conf file;
a screenshot of the top output after 3 hours of complete inactivity;
a screenshot of the PMM dashboard showing the MySQL internal memory overview.
We were planning to go into production, but after this test we are no longer so confident…
We have the same issue with a 3-node cluster. I was using the previous version, and today I updated to the latest, but the problem is still there.
The memory consumption increases constantly, and then swap usage does too, until the OOM killer kills the mysqld process.
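To show the growth independently of PMM, below is a rough sketch (Python 3; the one-minute interval and the use of `pidof` are arbitrary choices on my part) that logs the mysqld resident set size so you can watch it climb alongside the dashboards:

```python
#!/usr/bin/env python3
"""Minimal sketch to log mysqld resident memory over time, to confirm the
steady growth described above. Interval and output format are arbitrary;
run it alongside PMM and compare the curves."""

import subprocess
import time

def mysqld_rss_kib():
    """Sum VmRSS (in KiB) across all mysqld processes."""
    pids = subprocess.run(["pidof", "mysqld"],
                          capture_output=True, text=True).stdout.split()
    total = 0
    for pid in pids:
        with open("/proc/{}/status".format(pid)) as f:
            for line in f:
                if line.startswith("VmRSS:"):
                    total += int(line.split()[1])  # value is reported in kB
    return total

if __name__ == "__main__":
    while True:
        print(time.strftime("%H:%M:%S"), " mysqld RSS:", mysqld_rss_kib(), "KiB", flush=True)
        time.sleep(60)
```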
We found another problem: updating to version 5.7.17-29.20 does not update the server version. The package version is correct, but when checking the MySQL version, it still shows the old one.
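One way to double-check that mismatch is to compare what rpm installed with what the running server reports. A small sketch below; the package name `Percona-XtraDB-Cluster-server-57` and the use of the local mysql client (credentials e.g. in ~/.my.cnf) are assumptions, adjust them to your install:

```python
#!/usr/bin/env python3
"""Compare the installed RPM version with the version the running server
reports. Package name and local client access are assumptions."""

import subprocess

def rpm_version(package="Percona-XtraDB-Cluster-server-57"):
    # Ask rpm for version-release of the installed package
    out = subprocess.run(["rpm", "-q", "--qf", "%{VERSION}-%{RELEASE}", package],
                         capture_output=True, text=True)
    return out.stdout.strip()

def server_version():
    # Ask the running server what it thinks it is (-N suppresses column headers)
    out = subprocess.run(["mysql", "-N", "-e", "SELECT VERSION()"],
                         capture_output=True, text=True)
    return out.stdout.strip()

if __name__ == "__main__":
    print("package :", rpm_version())
    print("server  :", server_version())
```

If the two disagree after a yum update, the server binary most likely has not been restarted (or the wrong binary is still on the PATH), which matches what we are seeing.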
I’ve been looking into this, and I believe that this issue occurs because of an interaction between the performance schema and thread-pooling. Since PMM polls the server frequently, this leak is more easily seen with PMM. (It’s not a leak exactly since the memory does get freed up when the server exits).
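For anyone who wants to see which internal allocations are growing, the 5.7 Performance Schema memory instrumentation can be queried directly. A rough sketch below; it assumes the pymysql driver and example credentials, and note that most memory/% instruments are disabled by default and only report allocations made after they are enabled in setup_instruments:

```python
#!/usr/bin/env python3
"""Rough sketch: list the biggest current memory consumers as seen by the
5.7 Performance Schema. Assumes pymysql and example credentials; any MySQL
driver would do. Most memory/% instruments must be enabled first."""

import pymysql

QUERY = """
SELECT event_name, current_number_of_bytes_used
FROM performance_schema.memory_summary_global_by_event_name
ORDER BY current_number_of_bytes_used DESC
LIMIT 15
"""

def top_memory_events(host="127.0.0.1", user="pmm", password="secret"):
    conn = pymysql.connect(host=host, user=user, password=password)
    try:
        with conn.cursor() as cur:
            cur.execute(QUERY)
            return cur.fetchall()
    finally:
        conn.close()

if __name__ == "__main__":
    for event, bytes_used in top_memory_events():
        print("{:>15}  {}".format(bytes_used, event))
```

Running this periodically (or comparing snapshots a few hours apart) should show whether the growth is attributed to performance_schema itself or to some other instrumented allocation.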