Memory leak suspected in 5.6.31-77.0


On our production master servers, we have noticed that mysqld is using more memory than the maximum calculated by mysqltuner. As a result, after three days of activity, the oom-killer kills and restarts the mysqld process. We have also observed that mysqld swaps even when there is enough free memory.

MySQL Version: 5.6.31-77.0
CentOS Version: 7.2.1511
Engine: InnoDB

On one of the servers, the issue was resolved after downgrading to 5.6.28-76.1.

Is there a known memory leak issue in 5.6.31? If not, what information would be relevant to investigate this?

Appreciate any help with this issue.


I would not count on mysqltuner (or any other tool) to compute memory usage; there are a lot of variables in play. Check this blog post.

If you think you have a leak, check the VSZ in the “ps aux” output and see whether it is still growing after a few days. If it is, it could be a memory leak; if not, you might just have memory fragmentation settling in.

Hi Peter,

Thanks for the prompt reply.

I do see a steady increase in VSZ, at a rate of roughly 32 MB every minute. Current status:

ps aux |grep mysqld

mysql 14408 31.7 95.7 144862184 126178464 ? Sl Aug31 509:43 /usr/sbin/mysqld --basedir=/usr --datadir=/data/mysql --plugin-dir=/usr/lib64/mysql/plugin --user=mysql --log-error=/var/log/mysqld.log --open-files-limit=200000 --pid-file=/var/run/mysqld/ --socket=/var/lib/mysql/mysql.sock --port=3306 --innodb-numa-interleave=1

Major changes that we have done recently are as follows:

  1. Enabled NUMA interleaving
  2. Switched to the jemalloc library
  3. Changed the I/O scheduler from cfq to deadline
  4. Started collecting stats using mysqld_exporter for Prometheus

The first three changes were made as part of the Percona Audit recommendations. I have also attached the pt-summary, pt-mysql-summary, and mysqltuner output for your reference. I have omitted some information due to the file size limit.


mysqltuner.txt (5.6 KB)

pt-summary.txt (9.31 KB)

pt-mysql-summary.txt (13.1 KB)

Hi Peter,

We downgraded to Percona Server 5.6.28-76.1 on 06 Sept, and since then the memory utilisation has been steady and stable. Please refer to the attached image.

Any help in investigating this issue is appreciated. Please let me know if you need more info.



Hi Peter,

Just checking: could it be related to this fix introduced in 5.6.32-78.0?

- Fixed MYSQL_SERVER_PUBLIC_KEY connection option memory leak. Bug fixed #1604419.

Thanks,

We see the same issue in 5.6.32-78.0. As a result, we have applied the security fix by modifying “mysqld_safe” instead.

We saw the same issue when applying the patch and taking our version to 5.6.32-78.0. We had to roll back after multiple crashes due to memory issues.

This is exactly the problem I’m having as well. The bug was introduced some time after 5.6.28-76.1: 5.6.31 was already bad, but 5.6.32-78.0 is probably twice as bad. On one of the servers, 5.6.32-78.0 was leaking 100 GB of RAM every 24 hours. I can confirm that 5.6.28-76.1 does not have these issues.

Folks affected by the memory leak - are you using the audit log plugin?

Do you have a large number of tables, perhaps you’re experiencing a similar issue to this one?

Hi Laurynas and HTF1,

Thanks for your responses. We do use the Audit Log Plugin. Is that the root cause?

We have only about 700 tables, and we use FK constraints.
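One way to test the audit-plugin hypothesis (a sketch; `audit_log` is the plugin name Percona's audit plugin registers, and any connection credentials/flags are placeholders) is to confirm the plugin is active, then unload it while watching VSZ:

```shell
#!/bin/sh
# List audit-related plugins and their status on this server.
mysql -e "SELECT PLUGIN_NAME, PLUGIN_STATUS
          FROM information_schema.PLUGINS
          WHERE PLUGIN_NAME LIKE 'audit%';"

# If it shows ACTIVE, unloading it and watching VSZ for a day or two
# would confirm whether the plugin is the source of the growth:
# mysql -e "UNINSTALL PLUGIN audit_log;"
```

Obviously this disables auditing while the test runs, so it may only be acceptable on a non-critical replica.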

Did anybody test the latest version, 5.6.33-79.0?

- Fixed memory leaks in the Audit Log Plugin. Bug fixed #1620152 (upstream #71759).

Thanks,