Memory and swap usage on Percona Server 5.7 (MySQL 5.7)

Good day, we have a few servers running Percona Server / MySQL 5.7.

All the servers have quite a lot of memory installed, and one specific server has 128GB of memory.

However, on all the servers we are experiencing the same issue: the MySQL server keeps consuming memory at an increasing rate, at some point starts using swap as well, and eventually the whole server fails due to lack of memory.

The server with 128GB of memory has innodb_buffer_pool_size set to 90GB; another server has 64GB of memory and a 14GB buffer pool, yet both show exactly the same symptoms.

We have been researching this issue for some time now and have also enabled the memory instruments to try to figure out where the memory is going; however, even those stats do not show anything specific using up a lot of memory.
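For anyone following along, once the memory instruments are enabled, the sys schema that ships with MySQL/Percona Server 5.7 gives a quick per-instrument breakdown (as far as I know the view is already sorted by current allocation, descending):

```sql
-- Top allocations currently tracked by the memory instruments.
-- Requires performance_schema memory instruments to be enabled
-- (e.g. performance-schema-instrument='memory/%=ON' in my.cnf).
SELECT event_name, current_count, current_alloc
FROM sys.memory_global_by_current_bytes
LIMIT 15;
```

Note that this only covers memory the server instruments itself; a leak in a plugin or in the allocator can still be invisible here, which matches the symptoms in this thread.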

Swappiness on the servers is set to 1, yet this did not make much of a difference.

At the OS level we tried several methods to track memory usage, but all of it points to the mysqld process.

Can anyone give me some pointers on where else to look, or how to find out what is eating up all the memory? This is getting quite urgent.

I would really appreciate any help.


Do you have a graph of VSZ for the MySQL process? The trend over time can hint at what the problem might be.

Do you have transparent huge pages (THP) enabled? It is the most common cause of surprising memory use.
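On most Linux distributions you can check THP directly from sysfs rather than the MySQL configuration files (a small defensive sketch; the path is the standard kernel interface, but it may be absent on some kernels):

```shell
# Report transparent huge page status; the bracketed value is the
# active mode, e.g. "[always] madvise never" means THP is enabled.
thp_status() {
  f=/sys/kernel/mm/transparent_hugepage/enabled
  if [ -r "$f" ]; then
    cat "$f"
  else
    echo "no THP sysfs interface on this kernel"
  fi
}

thp_status
```

If it reports `[always]`, THP is on regardless of what my.cnf says, since it is a kernel setting, not a MySQL one.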

Check out this blog as well

Hi Peter,

How can I get the graph for the VSZ?

I have read about huge pages; however, there does not seem to be anything enabled in the configuration files.

I have now added a script on the server to plot the memory every 60 seconds.
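For reference, a minimal sketch of such a sampler (the `pidof mysqld` lookup and the `mem.csv` output file are just illustrative choices):

```shell
# sample_once: emit one CSV line "epoch,rss_kib,vsz_kib" for a PID.
# ps reports RSS and VSZ in KiB on Linux.
sample_once() {
  ps -o rss=,vsz= -p "$1" | awk -v ts="$(date +%s)" \
    'NF == 2 { printf "%s,%s,%s\n", ts, $1, $2 }'
}

# Example 60-second loop against the mysqld process:
#   while sleep 60; do sample_once "$(pidof mysqld)" >> mem.csv; done

# Demo: sample the current shell once.
sample_once $$
```

The resulting CSV imports straight into Excel or gnuplot for the kind of graph Peter asked about.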

I will use that data to try and upload a graph for you.

Hi Guys,

I have plotted the memory usage (RSS and VSZ) of the mysqld process throughout the night at 60-second intervals.

Attached is a basic graph (I am not good at creating graphs in Excel :wink: ), so hopefully this will help a bit.

The machine this data was retrieved from has 64GB of physical memory and 32GB of swap space.

The swap space was added because the machine kept running out of memory and falling over completely; the extra swap buys time to restart services.

At the end of the graph, you can see we restarted the mysqld service this morning and the memory dropped immediately.

When I looked at that point in time, only the replication threads were present, in a waiting state, with no other processes running against the database.


Good day, herewith the latest graphs from the server with 64GB of physical memory.

The dip in the graph is where we restarted the mysqld process; from there you can see the memory simply keeps increasing.

So far we have not been able to find any specific cause in the memory instrument data.

Any other suggestions would be appreciated.

Regards (graph attached)

I have the very same problem with 5.6 after the latest update, 5.6.32-78.0: InnoDB only, buffer pool set to 62.5% of RAM, performance_schema=OFF, but mysqld gradually eats all RAM and swap. I've already restarted it twice because of that. Previously I had 5.6.30-76.3 and it wasn't affected, so today I will downgrade to 5.6.31-77.0 to isolate the issue to that particular update.

Hi everyone,

any new suggestions regarding this issue?

Everything is fine after I downgraded Percona Server to 5.6.31-77.0, so the problem is definitely in the 5.6.32-78.0 update. Machiel, I can only suggest you find and try a previous 5.7.* version (< 5.7.14-7, i.e. before August 23, 2016), because it is probably the same regression.

Anyone affected by the issue: are you using the audit log plugin? Can you post your my.cnf files?

Yes! I'm using the audit log for connections:

audit_log_strategy = PERFORMANCE
audit_log_file = /var/log/mysql/audit.log
audit_log_format = JSON
audit_log_policy = LOGINS
audit_log_rotate_on_size = 20971520
audit_log_rotations = 20

Then this looks like the bug in question, which is fixed and will be part of the next server releases.

If someone is experiencing a memory leak without the audit plugin, let us know.

I recently hit this same memory leak and confirmed it's coming from the audit plugin. Based on bug 1620152, it looks like it's fixed in 5.7.15. Are there any estimates of when that may be released?