High memory usage and OOMkiller

Hello everybody,

We’re running MySQL server (Percona) and we’re experiencing crashes because of the infamous OOM killer. This has been going on for over a year and we’ve tried a lot of things, but we don’t have a definitive solution yet. I’ve read all the topics I could find on the subject, but none of them contained the solution.

Let me explain the situation:
There are multiple servers running our software and all of them experience the same issue. The servers are virtual; they have 4 cores, 16 GB of memory and SSD disks, and they run Apache and PHP alongside MySQL. The memory usage of Apache and PHP is small (about 1 GB); most of the work is done by the MySQL server.
There are 20 to 23 databases on each server, and each database contains about 350 tables (a number that grows with every release). The tables are all InnoDB and the total size varies per database: the smallest is about 360 MB and the largest about 34 GB. The large size is partly due to BLOBs.
The databases are heavily used. There are of course a lot of SELECT queries (nearly all of them using JOINs), but there is also a very high number of INSERTs because of audit logging. Furthermore, triggers are used on most INSERT, UPDATE and DELETE queries.

It is possible to create a stable situation without crashes, but it’s a thin line. When another database is added to the server, it starts crashing again. We’re using Munin for monitoring, and you can see the memory usage building up to the point where memory is full. It then spills over into swap, and when that is exhausted the OOM killer steps in and kills the MySQL server process, so it can start all over again.
Depending on the overall usage (workdays vs. weekends) there might be multiple crashes a day or none at all. A full backup using mysqldump can cause a crash as well when memory usage is already high after a busy day.

So far, we’ve deduced that there is not one single cause for the high memory usage. It seems to be a combination of the number of databases (not so much the size of the data), the number of connections and the workload.
We’ve tried many things and have ended up with a config that uses a very small InnoDB buffer pool: 1 GB with 4 instances. Furthermore, we’ve limited max_connections to 150 and set the thread cache size to 10. A larger thread cache (as calculated by the Percona config generator) also makes memory usage rise quickly. We figure our threads can use a lot of memory because of queries with a lot of JOINs; if all of that memory is kept cached, it eventually fills up the system’s memory.
We’ve also changed the application to use persistent connections, which helps keep memory usage down (but persistent connections can’t always be used, for instance when multiple processes connect to the same database). And finally, we’ve disabled the Performance Schema to save memory as well.
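To summarise, the relevant part of the config currently boils down to the following (the attached my.cnf is the authoritative version):

[mysqld]
# summary of the settings described above – full file is attached
innodb-buffer-pool-size = 1G
innodb-buffer-pool-instances = 4
max-connections = 150
thread-cache-size = 10
performance-schema = OFF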
These settings can keep a server running, but it’s still the thin line I’ve talked about before. When more databases are added (which involves more connections and a higher workload), the crashes start coming again.

We were wondering if somebody has an idea of where all the memory goes. We think it should be possible to run more than 20 databases on one server and maybe even increase the InnoDB buffer pool (although we are not experiencing performance issues at the moment).
Or could it be that this number of databases, connections and this workload is simply the maximum one of these servers can handle, and we’ve reached the limit?
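To give an idea of the numbers involved, a very rough worst-case estimate would be the buffer pool plus the per-connection buffers times max_connections. This is only a back-of-the-envelope sketch (a join buffer can be allocated more than once per query, and InnoDB overhead and internal temporary tables aren’t included), but it gives an order of magnitude:

-- Very rough worst-case estimate: buffer pool plus per-connection buffers.
-- Note: join_buffer_size can be allocated once per joined table, so the
-- real worst case can be higher than this.
SELECT
    @@innodb_buffer_pool_size / 1024 / 1024 / 1024 AS buffer_pool_gb,
    @@max_connections *
        ( @@sort_buffer_size + @@join_buffer_size
        + @@read_buffer_size + @@read_rnd_buffer_size
        + @@thread_stack + @@tmp_table_size )
        / 1024 / 1024 / 1024 AS per_connection_worst_case_gb;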

At the moment we’ve managed to keep one server running for 19 days, I guess because of the quieter holiday period. Because of the long uptime, I can provide some statistics (from ‘SHOW GLOBAL STATUS’; the counters I used are shown below the list):
uptime: 1722760 sec = 19.9 days
selects: 33146297 = 19.2 per sec
inserts: 5152891 = 3 per sec
updates: 4850987 = 2.8 per sec
deletes: 24471 = 0.01 per sec
queries: 97821785 = 56.8 per sec
slow queries: 8196 = 0.005 per sec
connections: 3667483 = 2.12 per sec
threads_created: 784776 = 0.45 per sec
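The per-second rates are simply the raw counters divided by Uptime; roughly, these are the status counters behind the list above:

SHOW GLOBAL STATUS
WHERE Variable_name IN
    ('Uptime', 'Com_select', 'Com_insert', 'Com_update', 'Com_delete',
     'Queries', 'Slow_queries', 'Connections', 'Threads_created');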

I’ve thought about running a binary with debug symbols to see where the memory is going, but these are production machines and since the crashes only occur during busy times, I don’t think this is a good idea. Also, setting up an extra server for load testing doesn’t do the trick, because the exact usage is hard to replicate. We do have some tests that can make the server crash (a large data import with a lot of post-processing produces a lot of queries in a short time and can crash the server with some settings), but that isn’t 100% reliable either.

Attached you’ll find the config, the InnoDB status and the global status. I hope somebody has an idea.

Version: Percona Server 5.6.40-84.0-1.jessie on Debian Jessie

20180814-my-cnf.txt (1.8 KB)

20180814-global-status.csv.txt (12.4 KB)

20180814-innodb-status.txt (11.1 KB)

Hello ChrisB, welcome to the forum.
In the first instance, take a look at this article and see if it has any bearing. You seem to have done a lot of investigation already, but it’s worth checking: [url]https://www.percona.com/blog/2014/04/28/oom-relation-vm-swappiness0-new-kernel/[/url]
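For reference, the kernel setting that article is about is vm.swappiness; if I remember rightly it suggests a small non-zero value instead of 0 on newer kernels, e.g. in /etc/sysctl.conf:

# value suggested for newer kernels instead of 0 (see the article above)
vm.swappiness = 1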

OK, I have a few more things for you to look into … did you definitely identify MySQL as the source of your memory issues?

There’s an article here [url]http://www.oracle.com/technetwork/articles/servers-storage-dev/oom-killer-1911807.html[/url]

Where you have this:

innodb-buffer-pool-size = 1G
innodb-buffer-pool-instances = 4

It’s recommended to have at least 1 GB per buffer pool instance, so with a 1 GB pool the number of instances should be 1.
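In other words, with your current pool size that would simply be:

innodb-buffer-pool-size = 1G
innodb-buffer-pool-instances = 1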

Also, the discussion on this bug implies there might be an issue with unexpected memory growth in some circumstances: MySQL Bugs #68287, High Memory Usage with MySQL 5.6.12 GA in "Development Machine" mode.

See how you get on?

Hello Lorraine, thank you for the reading material.
I’ve read it, but I don’t think these issues apply in our case, except maybe the last one, the MySQL bug. The last person answering there disabled the Performance Schema and saw a huge drop in memory usage. I experienced the same on our servers when the Performance Schema was disabled, but it still doesn’t prevent the occasional peaks and the subsequent OOM killer crashes.

When the OOM killer is invoked, it logs the current memory state to the syslog. From there I can see that the mysqld process is using 15.5 of the 16 GB of available memory:

Aug 14 11:59:45 server2 kernel: [43719069.287891] Out of memory: Kill process 2021 (mysqld) score 953 or sacrifice child
Aug 14 11:59:45 server2 kernel: [43719069.289401] Killed process 2021 (mysqld) total-vm:22850728kB, anon-rss:15923976kB, file-rss:0kB

So based on this, I can positively identify the MySQL server as the source of the memory issues.

I’ll go ahead and change the buffer pool instances, but I don’t expect it to make much of a difference for this problem.