MySQL memory usage doubles after a few days

I run three main tables in ENGINE=MEMORY for speed. They are read only: they are built when MySQL starts and never need to be written to afterwards.

There are other, on-disk tables for things such as user preferences and forum postings.

This all works fine and I am happy with the speed. But after a few days the amount of memory used by MySQL doubles, to the point that some of the swap file is used.

This is what mysqltuner reports shortly after starting:

-------- General Statistics --------------------------------------------------
[--] Skipped version check for MySQLTuner script
[OK] Currently running supported MySQL version 5.5.9-log
[OK] Operating on 64-bit architecture

-------- Storage Engine Statistics -------------------------------------------
[--] Status: -Archive -BDB -Federated +InnoDB -ISAM -NDBCluster
[--] Data in MyISAM tables: 389M (Tables: 180)
[--] Data in PERFORMANCE_SCHEMA tables: 0B (Tables: 17)
[--] Data in MEMORY tables: 1G (Tables: 3)
[!!] InnoDB is enabled but isn't being used
[!!] Total fragmented tables: 6

-------- Security Recommendations -------------------------------------------
[OK] All database users have passwords assigned

-------- Performance Metrics -------------------------------------------------
[--] Up for: 56m 45s (106K q [31.295 qps], 2K conn, TX: 1B, RX: 30M)
[--] Reads / Writes: 90% / 10%
[--] Total buffers: 581.0M global + 8.8M per thread (28 max threads)
[OK] Maximum possible memory usage: 827.8M (10% of installed RAM)

This shows that the three MEMORY tables are taking 1 GB of memory and that MySQL could take a further 827.8M on top of that (mysqltuner's "maximum possible" figure covers only buffers, not the MEMORY tables). Add indexes to that and I guess the total memory usage for MySQL comes to around 2 GB.
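As a sanity check, the arithmetic behind mysqltuner's figure can be reproduced. This is only a back-of-envelope sketch: the numbers are taken from the output above, and the MEMORY tables are added separately because they are not part of the "maximum possible" calculation.

```python
# Rough worst-case memory footprint, using mysqltuner's own inputs.
global_buffers_mb = 581.0     # key buffer, query cache, etc. (global)
per_thread_mb = 8.8           # sort/read/join buffers per connection
max_connections = 28
memory_tables_mb = 1024.0     # the three ENGINE=MEMORY tables (~1G)

# mysqltuner's "maximum possible memory usage" is roughly this sum.
max_mysql_mb = global_buffers_mb + per_thread_mb * max_connections
print(round(max_mysql_mb, 1))       # 827.4, close to mysqltuner's 827.8M

# The MEMORY tables come on top of the buffer total.
total_mb = max_mysql_mb + memory_tables_mb
print(round(total_mb / 1024, 1))    # ~1.8 GB before index overhead
```

That lines up with the roughly 2 GB seen in `top` shortly after startup.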

If I run top I see this to be the case:

Tasks: 168 total, 1 running, 167 sleeping, 0 stopped, 0 zombie
Cpu(s): 1.5%us, 0.5%sy, 0.0%ni, 97.5%id, 0.2%wa, 0.0%hi, 0.3%si, 0.0%st
Mem: 8164764k total, 5101708k used, 3063056k free, 38072k buffers
Swap: 4096564k total, 19948k used, 4076616k free, 2544960k cached

3001 mysql 15 0 2269m 1.9g 5392 S 6.0 24.3 1:57.47 mysqld

Each day the server is updated with light forum postings and a low volume of user registrations / preference changes - pretty basic stuff.

In the evening the main MEMORY tables are updated with new results. This is done by dropping each table and re-importing a .DBF file with dbf2mysql.
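The nightly refresh amounts to something like the following. This is only a hypothetical sketch: the table and column names are invented, and dbf2mysql performs the equivalent steps itself.

```sql
-- Hypothetical sketch of what the nightly dbf2mysql run does;
-- MEM_results and its columns are made-up stand-ins.
DROP TABLE IF EXISTS MEM_results;
CREATE TABLE MEM_results (
    runner VARCHAR(64),
    sp     DECIMAL(6,2),
    placed TINYINT
) ENGINE=MEMORY;
-- ...followed by a bulk INSERT of every row from the .DBF export.
```

One possible contributor to the creeping VIRT figure: dropping a MEMORY table frees its heap inside mysqld, but the process allocator does not necessarily hand that memory back to the operating system.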

All works well, but after a few days the amount of memory used by MySQL more than doubles, to 5 GB.

I cannot pinpoint exactly why this happens. It feels like a memory leak: memory is used but gradually not returned to the pool. MySQL's footprint grows until the OS no longer has enough for caching and other applications, and it dips into swap.

If I restart MySQL all is fine again until, a few days later, it hits 5 GB again. :confused:

This is the memory usage after 33 hours:

Tasks: 165 total, 1 running, 164 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.2%us, 0.1%sy, 0.0%ni, 98.9%id, 0.1%wa, 0.0%hi, 0.7%si, 0.0%st
Mem: 8164764k total, 4414124k used, 3750640k free, 85984k buffers
Swap: 4096564k total, 18676k used, 4077888k free, 1136948k cached

3001 mysql 15 0 3036m 2.5g 5576 S 0.7 32.6 51:14.36 mysqld

Notice that MySQL's resident memory has gone up from 1.9g to 2.5g, so it is clearly increasing each day.

As for what is causing this, I do not know. It could be that I create a lot of temporary tables for faster processing; maybe the temp tables are not being closed properly and their memory is not being returned?
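Two things are easy to check here from the mysql client (a sketch, not a diagnosis): the server's temporary-table counters, and the current footprint of the MEMORY tables themselves.

```sql
-- How many implicit/explicit temporary tables have been created
-- since startup, and how many of them spilled to disk.
SHOW GLOBAL STATUS LIKE 'Created_tmp%';

-- Current size of every MEMORY-engine table.
SELECT table_name,
       ROUND((data_length + index_length) / 1024 / 1024) AS size_mb
FROM   information_schema.tables
WHERE  engine = 'MEMORY';
```

Running the second query just after startup and again a day later would show whether the MEMORY tables themselves are what is growing.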

Here’s the mysqltuner output after 33 hours:

[–] Up for: 1d 8h 57m 18s (3M q [26.067 qps], 69K conn, TX: 33B, RX: 925M)
[–] Reads / Writes: 83% / 17%
[–] Total buffers: 581.0M global + 8.8M per thread (28 max threads)
[OK] Maximum possible memory usage: 827.8M (10% of installed RAM)
[OK] Slow queries: 0% (15/3M)
[OK] Highest usage of available connections: 28% (8/28)
[OK] Key buffer size / total MyISAM indexes: 200.0M/186.3M
[OK] Key buffer hit rate: 96.1% (14M cached / 571K reads)
[!!] Query cache efficiency: 0.0% (0 cached / 2M selects)
[OK] Query cache prunes per day: 0
[OK] Sorts requiring temporary tables: 2% (5K temp sorts / 272K sorts)
[OK] Temporary tables created on disk: 4% (11K on disk / 244K total)
[OK] Thread cache hit rate: 99% (8 created / 69K connections)
[!!] Table cache hit rate: 1% (432 open / 34K opened)
[OK] Open file limit used: 49% (613/1K)
[OK] Table locks acquired immediately: 99% (2M immediate / 2M locks)

See that 244K temp tables were created. Don't worry about the ones created on disk, as I use a tmpfs ramdisk for those.

I don't know if it is connected, but creating a lot of temporary tables seems to affect the table cache. With so many tables being opened, the opened count is huge, and thus the table cache shows just a 1% hit rate. I have tried values as high as table_cache = 10000, but there is no performance gain, and it seems a silly figure for only 180 actual tables.

Why so many temp tables opened? Because I found that when processing reports in PHP it helps to create a sub-table from the master table and let the PHP code work on that. It has worked well like this, but I don't know if it is what causes MySQL's memory to be gradually consumed.

And here is the top output after 4 days 2 hours:

Tasks: 172 total, 1 running, 170 sleeping, 0 stopped, 1 zombie
Cpu(s): 0.2%us, 0.0%sy, 0.0%ni, 99.5%id, 0.0%wa, 0.0%hi, 0.3%si, 0.0%st
Mem: 8164764k total, 5186300k used, 2978464k free, 7144k buffers
Swap: 4096564k total, 2189968k used, 1906596k free, 1669576k cached

3001 mysql 15 0 5404m 2.9g 4100 S 0.7 36.8 144:08.01 mysqld

It is the VIRT figure which has grown the most. This is where MySQL ran out of memory and dipped into the swap.

The server still runs fine - it is not slow. I guess top is saying that MySQL used a peak of 5404M at one stage; it is not using it all now, but if it needs that much again after tonight's update it will dip into the swap again.

The RES figure is now 2.9g, up from the initial 1.9g, so an extra gigabyte has been used since starting. This is where I think memory used for in-memory tables / indexes is not being returned, and it is probably what tips the server into using swap.

I found that creating a temporary table off the main table helps to speed up report generation time.

Main Table = 400,000 records with 215 fields
Users require reports on, say, a name, date and value, which could come to 1000 records.

The report is not a simple SELECT 1000 records and display the fields; the report has many sub tables calculated in the php code.

e.g. Report for A Name


Winter 5, 100, 5%, $103, 27%
Spring 4, 50, 8%, -$25, -15%


North 8, 80, 10%, $88, 33%
South 12, 50, 24%, $194, 98%

Something like 70 of these tables need to be generated. Now I could just pull the data from the master table and let caching sort it out, but the reports require grouping, and in tests it was far better to create a subset of the data and group on that.

create temporary table $temp_table ENGINE = MEMORY SELECT field1, … field 80


function sub_report($group) {

    $result = @mysql_query("select $group, sum(if(position=1,1,0)), count(*), sum(if(position=1,sp,-1)), sum(1/(sp+1)), sum(placed) from $temp_table group by $group");

    while ($row = @mysql_fetch_row($result)) {
        // ... display report html
    }
}

I found this works best, as sub_report gets called 70 times, each with a different GROUP BY. Running 70 grouped selects against the 400,000-row table is slower than first creating a 1000-row table and running the 70 groupings on that.

This works really well. If the query the user requires is small enough, the reports are completed in a few hundred milliseconds.

Note that only 1 table is created per report, not 70, and it is dropped at the end of the php script:

if (!empty($temp_table)) $result = @mysql_query("drop temporary table $temp_table");

But it seems these tables are never removed from the table cache, and this is why mysqltuner is reporting:

Table cache hit rate: 0% (189 open / 120K opened)

There are not many users on the server, so I don't think it is the number of connections. mysqltuner reports:

Highest usage of available connections: 32% (9/28)


[mysqld]
port = 3306
socket = /var/lib/mysql/mysql.sock

interactive-timeout = 240

max_connections = 28
key_buffer_size = 200M
join_buffer = 192K
max_allowed_packet = 4M
table_cache = 600
open_files_limit = 1200
sort_buffer_size = 384K
read_buffer_size = 4M
read_rnd_buffer_size = 4M
thread_cache = 16
thread_cache_size = 16

query_cache_size = 1M
query_cache_limit = 16K
query_cache_type = 0

#max heap high because tf_races to MEM_tf requires big size
max_heap_table_size = 1G
tmp_table_size = 380M
max_tmp_tables = 20
tmpdir = /tmp

bulk_insert_buffer_size = 384M

long_query_time = 5
slow_query_log = 1
log-slow-queries = /tmp/mysql-slow.log
myisam_repair_threads = 2
myisam_sort_buffer_size = 384M
myisam_max_sort_file_size = 384M

[mysqldump]
max_allowed_packet = 16M


[myisamchk]
key_buffer = 384M
sort_buffer_size = 32M
read_buffer = 384M
write_buffer = 384M

[mysqlhotcopy]
interactive-timeout = 1200

[mysqld_safe]
err-log = /tmp/mysqld.log
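One interaction in these settings is worth noting: an implicit temporary table created by a query (e.g. for a GROUP BY) can stay in memory only up to the smaller of tmp_table_size and max_heap_table_size, while explicit ENGINE=MEMORY tables are capped by max_heap_table_size alone. A quick check of the values above:

```python
# Per-table ceilings implied by the my.cnf above (values in MB).
max_heap_table_size = 1024   # cap for explicit ENGINE=MEMORY tables
tmp_table_size = 380         # preferred cap for implicit temp tables

# An implicit in-memory temp table converts to an on-disk table
# once it exceeds the smaller of the two limits.
implicit_cap = min(max_heap_table_size, tmp_table_size)
print(implicit_cap)  # 380
```

So with heavy temporary-table use, a single large grouped query is allowed to build up to a 380M in-memory table before spilling to the tmpfs ramdisk.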

I would not judge your server's health by mysqltuner. It is just a tool that shows ratios of counters, and these are weakly related to server health and performance at best. See d-ignore-mysqls-key-cache-hit-ratio/ and realize that the same principle holds for the table cache hit ratio, etc.

I see the same behavior. It only takes about a day to go from its starting memory usage (13.5G) to all of my memory (24G), at which point mysql will crash. I've found that if I manually run FLUSH TABLES the memory is freed up, but at this point I would have to do that daily, which would mask what is a bigger problem. I'm not really sure how to proceed.
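As a stopgap until the root cause is found, that manual step can be scheduled. This is only a sketch: the time and mysql path are placeholders, and credentials are assumed to come from a ~/.my.cnf for the invoking user.

```
# Hypothetical crontab entry: flush the table cache nightly at 04:00.
0 4 * * * /usr/bin/mysql -e 'FLUSH TABLES'
```

As noted, though, this masks the underlying growth rather than fixing it.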