MySQL grows until it crashes

I have a 64-bit server (with 64-bit OS and MySQL) with 4 gigs of RAM and 2 gigs of swap. We run MySQL 5.0.22 (installed from the CentOS 5.0 RPM). Nothing else runs on the server.

We have a problem where MySQL eats up more and more RAM until it starts swapping, then eats through the swap until the OS kills it because there is nothing left to allocate. It sometimes takes weeks to creep up to the limit, and performance degrades over time as well because of the swapping.
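To quantify the creep, something as simple as this will log mysqld's resident set size over time (the log path and one-hour interval are just arbitrary examples):

# log mysqld's resident set size (RSS, in KB) once an hour
while true; do
    echo "$(date '+%F %T') $(ps -o rss= -C mysqld)" >> /tmp/mysqld-rss.log
    sleep 3600
done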

The database is extremely write-heavy, uses InnoDB almost exclusively (some logs are stored in MyISAM), and is about 300 gigabytes in size (~90 billion rows total).

MySQL is configured as follows:

innodb_additional_mem_pool_size=20M
innodb_buffer_pool_size=2048M
innodb_file_per_table
innodb_flush_log_at_trx_commit=2
innodb_log_buffer_size=4M
innodb_log_file_size=512M
innodb_thread_concurrency=8
key_buffer_size=32M
max-connect-errors=10000000
table_cache=1024
thread_cache=32
transaction-isolation=READ-COMMITTED
query_cache_size=32M
read_buffer_size=2M
sort_buffer_size=8M

MySQL has been up for about six days right now and is using 5770 MB of memory. Even accounting for shared libraries and other such overhead, I can’t figure out how it is using that much memory with this configuration. How can I fix my configuration? Per the InnoDB tuning guide on the blog, I should be able to set the buffer pool size to 70% of total RAM, but on this box that would just make MySQL crash within hours.
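For what it’s worth, roughly adding up the global buffers plus the worst-case per-connection buffers (using the max_connections=60 from the full my.cnf further down) gives nowhere near that figure:

global buffers:   2048M (innodb_buffer_pool_size) + 20M (innodb_additional_mem_pool_size)
                  + 4M (innodb_log_buffer_size) + 32M (key_buffer_size) + 32M (query_cache_size) = 2136M
per connection:   2M (read_buffer_size) + 8M (sort_buffer_size) = 10M
worst case:       2136M + 60 x 10M = roughly 2.7G

That ignores thread stacks and a few smaller per-connection buffers, but those shouldn’t add up to gigabytes.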

I’d like to work on performance tuning, but as long as MySQL is crashing, I don’t know how I can.

I appreciate any help. I’m sure I’m missing something simple.

How was MySQL installed? Compiled from source? What version?

Is that all there is to your config?

MySQL was installed using yum (for the sake of configuration management across a handful of servers, I’d rather not compile from source):

yum install mysql-server

It’s directly from the CentOS Extras.
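If the exact package version matters, these standard commands will report it (nothing box-specific here):

rpm -q mysql-server
mysql --version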

That is the bulk of /etc/my.cnf. The whole file is as follows:

[mysqld]
datadir=/var3/lib/mysql
socket=/var/lib/mysql/mysql.sock
old_passwords=1

safe-show-database
key_buffer_size=32M
innodb_buffer_pool_size=2048M
innodb_additional_mem_pool_size=20M
innodb_log_file_size=512M
innodb_log_buffer_size=4M
innodb_file_per_table
innodb_flush_log_at_trx_commit=2
innodb_thread_concurrency=8
max-connect-errors=10000000
table_cache=1024
thread_cache=32
transaction-isolation=READ-COMMITTED
query_cache_size=32M
read_buffer_size=2M
server-id=3
sort_buffer_size=8M

log-bin=/home/junk/mysql/mysql-bin

tmpdir=/var3/tmp

report-host=johnson.junk.ws

max_connections=60

[mysql.server]
user=mysql
basedir=/var3/lib

[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid

OK, so it’s vanilla MySQL and the config looks fine.

50% of RAM for the InnoDB buffer pool is normal, and your per-connection settings appear to be OK.
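It might also be worth confirming that the running server actually picked those values up; a quick sanity check from any client, for example:

mysql -e "SHOW GLOBAL VARIABLES LIKE 'innodb_buffer_pool_size'"
mysql -e "SHOW GLOBAL VARIABLES LIKE '%buffer_size%'"
mysql -e "SHOW GLOBAL STATUS LIKE 'Max_used_connections'"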

What swappiness setting are you using?

(Link: An Interview With Linus Torvalds: Linux and Git - Part 1 | Tag1 Consulting)

I honestly doubt the swappiness is the problem; it sounds like a residual memory leak caused by something in your setup.

If it were me, I’d be inclined to do a fresh OS install and so on, but with 300 GB of data that’s going to hurt.

Yeah, I looked into the swappiness setting a long while ago. It’s set at the default (60) now. It doesn’t affect performance much since MySQL is the only thing running anyway.
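For reference, checking and changing it is straightforward (the value 10 below is only an example, not a recommendation for this box):

cat /proc/sys/vm/swappiness                      # current value
sysctl -w vm.swappiness=10                       # change at runtime
echo "vm.swappiness = 10" >> /etc/sysctl.conf    # persist across reboots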

The out-of-memory error comes up because the server doesn’t have enough swap (we have 2 gigs of swap for 4 gigs of RAM, which isn’t ideal… if I had a chance to go back and smack the datacenter tech, I probably would). But even so, I just can’t figure out why MySQL is using so much RAM.
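When it does die, the OOM killer leaves traces in the usual places (CentOS default log locations), e.g.:

dmesg | grep -i "out of memory"
grep -i oom-killer /var/log/messages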

I guess I’ll just wait until December, when we get new servers anyway. *sigh* Maybe I’ll look into PostgreSQL. :)