mysql memory consumption / crashes

Hello,
I have quite an annoying problem: after about one week of running, mysql-server (5.0.44) gets terminated because of excessive memory usage (around 2.8GB on a 32-bit system). I have already reduced the memory settings several times, but the problem still persists.

Mysql-tuning-primer calculates a maximum memory allocation of 447M and a configured limit of 1017M, but top tells me the process is currently using 2239M, so it's time for a manual restart…

Any ideas on how to solve this issue?

other applications involved

  • mainly InnoDB tables, 2 in-memory tables
  • apache/php (running on same server)
  • mysqldump
  • replication to a standby-server

CONFIG (memory-related parts)

key_buffer = 8M
max_allowed_packet = 32M
table_cache = 1350
sort_buffer_size = 2M
net_buffer_length = 8K
read_buffer_size = 256K
read_rnd_buffer_size = 512K
myisam_sort_buffer_size = 1M
max_connections = 150
long_query_time = 4
thread_cache_size = 20
join_buffer_size = 2M
max_heap_table_size = 16M
tmp_table_size = 4M
open_files_limit = 4000
max_binlog_cache_size = 1073741824

innodb_buffer_pool_size = 256M
innodb_additional_mem_pool_size = 2M
innodb_log_file_size = 50M
innodb_log_buffer_size = 8M
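For reference, here is a rough worked check of the configured limit, assuming the usual "global buffers + per-thread buffers * max_connections" estimate (the same kind of calculation the tuning primer does). With the values above it comes out to roughly 1GB, close to the reported 1017M. If your 5.0 build does not expose the InnoDB variables via @@, take the numbers from SHOW VARIABLES instead:

SELECT
    ( @@key_buffer_size
    + @@innodb_buffer_pool_size
    + @@innodb_additional_mem_pool_size
    + @@innodb_log_buffer_size
    + @@max_connections * ( @@sort_buffer_size
                          + @@read_buffer_size
                          + @@read_rnd_buffer_size
                          + @@join_buffer_size ) )
    / 1024 / 1024 AS theoretical_max_mb;
-- global buffers: 8M + 256M + 2M + 8M = 274M
-- per-thread buffers: (2M + 256K + 512K + 2M) * 150 connections = 712.5M
-- total roughly 986M - yet the process grows well past 2GB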

ERROR MESSAGE (Low Mem)

090306 17:54:53 [ERROR] /usr/sbin/mysqld: Out of memory (Needed 1094604 bytes)
090306 17:54:53 [ERROR] Out of memory; check if mysqld or some other process uses all available memory; if not, you may have to use ‘ulimit’ to allow mysqld to use more memory or you can add more swap space

ERROR MESSAGE (Crash)

081221 19:10:15 InnoDB: Error: cannot allocate 1064960 bytes of
InnoDB: memory with malloc! Total allocated memory
InnoDB: by InnoDB 317687476 bytes. Operating system errno: 12
InnoDB: Check if you should increase the swap file or
InnoDB: ulimits of your operating system.
InnoDB: On FreeBSD check you have compiled the OS with
InnoDB: a big enough maximum process size.
InnoDB: Note that in most 32-bit computers the process
InnoDB: memory space is limited to 2 GB or 4 GB.
InnoDB: We keep retrying the allocation for 60 seconds…
081221 19:40:02 InnoDB: Error: cannot allocate 1064960 bytes of
InnoDB: memory with malloc! Total allocated memory
InnoDB: by InnoDB 317683032 bytes. Operating system errno: 12
InnoDB: Check if you should increase the swap file or
InnoDB: ulimits of your operating system.
InnoDB: On FreeBSD check you have compiled the OS with
InnoDB: a big enough maximum process size.
InnoDB: Note that in most 32-bit computers the process
InnoDB: memory space is limited to 2 GB or 4 GB.
InnoDB: We keep retrying the allocation for 60 seconds…
081221 19:41:02 InnoDB: We now intentionally generate a seg fault so that
InnoDB: on Linux we get a stack trace.
081221 19:41:02 - mysqld got signal 11;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
We will try our best to scrape up some info that will hopefully help diagnose
the problem, but since we have already crashed, something is definitely wrong
and this may fail.

key_buffer_size=8388608
read_buffer_size=258048
max_used_connections=41
max_connections=150
threads_connected=4
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_connections = 353190 K
bytes of memory
Hope that’s ok; if not, decrease some variables in the equation.
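Note that the estimate mysqld prints in this crash report only covers the key buffer plus the per-connection read and sort buffers; the InnoDB buffer pool, join/read_rnd buffers, temporary tables and connection overhead are not included, so the real footprint is expected to be much larger. A quick sanity check using the 2M sort_buffer_size from my config (only an approximation, the exact figure depends on the precise sort_buffer_size):

SELECT (8388608 + (258048 + 2097152) * 150) / 1024 AS crash_report_estimate_kb;
-- comes out close to the 353190 K printed above;
-- the 256M InnoDB buffer pool alone is not part of that figure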

Have you seen this post?

http://www.mysqlperformanceblog.com/2006/05/17/mysql-server-memory-usage/

Just a hunch, but it might be your max_allowed_packet and max_connections settings: 150 x 32MB = 4.6875 GB. As large packets get sent over each connection, it's possible the connections allocate more memory to accommodate the large packets and never release it.

Try setting your max packet to something smaller.

Also look at reducing your number of connections. If you don’t need 150, reduce the number.

For instance, on my busy forums (1M PHP hits daily), I have it limited to 48. I have 32 php-fastcgi processes, each of which keeps a persistent connection to MySQL. I keep a few more around for things like Debian maintenance scripts, running mytop, or logging in myself.
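If you want to try the packet change without waiting for a maintenance window, something along these lines should work on 5.0 (just a sketch; the new global value only applies to connections opened afterwards, needs the SUPER privilege, and you still have to put it into my.cnf to survive a restart):

SHOW GLOBAL VARIABLES LIKE 'max_allowed_packet';
-- lower the limit for all new connections, e.g. to 4MB
SET GLOBAL max_allowed_packet = 4 * 1024 * 1024;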

Thanks for the hint regarding max_allowed_packet; I hadn't considered that. I will check it out next week (I don't want to make changes over the weekend).

There were in fact changes to our blob usage during the last year, around the time the crashes started. Admittedly, the number of blobs - especially large ones - decreased sharply, but sometimes changes have strange effects. It is possible that the earlier frequent use of larger blobs helped to free unused memory…

Further lowering the number of connections would be quite hard. The crash reports usually showed 110-401 max used connections (the setting was 400 before my first changes), so the report above is quite unusual in that respect.

BTW: the server has 8GB of physical memory. It would have been better to set it up as a 64-bit install, but I was worried about additional trouble; at the moment there is no way to change this.

No luck - max_allowed_packet does not seem to change anything.
I lowered the value from 32 to 4 MB and restarted on 16.3., 19:00. The rise seems to be about the same as before.

I had calculated a maximum of 1617MB (1017MB + 150*4MB) for the new setting, but after 4 days I already see 2000MB. The rise is usually slower during the weekend, but I expect the server to crash on Monday or Tuesday.

Any other ideas/suggestions? I welcome any hint…
Best regards, Stephan

Memory usage was as follows:

              virt.   res.
17.3. 10:30   585m    497m
17.3. 14:39   677m    588m
18.3. 09:43   970m    878m
18.3. 12:58  1044m    953m
18.3. 23:55  1280m    1.2g
19.3. 08:55  1343m    1.2g
19.3. 17:07  1540m    1.4g
19.3. 20:53  1621m    1.5g
20.3. 11:02  1778m    1.6g
20.3. 20:30  2085m    1.9g

Max. connections used was 30 during the 4 days.
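To see how much of this growth InnoDB claims for itself, I will also keep an eye on its own accounting (assuming the 5.0 output still has the "Total memory allocated" line, the same counter that shows up in the crash log above):

SHOW ENGINE INNODB STATUS\G
-- look for "Total memory allocated" under BUFFER POOL AND MEMORY;
-- if that number stays flat while the process keeps growing, the growth
-- is outside InnoDB (per-connection buffers, table cache, etc.)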

Looks like I have a lead. After the last set of changes the server seems to stay contained within 1.1-1.2g (virtual memory). Thanks, Mark.

Here is a short summary of the last changes:

The lowered max_heap_table_size does not seem to have any relevance. I restarted after 4 days at 2.1g/1.9g (virt./res.) on 25.3. - about the same rise as before, and even as before the changes in March (since I opened this topic).

These were the changes for the last restart (25.3. 21:00), which shows much better memory usage:

  • max_connections: 150 -> 100
  • table_cache: 1350 -> 400
  • open_files_limit: 4000 -> 1200
  • thread_cache_size: 20 -> 10

I will try to isolate the responsible setting and report back.
I will also try to check the performance impact, but that is of minor relevance.
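For the performance check I plan to watch the usual cache-miss counters (nothing more than the standard status variables):

SHOW GLOBAL STATUS LIKE 'Opened_tables';        -- growing fast => table_cache now too small
SHOW GLOBAL STATUS LIKE 'Threads_created';      -- growing fast => thread_cache_size now too small
SHOW GLOBAL STATUS LIKE 'Max_used_connections'; -- how close we get to the new max_connections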

I also managed to nail down the set of changes that caused the issues. They were well hidden in a period of low usage and config changes, so the problems only appeared quite a bit later.
Anyway, there is no way to revert them.

If you have many tables with blobs and those blobs are large, then table_cache can consume more memory than you think…

See MySQL Bugs #38002 (table_cache consumes too much memory with blobs) for an example of this.
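If you want to check whether your schema matches that pattern, something along these lines should list the blob columns per table (information_schema is available on 5.0; 'your_database' is just a placeholder):

SELECT table_name, column_name, data_type
FROM information_schema.columns
WHERE table_schema = 'your_database'   -- placeholder, replace with the real schema name
  AND data_type LIKE '%blob%'
ORDER BY table_name;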