RAM Usage too high MySQL 5.7

Greetings,

This is my first time posting on this forum. I have been reading the existing threads on this subject, but I cannot find the cause of my problem.

I have a Percona Server 5.7.35-38-log instance that acts as a master with two slave servers. Its RAM consumption is much higher than expected and I cannot find where the memory is going.

The mysqld process (the server is dedicated to MySQL) occupies 50-52 GB of RAM and never shrinks. This is the server configuration:

bind-address = 0.0.0.0
log-bin=mysql-bin
sync_binlog=1
max-binlog-size=2000M
max_connections = 5000
max_allowed_packet=500M
query_cache_type = 0
query_cache_size = 0

expire_logs_days = 6
innodb_buffer_pool_size = 38G
#innodb_buffer_pool_size = 45G
innodb_buffer_pool_instances = 8
innodb_sort_buffer_size=48M
innodb_log_buffer_size = 256M
innodb_log_file_size = 2G
innodb_log_files_in_group = 2
innodb_page_cleaners = 8
innodb_flush_neighbors = 0
innodb_adaptive_flushing = 1
innodb_log_write_ahead_size = 16K
innodb_write_io_threads = 16
innodb_read_io_threads = 16
innodb_checksum_algorithm = crc32
innodb_io_capacity = 1500
innodb_io_capacity_max = 2500
loose-innodb-checksum-algorithm=strict_crc32
loose-innodb-log_checksum-algorithm=crc32
innodb_lru_scan_depth=256
innodb_numa_interleave = 1
log_output=file
slow_query_log=ON
slow_query_log_file = /var/log/mysql/mysql-slow.log
long_query_time=2
log_slow_rate_limit=100
log_slow_rate_type=query
log_slow_verbosity=full
log_slow_admin_statements=ON
log_slow_slave_statements=ON
slow_query_log_always_write_time=1
slow_query_log_use_global_control=all
innodb_monitor_enable=all
innodb_print_all_deadlocks=ON
userstat=1
performance_schema=1
performance-schema-instrument='memory/%=COUNTED'
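
With the memory instruments enabled as above, a quick sanity check (just a sketch, using only the standard performance_schema.setup_instruments table) confirms they are actually collecting data:

  -- confirm that memory/% instruments are enabled
  SELECT NAME, ENABLED
    FROM performance_schema.setup_instruments
   WHERE NAME LIKE 'memory/%'
     AND ENABLED = 'YES'
   LIMIT 5;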

I have been using the performance schema for troubleshooting:

  • mysql> select * from sys.memory_by_thread_by_current_bytes limit 5;

+-----------+--------------------------+--------------------+-------------------+-------------------+-------------------+-----------------+
| thread_id | user                     | current_count_used | current_allocated | current_avg_alloc | current_max_alloc | total_allocated |
+-----------+--------------------------+--------------------+-------------------+-------------------+-------------------+-----------------+
|         1 | sql/main                 |              32798 | 41.23 GiB         | 1.29 MiB          | 39.56 GiB         | 44.00 GiB       |
|    184461 | zabbix_monitor@localhost |            3615848 | 2.27 GiB          | 673 bytes         | 2.44 GiB          | 52.36 GiB       |
|    185565 | zabbix_monitor@localhost |            3445352 | 2.17 GiB          | 677 bytes         | 2.35 GiB          | 50.37 GiB       |
|    124637 | XXXxXXXXXXXXX            |             127712 | 436.07 MiB        | 3.50 KiB          | 236.18 MiB        | 377.38 GiB      |
|     21739 | XXXxXXXXXXXXX            |              10713 | 101.63 MiB        | 9.71 KiB          | 101.68 MiB        | 2.28 GiB        |
+-----------+--------------------------+--------------------+-------------------+-------------------+-------------------+-----------------+

I see a lot of consumption from some users, but closing their connections does not free up RAM.

  • mysql> select event_name, current_alloc, high_alloc from sys.memory_global_by_current_bytes where current_count > 0;

+-----------------------------------------------------------------------------+---------------+------------+
| event_name                                                                   | current_alloc | high_alloc |
+-----------------------------------------------------------------------------+---------------+------------+
| memory/innodb/buf_buf_pool                                                   | 39.56 GiB     | 39.56 GiB  |
| memory/innodb/hash0hash                                                      | 1.11 GiB      | 1.71 GiB   |
| memory/innodb/log0log                                                        | 512.08 MiB    | 512.09 MiB |
| memory/sql/sp_head::main_mem_root                                            | 192.02 MiB    | 733.33 MiB |
| memory/innodb/mem0mem                                                        | 50.03 MiB     | 8.08 GiB   |
| memory/innodb/parallel_doublewrite                                           | 30.27 MiB     | 30.27 MiB  |
| memory/performance_schema/table_handles                                      | 27.19 MiB     | 27.19 MiB  |
| memory/sql/TABLE                                                             | 22.30 MiB     | 25.32 MiB  |
| memory/performance_schema/events_statements_summary_by_thread_by_event_name | 18.24 MiB     | 18.24 MiB  |
| memory/innodb/lock0lock                                                      | 14.92 MiB     | 14.92 MiB  |
| memory/performance_schema/memory_summary_by_thread_by_event_name            | 14.06 MiB     | 14.06 MiB  |
| memory/performance_schema/events_statements_history_long                    | 13.66 MiB     | 13.66 MiB  |
| memory/performance_schema/events_statements_summary_by_digest.tokens        | 9.77 MiB      | 9.77 MiB   |
| memory/performance_schema/events_statements_history_long.tokens             | 9.77 MiB      | 9.77 MiB   |
| memory/performance_schema/events_statements_history_long.sqltext            | 9.77 MiB      | 9.77 MiB   |
| memory/sql/TABLE_SHARE::mem_root                                             | 9.70 MiB      | 10.77 MiB  |
| memory/innodb/row0sel                                                        | 8.89 MiB      | 407.87 MiB |

  • mysql> select substring_index( substring_index(event_name, '/', 2), '/', -1 ) as event_type, round(sum(CURRENT_NUMBER_OF_BYTES_USED)/1024/1024, 2) as MB_CURRENTLY_USED from performance_schema.memory_summary_global_by_event_name group by event_type having MB_CURRENTLY_USED>0;

+--------------------+-------------------+
| event_type         | MB_CURRENTLY_USED |
+--------------------+-------------------+
| innodb             |          42287.55 |
| memory             |              8.39 |
| myisam             |              0.14 |
| mysys              |             13.78 |
| performance_schema |            224.97 |
| sql                |            233.33 |
+--------------------+-------------------+

  • mysql> SELECT sys.format_bytes(SUM(CURRENT_NUMBER_OF_BYTES_USED)) total_allocated FROM performance_schema.memory_summary_global_by_event_name;

+-----------------+
| total_allocated |
+-----------------+
| 41.76 GiB       |
+-----------------+

  • (maximum-memory estimate query, posted as an image; its output is below)

+---------------+
| MAX_MEMORY_GB |
+---------------+
|       40.7709 |
+---------------+
I never have more than 150 concurrent users.
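
The exact query behind that number was posted as an image; a common estimate along these lines (a hypothetical reconstruction, using the ~150 concurrent connections mentioned above rather than max_connections, and the usual per-session buffers) gives a similar ceiling:

  -- theoretical ceiling: global buffers plus per-session buffers for ~150 connections
  SELECT ( @@innodb_buffer_pool_size
         + @@innodb_log_buffer_size
         + @@key_buffer_size
         + 150 * ( @@sort_buffer_size
                 + @@join_buffer_size
                 + @@read_buffer_size
                 + @@read_rnd_buffer_size
                 + @@thread_stack
                 + @@tmp_table_size ) )
         / 1024 / 1024 / 1024 AS MAX_MEMORY_GB;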

  • mysql> SELECT thread_id, IF(t.name = 'thread/sql/one_connection', CONCAT(t.processlist_user, '@', t.processlist_host), REPLACE(t.name, 'thread/', '')) user, SUM(mt.current_count_used) AS current_count_used, sys.format_bytes(SUM(mt.current_number_of_bytes_used)) AS current_allocated, sys.format_bytes(IFNULL(SUM(mt.current_number_of_bytes_used) / NULLIF(SUM(current_count_used), 0), 0)) AS current_avg_alloc, sys.format_bytes(MAX(mt.current_number_of_bytes_used)) AS current_max_alloc, sys.format_bytes(SUM(mt.sum_number_of_bytes_alloc)) AS total_allocated FROM performance_schema.memory_summary_by_thread_by_event_name AS mt JOIN performance_schema.threads AS t USING (thread_id) GROUP BY thread_id, IF(t.name = 'thread/sql/one_connection', CONCAT(t.processlist_user, '@', t.processlist_host), REPLACE(t.name, 'thread/', '')) ORDER BY SUM(current_number_of_bytes_used) DESC;

+-----------+--------------------------+--------------------+-------------------+-------------------+-------------------+-----------------+
| thread_id | user                     | current_count_used | current_allocated | current_avg_alloc | current_max_alloc | total_allocated |
+-----------+--------------------------+--------------------+-------------------+-------------------+-------------------+-----------------+
|         1 | sql/main                 |              32798 | 41.23 GiB         | 1.29 MiB          | 39.56 GiB         | 44.00 GiB       |
|    184461 | zabbix_monitor@localhost |            3626334 | 2.27 GiB          | 673 bytes         | 2.45 GiB          | 52.51 GiB       |
|    185565 | zabbix_monitor@localhost |            3455513 | 2.18 GiB          | 677 bytes         | 2.36 GiB          | 50.50 GiB       |
|    124637 | xxxxx                    |             127876 | 436.91 MiB        | 3.50 KiB          | 236.68 MiB        | 377.72 GiB      |
|     21739 | xxxx                     |              10713 | 101.63 MiB        | 9.71 KiB          | 101.68 MiB        | 2.28 GiB        |
|       453 | xxx                      |               -882 | 19.28 MiB         | -22925 bytes      | 13.79 MiB         | 720.29 MiB      |
|       115 | xxx                      |              40839 | 8.77 MiB          | 225 bytes         | 3.26 MiB          | 487.46 GiB      |
This memory usage data does not make much sense to me: the slave_sql thread using 25 GB??
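
For reference, a drill-down along these lines (it assumes the standard 5.7 thread name 'thread/sql/slave_sql') would show which instruments that thread's memory is attributed to:

  -- break down the slave SQL thread's current allocations by instrument
  SELECT mt.event_name,
         sys.format_bytes(mt.current_number_of_bytes_used) AS current_alloc
    FROM performance_schema.memory_summary_by_thread_by_event_name AS mt
    JOIN performance_schema.threads AS t USING (thread_id)
   WHERE t.name = 'thread/sql/slave_sql'
   ORDER BY mt.current_number_of_bytes_used DESC
   LIMIT 10;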

Any idea where the RAM is being consumed?

Thank you very much


Hi jeronimo.asensio,

Checking your configuration, I see that innodb_sort_buffer_size (48 MB) is oversized.
This buffer gets allocated in full per connection every time there is a sort (even if you are only sorting a few bytes). You should consider reducing this buffer globally and only increasing it on a per-session basis for the queries that really need a larger sort buffer.
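
A note on the mechanics: innodb_sort_buffer_size itself is a startup-only global in 5.7 and cannot be changed per session, so the per-session tuning above would apply to the regular sort_buffer_size. A sketch of that pattern (the 32 MiB value is only an example):

  -- keep the global default small in my.cnf (e.g. sort_buffer_size = 1M) and raise it
  -- only for the session running the sort-heavy query
  SET SESSION sort_buffer_size = 32 * 1024 * 1024;   -- 32 MiB for this connection only
  -- ... run the expensive ORDER BY / GROUP BY query here ...
  SET SESSION sort_buffer_size = DEFAULT;            -- back to the global value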

You might also wish to disable THP and enable jemalloc. As shown in this blog post, Enabling jemalloc on Percona Server for MySQL 5.7 and 8.0 Series - Percona Database Performance Blog, jemalloc can have a positive impact on memory usage. Note that THP and jemalloc should NOT be enabled simultaneously: one, the other, or neither, but NOT both.

Lastly, keep in mind that performance_schema figures are estimates rather than an exact measurement of memory, and that the buffer pool uses some extra memory on top of what is configured. After enabling jemalloc and resizing the sort buffer, please check whether overall memory usage goes down.
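
For instance, the buffer pool overhead is visible by comparing the configured size with what the buf_buf_pool instrument reports (38 GB configured vs. 39.56 GiB allocated in your output above). A sketch of that check:

  -- configured buffer pool size, in GiB
  SELECT @@innodb_buffer_pool_size / 1024 / 1024 / 1024 AS configured_gib;

  -- what performance_schema attributes to the buffer pool itself
  SELECT event_name, current_alloc
    FROM sys.memory_global_by_current_bytes
   WHERE event_name = 'memory/innodb/buf_buf_pool';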

Regards


First of all, C.Tutte, thanks for the reply.

I will try your recommendation and decrease innodb_sort_buffer_size to see whether the figures come out right. As for jemalloc, I already tried it (with THP disabled) but did not see an improvement; still, if you recommend it, I will take it as a baseline for the rest of the installations I do.

In a few days I’ll let you know how it evolves.

Thank you very much!


Good morning, everyone!
I arrived here because we are experiencing the same RAM problems after upgrading from 5.6 to 5.7.
Server version: 5.7.44-48-57-log Percona XtraDB Cluster (GPL), Release rel48
We have set up monitoring with PMM 2.44.
We have tried jemalloc, as suggested in other similar threads, with no improvement.
sort_buffer_size is 1 MB with max_user_connections of 2500.
The host has 49 GB of memory while innodb_buffer_pool_size is 20 GB (20 instances).
Memory slowly increases until the OOM killer stops the instance, so we are currently forced to reboot one node of the cluster about once a week to avoid the OOM killer intervention.
Does anyone have any suggestions?
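
As a rough sanity check (only a sketch: it uses max_connections as the upper bound and the usual per-session buffers, so substitute max_user_connections or your real peak concurrency if that is the effective limit), the worst-case per-connection memory on top of the buffer pool can be estimated with:

  -- worst case: every allowed connection allocates its full per-session buffers at once
  SELECT @@max_connections
         * ( @@sort_buffer_size
           + @@join_buffer_size
           + @@read_buffer_size
           + @@read_rnd_buffer_size
           + @@binlog_cache_size
           + @@thread_stack )
         / 1024 / 1024 / 1024 AS worst_case_connection_gib;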

Considering that we have started using jemalloc.so.1 to try some workarounds, and that we are running MySQL on CentOS 7.9, do you think it could be useful to try jemalloc.so.2 instead?