Greetings,
This is my first post on this forum. I have been reading through the existing threads on this subject, but I cannot find the cause of my problem.
I have a Percona Server 5.7.35-38-log instance that acts as a master with two slave servers. Its RAM consumption is much higher than expected and I cannot find where the memory is going.
The mysqld process (the server is dedicated to MySQL) occupies 50-52 GB of RAM and never shrinks. This is the server configuration:
bind-address = 0.0.0.0
log-bin=mysql-bin
sync_binlog=1
max-binlog-size=2000M
max_connections = 5000
max_allowed_packet=500M
query_cache_type = 0
query_cache_size = 0
expire_logs_days = 6
innodb_buffer_pool_size = 38G
#innodb_buffer_pool_size = 45G
innodb_buffer_pool_instances = 8
innodb_sort_buffer_size=48M
innodb_log_buffer_size = 256M
innodb_log_file_size = 2G
innodb_log_files_in_group = 2
innodb_page_cleaners = 8
innodb_flush_neighbors = 0
innodb_adaptive_flushing = 1
innodb_log_write_ahead_size = 16K
innodb_write_io_threads = 16
innodb_read_io_threads = 16
innodb_checksum_algorithm = crc32
innodb_io_capacity = 1500
innodb_io_capacity_max = 2500
loose-innodb-checksum-algorithm=strict_crc32
loose-innodb-log_checksum-algorithm=crc32
innodb_lru_scan_depth=256
innodb_numa_interleave = 1
log_output=file
slow_query_log=ON
slow_query_log_file = /var/log/mysql/mysql-slow.log
long_query_time=2
log_slow_rate_limit=100
log_slow_rate_type=query
log_slow_verbosity=full
log_slow_admin_statements=ON
log_slow_slave_statements=ON
slow_query_log_always_write_time=1
slow_query_log_use_global_control=all
innodb_monitor_enable=all
innodb_print_all_deadlocks=ON
userstat=1
performance_schema=1
performance-schema-instrument='memory/%=COUNTED'
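One note on the memory instrumentation: memory/% allocations are only counted from the moment the instruments are enabled, so anything allocated before that stays invisible. I double-check at runtime that they are really active with something along these lines (just a sketch):

-- List memory instruments that are still disabled (should return nothing
-- if the performance-schema-instrument line above took effect)
SELECT NAME, ENABLED
FROM performance_schema.setup_instruments
WHERE NAME LIKE 'memory/%' AND ENABLED = 'NO';

-- They can also be switched on at runtime, without a restart
UPDATE performance_schema.setup_instruments
SET ENABLED = 'YES'
WHERE NAME LIKE 'memory/%';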
I have been using the performance schema for troubleshooting:
- mysql> select * from sys.memory_by_thread_by_current_bytes limit 5;
+-----------+--------------------------+--------------------+-------------------+-------------------+-------------------+-----------------+
| thread_id | user | current_count_used | current_allocated | current_avg_alloc | current_max_alloc | total_allocated |
+-----------+--------------------------+--------------------+-------------------+-------------------+-------------------+-----------------+
| 1 | sql/main | 32798 | 41.23 GiB | 1.29 MiB | 39.56 GiB | 44.00 GiB |
| 184461 | zabbix_monitor@localhost | 3615848 | 2.27 GiB | 673 bytes | 2.44 GiB | 52.36 GiB |
| 185565 | zabbix_monitor@localhost | 3445352 | 2.17 GiB | 677 bytes | 2.35 GiB | 50.37 GiB |
| 124637 | XXXxXXXXXXXXX | 127712 | 436.07 MiB | 3.50 KiB | 236.18 MiB | 377.38 GiB |
| 21739 | XXXxXXXXXXXXX | 10713 | 101.63 MiB | 9.71 KiB | 101.68 MiB | 2.28 GiB |
+-----------+--------------------------+--------------------+-------------------+-------------------+-------------------+-----------------+
I see a lot of consumption from some users, but closing their connections does not free up RAM.
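When a connection ends, its thread-level counters are rolled up into the account-level summary tables, so whether memory is still attributed to those accounts after disconnect can be checked there as well; roughly like this (a sketch against the standard 5.7 Performance Schema tables). And even when mysqld does free memory, the allocator does not necessarily return it to the OS, so the RSS can stay flat anyway.

-- Memory still attributed per account, including from connections
-- that have already been closed (thread stats roll up into this table)
SELECT USER, HOST,
       sys.format_bytes(SUM(CURRENT_NUMBER_OF_BYTES_USED)) AS current_allocated,
       sys.format_bytes(SUM(SUM_NUMBER_OF_BYTES_ALLOC))    AS total_allocated
FROM performance_schema.memory_summary_by_account_by_event_name
GROUP BY USER, HOST
ORDER BY SUM(CURRENT_NUMBER_OF_BYTES_USED) DESC
LIMIT 10;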
- mysql> select event_name, current_alloc, high_alloc from sys.memory_global_by_current_bytes where current_count > 0;
+--------------------------------------------------------------------------------+---------------+-------------+
| event_name | current_alloc | high_alloc |
+--------------------------------------------------------------------------------+---------------+-------------+
| memory/innodb/buf_buf_pool | 39.56 GiB | 39.56 GiB |
| memory/innodb/hash0hash | 1.11 GiB | 1.71 GiB |
| memory/innodb/log0log | 512.08 MiB | 512.09 MiB |
| memory/sql/sp_head::main_mem_root | 192.02 MiB | 733.33 MiB |
| memory/innodb/mem0mem | 50.03 MiB | 8.08 GiB |
| memory/innodb/parallel_doublewrite | 30.27 MiB | 30.27 MiB |
| memory/performance_schema/table_handles | 27.19 MiB | 27.19 MiB |
| memory/sql/TABLE | 22.30 MiB | 25.32 MiB |
| memory/performance_schema/events_statements_summary_by_thread_by_event_name | 18.24 MiB | 18.24 MiB |
| memory/innodb/lock0lock | 14.92 MiB | 14.92 MiB |
| memory/performance_schema/memory_summary_by_thread_by_event_name | 14.06 MiB | 14.06 MiB |
| memory/performance_schema/events_statements_history_long | 13.66 MiB | 13.66 MiB |
| memory/performance_schema/events_statements_summary_by_digest.tokens | 9.77 MiB | 9.77 MiB |
| memory/performance_schema/events_statements_history_long.tokens | 9.77 MiB | 9.77 MiB |
| memory/performance_schema/events_statements_history_long.sqltext | 9.77 MiB | 9.77 MiB |
| memory/sql/TABLE_SHARE::mem_root | 9.70 MiB | 10.77 MiB |
| memory/innodb/row0sel | 8.89 MiB | 407.87 MiB |
- mysql> select substring_index( substring_index(event_name, '/', 2), '/', -1 ) as event_type, round(sum(CURRENT_NUMBER_OF_BYTES_USED)/1024/1024, 2) as MB_CURRENTLY_USED from performance_schema.memory_summary_global_by_event_name group by event_type having MB_CURRENTLY_USED>0;
+--------------------+-------------------+
| event_type | MB_CURRENTLY_USED |
+--------------------+-------------------+
| innodb | 42287.55 |
| memory | 8.39 |
| myisam | 0.14 |
| mysys | 13.78 |
| performance_schema | 224.97 |
| sql | 233.33 |
+--------------------+-------------------+
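The buffer pool alone is about 39.5 GiB of that ~42 GB attributed to InnoDB, so I also look at the high-water marks to catch events that held a lot of memory at some point even if most of it has since been freed (a sketch):

-- Events ranked by their high-water mark rather than current usage
SELECT EVENT_NAME,
       sys.format_bytes(CURRENT_NUMBER_OF_BYTES_USED) AS current_alloc,
       sys.format_bytes(HIGH_NUMBER_OF_BYTES_USED)    AS high_alloc
FROM performance_schema.memory_summary_global_by_event_name
WHERE HIGH_NUMBER_OF_BYTES_USED > 100 * 1024 * 1024
ORDER BY HIGH_NUMBER_OF_BYTES_USED DESC
LIMIT 15;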
- mysql> SELECT sys.format_bytes(SUM(CURRENT_NUMBER_OF_BYTES_USED)) total_allocated FROM performance_schema.memory_summary_global_by_event_name;
+-----------------+
| total_allocated |
+-----------------+
| 41.76 GiB |
+-----------------+
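So the Performance Schema accounts for roughly 41.76 GiB, while the process RSS is around 50 GB. As a cross-check, InnoDB's own accounting can be compared with those numbers, for example:

-- InnoDB's internal memory accounting, independent of the Performance Schema
-- (see "Total large memory allocated" in the BUFFER POOL AND MEMORY section)
SHOW ENGINE INNODB STATUS\G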
+---------------+
| MAX_MEMORY_GB |
+---------------+
| 40.7709 |
+---------------+
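The query behind that MAX_MEMORY_GB figure is not shown above; presumably it is a theoretical-maximum estimate along the lines of the classic formula (global buffers plus per-connection buffers times max_connections). Here is a sketch of that kind of calculation, not necessarily the exact query used:

-- Rough theoretical maximum; an upper bound only, since connections
-- rarely allocate all of their per-session buffers at the same time
SELECT ROUND(
         ( @@innodb_buffer_pool_size
         + @@innodb_log_buffer_size
         + @@key_buffer_size
         + @@query_cache_size
         + @@max_connections * ( @@read_buffer_size
                               + @@read_rnd_buffer_size
                               + @@sort_buffer_size
                               + @@join_buffer_size
                               + @@binlog_cache_size
                               + @@thread_stack )
         ) / 1024 / 1024 / 1024, 4) AS MAX_MEMORY_GB;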
I never have more than 150 concurrent users.
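That peak can be confirmed from the status counters, e.g.:

-- Peak number of simultaneous connections since the last restart,
-- and the number of connections open right now
SHOW GLOBAL STATUS LIKE 'Max_used_connections';
SHOW GLOBAL STATUS LIKE 'Threads_connected';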
- mysql> SELECT thread_id, IF(t.name = 'thread/sql/one_connection', CONCAT(t.processlist_user, '@', t.processlist_host), REPLACE(t.name, 'thread/', '')) user, SUM(mt.current_count_used) AS current_count_used, sys.format_bytes(SUM(mt.current_number_of_bytes_used)) AS current_allocated, sys.format_bytes(IFNULL(SUM(mt.current_number_of_bytes_used) / NULLIF(SUM(current_count_used), 0), 0)) AS current_avg_alloc, sys.format_bytes(MAX(mt.current_number_of_bytes_used)) AS current_max_alloc, sys.format_bytes(SUM(mt.sum_number_of_bytes_alloc)) AS total_allocated FROM performance_schema.memory_summary_by_thread_by_event_name AS mt JOIN performance_schema.threads AS t USING (thread_id) GROUP BY thread_id, IF(t.name = 'thread/sql/one_connection', CONCAT(t.processlist_user, '@', t.processlist_host), REPLACE(t.name, 'thread/', '')) ORDER BY SUM(current_number_of_bytes_used) DESC;
+-----------+---------------------------------+--------------------+-------------------+-------------------+-------------------+-----------------+
| thread_id | user | current_count_used | current_allocated | current_avg_alloc | current_max_alloc | total_allocated |
+-----------+---------------------------------+--------------------+-------------------+-------------------+-------------------+-----------------+
| 1 | sql/main | 32798 | 41.23 GiB | 1.29 MiB | 39.56 GiB | 44.00 GiB |
| 184461 | zabbix_monitor@localhost | 3626334 | 2.27 GiB | 673 bytes | 2.45 GiB | 52.51 GiB |
| 185565 | zabbix_monitor@localhost | 3455513 | 2.18 GiB | 677 bytes | 2.36 GiB | 50.50 GiB |
| 124637 | xxxxx | 127876 | 436.91 MiB | 3.50 KiB | 236.68 MiB | 377.72 GiB |
| 21739 | xxxx | 10713 | 101.63 MiB | 9.71 KiB | 101.68 MiB | 2.28 GiB |
| 453 | xxx | -882 | 19.28 MiB | -22925 bytes | 13.79 MiB | 720.29 MiB |
| 115 | xxx | 40839 | 8.77 MiB | 225 bytes | 3.26 MiB | 487.46 GiB |
This per-thread memory data does not make a lot of sense to me: 25 GB for the slave_sql thread??
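To see what the replication threads themselves are charging memory to, the per-thread summary can be filtered on the slave threads, for example (a sketch):

-- Memory currently attributed to the replication (slave) threads, by event
SELECT t.NAME AS thread_name,
       mt.EVENT_NAME,
       sys.format_bytes(mt.CURRENT_NUMBER_OF_BYTES_USED) AS current_allocated
FROM performance_schema.memory_summary_by_thread_by_event_name AS mt
JOIN performance_schema.threads AS t USING (THREAD_ID)
WHERE t.NAME LIKE 'thread/sql/slave%'
  AND mt.CURRENT_NUMBER_OF_BYTES_USED > 0
ORDER BY mt.CURRENT_NUMBER_OF_BYTES_USED DESC
LIMIT 15;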
Any idea where the RAM is being consumed?
Thank you very much