Understand Percona Memory Usage

Hello team, I understand that this question may have been asked before, but I’m currently in search of a method to better analyze MySQL’s memory usage. While we do have some insights into configuration and usage at the database level, there seems to be a disconnect when we examine the OS perspective. Unfortunately, I haven’t been successful in pinpointing the specific database processes that are contributing significantly to this disparity.

As you will see in the example below, MySQL is configured to use ~135G of memory, yet its actual consumption exceeds 220G. I know that this is NOT unexpected behavior, but we really need to pin down the root cause of the variance and find out what is consuming the resident memory.
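
For reference, most of the ~135G figure comes from the globally configured buffers. Something along these lines pulls the relevant settings (a rough sketch; the variable list is not exhaustive, and per-session buffers such as sort/join buffers multiply with the number of connections):

select variable_name, variable_value
  from performance_schema.global_variables
 where variable_name in ('innodb_buffer_pool_size',
                         'innodb_log_buffer_size',
                         'temptable_max_ram',
                         'key_buffer_size',
                         'max_connections');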

Thanks in advance and feel free to request any additional information.

-- At the database level

Server version: 8.0.27-18 Percona Server (GPL), Release 18, Revision 24801e21b45

[(none)]> select event_name,current_alloc from sys.memory_global_by_current_bytes limit 10;
+-----------------------------------------------------------------+---------------+
| event_name                                                      | current_alloc |
+-----------------------------------------------------------------+---------------+
| memory/innodb/buf_buf_pool                                      | 114.35 GiB    |
| memory/sql/TABLE                                                | 5.59 GiB      |
| memory/innodb/hash0hash                                         | 3.59 GiB      |
| memory/performance_schema/table_handles                         | 3.44 GiB      |
| memory/innodb/memory                                            | 3.10 GiB      |
| memory/innodb/ha_innodb                                         | 568.27 MiB    |
| memory/sql/TABLE_SHARE::mem_root                                | 476.63 MiB    |
| memory/performance_schema/table_io_waits_summary_by_index_usage | 137.50 MiB    |
| memory/innodb/row0sel                                           | 93.71 MiB     |
| memory/performance_schema/table_shares                          | 84.00 MiB     |
+-----------------------------------------------------------------+---------------+
10 rows in set (0.00 sec)

[(none)]> select format_bytes(sum(current_alloc)) from sys.x$memory_global_by_current_bytes;
+----------------------------------+
| format_bytes(sum(current_alloc)) |
+----------------------------------+
| 132.09 GiB                       |
+----------------------------------+
1 row in set (0.01 sec)

-- At the OS level

pt-summary:

Memory

   Total | 255.7G
    Free | 2.0G
    Used | physical = 226.5G, swap allocated = 0.0, swap used = 0.0, virtual = 226.5G
  Shared | 3.8M
 Buffers | 27.2G
  Caches | 27.0G
   Dirty | 1100 kB
 UsedRSS | 225.3G

Swappiness | 10
DirtyPolicy | 20, 10
DirtyStatus | 0, 0

Top Processes

    PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
2047259 mysql     20   0  228.7g 222.5g  15656 S 300.0 87.0 187507:28 mysqld
 801568 nicolas+  20   0   16808   4388   3428 R   5.9  0.0   0:00.02 top
2047180 root      20   0  904328 122208   7412 S   5.9  0.0   6362:15 cadvisor
2049082 mataque+  20   0   50712  38640  10668 S   5.9  0.0  57:46.52 mataque+
3975914 prometh+  20   0 1214972 172456   9624 S   5.9  0.1   6657:38 prometh+
      1 root      20   0  177388  19768   8504 S   0.0  0.0   3021:58 systemd
      2 root      20   0       0      0      0 S   0.0  0.0   0:56.92 kthreadd
      3 root       0 -20       0      0      0 I   0.0  0.0   0:00.00 rcu_gp
      4 root       0 -20       0      0      0 I   0.0  0.0   0:00.22 rcu_par+

$ free -h
              total        used        free      shared  buff/cache   available
Mem:          255Gi       226Gi       1.9Gi       3.0Mi        27Gi        26Gi
Swap:            0B          0B          0B

Hello team! Can anyone check on this please?

Thanks!

Hello,
You might also check by enabling the performance_schema memory instruments.
Also, if you are not using an alternative memory allocator such as jemalloc, the default allocator can perform poorly and may not release memory back to the OS.
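
If some memory instruments are disabled, they can be switched on at runtime; a minimal sketch (note that only allocations made after enabling are counted, so setting the option in my.cnf and restarting gives a complete picture):

update performance_schema.setup_instruments
   set enabled = 'YES'
 where name like 'memory/%';
-- or persistently in my.cnf: performance-schema-instrument='memory/%=ON'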

Thanks for your response, Yunus! I already checked the performance_schema information, but it does not contain what we are looking for. We need to understand how much memory each database session is consuming in order to diagnose the high memory usage.
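
For completeness, this is roughly the per-session view we already looked at (a sketch; the sys view only covers instrumented allocations, which is why it still does not explain the gap between the ~132 GiB instrumented and the ~225 GiB of RSS):

select thread_id, user, current_allocated, total_allocated
  from sys.memory_by_thread_by_current_bytes  -- already sorted by current bytes, largest first
 limit 20;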

We are going to load jemalloc to check the behavior. Thanks!
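
A sketch of how we plan to preload it (package names and library paths below are assumptions and vary by distribution; it requires a mysqld restart):

# install jemalloc (RHEL-style; on Debian/Ubuntu: apt install libjemalloc2)
yum install jemalloc
# for a systemd-managed mysqld, set LD_PRELOAD in a drop-in, e.g.
#   /etc/systemd/system/mysqld.service.d/override.conf
#     [Service]
#     Environment="LD_PRELOAD=/usr/lib64/libjemalloc.so.2"
systemctl daemon-reload && systemctl restart mysqld
# confirm the allocator is mapped into the running process
grep -c jemalloc /proc/$(pidof mysqld)/maps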
