Memory / swap usage

Hi All,

I’ve been using Percona XtraDB Cluster for several years now. It works like a charm most of the time, and I’m really happy with it.

We have a 3-node setup. All nodes are quite powerful machines with SSD RAID 10 arrays for the data directory and 376 GB of memory. WSREP traffic goes over a 10 Gbit copper interface. We use ProxySQL as a load balancer. All the write operations go to the ‘primary’. All the reads are equally distributed across the 3 servers. Our application is read-intensive. The servers only run PXC.

Recently, after upgrading to the latest version of PXC, memory usage is getting out of hand. After a few days the “primary” server eats all the available memory, starts swapping, and eventually crashes if I don’t reboot.

I’m having a hard time finding out why this happens. I’ve already decreased the InnoDB buffer pool size from 320 GB to 288 GB. This doesn’t help; it just takes a bit longer before it crashes.

I’ve done quite a lot of investigating and I’m running out of options on how to pinpoint this. I followed the steps in this blog post: [url]https://www.percona.com/blog/2018/06/28/what-to-do-when-mysql-runs-out-of-memory-troubleshooting-guide/[/url]
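
For reference, this is the kind of check that post walks through (a rough sketch; the performance_schema memory instruments are off by default in 5.7 and only count allocations made after they are enabled):

# Enable the memory instruments at runtime (only new allocations are counted):
mysql -e "UPDATE performance_schema.setup_instruments SET ENABLED = 'YES' WHERE NAME LIKE 'memory/%';"

# Largest current allocations as seen by performance_schema:
mysql -e "SELECT event_name, current_number_of_bytes_used
          FROM performance_schema.memory_summary_global_by_event_name
          ORDER BY current_number_of_bytes_used DESC LIMIT 10;"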

The numbers I collect are nowhere near the memory MySQL consumes:

----------------------
BUFFER POOL AND MEMORY
----------------------
Total large memory allocated 321921220608
Dictionary memory allocated 2657436026
Internal hash tables (constant factor + variable factor)
Adaptive hash index 15589755328 (4895802944 + 10693952384)
Page hash 9563096 (buffer pool 0 only)
Dictionary cache 3881386762 (1223950736 + 2657436026)
File system 262418160 (812272 + 261605888)
Lock system 765103704 (764992376 + 111328)
Recovery system 0 (0 + 0)
Buffer pool size 18873792
Buffer pool size, bytes 309228208128
Free buffers 6067474
Database pages 12153613
Old database pages 4485737
Modified db pages 24447
Pending reads 0
Pending writes: LRU 0, flush list 0, single page 0
Pages made young 21609578, not young 394975732
0.00 youngs/s, 0.00 non-youngs/s
Pages read 11589136, created 637621, written 21663292
0.00 reads/s, 1.00 creates/s, 63.98 writes/s
Buffer pool hit rate 1000 / 1000, young-making rate 0 / 1000 not 0 / 1000
Pages read ahead 0.00/s, evicted without access 0.00/s, Random read ahead 0.00/s
LRU len: 12153613, unzip_LRU len: 0
I/O sum[101760]:cur[0], unzip sum[0]:cur[0]
top - 17:51:17 up 3 days, 3:00, 1 user, load average: 0.92, 1.12, 1.49
Tasks: 294 total, 1 running, 293 sleeping, 0 stopped, 0 zombie
%Cpu(s): 2.6 us, 0.3 sy, 0.0 ni, 97.0 id, 0.0 wa, 0.0 hi, 0.1 si, 0.0 st
KiB Mem : 39624163+total, 1090348 free, 38334976+used, 11801544 buff/cache
KiB Swap: 9764860 total, 1995436 free, 7769424 used. 11406660 avail Mem 

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 
3416 mysql 20 0 0.367t 0.357t 2.002g S 71.1 96.8 9237:05 mysqld 
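
To put the gap in numbers (very rough, and some of the figures above overlap, so take it as a ballpark):

# Buffer pool chunks as reported by InnoDB ("Total large memory allocated"):
echo $(( 321921220608 / 1024 / 1024 / 1024 ))   # ~299 GiB
# top shows RES = 0.357t for mysqld, which is roughly 365 GiB,
# so something like 65 GiB of resident memory is not covered by the buffer
# pool allocation (dictionary cache, adaptive hash index, per-connection
# buffers, table caches, and so on).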

So, I’m looking for some pointers here… Thank you in advance!

Kind regards,

Dirk

Configuration:

[mysqld]
datadir = /db
socket = /var/run/mysqld/mysqld.sock
log_error = /var/log/mysql/mysql.log
pid_file = /var/run/mysqld/mysqld.pid
sql_mode = NO_ENGINE_SUBSTITUTION
skip_name_resolve
symbolic_links = 0
max_allowed_packet = 64M
show_compatibility_56 = ON
sync_binlog = 0
max_connections = 500
pxc_strict_mode = PERMISSIVE
block_encryption_mode = aes-256-cbc

# CHARACTER SET
collation_server = utf8_general_ci
character_set_server = utf8
character_set_client_handshake = FALSE

# INNODB options
innodb_buffer_pool_size = 288G
innodb_buffer_pool_instances = 32
innodb_buffer_pool_chunk_size = 512M
#innodb_change_buffer_max_size = 10
innodb_numa_interleave = 1
innodb_flush_neighbors = 0
innodb_log_file_size = 2G
innodb_log_buffer_size = 64M
innodb_flush_log_at_trx_commit = 2
innodb_flush_method = O_DIRECT
innodb_read_io_threads = 16
innodb_write_io_threads = 8
innodb_io_capacity = 5000
innodb_io_capacity_max = 10000
innodb_autoinc_lock_mode = 2

# Other buffers and finetuning
key_buffer_size = 32M
open_files_limit = 1048576
table_open_cache = 524288
table_definition_cache = 262144

# Other vars
group_concat_max_len = 10240

# WSREP options
wsrep_provider = /usr/lib/galera3/libgalera_smm.so
wsrep_provider_options = "gcache.size=2048M; gcs.fc_limit = 256; gcs.fc_factor = 0.99; gcs.fc_master_slave = yes"
wsrep_cluster_name = proactive
wsrep_cluster_address = gcomm://172.16.0.1,172.16.0.2,172.16.0.3
wsrep_node_name = pxc1
wsrep_node_address = 172.16.0.1
wsrep_slave_threads = 56
wsrep_sync_wait = 1
wsrep_auto_increment_control = 0
wsrep_sst_method = xtrabackup-v2
wsrep_sst_auth = sst:THISISSECRET... :-)
wsrep_log_conflicts = 1

[sst]
inno-apply-opts = "--use-memory=20G"

Good to hear PXC has been working great for you.

To understand the recent change, I’m wondering what has changed for you:

  • What was the old version and what is the current version?
  • Have you started using large transactions?
  • Is there any other change in workload?
  • Once memory shoots up, do you see it coming down if you leave it for some time (without any active workload)?
  • Do you have a sample test case for us to try?

First of all, thank you for replying!

Current: 5.7.22-22-57
Previous: 5.7.21-29.26

We try to keep up with the versions. We run it in our test environment for a while, and then proceed to production. Unfortunately, our test system never experiences the same kind of load as production. We do see the same swap behavior in our test environment.

No, or none other than the ones we were already using. We do back up all the databases with mysqldump each night, but we have been doing that forever.

No, no significant one.

No. The system is pretty much idle outside of office hours, except for running the backup. Once the swap is taken, it is never returned to the system.

No, because I cannot pinpoint exactly why MySQL starts to swap. I can see the timestamps when it happens, but there is so much running on this system that it’s pretty hard to point to a certain query or batch of queries.
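
Something along these lines is about all I can do at the moment: log mysqld’s memory and swap usage over time and try to line the jumps up with whatever was running (a rough sketch; the log path is just an example and it assumes a single mysqld process):

PID=$(pidof mysqld)
while true; do
    echo "$(date '+%F %T')" $(grep -E 'VmRSS|VmSwap' /proc/$PID/status)
    sleep 60
done >> /tmp/mysqld-memory.log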

Any resolution to the swapping? We started having the same problem with 5.7.22. We noticed that it got much worse when upgrading from Red Hat 6 to Red Hat 7. We have tried upgrading to 5.7.25 to find a cure, to no avail; it continues to swap. We also have been running PXC for several years. I’m wondering if your ibdata1 file has become fragmented? Ours has become severely fragmented, so I’m trying to determine if it is playing a role. Also, we are running NUMA. There was a big change by Percona between 5.7.21 and 5.7.22. Is NUMA a factor for you?

Thanks.

Hi dunnjoe,

We eventually had a Percona consultant take a look at our setup. We were basically getting near some limits with 1,000 databases of around 500 tables each and a high workload. He made some changes to our settings (sysctl.conf), which improved things a bit.

We do run NUMA, and have applied the advised settings for this. We run Ubuntu rather than Red Hat, so I cannot say a lot about that.
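
In case it helps, the standard numactl/numastat tools will show whether the allocations actually end up balanced across the nodes, e.g.:

# Memory per NUMA node on the host:
numactl --hardware
# Per-node memory usage of the mysqld process:
numastat -p $(pidof mysqld)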

About the ibdata1 file: We don’t have that problem. But maybe have a look at this post: [URL]https://www.percona.com/blog/2013/08/20/why-is-the-ibdata1-file-continuously-growing-in-mysql/[/URL]

sysctl.conf changes:

vm.swappiness=1 (we already had this in place)
vm.dirty_ratio=15
vm.dirty_background_ratio=3
vm.vfs_cache_pressure=500
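
For completeness, these can also be applied on the fly, without a reboot (the same lines live in /etc/sysctl.conf so they persist):

sysctl -w vm.swappiness=1
sysctl -w vm.dirty_ratio=15
sysctl -w vm.dirty_background_ratio=3
sysctl -w vm.vfs_cache_pressure=500
# or, after editing /etc/sysctl.conf, reload it with:
sysctl -p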

Hope this helps a bit!

Cheers,

Dirk