If I set gcache.size=5G, would that cap the gcache page files so the total volume does not exceed 5G? My gcache.page_size is set to the default, which is 128M. Or do I have to set some other option to limit gcache.page_size? The gcache page files are filling up my hard drive.
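For context, here is a minimal my.cnf sketch of how these options would be passed through wsrep_provider_options (the values shown are illustrative, not my exact file):

# my.cnf sketch: gcache options go in the semicolon-separated wsrep_provider_options string
[mysqld]
wsrep_provider_options="gcache.size=5G;gcache.page_size=128M;gcache.keep_pages_size=0"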
gcache files are purged automatically when they are no longer needed. If you have excessive gcache files that are not being purged, this is indicative of some other issue with this node. How often does flow control turn on from this node? Is this node configured the same as the other nodes?
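One way to see how often flow control has fired on this node is to check the flow control counters (a sketch; the counters are cumulative since the node started):

-- How often this node has sent/received flow control messages (sketch)
SHOW GLOBAL STATUS LIKE 'wsrep_flow_control%';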
Yes, they are configured the same.
It is a multi-master setup, and I do have a designated write server that gets updated every 5-10 minutes. When I look at the cluster, all nodes report a synced status. Would setting gcache.size=5G control the number of pages that are kept? I had a page file dating back to March and a lot of files from July 12 created only minutes apart, sometimes within the same minute.
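For reference, each node's state and receive queue can be checked with something like this (a sketch using standard wsrep status variables):

-- Per-node state, receive queue depth, and cluster status (sketch)
SHOW GLOBAL STATUS WHERE Variable_name IN
  ('wsrep_local_state_comment', 'wsrep_local_recv_queue', 'wsrep_cluster_status');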
Can you please verify the version of PXC? Also, please provide the output of SELECT @@wsrep_provider_options from this node.
I just upgraded from 8.0.26, but I have been on PXC 8. It looks like gcache.keep_pages_count = 0 and gcache.keep_pages_size = 0:
mysql Ver 8.0.27-18.1 for Linux on x86_64 (Percona XtraDB Cluster (GPL), Release rel18, Revision ac35177, WSREP version 26.4.3)
base_dir = /db4/
base_host = 192.168.2.61
base_port = 4567
cert.log_conflicts = no
cert.optimistic_pa = no
debug = no
evs.causal_keepalive_period = PT1S
evs.debug_log_mask = 0x1
evs.delay_margin = PT1S
evs.delayed_keep_period = PT30S
evs.inactive_check_period = PT0.5S
evs.inactive_timeout = PT15S
evs.info_log_mask = 0
evs.install_timeout = PT7.5S
evs.join_retrans_period = PT1S
evs.keepalive_period = PT1S
evs.max_install_timeouts = 3
evs.send_window = 10
evs.stats_report_period = PT1M
evs.suspect_timeout = PT5S
evs.use_aggregate = true
evs.user_send_window = 4
evs.version = 1
evs.view_forget_timeout = P1D
gcache.dir = /db2/cache
gcache.freeze_purge_at_seqno = -1
gcache.keep_pages_count = 0
gcache.keep_pages_size = 0
gcache.mem_size = 0
gcache.name = galera.cache
gcache.page_size = 128M
gcache.recover = yes
gcache.size = 5G
gcomm.thread_prio =
gcs.fc_debug = 0
gcs.fc_factor = 1.0
gcs.fc_limit = 100
gcs.fc_master_slave = no
gcs.fc_single_primary = no
gcs.max_packet_size = 64500
gcs.max_throttle = 0.25
gcs.recv_q_hard_limit = 9223372036854775807
gcs.recv_q_soft_limit = 0.25
gcs.sync_donor = no
gmcast.listen_addr = ssl://0.0.0.0:4567
gmcast.mcast_addr =
gmcast.mcast_ttl = 1
gmcast.peer_timeout = PT3S
gmcast.segment = 0
gmcast.time_wait = PT5S
gmcast.version = 0
ist.recv_addr = 192.168.2.61
pc.announce_timeout = PT3S
pc.checksum = false
pc.ignore_quorum = false
pc.ignore_sb = false
pc.linger = PT20S
pc.npvo = false
pc.recovery = true
pc.version = 0
pc.wait_prim = true
pc.wait_prim_timeout = PT30S
pc.weight = 1
protonet.backend = asio
protonet.version = 0
repl.causal_read_timeout = PT30S
repl.commit_order = 3
repl.key_format = FLAT8
repl.max_ws_size = 2147483647
repl.proto_max = 10
socket.checksum = 2
socket.recv_buf_size = auto
socket.send_buf_size = auto
socket.ssl = YES
socket.ssl_ca = /etc/mysql/percona-cert/ca.pem
socket.ssl_cert = /etc/mysql/percona-cert/server-cert.pem
socket.ssl_cipher =
socket.ssl_compression = YES
socket.ssl_key = /etc/mysql/percona-cert/server-key.pem
socket.ssl_reload = 1
And you only see this behavior on this one node? Is there anything in MySQL's error log while the gcache page files are being written to disk? Again, the Galera manual states that the gcache is a fixed-size ring buffer; it only grows when the node is out of sync and needs to buffer extra transactions. You don't have AppArmor or SELinux enabled that might prevent the purging of the files?
Yes; I had to rebuild the cluster. There was nothing in the logs to give a hint at the issue. After rebuilding the cluster, everything seems OK.