Hi,
we have a 3-node cluster (two nodes running MySQL and one running only the garbd daemon for quorum management) on RHEL 6.2 with 8GB RAM and 4GB swap.
On this cluster we have 16 databases totalling roughly 160GB of data, and we use only one server as the master to execute our application's queries.
Many kinds of queries are run against the databases, but on DELETE queries the server swaps until the swap partition is saturated and MySQL crashes.
Each transaction deletes 20,000 rows.
We have reduced innodb-buffer-pool-size but the problem still occurs.
Here’s part of our my.cnf related to InnoDB and memory:
innodb_buffer_pool_size=3584M
innodb_additional_mem_pool_size=32M
innodb_log_buffer_size=16M
innodb_flush_log_at_trx_commit=1
innodb_file_per_table=1
innodb_data_file_path=ibdata1:100M:autoextend
## You may want to tune the below depending on number of cores and disk subsystem
innodb_read_io_threads=4
innodb_write_io_threads=4
innodb_doublewrite=1
innodb_log_file_size=512M
innodb_log_files_in_group=2
innodb_buffer_pool_instances=2
innodb_thread_concurrency=0
#innodb_file_format=barracuda
innodb_flush_method=O_DIRECT
innodb_locks_unsafe_for_binlog=1
innodb_autoinc_lock_mode=2
## avoid statistics update when doing e.g. SHOW TABLES
innodb_stats_on_metadata=0
engine_condition_pushdown=1
key_buffer_size=24M
tmp_table_size=32M
max_heap_table_size=32M
max_allowed_packet=16M
#sort_buffer_size=512K
#read_buffer_size=256K
#read_rnd_buffer_size=512K
myisam_sort_buffer_size=8M
skip_name_resolve
memlock=0
sysdate_is_now=1
max_connections=500
thread_cache_size=50
query_cache_type=0
query_cache_size=0
table_open_cache=4096
lower_case_table_names=0
open_files_limit=65535
Are the transactions too large?
What is the suggested maximum transaction size?
Hi,
Before a transaction can be replicated and applied on the cluster nodes, it has to be fully stored in memory. I suppose those 20,000 rows actually amount to a lot of data?
And yes, the bigger the transaction, the worse for XtraDB Cluster, so the best solution is to split this delete into smaller chunks, or to add more RAM. Lowering the InnoDB buffer pool too much will only hurt overall performance.
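For example, a minimal sketch of a chunked purge (the schema, table, column and cut-off value below are hypothetical; run each statement with autocommit enabled and repeat it until it affects 0 rows, so every writeset stays small):

-- hypothetical names: adjust database, table and purge condition to your case
DELETE FROM mydb.mytable
 WHERE created_at < '2014-01-01'
 LIMIT 5000;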
Hi,
we have split the delete into chunks of 5,000 rows and it seems the amount of data no longer causes memory problems.
But recently another script, which only inserts 1,500 records (roughly 480KB in total) and previously had no memory problems, made the server swap during a longer-than-normal run and briefly pushed the server out of the cluster.
We are trying to analyze the wsrep and Galera variables to check whether we are hitting a limitation, but we can't find anything that helps us.
We can't understand why the Galera variable gcs.recv_q_hard_limit is so large compared to examples found on the Internet, nor why we hit memory problems in XtraDB with the current values of wsrep_max_ws_size and wsrep_max_ws_rows.
Can we tune some variables?
Are we missing some configuration we should check?
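For reference, the current writeset limits and Galera provider options can be read directly on a node, for example (gcs.recv_q_hard_limit appears inside wsrep_provider_options):

SHOW GLOBAL VARIABLES LIKE 'wsrep_max_ws_rows';
SHOW GLOBAL VARIABLES LIKE 'wsrep_max_ws_size';
SHOW GLOBAL VARIABLES LIKE 'wsrep_provider_options';
SHOW GLOBAL STATUS LIKE 'wsrep_local_recv_queue%';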
Last Monday we added 2GB of RAM to the server, so it now has a total of 10GB.
Our software runs every day, but the "problematic" query runs only on Sunday, so in the last two days XtraDB has only handled queries that never caused problems, yet we see high memory use in the server's top:
top - 10:31:33 up 1 day, 22:36, 1 user, load average: 0.00, 0.00, 0.00
Tasks: 117 total, 1 running, 116 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.3%us, 0.3%sy, 0.0%ni, 99.3%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 10130576k total, 9988960k used, 141616k free, 155896k buffers
Swap: 4194296k total, 0k used, 4194296k free, 4912120k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
2880 mysql 20 0 7296m 6.3g 2.0g S 0.0 65.4 48:22.56 mysqld
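A rough way to see how much of that resident memory InnoDB itself accounts for is, for example (the "Total memory allocated" line in the BUFFER POOL AND MEMORY section reports InnoDB's allocation, and the wsrep counter gives an idea of the replication traffic received):

SHOW ENGINE INNODB STATUS\G
SHOW GLOBAL VARIABLES LIKE 'innodb_buffer_pool_size';
SHOW GLOBAL STATUS LIKE 'wsrep_received_bytes';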
Looking at the XtraDB Cluster 5.6 release notes, we saw that some bugs related to memory usage and leaks were fixed.
Could we be affected by one of those issues? We run XtraDB Cluster: Server version 5.5.34-55-log Percona XtraDB Cluster (GPL), wsrep_25.9.r3928.