Hello, this is my first post, so please bear with me.
I am trying to configure innodb-buffer-pool-size correctly, but I don't see any difference in my benchmark results.
My hardware:
CPU: Intel Xeon E-2236 (6 core, 12 Threads)
Disks: 2 x Micron_7300_MTFDHBE1T9TDF, software RAID 1 with mdadm
OS: CentOS 7
RAM: 32GB
Percona 5.7
OS tuning: nothing special, just noatime in /etc/fstab, vm.swappiness = 1, and the throughput-performance tuned profile
MySQL process limits:
# cat /proc/29065/limits
Limit                     Soft Limit           Hard Limit           Units
Max cpu time              unlimited            unlimited            seconds
Max file size             unlimited            unlimited            bytes
Max data size             unlimited            unlimited            bytes
Max stack size            8388608              unlimited            bytes
Max core file size        0                    unlimited            bytes
Max resident set          unlimited            unlimited            bytes
Max processes             257181               257181               processes
Max open files            65536                65536                files
Max locked memory         unlimited            unlimited            bytes
Max address space         unlimited            unlimited            bytes
Max file locks            unlimited            unlimited            locks
Max pending signals       257181               257181               signals
Max msgqueue size         819200               819200               bytes
Max nice priority         0                    0
Max realtime priority     0                    0
Max realtime timeout      unlimited            unlimited            us
Sysbench command:
sysbench oltp_read_only --tables=10 --table-size=1000000 --mysql-socket=/var/lib/mysql/mysql.sock --mysql-db=sbtest --mysql-user=root --mysql-password=xxx --time=300 --threads=12 --report-interval=1 run
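To put the dataset size in perspective (10 tables x 1,000,000 rows), the on-disk size of the sbtest schema can be checked with a query along these lines; this is just a sketch and assumes the standard schema created by sysbench's prepare step:
-- Total data + index size of the sbtest schema, in MB
SELECT ROUND(SUM(data_length + index_length) / 1024 / 1024) AS sbtest_size_mb
FROM information_schema.tables
WHERE table_schema = 'sbtest';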
Results without any customization in /etc/my.cnf (default 128 MB buffer pool):
mysql> SHOW VARIABLES LIKE 'innodb_buffer_pool_size';
+-------------------------+-----------+
| Variable_name           | Value     |
+-------------------------+-----------+
| innodb_buffer_pool_size | 134217728 |
+-------------------------+-----------+
SQL statistics:
    queries performed:
        read:                            37424674
        write:                           0
        other:                           5346382
        total:                           42771056
    transactions:                        2673191 (8910.55 per sec.)
    queries:                             42771056 (142568.86 per sec.)
    ignored errors:                      0      (0.00 per sec.)
    reconnects:                          0      (0.00 per sec.)

General statistics:
    total time:                          300.0018s
    total number of events:              2673191

Latency (ms):
         min:                                    0.78
         avg:                                    1.35
         max:                                   17.85
         95th percentile:                        1.39
         sum:                              3596988.45

Threads fairness:
    events (avg/stddev):           222765.9167/1429.75
    execution time (avg/stddev):   299.7490/0.01
Results with the following configuration file:
[mysqld]
datadir = /var/lib/mysql
log-error = /var/log/mysqld.log
socket = /var/lib/mysql/mysql.sock
pid-file = /var/run/mysqld/mysqld.pid
innodb-buffer-pool-size = 16G
innodb-log-file-size = 4G
innodb-flush-method = O_DIRECT
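(As a sanity check that all three settings were actually picked up after restarting mysqld, a query like the one below can be used; the SHOW VARIABLES output that follows confirms the buffer pool size.)
-- Verify the tuned InnoDB settings after the restart
SHOW VARIABLES WHERE Variable_name IN
  ('innodb_buffer_pool_size', 'innodb_log_file_size', 'innodb_flush_method');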
mysql> SHOW VARIABLES LIKE 'innodb_buffer_pool_size';
+-------------------------+-------------+
| Variable_name           | Value       |
+-------------------------+-------------+
| innodb_buffer_pool_size | 17179869184 |
+-------------------------+-------------+
SQL statistics:
    queries performed:
        read:                            37749712
        write:                           0
        other:                           5392816
        total:                           43142528
    transactions:                        2696408 (8987.94 per sec.)
    queries:                             43142528 (143807.07 per sec.)
    ignored errors:                      0      (0.00 per sec.)
    reconnects:                          0      (0.00 per sec.)

General statistics:
    total time:                          300.0018s
    total number of events:              2696408

Latency (ms):
         min:                                    0.70
         avg:                                    1.33
         max:                                   13.08
         95th percentile:                        1.39
         sum:                              3596925.29

Threads fairness:
    events (avg/stddev):           224700.6667/937.97
    execution time (avg/stddev):   299.7438/0.00
These results are essentially the same. Shouldn't there be a noticeable performance improvement when increasing innodb-buffer-pool-size? I repeated each test 5 times and used the results of the last run. Am I doing something wrong?
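For completeness, this is roughly how I plan to check between runs whether reads are actually served from the buffer pool or have to go to disk (a sketch; the counters are the standard InnoDB status variables, and the hit-ratio query assumes performance_schema is enabled, which is the default in 5.7):
-- Innodb_buffer_pool_read_requests = logical read requests
-- Innodb_buffer_pool_reads         = requests that missed the pool and went to disk
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read%';

-- Approximate buffer pool hit ratio in percent (counters are cumulative since startup)
SELECT 100 - 100 *
       (SELECT variable_value FROM performance_schema.global_status
         WHERE variable_name = 'Innodb_buffer_pool_reads') /
       (SELECT variable_value FROM performance_schema.global_status
         WHERE variable_name = 'Innodb_buffer_pool_read_requests')
       AS buffer_pool_hit_pct;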