Weird innodb_buffer_pool_size setting

Cannot for the life of me figure out what is going on.

Here is a snippet from my pxc yaml:

  pxc:
    size: 3
    image: percona/percona-xtradb-cluster:8.0.21-12.1
    autoRecovery: true
    configuration: |
      [sst]
      xbstream-opts=--decompress
      [xtrabackup]
      compress=lz4
      [mysqld]
      wsrep_provider_options="gcache.size=1G; gcache.recover=yes"
      binlog_expire_logs_seconds=604800
      skip-name-resolve=ON
      default-time-zone="EST5EDT"
    imagePullPolicy: IfNotPresent
    resources:
      requests:
        memory: 2G
        cpu: 600m
      limits:
        memory: 2G
        cpu: "1"

/etc/mysql/conf.d/auto-config.cnf looks right:

[mysqld]
innodb_buffer_pool_size = 1500000000
max_connections = 158
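
For reference, that 1500000000 is exactly 75% of the 2G memory limit. A quick sketch of the arithmetic (the 75% factor is my inference from the numbers here, not something I pulled from the operator source):

```python
# Sketch of the operator's apparent auto-tune math (an inference from
# the observed values, not the operator's actual source code):
limit_bytes = 2_000_000_000            # resources.limits.memory: 2G
buffer_pool = int(limit_bytes * 0.75)  # 75% of the container limit
print(buffer_pool)                     # 1500000000, matching auto-config.cnf
```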

/etc/percona-xtradb-cluster.conf.d/init.cnf looks right:

[sst]
xbstream-opts=--decompress
[xtrabackup]
compress=lz4
[mysqld]
wsrep_provider_options="gcache.size=1G; gcache.recover=yes"
binlog_expire_logs_seconds=604800
skip-name-resolve=ON
default-time-zone="EST5EDT"

SHOW VARIABLES output:

+-------------------------+------------+
| Variable_name           | Value      |
+-------------------------+------------+
| innodb_buffer_pool_size | 2147483648 |
+-------------------------+------------+

+-----------------+-------+
| Variable_name   | Value |
+-----------------+-------+
| max_connections | 158   |
+-----------------+-------+

Where in the world is that innodb_buffer_pool_size=2147483648 coming from?

Seems to be something about 2G. When I scale the cluster to 1G I get:

+-------------------------+-----------+
| Variable_name           | Value     |
+-------------------------+-----------+
| innodb_buffer_pool_size | 805306368 |
+-------------------------+-----------+

When I scale it to 4G I get:

+-------------------------+------------+
| Variable_name           | Value      |
+-------------------------+------------+
| innodb_buffer_pool_size | 3221225472 |
+-------------------------+------------+

I still do not know where 2147483648 is coming from.

One more data point. It looks like it sees the 1.5G (2.0G * 0.75), but something is raising the buffer pool size somewhere.

[0] pxc-operator.cluster1-pxc-2.innobackup.backup.log: [1616688190.547952510, {"log"=>"xtrabackup: recognized server arguments: --datadir=/var/lib/mysql --server-id=12 --innodb_flush_log_at_trx_commit=0 --innodb_flush_method=O_DIRECT --innodb_file_per_table=1 --innodb_buffer_pool_size=1500000000 --defaults_group=mysqld "}]
[17] pxc-operator.cluster1-pxc-2.innobackup.backup.log: [1616688190.569888718, {"log"=>"Number of pools: 1"}]
[291] pxc-operator.cluster1-pxc-2.innobackup.backup.log: [1616688193.204098578, {"log"=>"210325 16:03:13 [00] Compressing and streaming ib_buffer_pool to <STDOUT>"}]
[0] pxc-operator.cluster1-pxc-2.innobackup.backup.log: [1616688919.934593590, {"log"=>"xtrabackup: recognized server arguments: --datadir=/var/lib/mysql --server-id=12 --innodb_flush_log_at_trx_commit=0 --innodb_flush_method=O_DIRECT --innodb_file_per_table=1 --innodb_buffer_pool_size=1500000000 --defaults_group=mysqld "}]
[17] pxc-operator.cluster1-pxc-2.innobackup.backup.log: [1616688919.954328414, {"log"=>"Number of pools: 1"}]
[272] pxc-operator.cluster1-pxc-2.innobackup.backup.log: [1616688922.612918563, {"log"=>"210325 16:15:22 [00] Compressing and streaming ib_buffer_pool to <STDOUT>"}]

Hello @mleklund ,

That is a weird bug. Thank you for submitting it. I have created a JIRA ticket to fix this: https://jira.percona.com/browse/K8SPXC-682

We will ship the fix in version 1.9.0.

Hi @mleklund

MySQL automatically adjusts the buffer pool size, as described in the docs:

Buffer pool size must always be equal to or a multiple of innodb_buffer_pool_chunk_size * innodb_buffer_pool_instances . If you configure innodb_buffer_pool_size to a value that is not equal to or a multiple of innodb_buffer_pool_chunk_size * innodb_buffer_pool_instances , buffer pool size is automatically adjusted to a value that is equal to or a multiple of innodb_buffer_pool_chunk_size * innodb_buffer_pool_instances .
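
Putting numbers to that rule reproduces every value in this thread. A minimal sketch, assuming the stock 8.0 defaults of a 128M chunk size and 8 buffer pool instances when the pool is at least 1GiB (1 instance below that):

```python
# Sketch of the rounding rule from the MySQL docs. The defaults below
# (128M chunks; 8 instances for pools >= 1GiB, else 1) are assumed
# stock 8.0 defaults, not values read from this cluster.
CHUNK = 128 * 1024 * 1024  # innodb_buffer_pool_chunk_size default

def effective_pool_size(requested: int, instances: int) -> int:
    unit = CHUNK * instances
    # round the requested size UP to the next multiple of chunk * instances
    return -(-requested // unit) * unit

print(effective_pool_size(1_500_000_000, 8))  # 2147483648 -- the mystery 2GiB
print(effective_pool_size(750_000_000, 1))    # 805306368  -- the 1G-limit run
print(effective_pool_size(3_000_000_000, 8))  # 3221225472 -- the 4G-limit run
```

So the requested 1500000000 rounds up to the next multiple of 128M * 8 = 1GiB, which is exactly the mysterious 2147483648.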

Also, there is a post on the Percona blog that describes exactly this scenario:

And the final size is 2GB. Yes! you intended to set the value to 1.5GB and you succeeded in setting it to 2GB. Even if you set 1 byte higher, like setting: 1073741825, you will end up with a buffer pool of 2GB.

Then, IMO, the operator should be better about its math. It sets innodb_buffer_pool_size to a fraction (75%) of the container limit precisely so users do not end up in an OOM-kill situation, and that 2G limit is a hard limit in K8s.
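
To make the risk concrete, a quick check using the numbers from this thread:

```python
# Values copied from this thread: the rounded-up pool alone already
# exceeds the container's hard memory limit.
limit = 2_000_000_000   # K8s memory limit from the yaml ("2G")
pool = 2_147_483_648    # innodb_buffer_pool_size MySQL actually used
print(pool > limit)     # True: OOM-kill territory before any other allocation
```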

@mleklund I agree with you. The operator should be smarter here.
We will handle it through https://jira.percona.com/browse/K8SPXC-682.

@mleklund As I said before, MySQL automatically adjusts the buffer pool size according to the buffer pool chunk size and the number of buffer pool instances. To allow 75% of 2G, we need to specify the buffer pool chunk size in the auto-tune config along with the buffer pool size.
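
A sketch of what that could look like in the generated auto-tune config. These exact values are illustrative, not necessarily what the PR emits:

```ini
[mysqld]
# Illustrative values only: the chunk size is chosen so that
# chunk_size * instances equals the intended 1.5G pool exactly,
# leaving MySQL nothing to round up.
innodb_buffer_pool_size = 1500000000
innodb_buffer_pool_instances = 8
innodb_buffer_pool_chunk_size = 187500000
```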

I submitted a PR to fix the issue.
