MySQL memory creeping up in PXC node

Greetings, great minds.
I have a node that is part of a 4-node Galera cluster. I have allocated a 16GB InnoDB buffer pool, and the InnoDB log buffer size is 4GB. The total RAM in this box is 23GB. Recently, using top, I noticed that mysqld's memory usage has crept up from 78% to 85% and is now at 93%. I would like to find out why the memory usage keeps growing when there is practically nothing going on in the MySQL processlist; the only statement running is a single INSERT. That's all.
Note: this node is not even the busiest; little or no traffic is directed to it. Any suggestion would be helpful. I have already done a few searches as well.


Hi @Dba1, thanks for posting to the Percona forums!
Are you using PMM against this instance? If so, there is a Memory section on the MySQL Instance Summary dashboard that may indicate where the memory is being allocated. There is also the Node Summary dashboard, which will show actual memory allocation (vs. VIRT). Please share your dashboards with us when you can and we can give them a review. Thanks!


Thank you, Mr. Coburn. Unfortunately I am not using PMM in my environment, for one reason: my organization cannot reach external repos. Is there a better way of installing PMM and its client? As for the memory leak, I was able to resolve it. Thank you.


Hi @Dba1

Did you try the quickstart? Install Percona Monitoring and Management
If you can download the Docker container + the pmm2-client package, then you won't need to connect to external repositories later.
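
For an air-gapped setup, one approach is to pull and save the PMM Server image on a machine that does have internet access, copy the tarball over, and load it on the isolated host. A minimal sketch, assuming the percona/pmm-server:2 image tag and port mapping from the standard quickstart (adjust for your version and proxy rules):

# On a host with internet access
docker pull percona/pmm-server:2
docker save percona/pmm-server:2 -o pmm-server-2.tar

# Copy pmm-server-2.tar to the isolated host (scp, USB, etc.), then:
docker load -i pmm-server-2.tar
docker volume create pmm-data
docker run -d -p 443:443 -v pmm-data:/srv --name pmm-server percona/pmm-server:2

The pmm2-client RPM/DEB packages can be downloaded the same way and installed locally without configuring the Percona repositories.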


Hello Michael, thanks a lot for visiting this post again. For one reason or another, which I believe is down to our proxy configuration, I can't even pull the Docker image. Also, is there any way to avoid installing the PMM client on all of my database servers?


Hi @Dba1

Once you have PMM Server set up, you can avoid installing anything on the clients by using the Remote MySQL Monitoring feature.
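
Remote instances are normally added from the PMM web UI (Add Instance → MySQL), but a service can also be registered from any host that already runs pmm-agent. A hedged sketch, where the hostname, credentials, and service name are placeholders and the exact flags should be verified against pmm-admin add mysql --help for your PMM version:

# Register a MySQL service reachable over TCP (placeholders: host, credentials, service name)
pmm-admin add mysql --host=db1.example.com --port=3306 \
    --username=pmm --password=secret \
    --query-source=perfschema --service-name=db1-remote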


Hello,

I have the same problem with 8.0.29.
We’re evaluating version 8 because we’re planning to switch from 5.7.
So I fired up one node in bootstrap mode and tried to restore a dump of our 5.7 DB on it. It's around 400GB. The InnoDB buffer pool size is set to 6GB, but while restoring with myloader, htop looks like this:

The memory usage increases until the OOM killer strikes.
I have PMM up and running if you need any metrics.


Hi @Ralph_Moser, thanks for participating! What I am looking for in PMM is the following:


@Michael_Coburn it looks like a memory leak.
Sorry, I have to split this into three posts because I'm a new user who can't embed more than one media item…

HTOP:


PMM:


PMM:


Thanks @Ralph_Moser and @Dba1 - I've posted to Percona's internal Slack seeking additional information to help troubleshoot this, and I'll update this thread :+1:


Hi @Ralph_Moser and @Dba1 -

We have a confirmed bug related to memory usage. My recommendation is to follow this issue in JIRA so you'll see when the fix lands in a release, and then you can test that version:
https://jira.percona.com/browse/PXC-4051


Hi @Ralph_Moser,
If I understand correctly, you have one node and are trying to load data into it.
In such a case, please try setting

set global wsrep_trx_fragment_unit = 'rows';
set global wsrep_trx_fragment_size = 100000;

prior to running myloader, or try using the --queries-per-transaction parameter of myloader to limit the transaction size. For example:
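
A hedged sketch of what that could look like; the dump directory, connection details, and transaction size below are placeholders, so adjust them to your environment:

# Enable streaming replication, then load with smaller transactions
mysql -e "SET GLOBAL wsrep_trx_fragment_unit='rows'; SET GLOBAL wsrep_trx_fragment_size=100000;"

myloader --directory=/backups/dump57 --host=127.0.0.1 --user=root --password=secret \
         --queries-per-transaction=1000 --overwrite-tables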


Hi @Kamil_Holubicki
Yes, you did.

I tried enabling streaming replication with the statements you provided, but sadly it doesn't change anything. Memory usage is still growing steadily.
On our 5.7 test cluster, restoring mydumper dumps to one node always worked perfectly without any special setup. (Restore on a single node, add the other two nodes, SST, done.)


I think I wasn't patient enough. Memory usage has now stabilized at around 14.5G.
Is this some new behavior in Galera 4 that makes it necessary to enable streaming replication?


@Ralph_Moser
Could you post the exact myloader command line you are using?
If possible, please try experimenting with the following settings (see the sketch after this list):

  1. reduce wsrep_trx_fragment_size to 10000 rows or even less
  2. use 1 thread when running myloader (--threads=1)
  3. limit the myloader transaction size (--queries-per-transaction parameter)
  4. use jemalloc: Enabling jemalloc on Percona Server for MySQL 5.7 and 8.0 Series - Percona Database Performance Blog
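
A minimal sketch of that experiment, reusing the same placeholder dump directory and credentials as above, plus a generic Linux check that jemalloc is actually loaded into mysqld:

# Confirm jemalloc is mapped into the running mysqld process
grep -c jemalloc /proc/$(pidof mysqld)/maps

# Smaller streaming-replication fragments, single-threaded load, limited transaction size
mysql -e "SET GLOBAL wsrep_trx_fragment_unit='rows'; SET GLOBAL wsrep_trx_fragment_size=10000;"

myloader --directory=/backups/dump57 --host=127.0.0.1 --user=root --password=secret \
         --threads=1 --queries-per-transaction=1000 --overwrite-tables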

jemalloc is already enabled. :wink:

Like I wrote before, it stabilized; I just wasn't waiting long enough.
Thanks for all the help!

The only thing I’m still asking myself is why this didn’t happen with 5.7.


Hi @Ralph_Moser,
I've just checked with PXC 5.7.39 and, to be honest, I observe very similar behavior to 8.0.
If you've got a reproducible test case that shows the problem or the behavior difference, please create a ticket at jira.percona.com providing all the needed details (steps to reproduce, config, etc.).
