A production MySQL database has had steadily rising memory usage since being upgraded from 5.6.32 to 5.7.22, and it is already using 6G of swap.
The buffer pool size and the other memory-related settings are quite small.
I can't find a reason for it to use so much memory.
pmm-mggd215_mysql3306_el6-2018-07-17T17-11-31-summary.zip (33 KB)
[root@ugc215 ~]# ps aux |grep mysql5722
root 10501 0.0 0.0 106228 140 ? S Jul13 0:00 /bin/sh /usr/local/mysql5722/bin/mysqld_safe --datadir=/data/mysql --pid-file=/data/mysql/mysql3306.pid
mysql 12159 89.6 92.1 82892980 60716672 ? Sl Jul13 6840:17 /usr/local/mysql5722/bin/mysqld --basedir=/usr/local/mysql5722 --datadir=/data/mysql --plugin-dir=/usr/local/mysql5722/lib/plugin --user=mysql --log-error=/log/mysql/my3306.err --open-files-limit=65535 --pid-file=/data/mysql/mysql3306.pid --socket=/tmp/mysql.sock --port=3306
[root@ugc215 ~]# free -mt
total used free shared buffers cached
Mem: 64375 61548 2826 0 362 623
-/+ buffers/cache: 60562 3813
Swap: 30719 5909 24810
Total: 95095 67457 27637
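(To keep tracking how fast it grows, a simple loop like the one below can sample the mysqld RSS every few minutes; the PID is the one from the ps output above, and GNU awk is assumed for strftime.)
[root@ugc215 ~]# while true; do ps -o rss= -p 12159 | awk '{printf "%s mysqld RSS: %.1f GB\n", strftime("%Y-%m-%d %H:%M:%S"), $1/1024/1024}'; sleep 300; done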
Now the secondary node (ugc216), which consumes even more memory than the primary node, has used 50% of its swap.
Could you upload (or send) your my.cnf files and any error log files etc. that may be getting written? Also, I can see there's a zip, but if you could also just restate the environment information, versions etc. Thanks…
The zip contains the system summary and MySQL summary downloaded from PMM.
There is no error in the error log.
OS: RHEL 6.5 x64
MEMORY: 64G
Replication: M-M
my-5.7.22 configuration:
## Global
innodb_buffer_pool_size = 20G
key_buffer_size = 2M
tmp_table_size = 64M
max_heap_table_size = 64M
innodb_log_buffer_size = 4M
## Per connection
read_buffer_size = 2M
read_rnd_buffer_size = 2M
sort_buffer_size = 2M
join_buffer_size = 2M
binlog_cache_size = 4M
## Others
myisam_sort_buffer_size = 4M
myisam_max_sort_file_size = 10G
bulk_insert_buffer_size = 16M
## Query cache OFF
query_cache_size = 0
query_cache_type = 0
## Threads
thread_stack = 512K
thread_cache_size = 500
my-5.6.32 configuration:
## Global
innodb_buffer_pool_size = 30G
key_buffer_size = 8M
tmp_table_size = 64M
max_heap_table_size = 64M
innodb_log_buffer_size = 8M
## Per connection
read_buffer_size = 16M
read_rnd_buffer_size = 16M
sort_buffer_size = 16M
join_buffer_size = 16M
binlog_cache_size = 4M
## Others
myisam_sort_buffer_size = 8M
myisam_max_sort_file_size = 10G
bulk_insert_buffer_size = 32M
## Query cache OFF
query_cache_size = 0
query_cache_type = 0
## Threads
thread_stack = 512K
thread_cache_size = 1000
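For reference, a rough worst-case per-connection footprint (read_buffer_size + read_rnd_buffer_size + sort_buffer_size + join_buffer_size + binlog_cache_size + thread_stack) comes to about 2 + 2 + 2 + 2 + 4 + 0.5 ≈ 12.5 MB per connection under the 5.7.22 settings, versus roughly 16 + 16 + 16 + 16 + 4 + 0.5 ≈ 68.5 MB under 5.6.32. max_connections is not listed above, but even assuming, say, 500 concurrent connections, that worst case is only about 6 GB of session buffers on top of the 20G buffer pool, which is why the ~58 GB RSS shown above is so surprising.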
Did you compile from source?
Based on this, it seems like you might be using a self-compiled binary, as we would expect the version_comment to be “MySQL Community”:
version 5.7.22-log
version_comment MySQL
If so, could you try the binary provided by either Oracle (MySQL Community) or by Percona?
With MySQL 5.7 you can use memory instrumentation, and we could get some more information from that. Here is the documentation page: [url]https://dev.mysql.com/doc/refman/5.7/en/memory-summary-tables.html[/url]
And here is the instruction you would add to my.cnf
performance-schema-instrument='memory/%=ON'
If you can then restart the instance, we should capture much more detail about memory usage if and when the memory goes high again.
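Once it has been restarted with the instruments on, queries along these lines (standard 5.7 sys schema views, shown here only as an example) will give the total allocation and the top consumers:
$ mysql -e "SELECT * FROM sys.memory_global_total"
$ mysql -e "SELECT event_name, current_alloc FROM sys.memory_global_by_current_bytes LIMIT 10"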
Please update us when you have tried these things. THANK YOU!
Thank you for your prompt reply.
I am using the binary version officially released by Oracle:
[url]https://cdn.mysql.com//Downloads/MySQL-5.7/mysql-5.7.22-linux-glibc2.12-x86_64.tar.gz[/url]
performance-schema-instrument has been enabled in accordance with the official documentation, and the results of the memory-related view queries in the sys database have been uploaded; please see the attached file: mysql_sys_memory.txt
Thanks for that update. We installed the binary from the link you provided in the previous post and saw a different version_comment, so either something different has been introduced to your setup somewhere OR maybe there is an issue with the path to the binaries?
You can check the binary using this:
$ /usr/local/mysql5722/bin/mysqld --version
So, could you try using the binary from the link you just provided and then report back, so that we definitely know you are accessing the right set?
(Changing the version_comment would require using the cmake option and recompiling from source.)
[mysql@ugc215 slowquery]$ /usr/local/mysql5722/bin/mysqld --version
/usr/local/mysql5722/bin/mysqld Ver 5.7.22 for linux-glibc2.12 on x86_64 (MySQL Community Server (GPL))
OK, thanks. So the P_S memory instruments look like they were enabled at some point, but the previous allocations are not available. Would you be able to restart the instance with memory instruments enabled via my.cnf, so that we can see what happens if it runs high again?
We’ll be wanting to take a look at ps aux outputs and sys memory to see what’s going on.
Thanks!
OK, thank you for your reply.
I will upload the P_S memory instrument results after restarting the instance.
The ps aux output is already in my second post above.
In my fifth post, I attached some results that I think are useful:
my.cnf.txt ------- MySQL configuration file
OS_memory.txt ------- OS meminfo, free command output, ps aux output
mysql_sys_memory.txt ------- MySQL sys memory results for the last 2 days
global_variables.txt ------- output of SHOW GLOBAL VARIABLES
global_status.txt ------- output of SHOW GLOBAL STATUS
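(For anyone who wants to gather the same OS-level information on their own system, commands along these lines will produce it; the exact invocations are just an illustration:)
$ cat /proc/meminfo > OS_memory.txt
$ free -mt >> OS_memory.txt
$ ps aux --sort=-rss | head -n 20 >> OS_memory.txt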
Hello, because the secondary node was consuming memory too quickly, it was restarted last night.
So far, the memory usage of the secondary node has not risen the way it did before.
Since the secondary node was restarted, the memory usage of the primary node has also been slowly decreasing.
How does the secondary node affect the primary node?
That’s good that things are on the mend. I’ll let the team know, but I suspect that explaining this level of detail is a bit out of scope for our open source forum channel. … Hope it carries on improving. Thanks!