We have a setup that replicates from a MySQL 5.6 master to a MySQL 5.7 slave. Recently we converted all tables on the slave to TokuDB. Before the change the slave was able to keep up; now it falls behind. In "show full processlist" the slave threads are almost always stuck in the state "Opening tables". Sampling "show global status like 'opened_tables';" periodically shows a large number of tables being opened continuously, yet "show open tables from databaseX" shows that all existing tables are already open.
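For reference, these are the exact statements I keep sampling (databaseX stands in for one of our real schemas):

    SHOW FULL PROCESSLIST;                    -- threads stuck in "Opening tables"
    SHOW GLOBAL STATUS LIKE 'Opened_tables';  -- counter climbs rapidly between samples
    SHOW OPEN TABLES FROM databaseX;          -- yet every table is already listed as open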
Master database: 5.6.28-76.1, slave: 5.7.22-22
Database settings:
open_files_limit
table_definition_cache = 200000
table_open_cache = 200000
table_open_cache_instances = 20
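To rule out a sizing problem I also cross-check the table cache counters; a quick check, assuming the 5.7 table-cache status variables are available on the slave:

    SHOW GLOBAL STATUS WHERE Variable_name IN
        ('Open_tables', 'Opened_tables', 'Table_open_cache_hits',
         'Table_open_cache_misses', 'Table_open_cache_overflows');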
Replication settings:
Master:
binlog-format = mixed
sync_binlog = 1
expire_logs_days = 30
max_binlog_size = 500M
binlog-ignore-db = information_schema
binlog-ignore-db = performance_schema
binlog-ignore-db = mysql
Slave:
log-slave-updates = 1
replicate-ignore-db = information_schema
replicate-ignore-db = performance_schema
replicate-ignore-db = mysql
slave-parallel-workers = 24
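When the slave falls behind I also look at the per-worker applier state (performance_schema is enabled by default in 5.7):

    SELECT worker_id, service_state, last_error_message
    FROM performance_schema.replication_applier_status_by_worker;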
The MySQL process shows over 40,000 open file handles, and the total number of tables across all databases is less than 50,000. Based on this behavior, it looks like the MySQL instance is not using the table open cache during replication, or some kind of funky locking prevents it from being used efficiently. Note that Opened_tables increases so fast that within one week it almost caught up with the value on the master, which has an uptime of 1.5 years.
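This is how I estimate the churn rate (two samples one minute apart; subtracting the first value from the second gives tables opened per minute):

    SHOW GLOBAL STATUS LIKE 'Opened_tables';
    SELECT SLEEP(60);
    SHOW GLOBAL STATUS LIKE 'Opened_tables';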
The slave also still has some tables on InnoDB.
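The engine mix can be confirmed from information_schema:

    SELECT engine, COUNT(*) AS table_count
    FROM information_schema.tables
    WHERE table_schema NOT IN
          ('mysql', 'information_schema', 'performance_schema', 'sys')
    GROUP BY engine;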
Is there any special fine-tuning setting I can apply, or is this a bug?