Is there a way to actually limit InnoDB data dictionary?

I would like to downgrade our AWS instance to save costs; however, due to the number of tables and the workload (logical backups), the InnoDB data dictionary grows extremely large.

mysql> SELECT COUNT(*) FROM information_schema.INNODB_SYS_TABLES;
+----------+
| COUNT(*) |
+----------+
|  1020034 |
+----------+
1 row in set (4.61 sec)

mysql> SELECT COUNT(*) FROM information_schema.INNODB_SYS_INDEXES;
+----------+
| COUNT(*) |
+----------+
|  2628181 |
+----------+
1 row in set (4.99 sec)

mysql> SHOW GLOBAL VARIABLES LIKE 'table_%';
+----------------------------+-------+
| Variable_name              | Value |
+----------------------------+-------+
| table_definition_cache     | 400   |
| table_open_cache           | 1     |
| table_open_cache_instances | 1     |
+----------------------------+-------+
3 rows in set (0.00 sec)

mysql> SHOW GLOBAL STATUS LIKE 'Open%table%';
+--------------------------+----------+
| Variable_name            | Value    |
+--------------------------+----------+
| Open_table_definitions   | 400      |
| Open_tables              | 1        |
| Opened_table_definitions | 2312885  |
| Opened_tables            | 22403462 |
+--------------------------+----------+
4 rows in set (0.00 sec)

mysql> SHOW GLOBAL STATUS LIKE 'Innodb_mem_dictionary';
+-----------------------+------------+
| Variable_name         | Value      |
+-----------------------+------------+
| Innodb_mem_dictionary | 4517841711 |
+-----------------------+------------+
1 row in set (0.00 sec)
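That counter is in bytes, so the dictionary cache alone is roughly 4.2 GiB, on top of the 8 GiB buffer pool on a ~14.5 GiB instance. A quick conversion (a sketch; assumes the 5.6-era information_schema.GLOBAL_STATUS table is available):

```sql
-- Convert Innodb_mem_dictionary (reported in bytes) to GiB
SELECT VARIABLE_VALUE / POW(1024, 3) AS dictionary_gib
FROM information_schema.GLOBAL_STATUS
WHERE VARIABLE_NAME = 'Innodb_mem_dictionary';
```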

I expected table_definition_cache to cap the number of cached table definitions.

Any idea why this is not happening? I was wondering if enabling innodb_file_per_table would help in this case.
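For context, the relevant my.cnf settings would look like this (a sketch with the values from above; not a recommendation):

```ini
[mysqld]
# Soft limit on cached table definitions -- InnoDB may exceed it
table_definition_cache = 400
# Per-table tablespaces (the setting I was wondering about)
innodb_file_per_table  = 1
```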

Some extra info:

mysql> SHOW GLOBAL VARIABLES LIKE 'innodb_buffer_pool_size';
+-------------------------+------------+
| Variable_name           | Value      |
+-------------------------+------------+
| innodb_buffer_pool_size | 8589934592 |
+-------------------------+------------+
1 row in set (0.00 sec)

mysql> SELECT @@version, @@version_comment;
+--------------------+---------------------------------------------------------------------------------------------------+
| @@version          | @@version_comment                                                                                 |
+--------------------+---------------------------------------------------------------------------------------------------+
| 5.6.30-76.3-56-log | Percona XtraDB Cluster (GPL), Release rel76.3, Revision aa929cb, WSREP version 25.16, wsrep_25.16 |
+--------------------+---------------------------------------------------------------------------------------------------+
1 row in set (0.00 sec)
# cat /etc/redhat-release
CentOS Linux release 7.2.1511 (Core)

# free -m
              total        used        free      shared  buff/cache   available
Mem:          14881       14239         326           6         315         378
Swap:          4095        1237        2858

# ps -o %mem,rss,vsize -C mysqld
%MEM RSS VSZ
94.2 14364148 16262404

It looks like your implementation in 5.5 was more reliable.

What is the SHOW CREATE TABLE output for these tables?

Do you have any FOREIGN KEYs defined? Tables with foreign keys are not removed from the dictionary cache:

https://blogs.oracle.com/mysqlinnodb/entry/mysql_5_6_data_dictionary

Thanks Peter, that may be the case here:

Number of databases:

mysql> SELECT COUNT(*) FROM information_schema.SCHEMATA;
+----------+
| COUNT(*) |
+----------+
| 2740 |
+----------+
1 row in set (0.01 sec)

Number of tables:

# ls -lR /var/lib/mysql | grep -c "\.frm$"
836242

Number of tables with foreign keys per database:

mysql> SELECT COUNT(DISTINCT TABLE_NAME) FROM INFORMATION_SCHEMA.KEY_COLUMN_USAGE WHERE REFERENCED_TABLE_NAME IS NOT NULL AND TABLE_SCHEMA = '<database_name>';
+----------------------------+
| COUNT(DISTINCT TABLE_NAME) |
+----------------------------+
| 76 |
+----------------------------+
1 row in set (0.01 sec)
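Dropping the schema filter gives the server-wide picture; it is the same query as above, so it may be slow with this many tables:

```sql
-- Count tables (across all schemas) that reference another table
-- via a foreign key; such tables stay pinned in the dictionary cache
SELECT COUNT(DISTINCT TABLE_SCHEMA, TABLE_NAME) AS fk_tables
FROM INFORMATION_SCHEMA.KEY_COLUMN_USAGE
WHERE REFERENCED_TABLE_NAME IS NOT NULL;
```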

Yep. So you need to have enough memory for those or get rid of the foreign keys :slight_smile:

You may wonder how it worked in earlier versions of Percona Server. I think our code did not handle some edge cases very well (some race conditions when you have tables with foreign keys), which could cause problems under high load.