4,400 Drupal databases = 1.5M tables

Hello,
Looking for input and opinions on a project.
I am currently working on a large Drupal deployment with many sites. The majority of the sites will get very little traffic.
On my test Percona cluster running InnoDB with file-per-table enabled, an empty cloned site database containing about 23MB of data ends up taking 234MB on the filesystem. So storage efficiency with this setup is not workable.

The current options I am looking at:

  1. Put everything in a single giant tablespace file and use Percona Cluster to load balance.

  2. Separate the low-load databases from the high-load ones onto separate servers.
    a. Use Percona Cluster for the high-load databases
    b. Basic master-slave replication using the Aria or MyISAM engine for the low-load ones.

Questions:
Is option (1) feasible? What kind of issues will I hit with this volume of databases and tables in a single tablespace file?

Any other ideas on how to tackle this volume of databases?

Thanks for any input
Joe

Hello Joe,

Indeed, the per-table disk space overhead of InnoDB’s file-per-table mode can be a problem with this many tables, which is why a single shared tablespace will be more space-efficient. The memory footprint of so many tables may also hit you. See this article for some examples: [URL]http://mysqlmaniac.com/2012/how-having-many-tables-affects-mysql-memory-usage/[/URL]
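To make that concrete, here is a minimal my.cnf sketch of what option 1 could look like; the values are placeholders, not recommendations, and should be validated against your own hardware and Percona/MySQL version:

[CODE]
# my.cnf (sketch only - illustrative values)
[mysqld]
# keep all tables in the shared system tablespace instead of one .ibd file per table
innodb_file_per_table  = 0
# let the shared tablespace grow as needed
innodb_data_file_path  = ibdata1:10G:autoextend

# with ~1.5M tables the table caches dominate memory use;
# keep them modest instead of trying to cache every table definition
table_open_cache       = 4000
table_definition_cache = 4000
[/CODE]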

It surely makes sense to split the high-load databases from the idle ones, as the two workloads are completely different. Dedicating the available memory to the InnoDB buffer pool for the hot tables is also more important than doing so for rarely accessed tables.
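As a rough sketch of that split (the sizes below are made-up placeholders), the memory configuration of the two tiers could differ along these lines:

[CODE]
# --- my.cnf on the hot tier (PXC nodes for the busy sites) ---
[mysqld]
# give most of the RAM to InnoDB, e.g. roughly 75% of a 64G box
innodb_buffer_pool_size = 48G

# --- my.cnf on the low-traffic tier (the mostly idle Drupal sites) ---
[mysqld]
# the working set is tiny, so a much smaller buffer pool is enough
innodb_buffer_pool_size = 4G
[/CODE]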
If you care about your data, don’t use the MyISAM or Aria engines. Also, currently only InnoDB is supported in PXC.
Option 1 is feasible, but full state transfers (or backups) will be slower the more .frm files you have. Of course, SSD disks will be much better at accessing millions of such small files.
Anyway, with a single tablespace I have seen instances with more than 10M InnoDB tables, though that was with standard replication.