I have a Percona Server instance with around 50 databases, and each database has around 600 tables.
We create a lot of these databases for test automation.
Table creation takes a long time, and I can't find the cause.
We use InnoDB with innodb_file_per_table enabled.
Here is more data:
SHOW PROFILE FOR QUERY x;
| Status               | Duration |
| starting             | 0.000138 |
| checking permissions | 0.000008 |
| Opening tables       | 0.000094 |
| creating table       | 3.104143 |
| After create         | 0.000050 |
| query end            | 0.056317 |
| closing tables       | 0.000013 |
| freeing items        | 0.000093 |
| cleaning up          | 0.000061 |
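To confirm which stage dominates, the profile above can be summarized with a quick awk pass (the durations are pasted inline here, with spaces in stage names replaced by underscores so each line splits cleanly):

```shell
#!/bin/sh
# Sum the per-stage durations from the SHOW PROFILE output above
# and report the slowest stage.
profile='starting 0.000138
checking_permissions 0.000008
Opening_tables 0.000094
creating_table 3.104143
After_create 0.000050
query_end 0.056317
closing_tables 0.000013
freeing_items 0.000093
cleaning_up 0.000061'

echo "$profile" | awk '
  { total += $2; if ($2 > max) { max = $2; stage = $1 } }
  END { printf "total=%.6f slowest=%s (%.6f)\n", total, stage, max }'
# -> total=3.160917 slowest=creating_table (3.104143)
```

So "creating table" alone is 3.1 s of a 3.16 s query; whatever is slow, it is inside the table-creation step itself, not in parsing, permissions, or cleanup.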
BUFFER POOL AND MEMORY
Total memory allocated 549453824; in additional pool allocated 0
Total memory allocated by read views 1312
Internal hash tables (constant factor + variable factor)
Adaptive hash index 13801856 (8853472 + 4948384)
Page hash 553976 (buffer pool 0 only)
Dictionary cache 12334921 (2214224 + 10120697)
File system 7685104 (812272 + 6872832)
Lock system 1333496 (1329176 + 4320)
Recovery system 0 (0 + 0)
Dictionary memory allocated 10120697
Buffer pool size 32767
Buffer pool size, bytes 536854528
Free buffers 8265
Database pages 22999
Old database pages 8469
Modified db pages 0
Pending reads 0
Pending writes: LRU 0, flush list 0, single page 0
Pages made young 41998, not young 226275
0.00 youngs/s, 0.00 non-youngs/s
Pages read 17188, created 5811, written 69152
0.00 reads/s, 0.00 creates/s, 0.00 writes/s
Buffer pool hit rate 1000 / 1000, young-making rate 0 / 1000 not 0 / 1000
Pages read ahead 0.00/s, evicted without access 0.00/s, Random read ahead 0.00/s
LRU len: 22999, unzip_LRU len: 2450
I/O sum:cur, unzip sum:cur
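Reading the status dump above: the buffer pool is 512 MB (32767 pages of 16 KB) with 8265 pages still free, so memory pressure inside InnoDB does not look like the bottleneck. A quick sanity check of those figures:

```shell
#!/bin/sh
# Derived from the BUFFER POOL AND MEMORY section above.
pool_pages=32767   # Buffer pool size (in 16 KB pages)
free_pages=8265    # Free buffers
data_pages=22999   # Database pages

# Pool size in MB, and the share of the pool holding data vs. still free.
awk -v p="$pool_pages" -v f="$free_pages" -v d="$data_pages" 'BEGIN {
  printf "pool=%.0f MB, data=%.1f%%, free=%.1f%%\n",
         p * 16384 / 1048576, 100 * d / p, 100 * f / p }'
# -> pool=512 MB, data=70.2%, free=25.2%
```

With a quarter of the pool free, a 1000/1000 hit rate, and zero pending reads/writes, the slowness is unlikely to be buffer pool starvation.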
Right now it’s not possible to give more RAM to the server (it’s a virtual machine).
The thing is, this started happening three months ago, when we were on Percona Server 5.5.
We upgraded to 5.6 and the problem persists.
DROP DATABASE also takes a long time to complete.
Chances are this is related to poor disk performance. If it used to be fast, then I would figure out what changed at the time things started to go bad. Look at the overall host usage to see whether it has increased and is causing I/O wait.
Try looking at the hardware itself. Things like RAID level, file system type (e.g. ext3), and the underlying hardware itself (drives and RAID controllers) have a large impact on performance. You might have a failing disk in the RAID setup, or a failing RAID controller (you said it’s a virtual server, so what you can do hardware-wise depends on where the VM is hosted).
Look at any configuration that might have changed, or another service that was installed on the server and is driving up I/O wait.
Basically, things do not just change for no reason: either something broke, someone changed something, or the activity level on the host changed enough to cause degraded performance.
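To put numbers on the "poor disk performance" suspicion, run `iostat -x 5 3` (from sysstat) and watch `await` and `%util`. A small awk filter can flag saturated devices; the sample output below is hypothetical, just to illustrate the thresholds:

```shell
#!/bin/sh
# Flag devices that look saturated in `iostat -x` output.
# The sample is hypothetical; on the real server, pipe in: iostat -x 5 3
sample='Device: rrqm/s wrqm/s r/s w/s await svctm %util
sda 0.00 12.40 1.20 85.30 220.15 4.10 98.70
sdb 0.00 0.10 0.40 2.10 3.05 0.90 1.20'

# %util near 100 plus a high await means the device is the bottleneck.
echo "$sample" | awk 'NR > 1 && $NF + 0 > 90 {
  printf "%s looks saturated: await=%s ms, util=%s%%\n", $1, $6, $NF }'
# -> sda looks saturated: await=220.15 ms, util=98.70%
```

Exact column positions vary between sysstat versions, so check the header of your own `iostat -x` output before reusing the field numbers.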
Thanks for your reply.
After gathering more data with sysstat, I also suspect poor disk performance.
Are there any filesystem improvements I can make until the disk problem is fixed?
I read about improvements like this one from fstab:
UUID=xxxx /mnt/mysql ext4 defaults,noatime,nodiratime,data=writeback,barrier=0,nobh,dioread_nolock 0 1
I know that some of the above are not recommended for data integrity.
Below are a few articles you could read for things to try. What will work in your situation will vary, so you would just have to run some benchmarks to look for improvements:
Since you are running in a virtual machine, what you can do from within the VM is more limited. You are really just sitting on top of the physical hardware, so you will always be limited by the host's hardware and configuration. But if you own the host running the VM, you might be able to apply some of the same changes to the host itself, which will then benefit the VM.
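Regarding the fstab line you quoted: you are right that some of those options trade integrity for speed. `barrier=0` and `data=writeback` can lose or corrupt data on a crash or power failure, which matters for a database volume; `nobh` is effectively a no-op on current ext4. A more conservative sketch that keeps only the safe wins (skipping access-time updates) would look like this, with the UUID and mount point adjusted to your setup:

```
# Conservative variant: noatime/nodiratime avoid metadata writes on every
# read and are safe; barrier=0 and data=writeback are omitted because they
# risk data loss on crash for an InnoDB volume.
UUID=xxxx /mnt/mysql ext4 defaults,noatime,nodiratime 0 1
```

Benchmark each option change separately so you know which one actually helps on your hardware.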