The MySQL instance inside the running Docker container keeps crashing.

Hello there,

We are running PMM Server (1.17.0) in a Docker container. In the last few days we have had problems with QAN.
After some troubleshooting I found that the MySQL instance running inside the container keeps crashing with the following error:

181129 14:32:41 [Note] /usr/sbin/mysqld: ready for connections.
Version: '5.5.61-38.13' socket: '/var/lib/mysql/mysql.sock' port: 3306 Percona Server (GPL), Release 38.13, Revision 812705b
181129 14:33:04 InnoDB: Assertion failure in thread 140113881638656 in file btr0cur.c line 332
InnoDB: Failing assertion: page_is_comp(get_block->frame) == page_is_comp(page)
InnoDB: We intentionally generate a memory trap.
InnoDB: Submit a detailed bug report to
InnoDB: If you get repeated assertion failures or crashes, even
InnoDB: immediately after the mysqld startup, there may be
InnoDB: corruption in the InnoDB tablespace. Please refer to
InnoDB: about forcing recovery.
14:33:04 UTC - mysqld got signal 6 ;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
We will try our best to scrape up some info that will hopefully help
diagnose the problem, but since we have already crashed,
something is definitely wrong and this may fail.
Please help us make Percona Server better by reporting any
bugs at

It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 1133032 K bytes of memory
Hope that’s ok; if not, decrease some variables in the equation.

Thread pointer: 0x3652b10
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong…
stack_bottom = 7f6ece249d70 thread_stack 0x40000

Trying to get some variables.
Some pointers may be invalid and cause the dump to abort.
Query (7f6e1403f0a8): INSERT INTO query_class_metrics (query_class_id,instance_id,start_ts,end_ts,query_count,lrq_count,Query_time_sum,Query_time_min,Query_time_avg,Query_time_med,Query_time_p95,Query_time_max,Lock_time_sum,Lock_time_min,Lock_time_avg,Lock_time_med,Lock_time_p95,Lock_time_max,Rows_sent_sum,Rows_sent_min,Rows_sent_avg,Rows_sent_med,Rows_sent_p95,Rows_sent_max,Rows_examined_sum,Rows_examined_min,Rows_examined_avg,Rows_examined_med,Rows_examined_p95,Rows_examined_max,Rows_affected_sum,Rows_affected_min,Rows_affected_avg,Rows_affected_med,Rows_affected_p95,Rows_affected_max,Bytes_sent_sum,Bytes_sent_min,Bytes_sent_avg,Bytes_sent_med,Bytes_sent_p95,Bytes_sent_max,Tmp_tables_sum,Tmp_tables_min,Tmp_tables_avg,Tmp_tables_med,Tmp_tables_p95,Tmp_tables_max,Tmp_disk_tables_sum,Tmp_disk_tables_min,Tmp_disk_tables_avg,Tmp_disk_tables_med,Tmp_disk_tables_p95,Tmp_disk_tables_max,Tmp_table_sizes_sum,Tmp_table_sizes_min,Tmp_table_sizes_avg,Tmp_table_sizes_med,Tmp_table_sizes_p95,Tmp_table_sizes_max,QC_Hit_sum,Full_scan_sum,Full_jo
Connection ID (thread ID): 346

You may download the Percona Server operations manual by visiting You may find information
in the manual which will help you identify the cause of the crash.
181129 14:33:05 [Note] /usr/sbin/mysqld (mysqld 5.5.61-38.13) starting as process 25748 …
181129 14:33:05 [Note] Plugin 'FEDERATED' is disabled.
181129 14:33:05 InnoDB: The InnoDB memory heap is disabled
181129 14:33:05 InnoDB: Mutexes and rw_locks use GCC atomic builtins
181129 14:33:05 InnoDB: Compressed tables use zlib 1.2.7
181129 14:33:05 InnoDB: Using Linux native AIO
181129 14:33:05 InnoDB: Initializing buffer pool, size = 128.0M
181129 14:33:05 InnoDB: Completed initialization of buffer pool
181129 14:33:05 InnoDB: highest supported file format is Barracuda.
InnoDB: Log scan progressed past the checkpoint lsn 4951549098911
181129 14:33:05 InnoDB: Database was not shut down normally!
InnoDB: Starting crash recovery.
InnoDB: Reading tablespace information from the .ibd files…
InnoDB: Restoring possible half-written data pages from the doublewrite
InnoDB: buffer…
InnoDB: Doing recovery: scanned up to log sequence number 4951553435356

I tried recreating the container and downgrading to the previous version, but nothing solves this problem.
Do you know if there is something we can do to fix it?

Thanks in advance.

Hi elig

It appears your MySQL instance has become corrupted. At this point the easiest option is to destroy the pmm-data container and do a clean installation.
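For reference, a clean reinstall roughly follows the PMM 1.x Docker install procedure; this is a sketch assuming the standard container names (pmm-server, pmm-data) and the 1.17.0 image tag from your post:

```shell
# Destroy the existing server and data containers (this discards all stored metrics/QAN data).
docker stop pmm-server
docker rm pmm-server pmm-data

# Re-create the data container with the standard volume layout.
docker create \
  -v /opt/prometheus/data -v /opt/consul-data \
  -v /var/lib/mysql -v /var/lib/grafana \
  --name pmm-data percona/pmm-server:1.17.0 /bin/true

# Start a fresh PMM Server using those volumes.
docker run -d -p 80:80 --volumes-from pmm-data \
  --name pmm-server --restart always percona/pmm-server:1.17.0
```

Adjust the published port and image tag to match your existing deployment.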

If you want to take a more advanced approach, the high-level process might look like:

  • back up the Query Analytics data to disk using mysqldump (either forward the mysqld port out of the container, or run mysqldump inside the container and copy the dump out)
  • stop mysqld in the container and delete the contents of the datadir
  • execute mysql_install_db
  • re-load the data

If you have a Percona Support contract, please open a ticket and we can help you with this process and potentially recover all Query Analytics data.
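The steps above could be sketched like this from the Docker host; the supervisord service name (`mysql`) and the QAN database name (`pmm`) are assumptions based on PMM 1.x defaults, so verify them with `supervisorctl status` inside the container first:

```shell
# 1. Back up the QAN data from inside the pmm-server container.
docker exec pmm-server mysqldump pmm > pmm-qan-backup.sql

# 2. Stop mysqld and wipe the (corrupted) datadir.
docker exec pmm-server supervisorctl stop mysql
docker exec pmm-server sh -c 'rm -rf /var/lib/mysql/*'

# 3. Re-create the system tables and restart mysqld.
docker exec pmm-server mysql_install_db --user=mysql
docker exec pmm-server supervisorctl start mysql

# 4. Re-load the dumped data.
docker exec -i pmm-server mysql pmm < pmm-qan-backup.sql
```

Note that if the tablespace is badly corrupted, the mysqldump in step 1 may itself fail partway through; dumping table-by-table can help salvage what is readable.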

Hi Michael,

Thank you for your answer.
Is there any option to start the DB from scratch?
Alternatively, can I somehow point PMM at an external MySQL instance instead of the local one?

Thanks in advance.

And I have another question:
In our case the MySQL data was corrupted, so I only created a dump of the schema, without the data.
I tried to start the instance from scratch with the following steps:

  • deleting all data in the "/var/lib/mysql" folder
  • running "mysql_install_db"
  • restarting the container

MySQL is now running after the wipe; I imported the MySQL schema and fixed the missing qan-api grants.
Now it looks like everything is working, but I have two problems with this approach:

  • none of the clients can send QAN data to PMM (because the MySQL registration data is gone); I can solve this by reinstalling the clients
  • when I restart the container I lose access to the DB: the instance is running but I can't connect to it

Is there any option to register an existing client without reinstalling it?

Thanks again.

  • You can delete the datadir, run mysql_install_db and start mysqld, but this will leave you with missing records, so your Query Analytics configuration will be out of sync with the rest of PMM Server.
  • I am aware of some users relying on an external MySQL instance (PXC clusters are very common for this work), but it isn't a supported practice, so I don't have any help articles or documentation to link you to.

Yes! You're in a good place. On each host, you'll want to remove and then re-add the mysql:queries service. You may need to use the --force option.
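On a PMM 1.x client, that remove/re-add cycle might look like the following; exact flags can vary by version, so check `pmm-admin --help` on your clients:

```shell
# Run on each monitored client host.
pmm-admin rm mysql:queries     # drop the stale service registration
pmm-admin add mysql:queries    # re-register so QAN data flows again

# If the client's registration is out of sync with the rebuilt server,
# forcing a re-configuration may be needed before re-adding:
pmm-admin config --server <pmm-server-address> --force
```

`<pmm-server-address>` is a placeholder for your actual PMM Server host.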

I'm not clear on what you mean by "when I restart the container I lose access to the DB". Could you clarify?