QAN too many connections

Hello,
Recently I experienced an issue where our automated installation of the PMM client failed to configure mysql:queries due to an error talking to the QAN API.
It turns out the QAN API was receiving a “too many connections” error from the PMM MySQL instance. That seems to have been caused by a long-running query slowing everything down and letting connections pile up.
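For anyone hitting the same thing, this is the kind of check that confirms the pile-up (a sketch only - “pmm-server” as the container name is an assumption, and credentials are omitted):

docker exec -it pmm-server mysql -e "
    SHOW GLOBAL STATUS LIKE 'Threads_connected';
    SHOW GLOBAL VARIABLES LIKE 'max_connections';
    SHOW FULL PROCESSLIST;"

If Threads_connected is sitting at the max_connections limit, anything opening a new connection - including the QAN API - gets Error 1040. (MySQL keeps one spare connection for a SUPER user, so root can usually still get in to look.)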
Is it possible to give us options to:
a) pass my.cnf parameters into the Docker image to tune the performance of the PMM MySQL instance (see the sketch below this list), and
b) allow PMM client configuration to go through even if the client cannot currently communicate with the server?
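For (a), something along these lines would cover our use case - purely a sketch, since I don’t know where the image actually reads its MySQL config; the /etc/my.cnf.d path inside the container and the host paths are my assumptions:

docker run -d -p 80:80 --volumes-from pmm-data \
    -v /opt/pmm/tuning.cnf:/etc/my.cnf.d/tuning.cnf \
    --name pmm-server --restart always percona/pmm-server:1

with a tuning.cnf containing, say:

[mysqld]
max_connections = 500
innodb_lock_wait_timeout = 120
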
Cheers

Hi MickMc

We’re going to be removing MySQL from the Docker image in our PMM 2 release (expected in the next few months).

Can you share an example of the errors with us - for example, was this seen on the client side in the pmm-mysql-queries log file, or in the qan-api error log inside the Docker container? This is a good feature request - the API should accept requests and return a correct error to clients.

Hi Michael,
Apologies in advance for any dodgy control characters - the Docker image doesn’t seem to like my terminal type!
There are a lot of errors in the qan-api log along these lines:
2018/10/17 09:45:19 mysql.go:133: WARNING: cannot update query class, skipping: updateQueryClass UPDATE query_classes: Error 1205: Lock wait timeout exceeded; try restarting transaction: &event.Class{Id:"59B94C22AC61F5D3", Fingerprint:"SELECT OBJECT_SCHEMA , OBJECT_NAME , COUNT_READ_NORMAL , COUNT_READ_WITH_SHARED_LOCKS , COUNT_READ_HIGH_PRIORITY , COUNT_READ_NO_INSERT , COUNT_READ_EXTERNAL , COUNT_WRITE_ALLOW_WRITE , COUNT_WRITE_CONCURRENT_INSERT , COUNT_WRITE_LOW_PRIORITY , COUNT_WRITE_NORMAL , COUNT_WRITE_EXTERNAL , SUM_TIMER_READ_NORMAL , SUM_TIMER_READ_WITH_SHARED_LOCKS , SUM_TIMER_READ_HIGH_PRIORITY , SUM_TIMER_READ_NO_INSERT , SUM_TIMER_READ_EXTERNAL , SUM_TIMER_WRITE_ALLOW_WRITE , SUM_TIMER_WRITE_CONCURRENT_INSERT , SUM_TIMER_WRITE_LOW_PRIORITY , SUM_TIMER_WRITE_NORMAL , SUM_TIMER_WRITE_EXTERNAL FROM performance_schema . table_lock_waits_summary_by_table WHERE OBJECT_SCHEMA NOT IN (…)", Metrics:(*event.Metrics)(0xc519a85c40), TotalQueries:0xc, UniqueQueries:0x0, Example:(*event.Example)(0xc52a6de900), outliers:0x0, lastDb:"", sample:false}: MySQL b177e71b1b9a494b450ade241c283180
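
For anyone else digging into the Error 1205s, the transaction holding the locks can be found with something like this (a sketch - the container name is an assumption):

docker exec -it pmm-server mysql -e "
    SELECT trx_id, trx_state, trx_started, trx_query
    FROM information_schema.INNODB_TRX
    ORDER BY trx_started;"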

There are also a lot of these:

ERROR 2018/10/17 09:45:20 init.go:236: auth agent: auth.MySQLHandler.GetAgentId: dbm.Open: Error 1040: Too many connections
2018/10/17 09:45:20 server.go:2923: http: response.WriteHeader on hijacked connection
2018/10/17 09:45:20 server.go:2923: http: response.Write on hijacked connection
ERROR 2018/10/17 09:45:20 results.go:336: Response write failed: http: connection has been hijacked
2018/10/17 09:45:20.476 127.0.0.1 500 1.562109ms WS /agents/e5bc0777f2014b7868af761cab127a4d/data
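
As a stopgap until the slow query is dealt with, raising the limit on the running instance buys some headroom (a sketch - the value is arbitrary, and SET GLOBAL doesn’t survive a restart, which is why the my.cnf option in (a) would still be needed):

docker exec -it pmm-server mysql -e "SET GLOBAL max_connections = 500;"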

This is the agent registration error from our Ansible log:

Error adding MySQL queries: problem with agent registration on QAN API: exit status 1

Let me know if I can provide more info or clarify anything

Thank you for the detail, MickMc. I haven’t been able to reproduce this, but to be sure you should file this as a bug at https://jira.percona.com so that we can have it reviewed by our QA team. Please post the JIRA link here when you have a chance. Thanks!

Will do, once I get my JIRA account working! Thanks Michael.