PMM: Custom Queries added to PMM Kubernetes pod not showing up in dashboard

Description:

When adding queries to /usr/local/percona/pmm2/collectors/custom-queries/mysql/low-resolution/queries-mysqld.yml, they do not show up in the Insights → Advanced Data Exploration dashboard.

Steps to Reproduce:

  1. Create a ConfigMap with the data from pmm-custom-queries/mysql/innodb-index-stats.yml (Percona-Lab/pmm-custom-queries on GitHub).
  2. Load the ConfigMap into the PMM pod when installing with Helm (ConfigMap and mount sketched after these steps):
    helm upgrade --install pmm percona/pmm -f values.yaml

  3. Log into the pod's shell and review the file:
    cat /usr/local/percona/pmm2/collectors/custom-queries/mysql/low-resolution/queries-mysqld.yml
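For reference, the ConfigMap and the Helm values that mount it look roughly like the sketch below. Resource names and the extraVolumes/extraVolumeMounts keys are illustrative assumptions; the exact values keys depend on the chart version.

apiVersion: v1
kind: ConfigMap
metadata:
  name: pmm-custom-queries            # illustrative name
data:
  queries-mysqld.yml: |
    # innodb-index-stats.yml from Percona-Lab/pmm-custom-queries
    # (full content shown under "Content of file" below)

# values.yaml -- keys are an assumption, check your chart version
extraVolumes:
  - name: custom-queries
    configMap:
      name: pmm-custom-queries
extraVolumeMounts:
  - name: custom-queries
    mountPath: /usr/local/percona/pmm2/collectors/custom-queries/mysql/low-resolution/queries-mysqld.yml
    subPath: queries-mysqld.yml       # mount just the one file, not the whole directory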

Version:

percona/pmm-server:2.43.2

Logs:

Nothing in the logs indicates whether the custom queries were loaded or not.

Expected Result:

The query should show up in the Advanced Data Exploration dashboard.

Actual Result:

The query is not found in the dashboard.

Additional Information:

The file is populated with the content from the ConfigMap and appears as expected in /usr/local/percona/pmm2/collectors/custom-queries/mysql/low-resolution/queries-mysqld.yml at launch of the pod/container.
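This is how I verify it from outside the pod (the pod name depends on the release):

kubectl exec <pmm-pod> -- cat /usr/local/percona/pmm2/collectors/custom-queries/mysql/low-resolution/queries-mysqld.yml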

Content of file:

# Provides information about index statistics from mysql.innodb_index_stats
# Requires persistent statistics to be enabled (default in MySQL 8.0)
#
mysql_innodb_index_stats:
   query: "select database_name as schema_name,table_name,index_name,stat_value*@@innodb_page_size as size_bytes from mysql.innodb_index_stats where stat_name='size';"
   metrics:
    - table_name:
        usage: "LABEL"
        description: "Table name"
    - size_bytes:
        usage: "GAUGE"
        description: "Index Size in Bytes"
    - schema_name:
        usage: "LABEL"
        description: "Schema Name"
    - index_name:
        usage: "LABEL"
        description: "Index name"
mysql_innodb_index_stats_leaf:
   query: "select database_name as schema_name,table_name,index_name,stat_value*@@innodb_page_size as pages_size_bytes from mysql.innodb_index_stats where stat_name='n_leaf_pages';"
   metrics:
    - table_name:
        usage: "LABEL"
        description: "Table name"
    - pages_size_bytes:
        usage: "GAUGE"
        description: "Leaf Pages Size in Bytes"
    - schema_name:
        usage: "LABEL"
        description: "Schema Name"
    - index_name:
        usage: "LABEL"
        description: "Index name"

Did you verify the queries work and return data when executed manually as the same user being used by the PMM agent?

Thanks. I ran the queries in the MySQL CLI as the intended user, and I get results as expected.
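For example, roughly this (the endpoint and user are placeholders for whatever account the agent connects with):

mysql -h <rds-endpoint> -u <monitoring-user> -p -e "
  SELECT database_name AS schema_name, table_name, index_name,
         stat_value*@@innodb_page_size AS size_bytes
  FROM mysql.innodb_index_stats
  WHERE stat_name='size';"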

Ah, you mentioned the PMM agent. I am connecting remotely to an RDS instance. I thought that the PMM agent is built in and gets called from pmm-server.

It does. I did not know this was RDS; you didn’t mention that above.

Just to clarify: in the PMM server pod, the queries-mysqld.yml is correctly populated? If so, I would be looking through all of the agent and PMM logs within the pod to determine whether any error is happening.

Also, you can enable the general_log on the target MySQL to log all executed queries. Watch this log and see whether your custom queries show up. If you never see them after 30 minutes, there is either a silent error occurring within the PMM agent, or something else entirely.
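On a self-managed server that would look something like the snippet below; on RDS the general_log is normally enabled through the DB parameter group rather than SET GLOBAL:

-- send the general log to a table so it can be queried remotely
SET GLOBAL log_output  = 'TABLE';
SET GLOBAL general_log = 'ON';

-- check whether the custom queries from the PMM agent ever arrive
SELECT event_time, argument
FROM mysql.general_log
WHERE argument LIKE '%innodb_index_stats%'
ORDER BY event_time DESC
LIMIT 20;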

On the PMM server pod, the queries-mysqld.yml is correctly populated.
There is very little in the logs that I saw. I'll enable "debug" logging (again) and let you know what I find, thanks.

Interestingly, the sample query that I used in queries-mysqld.yml works from my command line, but I see this error in the log (I will work on it):

time="2024-12-03T13:48:07.917+00:00" level=info msg="ts=2024-12-03T13:48:07.912Z caller=custom_query.go:542 level=info scraper=custom_query.lr unexpectederrorparsingcolumn:mysql_innodb_index_stats,size_bytes,%!s(uint641589248)=(MISSING)" func="process.(*processLogger).Write" file="process/process_logger.go:91" agentID=/agent_id/b0465458-1c5f-4432-a928-5b0338e02638 component=agent-process type=mysqld_exporter
time="2024-12-03T13:48:07.917+00:00" level=info msg="ts=2024-12-03T13:48:07.912Z caller=custom_query.go:542 level=info scraper=custom_query.lr unexpectederrorparsingcolumn:mysql_innodb_index_stats,size_bytes,%!s(uint6470123520)=(MISSING)" func="process.(*processLogger).Write" file="process/process_logger.go:91" agentID=/agent_id/b0465458-1c5f-4432-a928-5b0338e02638 component=agent-process type=mysqld_exporter

Because it's on Kubernetes, I have been taking the approach of looking for logs emitted to stdout from the pod; this showed very little. I got the pmm-agent log from the web interface itself. For new PMM users stumbling upon this post and asking "where are the logs?": I found that the full logs can be downloaded through the interface.
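Concretely, these are the two places I looked (names depend on your deployment):

# stdout of the PMM server pod -- showed very little in my case
kubectl logs <pmm-pod> --tail=200

# full log bundle, downloadable through the PMM web interface
# (I believe it is also served directly at https://<pmm-server-address>/logs.zip)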

The good news is that the pmm-agent is working as it should, trying to run the queries.
It's an issue with my test query, so let's consider this user error.

Thanks.

Ok, I successfully got a custom query to run in my PMM.

I created a simpler query and put it in my customQueries ConfigMap.

xmysql_active_connections:
  query: "SELECT 'active_connections' as label, COUNT(*) as count FROM information_schema.processlist WHERE COMMAND != 'Sleep';"
  metrics:
    - label:
        usage: "LABEL"
        description: "Static label identifying this metric"
    - count:
        usage: "GAUGE"
        description: "Number of active (non-sleeping) MySQL connections"

I now see my custom query available in the dropdown on the Advanced Data Exploration dashboard.

To solve this issue, the query and label setup needed to be correct.
I will press on with getting my “real” custom queries integrated.

Thank you for your guidance.

Ultimately, I need to be able to alert when I have lock waits and blocking transactions.

So, this is what I have to get working:

innodb_lock_waits:
  query: |
    SELECT 
        b.trx_id AS blocking_trx_id, 
        b.trx_mysql_thread_id AS blocking_thread, 
        b.trx_query AS blocking_query,
        r.trx_id AS waiting_trx_id,
        r.trx_mysql_thread_id AS waiting_thread,
        r.trx_query AS waiting_query
    FROM 
        information_schema.innodb_lock_waits w
    INNER JOIN 
        information_schema.innodb_trx b ON b.trx_id = w.blocking_trx_id
    INNER JOIN 
        information_schema.innodb_trx r ON r.trx_id = w.requesting_trx_id;
  metrics:
    - blocking_trx_id:
        usage: "LABEL"
        description: "ID of the blocking transaction"
    - blocking_thread:
        usage: "GAUGE"
        description: "MySQL thread ID of the blocking transaction"
    - blocking_query:
        usage: "GAUGE"
        description: "Query being executed by the blocking transaction"
    - waiting_trx_id:
        usage: "LABEL"
        description: "ID of the waiting transaction"
    - waiting_thread:
        usage: "GAUGE"
        description: "MySQL thread ID of the waiting transaction"
    - waiting_query:
        usage: "GAUGE"
        description: "Query being executed by the waiting transaction"
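Given that the exporter expects GAUGE columns to be numeric (my earlier size_bytes parse error was exactly a column value it could not handle), I expect the *_query text columns cannot be GAUGEs, and the thread IDs probably make more sense as LABELs. I also still need to confirm the right tables for my MySQL version, since information_schema.innodb_lock_waits was removed in MySQL 8.0 in favour of performance_schema.data_lock_waits (or the sys.innodb_lock_waits view). A rough, unverified sketch of the direction I plan to try, using the waiting time as the numeric value and dropping the raw query text to keep label cardinality down:

innodb_lock_waits:
  query: |
    SELECT
        b.trx_id              AS blocking_trx_id,
        b.trx_mysql_thread_id AS blocking_thread,
        r.trx_id              AS waiting_trx_id,
        r.trx_mysql_thread_id AS waiting_thread,
        TIMESTAMPDIFF(SECOND, r.trx_wait_started, NOW()) AS wait_seconds
    FROM information_schema.innodb_lock_waits w
    INNER JOIN information_schema.innodb_trx b ON b.trx_id = w.blocking_trx_id
    INNER JOIN information_schema.innodb_trx r ON r.trx_id = w.requesting_trx_id;
  metrics:
    - blocking_trx_id:
        usage: "LABEL"
        description: "ID of the blocking transaction"
    - blocking_thread:
        usage: "LABEL"
        description: "MySQL thread ID of the blocking transaction"
    - waiting_trx_id:
        usage: "LABEL"
        description: "ID of the waiting transaction"
    - waiting_thread:
        usage: "LABEL"
        description: "MySQL thread ID of the waiting transaction"
    - wait_seconds:
        usage: "GAUGE"
        description: "How long the waiting transaction has been blocked, in seconds"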