Description:
Hi there, I am trying to monitor an Amazon RDS MySQL database in PMM. To do so, I have followed the official procedure (Amazon RDS - Percona Monitoring and Management) and created a dedicated IAM account with all the required grants. On the PMM console I can see the slow queries, but only the ones performed manually on MySQL via the CLI. In other words, I cannot see the queries performed by the external application, even though I can see them correctly in both the FILE and TABLE (mysql.slow_log) outputs. The Performance Schema is properly enabled, with all the grants needed.
Steps to Reproduce:
Follow the procedure indicated here: Amazon RDS - Percona Monitoring and Management
Version:
2.43.2
Hi @Skipper_Technicolor,
The MySQL slow log is separate from performance_schema, which is what PMM uses when connecting to RDS. Please ensure that performance_schema is enabled, and try selecting from performance_schema.events_statements_summary_by_digest, as this is the table PMM scrapes for QAN data. If this table is empty, you might need to enable the Performance Schema and restart the RDS instance.
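A quick way to verify this on the instance is with a few standard statements (a sketch only; the consumer names assume a stock MySQL/MariaDB Performance Schema):

```sql
-- Check that the Performance Schema is compiled in and enabled (expect ON)
SHOW GLOBAL VARIABLES LIKE 'performance_schema';

-- Check that statement digests are actually being collected;
-- a count of 0 means QAN will have nothing to scrape
SELECT COUNT(*) FROM performance_schema.events_statements_summary_by_digest;

-- Confirm the statement consumers are enabled (expect ENABLED = YES)
SELECT * FROM performance_schema.setup_consumers
WHERE NAME LIKE 'events_statements%' OR NAME = 'statements_digest';
```

On RDS, `performance_schema` can only be changed through the DB parameter group, followed by an instance restart.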
Hi Matthew, thanks for the reply.
The issue is that I can see only the queries performed against the performance_schema itself (and not those against the overall database).
SELECT *
FROM performance_schema.events_statements_summary_by_digest
ORDER BY SUM_TIMER_WAIT DESC
LIMIT 10;
Sample:
| NULL | c4cd244732c5eac028c14329b0657778 | SELECT `performance_schema` . `events_statements_summary_by_digest` . `SCHEMA_NAME` , `performance_schema` . `events_statements_summary_by_digest` . `DIGEST` , `performance_schema` . `events_statements_summary_by_digest` . `DIGEST_TEXT` , `performance_schema` . `events_statements_summary_by_digest` . `COUNT_STAR` , `performance_schema` . `events_statements_summary_by_digest` . `SUM_TIMER_WAIT` , `performance_schema` . `events_statements_summary_by_digest` . `MIN_TIMER_WAIT` , `performance_schema` . `events_statements_summary_by_digest` . `AVG_TIMER_WAIT` , `performance_schema` . `events_statements_summary_by_digest` . `MAX_TIMER_WAIT` , `performance_schema` . `events_statements_summary_by_digest` . `SUM_LOCK_TIME` , `performance_schema` . `events_statements_summary_by_digest` . `SUM_ERRORS` , `performance_schema` . `events_statements_summary_by_digest` . `SUM_WARNINGS` , `performance_schema` . `events_statements_summary_by_digest` . `SUM_ROWS_AFFECTED` , `performance_schema` . | 32 | 385167461000 | 677270000 | 12036483000 | 31118016000 | 15212000000 | 0 | 0 | 0 | 2802 | 2802 | 0 | 0 | 0 | 0 | 0 | 0 | 32 | 0 | 0 | 0 | 0 | 32 | 0 | 2024-11-05 15:41:00 | 2024-11-05 16:12:00 |
Just confirming this is RDS MySQL and not RDS MariaDB, correct? Have a look through this documentation on performance schema.
Apologies for not indicating it explicitly: it is a MariaDB instance.
MariaDB may not provide the information PMM needs. Check the docs for a workaround.
To address the issue, we found a workaround that lets us keep the slow query log enabled: through a sidecar application we download the slow query log file to the local PMM installation via aws-cli. After configuring pmm-agent, the issue now is that only a small number of queries are shown (e.g. 3 instead of 100).
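For reference, the sidecar step can be sketched with the standard aws-cli RDS commands (a sketch only; the instance identifier, log file name, and output path are assumptions and must match your setup and credentials):

```shell
#!/bin/sh
# Assumptions: DB_INSTANCE is your RDS instance identifier, and LOG_FILE
# is the slow log name as reported by describe-db-log-files.
DB_INSTANCE="my-rds-instance"
LOG_FILE="slowquery/mysql-slowquery.log"

# List the log files available on the instance to confirm the name
aws rds describe-db-log-files --db-instance-identifier "$DB_INSTANCE"

# Download the slow log to the local PMM host
aws rds download-db-log-file-portion \
  --db-instance-identifier "$DB_INSTANCE" \
  --log-file-name "$LOG_FILE" \
  --output text > /srv/slow.log

# pmm-agent is then pointed at the downloaded file, e.g. via
# `pmm-admin add mysql --query-source=slowquery ...`
```

Note that `download-db-log-file-portion` paginates; for large logs the loop over `--starting-token` (or periodic incremental downloads) is left out of this sketch.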
For instance, this one is visible:
# Time: YYYYMMDD hh:mm:ss
# User@Host: [username][username] @ [host_ip]
# Thread_id: [thread_id] Schema: [schema_name] QC_hit: [QC_hit]
# Query_time: [query_time] Lock_time: [lock_time] Rows_sent: [rows_sent] Rows_examined: [rows_examined]
# Rows_affected: [rows_affected] Bytes_sent: [bytes_sent]
SET timestamp=[unix_timestamp];
SELECT uuid, offerID, territory, isASeries, data, merlinProgramVariantProgramId
FROM offers
WHERE territory = '[territory_value]'
AND EXISTS (
SELECT 1
FROM offer_availabilities AS oa
WHERE oa.offerUUID = offers.uuid
AND '[date_time]' BETWEEN oa.availabilityStartDateTime AND oa.availabilityEndDateTime
)
AND JSON_VALUE(data, '$.jensen.lastUpdatedAt') > '[last_updated_date]'
ORDER BY uuid
LIMIT [limit_value] OFFSET [offset_value];
and this one (which is very similar) is not:
# Time: YYYYMMDD hh:mm:ss
# User@Host: [username][username] @ [host_ip]
# Thread_id: [thread_id] Schema: [schema_name] QC_hit: [QC_hit]
# Query_time: [query_time] Lock_time: [lock_time] Rows_sent: [rows_sent] Rows_examined: [rows_examined]
# Rows_affected: [rows_affected] Bytes_sent: [bytes_sent]
SET timestamp=[unix_timestamp];
SELECT
o.uuid,
o.offerID,
o.status AS offerStatus,
o.territory,
o.createdAt AS offerCreatedAt,
o.merlinProgramVariantProgramID AS merlinProgramVariantProgramId,
o.updatedAt AS offerUpdatedAt,
o.data AS offerData,
om.contentID,
om.status AS metadataStatus,
om.createdAt AS metadataCreatedAt,
om.updatedAt AS metadataUpdatedAt,
om.data AS metadataData
FROM offermetadata_checks AS omc,
offers AS o,
offer_metadata AS om
WHERE omc.offerUUID = o.uuid
AND omc.contentID = om.contentID
AND o.uuid = om.offerUUID
AND omc.checkUUID = '[checkUUID]'
AND o.territory = '[territory_value]'
AND EXISTS (
SELECT 1
FROM offer_availabilities AS oa
WHERE oa.offerUUID = o.uuid
AND '[date_time]' BETWEEN oa.availabilityStartDateTime AND oa.availabilityEndDateTime
)
LIMIT [limit_value] OFFSET [offset_value];
The timestamps are not far apart, and the only things that change are the limit/offset values.
Check the long_query_time variable. Its value determines whether a query counts as "slow", which controls whether it is written to the slow query log. Queries that take less time than this value are not written to the log.
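To check this (a sketch; on RDS/Aurora the variable should be changed in the DB parameter group, as SET GLOBAL may not be permitted):

```sql
-- Current threshold in seconds; queries faster than this are not logged
SHOW GLOBAL VARIABLES LIKE 'long_query_time';

-- Confirm the slow query log itself is on (expect ON)
SHOW GLOBAL VARIABLES LIKE 'slow_query_log';

-- For a temporary test, log every query (new connections only);
-- on RDS, set long_query_time = 0 in the parameter group instead
SET GLOBAL long_query_time = 0;
```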