
No data in Query Analytics

Nickmob (Contributor, Patron)
Hi!
I'm running PMM 1.5.3 in Docker.
Just after installation everything worked fine. But after a couple of hours, Query Analytics shows "There is no query data because the MySQL Server is not been configured for monitoring".
On the client side I see the following errors:

2018/01/12 13:25:10.291158 WARNING data-sender Waiting for API to ack qan_1515691320066033553: read tcp 192.168.0.200:11118->192.168.0.211:80: i/o timeout
2018/01/12 13:25:18.295135 WARNING data-sender Waiting for API to ack qan_1515691320066033553: read tcp 192.168.0.200:11146->192.168.0.211:80: i/o timeout
2018/01/12 13:25:26.298139 WARNING data-sender Waiting for API to ack qan_1515691320066033553: read tcp 192.168.0.200:11154->192.168.0.211:80: i/o timeout

Network between the server and client is fine (1G LAN).
MySQL and Linux metrics work without any problem.
MySQL (PS 5.7) is configured according to the instructions (with the slow query log).
The log from the settings tab in the web UI is attached.
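For reference, the relevant slow log settings can be double-checked on the client's MySQL with something like:

mysql> SHOW GLOBAL VARIABLES LIKE 'slow_query%';
mysql> SHOW GLOBAL VARIABLES LIKE 'long_query_time';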

Comments

  • Peter (Percona CEO, Moderator)
    Hi,

    What does "pmm-admin check-network" show?

    Looks like the agent is not able to connect to your PMM Server at 192.168.0.211.
  • Nickmob (Contributor, Patron)
    Network is OK. Linux and MySQL metrics work.

    pmm-admin check-network
    PMM Network Status

    Server Address | 192.168.0.211
    Client Address | 192.168.0.200

    * System Time
    NTP Server (0.pool.ntp.org) | 2018-01-12 17:25:10 +0300 MSK
    PMM Server | 2018-01-12 14:25:10 +0000 GMT
    PMM Client | 2018-01-12 17:25:10 +0300 MSK
    PMM Server Time Drift | OK
    PMM Client Time Drift | OK
    PMM Client to PMM Server Time Drift | OK

    * Connection: Client --> Server

    SERVER SERVICE        STATUS
    Consul API            OK
    Prometheus API        OK
    Query Analytics API   OK

    Connection duration | 288.435µs
    Request duration | 774.328µs
    Full round trip | 1.062763ms


    * Connection: Client <-- Server

    SERVICE TYPE    NAME    REMOTE ENDPOINT       STATUS   HTTPS/TLS   PASSWORD
    linux:metrics   block2  192.168.0.200:42000   OK       YES         -
    mysql:metrics   block2  192.168.0.200:42002   OK       YES         -
  • Peter (Percona CEO, Moderator)
    Hm. Interesting... still for some reason there is a timeout...

    Can you run and post this command:

    docker exec -it 7aa4a20d980b tail -100 /var/log/qan-api.log


    Where "7aa4a20d980b" is container id from "docker ps"
  • Nickmob (Contributor, Patron)
    Here's the output (truncated to fit):


    2018/01/12 14:33:56.290 127.0.0.1 0 11.817899887s WS /agents/f711ea63cb4c41cf4f6f45476b298bab/data
    2018/01/12 14:34:04.293 127.0.0.1 0 8.043045867s WS /agents/f711ea63cb4c41cf4f6f45476b298bab/data
    2018/01/12 14:34:12.297 127.0.0.1 0 6.573526094s WS /agents/f711ea63cb4c41cf4f6f45476b298bab/data
    WARN: query crashes sqlparser: SELECT hotel_id, small_url_fld, big_url_fld FROM hotel_tbl h, gallery_tbl g INNER JOIN ( SELECT gallerycategory_id, MIN(order_fld) AS minord FROM gallery_tbl GROUP BY gallerycategory_id ) go ON g.gallerycategory_id = go.gallerycategory_id AND g.order_fld = go.minord WHERE hotel_id IN(782) AND g.gallerycategory_id = h.gallerycategory_id
    2018/01/12 14:34:36.688 127.0.0.1 0 9.461987074s WS /agents/0ccbcae3abbe41b84d3298ce98d92663/data
    WARN: query crashes sqlparser: SELECT hotel_id, small_url_fld, big_url_fld FROM hotel_tbl h, gallery_tbl g INNER JOIN ( SELECT gallerycategory_id, MIN(order_fld) AS minord FROM gallery_tbl GROUP BY gallerycategory_id ) go ON g.gallerycategory_id = go.gallerycategory_id AND g.order_fld = go.minord WHERE hotel_id IN(782) AND g.gallerycategory_id = h.gallerycategory_id
    WARN: query crashes sqlparser: SELECT hotel_id, small_url_fld, big_url_fld FROM hotel_tbl h, gallery_tbl g INNER JOIN ( SELECT gallerycategory_id, MIN(order_fld) AS minord FROM gallery_tbl GROUP BY gallerycategory_id ) go ON g.gallerycategory_id = go.gallerycategory_id AND g.order_fld = go.minord WHERE hotel_id IN(782) AND g.gallerycategory_id = h.gallerycategory_id
    2018/01/12 14:34:44.753 127.0.0.1 0 15.008034951s WS /agents/0ccbcae3abbe41b84d3298ce98d92663/data
    2018/01/12 14:34:52.824 127.0.0.1 0 17.987698174s WS /agents/0ccbcae3abbe41b84d3298ce98d92663/data
    2018/01/12 14:34:59.290 127.0.0.1 0 12.129309638s WS /agents/f711ea63cb4c41cf4f6f45476b298bab/data
    2018/01/12 14:35:07.293 127.0.0.1 0 8.822824516s WS /agents/f711ea63cb4c41cf4f6f45476b298bab/data
    2018/01/12 14:35:15.295 127.0.0.1 0 6.935616177s WS /agents/f711ea63cb4c41cf4f6f45476b298bab/data
    WARN: query crashes sqlparser: SELECT hotel_id, small_url_fld, big_url_fld FROM hotel_tbl h, gallery_tbl g INNER JOIN ( SELECT gallerycategory_id, MIN(order_fld) AS minord FROM gallery_tbl GROUP BY gallerycategory_id ) go ON g.gallerycategory_id = go.gallerycategory_id AND g.order_fld = go.minord WHERE hotel_id IN(782) AND g.gallerycategory_id = h.gallerycategory_id
    2018/01/12 14:35:39.687 127.0.0.1 0 9.980938148s WS /agents/0ccbcae3abbe41b84d3298ce98d92663/data
    WARN: query crashes sqlparser: SELECT hotel_id, small_url_fld, big_url_fld FROM hotel_tbl h, gallery_tbl g INNER JOIN ( SELECT gallerycategory_id, MIN(order_fld) AS minord FROM gallery_tbl GROUP BY gallerycategory_id ) go ON g.gallerycategory_id = go.gallerycategory_id AND g.order_fld = go.minord WHERE hotel_id IN(782) AND g.gallerycategory_id = h.gallerycategory_id
    WARN: query crashes sqlparser: SELECT hotel_id, small_url_fld, big_url_fld FROM hotel_tbl h, gallery_tbl g INNER JOIN ( SELECT gallerycategory_id, MIN(order_fld) AS minord FROM gallery_tbl GROUP BY gallerycategory_id ) go ON g.gallerycategory_id = go.gallerycategory_id AND g.order_fld = go.minord WHERE hotel_id IN(782) AND g.gallerycategory_id = h.gallerycategory_id
    2018/01/12 14:35:47.750 127.0.0.1 0 12.086512495s WS /agents/0ccbcae3abbe41b84d3298ce98d92663/data
    2018/01/12 14:35:55.815 127.0.0.1 0 15.330671879s WS /agents/0ccbcae3abbe41b84d3298ce98d92663/data
    2018/01/12 14:36:02.290 127.0.0.1 0 11.758294001s WS /agents/f711ea63cb4c41cf4f6f45476b298bab/data
    2018/01/12 14:36:10.293 127.0.0.1 0 8.076024197s WS /agents/f711ea63cb4c41cf4f6f45476b298bab/data
    2018/01/12 14:36:18.296 127.0.0.1 0 6.505272012s WS /agents/f711ea63cb4c41cf4f6f45476b298bab/data
    WARN: query crashes sqlparser: SELECT hotel_id, small_url_fld, big_url_fld FROM hotel_tbl h, gallery_tbl g INNER JOIN ( SELECT gallerycategory_id, MIN(order_fld) AS minord FROM gallery_tbl GROUP BY gallerycategory_id ) go ON g.gallerycategory_id = go.gallerycategory_id AND g.order_fld = go.minord WHERE hotel_id IN(782) AND g.gallerycategory_id = h.gallerycategory_id
    2018/01/12 14:36:42.687 127.0.0.1 0 9.551345834s WS /agents/0ccbcae3abbe41b84d3298ce98d92663/data
    WARN: query crashes sqlparser: SELECT hotel_id, small_url_fld, big_url_fld FROM hotel_tbl h, gallery_tbl g INNER JOIN ( SELECT gallerycategory_id, MIN(order_fld) AS minord FROM gallery_tbl GROUP BY gallerycategory_id ) go ON g.gallerycategory_id = go.gallerycategory_id AND g.order_fld = go.minord WHERE hotel_id IN(782) AND g.gallerycategory_id = h.gallerycategory_id
    WARN: query crashes sqlparser: SELECT hotel_id, small_url_fld, big_url_fld FROM hotel_tbl h, gallery_tbl g INNER JOIN ( SELECT gallerycategory_id, MIN(order_fld) AS minord FROM gallery_tbl GROUP BY gallerycategory_id ) go ON g.gallerycategory_id = go.gallerycategory_id AND g.order_fld = go.minord WHERE hotel_id IN(782) AND g.gallerycategory_id = h.gallerycategory_id
    2018/01/12 14:36:50.750 127.0.0.1 0 11.816388156s WS /agents/0ccbcae3abbe41b84d3298ce98d92663/data
    2018/01/12 14:36:58.813 127.0.0.1 0 15.16848304s WS /agents/0ccbcae3abbe41b84d3298ce98d92663/data
    2018/01/12 14:37:05.290 127.0.0.1 0 11.408586132s WS /agents/f711ea63cb4c41cf4f6f45476b298bab/data
    2018/01/12 14:37:13.293 127.0.0.1 0 8.180707767s WS /agents/f711ea63cb4c41cf4f6f45476b298bab/data
    2018/01/12 14:37:21.298 127.0.0.1 0 6.584041635s WS /agents/f711ea63cb4c41cf4f6f45476b298bab/data
  • Peter (Percona CEO, Moderator)
    Hm,

    This surely does not look normal. It is logged as only a warning, but it might be breaking something... Did you say that if you restart the Docker container, new data appears for a while, but then no more?

    You say

    "
    I'm running PMM 1.5.3 in Docker.
    Just after installation everything worked fine. But after a couple of hours, Query Analytics shows "There is no query data because the MySQL Server is not been configured for monitoring".
    On the client side I see the following errors:"

    Do I understand correctly that after a couple of hours you have no data at all, even the data that was there before, i.e. if you go from 1 hour to 24h or so?
  • Nickmob (Contributor, Patron)
    Currently I can see query data only from the starting period, the few hours just after the pmm-server installation. Later I see no data. Restarting pmm-server or pmm-client doesn't help.
    I tried to reinstall pmm-client - still no luck.
    It's the same with 2 clients - one local (the one shown in the logs), the other located in a remote DC.
    PMM Server is running on pretty ancient hardware (Athlon X2 6000 - 2 cores, 3 GHz, 6 GB RAM, 1 SATA HDD). Could that be a problem? The issue is the same with 1 or 2 active pmm-clients.
  • Agustin G (Percona Staff)
    Hi,

    I see you mentioned "Query Analytics" in the title, and I can see from the output you sent that both the Linux and MySQL metrics are working. If you need to see data from queries, you need to have the mysql:queries exporter running, which is not the case at the moment:

    * Connection: Client <-- Server

    SERVICE TYPE    NAME    REMOTE ENDPOINT       STATUS   HTTPS/TLS   PASSWORD
    linux:metrics   block2  192.168.0.200:42000   OK       YES         -
    mysql:metrics   block2  192.168.0.200:42002   OK       YES         -


    You need to issue:

    shell> pmm-admin add mysql:queries

    Make sure that long_query_time is set to something appropriate, too; it defaults to 10 seconds, and only queries that take longer than that will be logged.
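    After adding it, the service should show up in the client's service list, and the threshold can be lowered on the MySQL side (a minimal check; the value here is illustrative):

    shell> pmm-admin list
    mysql> SET GLOBAL long_query_time = 0; -- log all queries; persist it in my.cnf as well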


    I should also mention that it is probably a good idea for you to set up a user and password if it's a public-facing deployment :) For this, you will need to stop the current pmm-server container, remove it, and start a new one with additional options enabled. Check https://www.percona.com/doc/percona-monitoring-and-management/glossary.option.html (the SERVER_PASSWORD (Option) section).

    shell> docker stop pmm-server && docker rm pmm-server
    shell> docker run \
    [...other options here...]
    -e SERVER_USER=jsmith \
    -e SERVER_PASSWORD=pass1234 \
    --restart always \
    percona/pmm-server:latest
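    If you do protect the server this way, remember to point existing clients at it again with the same credentials (pmm-admin config supports this in PMM 1.x; the user and password below are the example values from above):

    shell> pmm-admin config --server 192.168.0.211 --server-user jsmith --server-password pass1234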
  • Nickmob (Contributor, Patron)
    Thanks for the reply!

    The query analyzer is running on the client; here's the log from pmm-mysql-queries-0.log:

    2018/01/13 06:02:05.295661 WARNING data-sender Waiting for API to ack qan_1515807060016995269: read tcp 192.168.0.200:36588->192.168.0.211:80: i/o timeout
    2018/01/13 06:02:40.289482 WARNING data-sender Waiting for API to ack qan_1515807060016995269: read tcp 192.168.0.200:36676->192.168.0.211:80: i/o timeout
    2018/01/13 06:02:50.934798 WARNING data-sender Waiting for API to ack qan_1515807120021468869: read tcp 192.168.0.200:36692->192.168.0.211:80: i/o timeout
    2018/01/13 06:03:40.460082 WARNING data-sender Timeout sending data: 65.17s > 63s
    2018/01/13 06:04:45.822864 WARNING data-sender Timeout sending data: 65.36s > 63s
  • Nickmob (Contributor, Patron)
    It seems that pmm-server just can't cope with the load. At night I started to see some queries from the low-loaded server. The production server (with 2-3 kQPS) shows no queries. Changing the metrics resolution to 3s doesn't help.
  • Peter (Percona CEO, Moderator)
    Nickmob,

    Metrics resolution only impacts metrics, not queries. What data source are you using for queries - the slow query log or Performance Schema? Does switching the query source work?

    What MySQL version do you have, and how is it configured?

    https://www.percona.com/doc/percona-monitoring-and-management/conf-mysql.html

    How powerful is the PMM Server you use?

    It is possible that for some reason you have a query load which your PMM Server is not able to take in.
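    Switching the source is a remove-and-re-add of the queries service on the client; a minimal sketch per the PMM 1.x docs linked above, with perfschema shown as the alternative:

    pmm-admin remove mysql:queries
    pmm-admin add mysql:queries --query-source perfschema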
  • Nickmob (Contributor, Patron)
    Peter wrote: »
    Nickmob,

    Metrics resolution only impacts metrics, not queries. What data source are you using for queries - the slow query log or Performance Schema? Does switching the query source work?

    What MySQL version do you have, and how is it configured?

    https://www.percona.com/doc/percona-monitoring-and-management/conf-mysql.html

    How powerful is the PMM Server you use?

    It is possible that for some reason you have a query load which your PMM Server is not able to take in.

    MySQL is PS 5.7.20-19, Ubuntu 16.04.

    The MySQL config is as recommended by the docs, using the slow query log:
    # PMM logs
    log_output = file
    slow_query_log = ON
    long_query_time = 0
    log_slow_rate_limit = 100
    log_slow_rate_type = query
    log_slow_verbosity = full
    log_slow_admin_statements = ON
    log_slow_slave_statements = ON
    slow_query_log_always_write_time = 1
    slow_query_log_use_global_control = all
    innodb_monitor_enable = all
    userstat=1

    slow_query_log_file = /var/log/mysql/mysql-slow.log

    PMM Server is the weak spot: Athlon X2 6000 (2 cores, 3 GHz), 6 GB RAM, 1 SATA HDD. How can I find the bottleneck (it can be the CPU or the HDD)? As far as I can see, PMM Server generates about 80 IOPS on the HDD; maybe that is the problem?
    Are there any recommendations about PMM Server hardware configuration?
  • Peter (Percona CEO, Moderator)
    Hi,

    Wow, your server is indeed a pretty slow one; the HDD especially might be the problem. I have a very low-powered Intel Atom which is still able to handle a few servers.

    You should go to System Overview and pick "pmm-server" as a host. This way you will be able to view resource usage.

    https://pmmdemo.percona.com/graph/dashboard/db/system-overview?refresh=1m&orgId=1&var-interval=$__auto_interval&var-host=pmm-server

    I would also recommend testing with some very low traffic MySQL server to see if you can get queries to show up.
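    Outside of PMM's own dashboards, plain iostat on the PMM host tells the same story (standard sysstat tooling, not PMM-specific):

    iostat -x 5
    # sustained %util near 100 with rising await on the HDD means the disk is the bottleneck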
  • Nickmob (Contributor, Patron)
    The low-traffic MySQL server works OK (MySQL, Linux, and query metrics). With the high-traffic server there's no query data (no problems with MySQL and Linux metrics).
    I attached load metrics from the PMM server. At 19:10 I enabled stats from the high-traffic server (in addition to the low-traffic one).
    As far as I understand, the bottleneck is HDD IOPS. Is there some way to reduce IOPS by tuning PMM?
  • Peter (Percona CEO, Moderator)
    The best bet for you would be to change the settings of the MySQL server inside PMM Server, which is likely the main cause of your disk IO.
    To do it:

    Enter Docker Container:

    docker exec -it pmm-server bash

    vi /etc/my.cnf.d/00-pmm.cnf

    Change innodb_buffer_pool_size=128M to the amount of memory you would like to allocate to MySQL. Note that MySQL is not the only memory user in PMM. For 6GB you should set it to "1G" or so.

    Run "mysqladmin shutdown"

    SupervisorD will automatically restart MySQL for you

    We should make these MySQL option settings easier to change in the future.
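    Putting those steps together (same file path as above; the 1G figure is the suggestion for this 6GB host):

    docker exec -it pmm-server bash
    # inside the container:
    vi /etc/my.cnf.d/00-pmm.cnf    # set innodb_buffer_pool_size=1G
    mysqladmin shutdown            # supervisord restarts mysqld with the new setting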
  • Nickmob (Contributor, Patron)
    Peter wrote: »
    The best bet for you would be to change the settings of the MySQL server inside PMM Server, which is likely the main cause of your disk IO.
    To do it:

    Enter Docker Container:

    docker exec -it pmm-server bash

    vi /etc/my.cnf.d/00-pmm.cnf

    Change innodb_buffer_pool_size=128M to the amount of memory you would like to allocate to MySQL. Note that MySQL is not the only memory user in PMM. For 6GB you should set it to "1G" or so.

    Run "mysqladmin shutdown"

    SupervisorD will automatically restart MySQL for you

    We should make these MySQL option settings easier to change in the future.

    SOLVED.

    Thanks Peter!

    I finally got it working!

    First, I tried your advice - changed innodb_buffer_pool_size to 1G. But that didn't help.
    Then I noticed innodb_flush_log_at_trx_commit = 1. It was the root cause of the 80 write IOPS. I changed it to innodb_flush_log_at_trx_commit = 2, and now everything works like a charm.
    I know that it breaks durability, but what data is actually collected in MySQL?

    IO load graph attached.
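    For anyone applying the same fix, the two settings changed in this thread live in /etc/my.cnf.d/00-pmm.cnf inside the pmm-server container and end up as:

    innodb_buffer_pool_size = 1G
    innodb_flush_log_at_trx_commit = 2   # flush the InnoDB redo log once per second instead of on every commit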
  • Peter (Percona CEO, Moderator)
    Thanks for the update. Yes, that is another one. While it breaks durability, you will only lose queries for 1 second or so... it is not a big deal.
  • RoelVandePaar (Contributor)
    We have confirmed that the SQL parser does not support subqueries, and have logged:
    PMM-1931 SQL Parser doesn't support sub queries