
PMM Grafana does not show all data

CentOS 7, Percona Server 5.7.17.
We have two servers connected to PMM, both configured to send the same information to PMM Grafana. One of them does that successfully; the other loses some of the information, though not all of it.

For example, see the attached dashboard screenshots.
mysqld.cnf from the good server:

[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
bind-address = 0.0.0.0
character_set_server = utf8
collation_server = utf8_general_ci
max_allowed_packet = 256M
innodb_buffer_pool_size = 10G
innodb_page_size = 32K
innodb_data_file_path = ibdata1:10M:autoextend
innodb_file_per_table = 1
innodb_flush_log_at_trx_commit = 2 # 0 # 1
innodb_flush_method = O_DIRECT
innodb_log_file_size = 512M
innodb_log_files_in_group = 2
innodb_doublewrite = 0
innodb_io_capacity = 600
innodb_strict_mode=OFF
#skip-grant-tables
secure-file-priv = /tmp
sql_mode= #NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES
sql_mode=STRICT_ALL_TABLES
# Disabling symbolic-links is recommended to prevent assorted security risks
symbolic-links=0
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
# bin-log replication options
server_id=11
log_bin=mysql_bin.log
expire_logs_days=2
max_binlog_size=100M
binlog_format=ROW
max_binlog_files = 10
# statistics for PMM server
userstat = ON
query_response_time_stats = ON
innodb_monitor_enable=all
log_output=file
slow_query_log=OFF
slow_query_log_file=/var/lib/mysql/slowquery.log
long_query_time=0
log_slow_rate_limit=100
log_slow_rate_type=query
log_slow_verbosity=full
log_slow_admin_statements=ON
log_slow_slave_statements=ON
log_slow_sp_statements=ON
slow_query_log_always_write_time=1
slow_query_log_use_global_control=all
open_files_limit=5000

mysqld.cnf from the bad server:

[mysql]
default_character_set=utf8
prompt="\u@\h \d> "

[mysqld_safe]
open_files_limit=262144

[mysqld]
max_heap_table_size=16M
query_cache_size=1048576
query_cache_type=OFF
table_definition_cache=1400
table_open_cache=2000
tmp_table_size=16M
open_files_limit=5000
user=mysql
port=3306
character_set_server=utf8
collation_server=utf8_general_ci
expand_fast_index_creation=ON
innodb_file_per_table=ON
innodb_flush_log_at_trx_commit=1
innodb_flush_method=O_DIRECT
innodb_lock_wait_timeout=120
innodb_old_blocks_time=10000
innodb_open_files=16384
innodb_page_size=32K
innodb_temp_data_file_path=ibtmp1:12M:autoextend:max:1024M
innodb_strict_mode=OFF
sql_mode=STRICT_ALL_TABLES
datadir=/var/lib/mysql
log_error=/var/log/mysqld.log
log_timestamps=SYSTEM # MUST BE ADDED TO CERT. DISCUSS WITH JAVIER
pid_file=/var/run/mysqld/mysqld.pid
secure_file_priv =
socket=/var/lib/mysql/mysql.sock
symbolic_links=ON
tmpdir=/tmp
max_allowed_packet=128M
max_connect_errors=1000000
max_connections=800
net_read_timeout=600
net_write_timeout=600
skip_name_resolve=ON
query_response_time_stats=ON
innodb_monitor_enable=all
performance_schema=ON
userstat=ON
## SLOWQUERYLOG FOR PMMC:
## SECTION MAY BE DISABLED IF QAN USE
## PERFORMANCE_SCHEMA INSTEAD SLOW QUERY LOG
slow_query_log=OFF
log_output=file
log_slow_admin_statements=ON
log_slow_rate_limit=100
log_slow_rate_type=query
log_slow_slave_statements=ON
log_slow_sp_statements=ON
log_slow_verbosity=full
long_query_time=0
slow_query_log_always_write_time=1
slow_query_log_file=/var/lib/mysql/slowquery.log
slow_query_log_use_global_control=all

innodb_buffer_pool_size=8G
innodb_io_capacity=600
innodb_log_files_in_group=2
innodb_log_file_size=512M

Comments

  • aleksey.filippov (Contributor)
    How do I upload images to your forum the right way? I think the images in the first post are not usable.
  • Mykola (Percona Staff)
    Can you try now?
    We fixed some compression options on the forum.
  • aleksey.filippov (Contributor)
    Example 1:
    Good server:
    Bad server:
    No values for Max Connections, InnoDB Buffer Pool Size, or Buffer Pool Size of Total RAM.

    Example 2:
    Good server:
    Bad server:
    No performance_schema data.

    Example 3:
    Good server:

    (The image could not be uploaded because of the restriction of 5 images per post, but there is data missing in the image below as well.)

    Bad server:
    No data for processes.
  • aleksey.filippov (Contributor)
    After upgrading to 1.1.1, the problem still exists.
  • Mykola (Percona Staff)
    Can you run the following command for both servers?
    curl https://login:password@pmm-client:42002/metrics-hr --insecure | grep mysql_global_status_connections
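    A small helper can strip the progress meter and HELP/TYPE lines from the exporter output and print just the metric's value. This is a sketch, not part of the original thread; the function name is hypothetical:

```shell
# metric_value: print the value of one metric from Prometheus text
# exposition output. Hypothetical helper; pipe the exporter response into it.
metric_value() {
  # $1 = metric name; reads the exposition text on stdin
  awk -v m="$1" '$1 == m { print $2 }'
}

# Example with a canned exporter response:
printf '# TYPE mysql_global_status_connections untyped\nmysql_global_status_connections 8.817595e+06\n' \
  | metric_value mysql_global_status_connections
# prints 8.817595e+06
```

    With a live exporter it would be used as `curl -s --insecure https://login:password@pmm-client:42002/metrics-hr | metric_value mysql_global_status_connections`; the `-s` flag suppresses the curl progress meter seen in the pasted replies.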
    
  • aleksey.filippov (Contributor)
    curl https://root:pass@mysql-bad-server:42002/metrics-hr --insecure | grep mysql_global_status_connections
    % Total % Received % Xferd Average Speed Time Time Time Current
    Dload Upload Total Spent Left Speed
    0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0# HELP mysql_global_status_connections Generic metric from SHOW GLOBAL STATUS.
    # TYPE mysql_global_status_connections untyped
    mysql_global_status_connections 8.817595e+06
    100 103k 100 103k 0 0 502k 0 --:--:-- --:--:-- --:--:-- 502k

    curl https://root:pass@mysql-good-server:42002/metrics-hr --insecure | grep mysql_global_status_connections
    % Total % Received % Xferd Average Speed Time Time Time Current
    Dload Upload Total Spent Left Speed
    0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0# HELP mysql_global_status_connections Generic metric from SHOW GLOBAL STATUS.
    # TYPE mysql_global_status_connections untyped
    mysql_global_status_connections 3.038542e+06
    100 103k 100 103k 0 0 324k 0 --:--:-- --:--:-- --:--:-- 325k
  • Mykola (Percona Staff)
    Can you share the result table from Prometheus?
    http://PMM-SERVER-IP/prometheus/grap...tions&g0.tab=1
    The result should be similar to:
    mysql_global_status_connections{instance="good-server",job="mysql"} 12312
    mysql_global_status_connections{instance="bad-server",job="mysql"} 45645
    
    The most interesting thing in this table is the data value in the Prometheus database.

    Also, please open the Prometheus graph itself:
    http://PMM-SERVER-IP/prometheus/grap...tions&g0.tab=0
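    The same check can be done from the command line against the Prometheus HTTP API. The sketch below only builds the URL; the `/prometheus` path prefix is taken from the links above, and the helper name is an assumption:

```shell
# prom_query_url: build an instant-query URL for the Prometheus API
# bundled with PMM. Hypothetical helper for illustration.
prom_query_url() {
  # $1 = PMM server host, $2 = PromQL expression (must already be URL-safe)
  printf 'http://%s/prometheus/api/v1/query?query=%s\n' "$1" "$2"
}

prom_query_url PMM-SERVER-IP mysql_global_status_connections
# prints http://PMM-SERVER-IP/prometheus/api/v1/query?query=mysql_global_status_connections
```

    The printed URL can then be passed to `curl -s` to get the same instance-labelled values as the web page shows.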
  • aleksey.filippov (Contributor)
    mysql_global_status_connections{instance="bad-server",job="mysql"} 9282063
    mysql_global_status_connections{instance="good-server",job="mysql"} 3116737

    And graph.
    Green - bad-server
    Red - good-server
    4.png 45.2K
  • Mykola (Percona Staff)
    As we can see, the data exists in Prometheus.

    Can you try to fetch the same data via a Grafana call?
    http://PMM-SERVER-IP/graph/api/datasources/proxy/1/api/v1/query_range?query=rate(mysql_global_status_connections%7Binstance%3D%22PMM-CLIENT%22%7D%5B15m%5D)%20or%20irate(mysql_global_status_connections%7Binstance%3D%22PMM-CLIENT%22%7D%5B5m%5D)&start=1487602415&end=1487775215&step=900
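    The JSON this query_range call returns is hard to eyeball. A tiny grep-based helper (a sketch, not from the thread) can flatten the [timestamp,"value"] pairs, one per line:

```shell
# extract_values: print each [timestamp,"value"] pair from a
# query_range response on its own line. Hypothetical helper.
extract_values() {
  grep -o '\[1[0-9]\{9\},"[0-9.]*"\]'
}

# Example with a trimmed response body:
echo '{"status":"success","data":{"result":[{"values":[[1487602415,"7.43"],[1487603315,"8.67"]]}]}}' \
  | extract_values
# prints:
# [1487602415,"7.43"]
# [1487603315,"8.67"]
```

    Piping the curl output through this makes it easy to spot gaps or zero rates for one instance.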
  • aleksey.filippov (Contributor)
    http://PMM-SERVER-IP/graph/api/datasources/proxy/1/api/v1/query_range?query=rate(mysql_global_status_connections%7Binstance%3D%22bad-server%22%7D%5B15m%5D)%20or%20irate(mysql_global_status_connections%7Binstance%3D%22bad-server%22%7D%5B5m%5D)&start=1487602415&end=1487775215&step=900

    {"status":"success","data":{"resultType":"matrix","result":[{"metric":{"instance":"bad-server","job":"mysql"},"values":[[1487602415,"7.431598922801917"],[1487603315,"8.679653703730484"],[1487604215,"7.827586206896551"],[1487605115,"13.115640326005584"],[1487606015,"21.040044493882093"],[1487606915,"20.8587319243604"],[1487607815,"52.10678531701891"],[1487608715,"25.737486095661843"],[1487609615,"27.490575629116385"],[1487610515,"8.874304783092324"],[1487611415,"8.519466073414904"],[1487612315,"8.582869855394883"],[1487613215,"8.838719509142948"],[1487614115,"10.100111234705226"],[1487615015,"8.870967741935482"],[1487615915,"9.618464961067852"],[1487616815,"8.926595024021163"],[1487617715,"8.774193548387096"],[1487618615,"8.711911804128816"],[1487619515,"8.71523915461624"],[1487620415,"9.469420989344817"],[1487621315,"19.89096786321706"],[1487622215,"8.507239718842847"],[1487623115,"20.553948832035594"],[1487624015,"17.98222244963565"],[1487624915,"10.957730812013349"],[1487625815,"8.421579532814237"],[1487626715,"8.479421579532815"],[1487627615,"8.444938820912125"],[1487628515,"8.479431011602905"],[1487629415,"9.76862094702898"],[1487630315,"8.476084538375973"],[1487631215,"8.617362199513014"],[1487632115,"8.626318558653962"],[1487633015,"20.828698553948833"],[1487633915,"13.146961426755107"],[1487634815,"8.479412147483707"],[1487635715,"8.546162402669633"],[1487636615,"8.368186874304783"],[1487637515,"41.71412680756396"],[1487638415,"51.358175750834256"],[1487639315,"8.599555061179087"],[1487640215,"11.082301354503498"],[1487641115,"8.729699666295884"],[1487642015,"27.963292547274747"],[1487642915,"14.82758620689655"],[1487643815,"8.56952169076752"],[1487644715,"8.506117908787541"],[1487645615,"8.606219570389799"],[1487646515,"42.02108785196012"],[1487647415,"99.82080108920903"],[1487648315,"36.98109010011123"],[1487649215,"8.994438264738598"],[1487650115,"9.885428253615126"],[1487651015,"8.71635150166852"],[1487651915,"8.703003337041157"],[1487652815,"8.921013
436025099"],[1487653715,"8.64516129032258"],[1487654615,"8.622914349276973"],[1487655515,"8.547284257268362"],[1487656415,"10.353726362625137"],[1487657315,"9.051167964404893"],[1487658215,"8.696338928074448"],[1487659115,"9.092324805339265"],[1487660015,"9.057842046718577"],[1487660915,"8.381535038932146"],[1487661815,"8.447163515016685"],[1487662715,"8.364840528542238"],[1487663615,"8.463858135548538"],[1487664515,"6.559606924450945"],[1487665415,"4.180412332494625"],[1487666315,"1.9165739710789764"],[1487667215,"10.963280352302165"],[1487668115,"5.261407409796896"],[1487669015,"3.04560284137615"],[1487669915,"6.440489432703003"],[1487670815,"11.263626251390432"],[1487671715,"2.9499443826473857"],[1487672615,"2.7541713014460507"],[1487673515,"2.7830892290442395"],[1487674415,"14.717447477811483"],[1487675315,"5.571746384872079"],[1487676215,"2.7430508821478115"],[1487677115,"11.429378675615881"],[1487678015,"15.68520578420467"],[1487678915,"5.904338153503893"],[1487679815,"2.894323810540812"],[1487680715,"2.717463848720801"],[1487681615,"2.75194660734149"],[1487682515,"3.0578386453407727"],[1487683415,"4.052280311457174"],[1487684315,"3.77641404180863"],[1487685215,"2.8019991078986566"],[1487686115,"4.086763070077864"],[1487687015,"2.855391707017011"],[1487687915,"2.6307007786429364"],[1487688815,"5.8220244716351495"],[1487689715,"3.5027808676307006"],[1487690615,"2.764182424916574"],[1487691515,"4.395995550611791"],[1487692415,"8.761938238179669"],[1487693315,"14.272525027808674"],[1487694215,"10.838709677419354"],[1487695115,"6.530589543937707"],[1487696015,"1.9833147942157952"],[1487696915,"15.423804226918797"],[1487697815,"1.9632903634145011"],[1487698715,"13.863181312569521"],[1487699615,"14.737469702480865"],[1487700515,"14.341490545050055"],[1487701415,"8.542834864109972"],[1487702315,"8.61400599109456"],[1487703215,"13.457174638487206"],[1487704115,"15.165722841242667"],[1487705015,"14.20799309455718"],[1487705915,"1.9410348107073934"],[1487706815,"2.15239
3940371458"],[1487775215,"3.3515016685205783"]]}]}}

    http://PMM-SERVER-IP/graph/api/datasources/proxy/1/api/v1/query_range?query=rate(mysql_global_status_connections%7Binstance%3D%22good-server%22%7D%5B15m%5D)%20or%20irate(mysql_global_status_connections%7Binstance%3D%22good-server%22%7D%5B5m%5D)&start=1487602415&end=1487775215&step=900

    {"status":"success","data":{"resultType":"matrix","result":[{"metric":{"instance":"good-server","job":"mysql"},"values":[[1487602415,"7.843159065628476"],[1487603315,"7.745272525027809"],[1487604215,"8.075729429693324"],[1487605115,"9.013348164627363"],[1487606015,"8.701890989988875"],[1487606915,"8.898776418242491"],[1487607815,"9.080088987764181"],[1487608715,"8.666295884315906"],[1487609615,"8.458286985539488"],[1487610515,"8.783092324805338"],[1487611415,"8.711902113459399"],[1487612315,"8.60734149054505"],[1487613215,"8.581757508342601"],[1487614115,"8.796440489432703"],[1487615015,"8.388209121245827"],[1487615915,"8.558398220244715"],[1487616815,"8.60511679644049"],[1487617715,"9.249165739710788"],[1487618615,"9.338153503893215"],[1487619515,"9.174638487208009"],[1487620415,"9.342602892102336"],[1487621315,"9.268075639599553"],[1487622215,"9.565082942250214"],[1487623115,"9.532814238042269"],[1487624015,"9.478309232480534"],[1487624915,"9.506117908787541"],[1487625815,"9.596271391943226"],[1487626715,"9.612903225806452"],[1487627615,"9.540600667408231"],[1487628515,"9.568409343715238"],[1487629415,"9.462746899607229"],[1487630315,"9.438264738598441"],[1487631215,"9.204671857619577"],[1487632115,"9.401557285873192"],[1487633015,"9.467185761957731"],[1487633915,"9.191323692992214"],[1487634815,"9.479421579532813"],[1487635715,"9.489432703003336"],[1487636615,"9.50166852057842"],[1487637515,"9.442829667548768"],[1487638415,"9.42602892102336"],[1487639315,"9.470522803114571"],[1487640215,"9.345939933259176"],[1487641115,"9.339265850945495"],[1487642015,"9.157953281423802"],[1487642915,"9.349276974416018"],[1487643815,"9.18464961067853"],[1487644715,"9.22246941045606"],[1487645615,"9.43381535038932"],[1487646515,"9.520578420467185"],[1487647415,"9.262513904338153"],[1487648315,"9.476084538375973"],[1487649215,"9.318131256952169"],[1487650115,"9.340378197997776"],[1487651015,"9.421579532814238"],[1487651915,"9.273637374860957"],[1487652815,"9.334816462736372"],[
1487653715,"9.404894327030034"],[1487654615,"9.46607341490545"],[1487655515,"9.21134593993326"],[1487656415,"9.39599555061179"],[1487657315,"9.298109010011123"],[1487658215,"9.39710789766407"],[1487659115,"9.452714735578715"],[1487660015,"9.472747497219132"],[1487660915,"9.521690767519464"],[1487661815,"9.332591768631811"],[1487662715,"9.441601779755283"],[1487663615,"9.457174638487206"],[1487664515,"7.217416838995568"],[1487665415,"1.3914951877765434"],[1487666315,"1.282536151279199"],[1487667215,"1.2803114571746383"],[1487668115,"1.2858731924360398"],[1487669015,"1.3081201334816464"],[1487669915,"1.271412680756396"],[1487670815,"1.2847608453837598"],[1487671715,"1.292547274749722"],[1487672615,"1.279199110122358"],[1487673515,"1.2803114571746383"],[1487674415,"1.3047830923248052"],[1487675315,"1.2914349276974415"],[1487676215,"1.6918798665183536"],[1487677115,"1.7808676307007785"],[1487678015,"1.7675194660734148"],[1487678915,"1.7997775305895438"],[1487679815,"1.6952169076751946"],[1487680715,"1.7808676307007785"],[1487681615,"1.6952169076751946"],[1487682515,"1.6596218020022246"],[1487683415,"1.6818687430478307"],[1487684315,"1.618464961067853"],[1487685215,"1.5617352614015572"],[1487686115,"1.5550611790878752"],[1487687015,"1.5728587319243603"],[1487687915,"1.4438264738598443"],[1487688815,"1.2892102335928808"],[1487689715,"1.314794215795328"],[1487690615,"1.3181312569521688"],[1487691515,"1.271412680756396"],[1487692415,"1.2625139043381532"],[1487693315,"1.268075639599555"],[1487694215,"1.2669632925472747"],[1487695115,"1.2614015572858732"],[1487696015,"1.2636262513904337"],[1487696915,"1.2658509454949944"],[1487697815,"1.264738598442714"],[1487698715,"1.260289210233593"],[1487699615,"1.2658509454949944"],[1487700515,"1.2636262513904337"],[1487701415,"1.2669632925472747"],[1487702315,"1.2625139043381532"],[1487703215,"1.2614015572858732"],[1487704115,"1.264738598442714"],[1487705015,"1.2703003337041154"],[1487705915,"1.2614015572858732"],[1487706815,"1.26362625
13904337"],[1487775215,"1.268075639599555"]]}]}}
  • aleksey.filippov (Contributor)
    And I had to delete some lines from the responses because of forum limitations.
  • Mykola (Percona Staff)
    As I can see, all the data is fetchable from the exporters, it all exists in the Prometheus database, and the Grafana backend successfully returns it.
    So you can try disabling any kind of ad blocker or similar tool, open the "Developer Tools" in your browser, and try to catch any JavaScript or network errors.
  • aleksey.filippov (Contributor)
    Mykola, for example, do you really think that blocking one metric out of three on the "MySQL Overview" page's "MySQL Connections" graph can happen at the client level? The good server shows 3 metrics: "Max Connections", "Max Used Connections", "Connections"; the bad server shows only two of them, it doesn't show "Max Connections". Look at the graph above. Of course, I checked it with different browsers (Opera, Firefox, Chrome) from 2 computers located in different subnets, but the result was as expected: nothing changed :) And another question: we checked that the data exists, but we did not check why some of the data is missing, not all of it, but part of it.
  • aleksey.filippov (Contributor)
    And one more thing. A few days ago I installed a new PMM server in a new container and reconfigured the bad server to send data to the new PMM. The problem was not resolved. I.e. it looks like the problem is not on the PMM server side; it looks like the pmm-client side does not send all the necessary info.
  • Mykola (Percona Staff)
    Hi Aleksey,

    Let me describe the PMM architecture - https://www.percona.com/doc/percona-monitoring-and-management/architecture.html
    We have two independent parts in PMM: Query Analytics and Metrics Monitor.
    Metrics Monitor consists of:
    - prometheus/mysqld_exporter - a daemon that fetches metrics directly from MySQL and provides a Prometheus-compatible API for them
    - prometheus/prometheus - a kind of time-series database, plus a daemon that fetches data from the exporters
    - grafana/grafana - a complex JavaScript interface that visualizes the graphs, plus a daemon that fetches data from Prometheus and returns it as JSON

    pmm-client doesn't collect or send any data itself; it just creates a start-up file for mysqld_exporter and runs it.

    In post #7 we checked that mysqld_exporter collects the needed metric from MySQL.
    In post #9 we checked that Prometheus successfully fetched and stored this data.
    In post #11 we checked that the Grafana backend can return the metric values in a JavaScript-compatible form.

    Grafana is mostly a JavaScript tool, so when you open a dashboard, the JS fetches the dashboard template from the backend via the following call:
    https://PMM-SERVER-IP/graph/api/dashboards/db/mysql-overview
    You can open "Developer Tools" in the browser, open the graphs for the good and bad hosts, and check that the "mysql-overview" dashboard template is always the same for all hosts (and has a mysql_global_status_connections line in its description).

    After fetching the template, the JS parses it and makes many "query_range" calls to the backend:
    https://PMM-SERVER-IP/graph/api/datasources/proxy/1/api/v1/query_range?query=LINE_FORMULA_FROM_TEMPLATE
    We already checked this call in post #11.
    Now we should understand why the JavaScript doesn't fetch, or doesn't show, the fetchable data.
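    The dashboard-template comparison can also be done outside the browser. This sketch counts how often the metric appears in the fetched template; the helper name and the credential handling are assumptions:

```shell
# count_metric: count occurrences of a metric name in whatever is piped in
# (e.g. a dashboard template JSON). Hypothetical helper.
count_metric() {
  # $1 = metric name
  grep -o "$1" | wc -l | tr -d ' '
}

# Intended usage (PMM-SERVER-IP is a placeholder):
#   curl -s --insecure https://PMM-SERVER-IP/graph/api/dashboards/db/mysql-overview \
#     | count_metric mysql_global_status_connections

printf 'mysql_up appears twice here: mysql_up\n' | count_metric mysql_up
# prints 2
```

    The count should be identical no matter which host is selected in Grafana, since the template does not depend on the host.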
  • aleksey.filippov (Contributor)
    Hi, Mykola! Sorry for the doubts; you are very systematic :)
    About the "Developer Console": I had never used it before, so maybe I made some mistakes. I wasn't able to do everything you asked about, but it looks like the dashboard template is always the same. I will spend more time on it to be 100% sure. But some errors appear in the console when I switch from the good server to the bad server:

    File mt.app-1487925994963.log is in Dropbox.

    It is too large to post directly.
  • aleksey.filippov (Contributor)
    News:
    All servers: good server
  • Mykola (Percona Staff)
    The gaps look like network issues:
    Prometheus is very sensitive to network delays.
    If the communication between the exporter and Prometheus takes close to 1s, you can see such issues.
  • aleksey.filippov (Contributor)
    Yep, network. Yesterday I reinstalled the PMM server on a new host with a newer Docker, and the gaps are gone.

    But the problem with the bad server remains. Funny, but the bad server showed good statistics for some time before the reinstall, though not for long, less than an hour: all metrics, all values. If you look at the last picture, you can see the Max Connections metric was missing at first and appeared afterwards.
    I really did try to use the Developer Tools to "open graphs for good and bad hosts and check that the 'mysql-overview' dashboard template is always the same for all hosts (and has a mysql_global_status_connections line in its description)". Maybe I don't understand something, but I don't see any differences. If you could share a screenshot of how you did that, it would help me check that I am doing it the right way.
  • aleksey.filippov (Contributor)
    I checked Prometheus again. It looks like we have been watching the wrong parameter (mysql_global_status_connections). Look at the screenshot (a third server was added, and it works fine):
  • Mykola (Percona Staff)
    Can you check the exporter output?
    curl https://login:password@pmm-client:42002/metrics-hr --insecure | grep mysql_global_variables_max_connections
    
  • aleksey.filippov (Contributor)
    curl https://root:pass@good_server:42002/metrics-hr --insecure | grep mysql_global_variables_max_connections
    % Total % Received % Xferd Average Speed Time Time Time Current
    Dload Upload Total Spent Left Speed
    100 102k 100 102k 0 0 1436k 0 --:--:-- --:--:-- --:--:-- 1446k

    curl https://root:pass@bad_server:42002/metrics-hr --insecure | grep mysql_global_variables_max_connections
    % Total % Received % Xferd Average Speed Time Time Time Current
    Dload Upload Total Spent Left Speed
    100 103k 100 103k 0 0 1205k 0 --:--:-- --:--:-- --:--:-- 1215k
  • Mykola (Percona Staff)
    Oh, Aleksey, sorry, here is the right command:
    curl https://login:password@pmm-client:42002/metrics-lr --insecure | grep mysql_global_variables_max_connections
    
  • aleksey.filippov (Contributor)
    curl https://root:pass@good_server:42002/metrics-lr --insecure | grep mysql_global_variables_max_connections
    % Total % Received % Xferd Average Speed Time Time Time Current
    Dload Upload Total Spent Left Speed
    0 0 0 0 0 0 0 0 --:--:-- 0:00:01 --:--:-- 0# HELP mysql_global_variables_max_connections Generic gauge metric from SHOW GLOBAL VARIABLES.
    # TYPE mysql_global_variables_max_connections gauge
    mysql_global_variables_max_connections 151
    100 5742k 100 5742k 0 0 2845k 0 0:00:02 0:00:02 --:--:-- 2846k


    curl https://root:pass@bad_server:42002/metrics-lr --insecure | grep mysql_global_variables_max_connections
    % Total % Received % Xferd Average Speed Time Time Time Current
    Dload Upload Total Spent Left Speed
    0 31.4M 0 1056 0 0 130 0 70:24:32 0:00:08 70:24:24 215# HELP mysql_global_variables_max_connections Generic gauge metric from SHOW GLOBAL VARIABLES.
    # TYPE mysql_global_variables_max_connections gauge
    mysql_global_variables_max_connections 800
    100 31.4M 100 31.4M 0 0 3487k 0 0:00:09 0:00:09 --:--:-- 8014k
  • Mykola (Percona Staff)
    So, do I understand correctly that the values exist in the exporter output and in the Prometheus database?
  • aleksey.filippov (Contributor)
    Maybe I confused you with the previous picture, because it shows three servers while we are talking about two. mysql_global_variables_max_connections is not shown in the picture for the bad server; only the two good servers are there.
    I.e. mysql_global_variables_max_connections values exist in the exporter output for both the bad and the good server, but the values exist in Prometheus ONLY for the good server(s).
    I just can't exclude the second good server from the picture. But if you compare mysql_global_status_connections and
  • Mykola (Percona Staff)
    Hi,

    Can you open the targets page? http://PMM-SERVER-IP/prometheus/targets
    and check the status (UP or DOWN) of the target.
    Also, can you copy-paste the "Last Scrape" value, wait a few seconds, refresh the page, and copy the new "Last Scrape" value again (3-5 values are needed)?
  • aleksey.filippov (Contributor)
    The instance "mysql.db" is the bad server. The others are good.
    5.png 135.5K
  • aleksey.filippov (Contributor)
    Last Scrape:
    1m4.203s ago
    11.794s ago
    19.312s ago
    28.762s ago
    37.713s ago
    44.95s ago
    51.95s ago
    58.661s ago
  • Mykola (Percona Staff)
    Do you see any errors in /var/log/prometheus.log?
    Can you share it?
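    Since the PMM server runs in Docker, the log can be pulled out of the container for a quick first look. This sketch is not from the thread; the container name `pmm-server` and the helper are assumptions:

```shell
# count_log_errors: count error/warning lines in a log stream.
# Hypothetical helper for a quick first pass over prometheus.log.
count_log_errors() {
  grep -ciE 'error|warn' || true   # grep exits 1 when nothing matches
}

# Intended usage, assuming the container is named pmm-server:
#   docker exec pmm-server cat /var/log/prometheus.log | count_log_errors

printf 'level=error msg="scrape failed"\n' | count_log_errors
# prints 1
```

    A nonzero count would be the cue to look at the matching lines in full, e.g. with `grep -iE 'error|warn'` instead of `-c`.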
This discussion has been closed.

Copyright ©2005 - 2020 Percona LLC. All rights reserved.