
PMM 1.14 No Percona PXC Info

ctmrocks2435 (Contributor)
Hello,
For roughly four months we ran a VM with PMM 1.9 on it, monitoring a cluster of 5 servers running Percona XtraDB Cluster (GPL), Release rel20, Revision 1702aea, WSREP version 29.26, wsrep_29.26.

We downloaded and set up a new PMM 1.14 OVA machine, and removed and reinstalled the clients with pmm-admin 1.14.1.

Everything works great, but the new PMM no longer sees our Percona cluster on the HA pages. We have tried restarting the server, the clients, etc.; nothing seems to help. The only item that stands out is that PMM apparently cannot get the version of MySQL. See the attached image.

Any ideas would be appreciated.

Comments

  • Michael Coburn (Principal Architect, Percona)
    Hi ctmrocks2435

    I'd want to know if the PXC nodes are reporting any metrics at all in PMM, or if they are not being scraped / stored. Likely the easiest option is for you to download the logs.zip from your instance and DM or email me at [email protected] so I can take a look.
  • ctmrocks2435 (Contributor)
    Thanks Michael Coburn ,

    - Yes there are details for Node and MySQLd exporter. See image attached.
    - Yes, we are getting all system overview details.
    - Yes, we are getting MySQL overview.

    It really seems like everything is all right, except there is no PXC info in the HA section... It is very odd, because I feel like PMM has the data it needs, but obviously it does not.

    One more item I noticed, we are not getting the size of the InnoDB Buffer Pool...

    When you say download the logs.zip, what logs are you referring to, the client logs?
  • Michael Coburn (Principal Architect, Percona)
    Hi ctmrocks2435

    From your images, it looks like the collection stopped at 11am graph time - was this intentional? It looks like the exporters were all shut down at the same time.

    Is there anything in the PXC dashboard for entries under Cluster ?

    If nothing is there, try loading https://<PMM-SERVER>/prometheus, type in mysql_global_status_wsrep_cluster_size, and click Execute. Do you see any values come back? If so, please share a screenshot.
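    If you prefer the command line, the same check can be run against the Prometheus API that PMM exposes. A minimal sketch, where the server address (192.0.2.10) is a placeholder and -k accepts the self-signed certificate the OVA ships with:

```shell
# Build the Prometheus API query URL and fetch the current value of the
# cluster-size metric. PMM_SERVER is a placeholder -- substitute your own.
PMM_SERVER="192.0.2.10"
METRIC="mysql_global_status_wsrep_cluster_size"
URL="https://${PMM_SERVER}/prometheus/api/v1/query?query=${METRIC}"
echo "$URL"
# -k accepts the self-signed certificate; the timeout keeps the check quick
curl -k -s --connect-timeout 2 "$URL" || echo "server not reachable"
```

    A non-empty data.result array in the JSON response means the metric is being scraped.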

    You can gather the logs by visiting the Prometheus Dashboard and clicking PMM Server Logs > server logs link.
  • ctmrocks2435 (Contributor)
    Hello, I exported the logs and reviewed them; they seem fine for me to send you via PM.

    As for the 11am thing, the VM did not have its system clock set to UTC, so there was a time shift. That is now fixed, but it did not solve the cluster issue.
    You can see from my 4th image in the previous post, there is no data nor is there a cluster available in the selection drop-down.

    I'll send you system logs.

    Update: Michael Coburn, it seems I cannot create a new DM; is that something you need to initiate due to my low rep?
  • lorraine.pocklington (Percona Community Manager)
    Hello ctmrocks2435 you'd be welcome to email me if DM isn't working for you [email protected] and I'll get it across to the PMM team.
  • ctmrocks2435 (Contributor)
    Michael Coburn I have taken a screenshot of the Prometheus query, everything looks good.
  • ctmrocks2435 (Contributor)
    Hello All,
    It has been quiet for a while. I have since installed a PMM 1.13.0 virtual machine, and have the same issue... everything except the Percona XtraDB stats works.
    Is it possible this has something to do with the collector on the local systems? I cannot downgrade pmm-client from 1.14.1, since it seems your repository only hosts the latest version (or my apt / search skills are not good enough to find how to install pmm-client 1.13.0).
  • ctmrocks2435 (Contributor)
    Well, I finally found the client packages: https://www.percona.com/downloads/pmm/1.3.0/binary/debian/
    I am now running the 1.13.0 client and server, and still have the same issue...

    If that helps anyone...
    Soon I'll have to try going back to 1.9 and see if that works...
  • Michael Coburn (Principal Architect, Percona)
    Hi ctmrocks2435

    I've reviewed your data collection, which Lorraine forwarded on to me; unfortunately I didn't find any specific issue that would lead to the PXC nodes not appearing on your PXC dashboards.

    Do you have any values in the drop-down for Cluster name? Or is the box empty?

    https://pmmdemo.percona.com/graph/d/s_k9wGNiz/pxc-galera-cluster-overview?var-cluster=pxc57-cluster

    As you can see, PMM Demo has this populated with two cluster names.

    This field is built using Variables in Grafana's interface, and leverages the mysql_global_state_wsrep_local_state and mysql_galera_variables_info metric series.

    You can also query for them directly via the /prometheus/targets page.

    Please let me know if you see similar values from your PMM installation. Thanks,
  • ctmrocks2435 (Contributor)
    Thanks Michael,

    I tried querying the variables "mysql_galera_variables_info" and "mysql_global_state_wsrep_local_state" in Prometheus; both return "no data".
    Similar variables like "mysql_global_status_wsrep_cluster_size" report the expected number of 5.

    There is no cluster name in the cluster option box. I can only select "None". So in this case it seems like the exporters are not sending the expected data...
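    One way to confirm that at the node level is to scrape the mysqld_exporter endpoint directly and grep for the wsrep series. A rough sketch, where the port (42002) and HTTPS setup are assumptions based on a default pmm-admin install:

```shell
# Hit this node's mysqld_exporter directly and look for any wsrep/galera
# series. The port (42002) is an assumption from a default pmm-admin setup.
EXPORTER="https://localhost:42002/metrics"
WSREP=$(curl -k -s --connect-timeout 2 "$EXPORTER" | grep -E 'wsrep|galera')
if [ -n "$WSREP" ]; then
  echo "$WSREP"
else
  echo "no wsrep/galera series found at $EXPORTER"
fi
```

    If the exporter itself returns no wsrep series, the problem is on the node rather than in Prometheus or Grafana.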


    I think at this point I'm going to try rolling back everything to 1.9.0, just so I can get some continuous data on our system again. I'll cross my fingers that this issue does not continue to follow back to our last working version.
  • ctmrocks2435 (Contributor)
    I have now reinstalled the 1.9.0 client and server... Still no Cluster data, so something must be wrong with the client on my local systems. Any ideas on how to troubleshoot this?

    It seems like somewhere during the 1.9 client uninstall process and the 1.14.1 install process, I lost the ability on all 5 servers to export PXC cluster data. I do not remember configuring anything specifically for PXC.
  • Michael Coburn (Principal Architect, Percona)
    Not sure, ctmrocks2435 - I'm calling in some additional engineers to look at this case; watch for updates in the next day or so.

    Offhand it sounds like the exporter isn't collecting PXC metrics, but why it would collect MySQL metrics and not PXC ones doesn't make sense to me. We're looking into this.
  • altmannmarcelo (Percona)
    Hi ctmrocks2435.

    Can you please share a bit more about your setup? We would like to get the following from one of the nodes:

    MySQL:
    - Output of SHOW GLOBAL STATUS
    - Output of SHOW GLOBAL VARIABLES

    OS:
    - ps -ef | grep export
    - pmm-admin list
    - pmm-admin info
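    The OS-level checks above can be gathered in one pass. A small sketch, where the output file name is just an example and pmm-admin being on the PATH is an assumption:

```shell
# Collect the requested OS-level diagnostics into one file per node.
# The file name is an example; pmm-admin being on PATH is an assumption.
OUT="pmm-diag-$(hostname).txt"
{
  echo "== exporter processes =="
  ps -ef | grep -v grep | grep export
  echo "== pmm-admin list =="
  pmm-admin list
  echo "== pmm-admin info =="
  pmm-admin info
} > "$OUT" 2>&1
echo "wrote $OUT"
```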
  • ctmrocks2435 (Contributor)
    I have now performed a fresh install of PMM 1.14.1, both the server and clients, just to make sure everything is stock.

    altmannmarcelo

    SHOW GLOBAL STATUS
    I have attached a file with the output from "SHOW GLOBAL STATUS" so it does not eat up the whole forum post.

    SHOW GLOBAL VARIABLES
    "SHOW GLOBAL VARIABLES" gives me an error "SQL Error (1682): Native table 'performance_schema'.'global_variables' has the wrong structure"

    ps_export.txt
    collin   28814  8042  0 09:56 pts/0    00:00:00 grep --color=auto export
    root     49036     1  0 09:42 ?        00:00:00 /bin/sh -c /usr/local/percona/pmm-client/node_exporter -web.listen-address=3.28.6.246:42000 -web.auth-file=/usr/local/percona/pmm-client/pmm.yml -web.ssl-key-file=/usr/local/percona/pmm-client/server.key -web.ssl-cert-file=/usr/local/percona/pmm-client/server.crt -collectors.enabled=diskstats,filefd,filesystem,loadavg,meminfo,netdev,netstat,stat,time,uname,vmstat,meminfo_numa >> /var/log/pmm-linux-metrics-42000.log 2>&1
    root     49038 49036  3 09:42 ?        00:00:25 /usr/local/percona/pmm-client/node_exporter -web.listen-address=3.28.6.246:42000 -web.auth-file=/usr/local/percona/pmm-client/pmm.yml -web.ssl-key-file=/usr/local/percona/pmm-client/server.key -web.ssl-cert-file=/usr/local/percona/pmm-client/server.crt -collectors.enabled=diskstats,filefd,filesystem,loadavg,meminfo,netdev,netstat,stat,time,uname,vmstat,meminfo_numa
    root     49093     1  0 09:42 ?        00:00:00 /bin/sh -c /usr/local/percona/pmm-client/mysqld_exporter -web.listen-address=3.28.6.246:42002 -web.auth-file=/usr/local/percona/pmm-client/pmm.yml -web.ssl-key-file=/usr/local/percona/pmm-client/server.key -web.ssl-cert-file=/usr/local/percona/pmm-client/server.crt -collect.auto_increment.columns=false -collect.binlog_size=true -collect.global_status=true -collect.global_variables=true -collect.info_schema.innodb_metrics=true -collect.info_schema.innodb_cmp=true -collect.info_schema.innodb_cmpmem=true -collect.info_schema.processlist=true -collect.info_schema.query_response_time=true -collect.info_schema.tables=false -collect.info_schema.tablestats=false -collect.info_schema.userstats=true -collect.perf_schema.eventswaits=true -collect.perf_schema.file_events=true -collect.perf_schema.indexiowaits=false -collect.perf_schema.tableiowaits=false -collect.perf_schema.tablelocks=false -collect.slave_status=true >> /var/log/pmm-mysql-metrics-42002.log 2>&1
    root     49095 49093  2 09:42 ?        00:00:20 /usr/local/percona/pmm-client/mysqld_exporter -web.listen-address=3.28.6.246:42002 -web.auth-file=/usr/local/percona/pmm-client/pmm.yml -web.ssl-key-file=/usr/local/percona/pmm-client/server.key -web.ssl-cert-file=/usr/local/percona/pmm-client/server.crt -collect.auto_increment.columns=false -collect.binlog_size=true -collect.global_status=true -collect.global_variables=true -collect.info_schema.innodb_metrics=true -collect.info_schema.innodb_cmp=true -collect.info_schema.innodb_cmpmem=true -collect.info_schema.processlist=true -collect.info_schema.query_response_time=true -collect.info_schema.tables=false -collect.info_schema.tablestats=false -collect.info_schema.userstats=true -collect.perf_schema.eventswaits=true -collect.perf_schema.file_events=true -collect.perf_schema.indexiowaits=false -collect.perf_schema.tableiowaits=false -collect.perf_schema.tablelocks=false -collect.slave_status=true
    


    pmm-list.txt
    pmm-admin 1.14.1
    
    PMM Server      | 3.28.7.221 (password-protected)
    Client Name     | ctd-ea
    Client Address  | 3.28.6.246
    Service Manager | linux-systemd
    
    -------------- ------- ----------- -------- ------------------------------------------ --------------------------------------------------------------------------------------
    SERVICE TYPE   NAME    LOCAL PORT  RUNNING  DATA SOURCE                                OPTIONS                                                                               
    -------------- ------- ----------- -------- ------------------------------------------ --------------------------------------------------------------------------------------
    mysql:queries  ctd-ea  -           YES                 pmm:***@unix(/var/run/mysqld/mysqld.sock)  query_source=slowlog, query_examples=true, slow_log_rotation=true, retain_slow_logs=1
    linux:metrics  ctd-ea  42000       YES                 -                                                                                                                                
    mysql:metrics  ctd-ea  42002       YES                 -                                                                                                                               
    

    pmm-admin.txt
    pmm-admin 1.14.1
    
    PMM Server      | 3.28.7.221 (password-protected)
    Client Name     | ctd-ea
    Client Address  | 3.28.6.246
    Service Manager | linux-systemd
    
    Go Version      | 1.10.1
    Runtime Info    | linux/amd64
    




    Allow remote SSH:
    
    vi /etc/ssh/sshd_config
    # Find PasswordAuthentication and set it to "yes"
    
    
    Static IP:
    
    Add the following to the bottom of the /etc/cloud/cloud.cfg file
    (https://www.percona.com/forums/questions-discussions/percona-monitoring-and-management/50853-ova-server-change-ip):
    
    network:
      config: disabled
    
    Then use NetworkManager to change DHCP to static:
    
    # IP address in CIDR notation
    nmcli con mod "System eth0" ipv4.addresses 3.28.7.221/22
    # Default gateway
    nmcli con mod "System eth0" ipv4.gateway 3.28.4.254
    # DNS
    nmcli con mod "System eth0" ipv4.dns "10.220.220.220 10.220.220.221"
    nmcli con mod "System eth0" ipv4.method manual
    nmcli con mod "System eth0" connection.autoconnect yes
    
    
    Go to the IP in a web browser and set up the system with the pmm / pmm user and password.
    
    Then add the server config on each client:
    
    sudo pmm-admin config --server 3.28.7.221 --server-user pmm --server-password 'pmm'
    
    Then add the monitoring services with the password for mysql:
    
    sudo pmm-admin add mysql --user pmm --password pmm
    I have also included the configuration changes I have been making to the PMM install.
  • altmannmarcelo (Percona)
    Hi ctmrocks2435

    I think your issue is related to the global variables table being inaccessible via performance_schema.


    > "SHOW GLOBAL VARIABLES" gives me an error "SQL Error (1682): Native table 'performance_schema'.'global_variables' has the wrong structure"

    PMM relies on those outputs to populate its data.

    Can you please run mysql_upgrade on your nodes and make sure you can run SHOW GLOBAL VARIABLES? Please note that changes to system tables such as global_variables require a mysql service restart to take effect.
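    A sketch of that sequence on one node, where the systemd service name "mysql" is an assumption and DRY_RUN=1 (the default here) only prints the commands instead of executing them:

```shell
# Rebuild the system tables, restart mysql so the corrected
# performance_schema.global_variables definition takes effect, then verify.
# The service name "mysql" is an assumption; DRY_RUN=1 only prints commands.
: "${DRY_RUN:=1}"
run() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "would run: $*"
  else
    "$@"
  fi
}
run mysql_upgrade
run sudo systemctl restart mysql
# After the restart, this should succeed without error 1682:
run mysql -e "SHOW GLOBAL VARIABLES LIKE 'wsrep_cluster_name'"
```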

    Thanks in advance.
  • ctmrocks2435 (Contributor)
    Finally had a window to cycle the service after running "mysql_upgrade".
    Now I have PMM showing cluster status again.
    So it seems like the PMM install caused this, since until this install there were no issues with "global_variables". This would also explain why going back to 1.9 failed to work.
