Cannot add remote MySQL instance due to Connection check failed

On a fresh PMM 2.6.1 installation I’m not able to add a remote MySQL instance because of connection check failures. I’ve followed the procedure for running PMM Server via Docker and I have set up a user on the MySQL server instance as described in Creating a MySQL User Account to Be Used with PMM. My Docker host is a fully updated CentOS 8.1 virtual machine and my MySQL server instance is on a physical machine in the same network segment. I can access the MySQL server from the virtual machine with the pmm user and password, and from within the container I can ping the MySQL server instance. However, when adding a new remote MySQL instance I get one of the following errors:

  • Connection check failed: dial tcp 192.168.0.1:3306: i/o timeout.
  • Connection check failed: dial tcp 192.168.0.1:3306: connect: no route to host.
What could be the problem?
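
For reference, the monitoring user was created roughly as in the linked guide (a sketch run on the MySQL host; 'pass' is a placeholder password and the exact grants may differ slightly from the current docs):

```
# Sketch of the documented PMM user setup ('pass' is a placeholder)
mysql -e "CREATE USER 'pmm'@'%' IDENTIFIED BY 'pass' WITH MAX_USER_CONNECTIONS 10"
mysql -e "GRANT SELECT, PROCESS, REPLICATION CLIENT, RELOAD ON *.* TO 'pmm'@'%'"
```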



Hi, @gdsotirov
Thank you for your question. This could be a bug. Could you create a task in Jira (https://jira.percona.com/projects/PMM/issues)? Our developers will investigate it.
Thank you.

@daniil.bazhenov It could be something with the configuration of the container or the host, because in the meantime I noticed that if I run the container with --network=host there are no errors when adding the same MySQL instance. I’m still doubtful it’s a bug, because I use the official container and followed the official procedure, both of which are expected to work.
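
For clarity, the only difference between the two setups is how the container is started; roughly (a sketch following the documented procedure, with the pmm-data volume container and published ports as in the docs):

```
# Default bridge networking – this is where the connection check fails
docker run -d -p 443:443 --volumes-from pmm-data --name pmm-server \
  --restart always percona/pmm-server:2

# Host networking – adding the same remote MySQL instance works with this
docker run -d --network=host --volumes-from pmm-data --name pmm-server \
  --restart always percona/pmm-server:2
```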


@gdsotirov
Yeah, you might have a bad password or some kind of network error. 
P.S. I recommended Jira, because engineers can investigate the problem there. 

@daniil.bazhenov To me it seems that PMM is not even getting to password verification. In any case, I verified that the user/password work from the same host where the container runs. And, as I wrote before, it works just fine if I run the container with the --network=host option, but that is not my preference except for testing. I found no such problem reported in Jira or elsewhere on the Internet, so I still wonder what the problem could be in my case. What else could I try and/or check before reporting a bug?
P.S. I have pulled the container image from Docker Hub and executed the procedure at least twice, to be sure I hadn’t mixed something up the first time.



@gdsotirov The fastest way is to create a task in Jira; then the team will check it out. It’s free :)

Fair enough. Issue PMM-6041 created.




This feels like a network issue, so here are a few things to try (starting basic and going up from there):
Are you able to monitor clients using the pmm-client (vs the remote monitoring)?  
Can your docker host ping the server in question? If you can ping the server, can the docker host telnet to that server on port 3306 (telnet <servername> 3306)? If no to either (or both), I’m wondering if there’s a firewall in the way, or possibly MySQL isn’t listening on a port but is instead bound to a socket.
I’m confused by the “no route to host”, which typically means the PMM server either doesn’t have a network connection OR the routing tables are messed up and it’s trying to reach the 192.168.0.x network via the wrong interface (maybe check that your subnet masks are correct on both the MySQL server and the docker host? I had this be the issue once… chased my tail for HOURS only to find I had fat-fingered 255.255.255.50 or something silly).
Let’s start there, and I might have more thoughts based on what you discover.

Are you able to monitor clients using the pmm-client (vs the remote monitoring)?
No, because the client is not available for the OS on which the MySQL server is running.
Can your docker host ping the server in question?
Yes, as I indicated in my original post, I could ping the server from the docker host (CentOS) and from inside the container. Here are the two examples:
```
[root@centos ~]# ping -c 1 192.168.0.1
PING 192.168.0.1 (192.168.0.1) 56(84) bytes of data.
64 bytes from 192.168.0.1: icmp_seq=1 ttl=64 time=0.156 ms

--- 192.168.0.1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms
```

```
[root@42ebccdb24c6 opt]# ping -c 1 192.168.0.1
PING 192.168.0.1 (192.168.0.1) 56(84) bytes of data.
64 bytes from 192.168.0.1: icmp_seq=1 ttl=63 time=0.234 ms

--- 192.168.0.1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms
```
If you can ping the server, can the docker host telnet to that server on port 3306 (telnet <servername> 3306)? If no to either (or both), I’m wondering if there’s a firewall in the way, or possibly MySQL isn’t listening on a port but is instead bound to a socket.
Yes, as I indicated in one of my follow-ups, “I ensured that the user/password work from the same host where the container runs”. Here’s an example:
```
[root@centos ~]# mysql -h 192.168.0.1 -u pmm -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 507073
Server version: 5.7.30-log Source distribution

Copyright (c) 2000, 2019, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql>
```

And with telnet:

```
[root@centos ~]# telnet 192.168.0.1 3306
Trying 192.168.0.1...
Connected to 192.168.0.1.
Escape character is '^]'.
N
5.7.30-log▒m?r_6▒▒Jic?gUJea(^mysql_native_password^CConnection closed by foreign host.
```

I cannot try the same from the container, because neither the mysql nor the telnet command is available:

```
[root@42ebccdb24c6 opt]# mysql
bash: mysql: command not found
[root@42ebccdb24c6 opt]# telnet
bash: telnet: command not found
```

There is no firewall, and both the docker host and the MySQL server are in the same network segment. The MySQL server is indeed listening on the network port, as confirmed above.

I’m confused by the “no route to host”, which typically means the PMM server either doesn’t have a network connection OR the routing tables are messed up and it’s trying to reach the 192.168.0.x network via the wrong interface (maybe check that your subnet masks are correct on both the MySQL server and the docker host?
I'm confused as well, so I decided to start this post :) On the docker host there are the following interfaces:
```
[root@centos ~]# ifconfig
docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255
        inet6 fe80::42:71ff:fe0d:132c  prefixlen 64  scopeid 0x20<link>
        ether 02:42:71:0d:13:2c  txqueuelen 0  (Ethernet)
        RX packets 64520  bytes 44355116 (42.3 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 51299  bytes 5509338 (5.2 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

ens32: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.0.1  netmask 255.255.255.128  broadcast 192.168.0.127
        inet6 fe80::250:56ff:fe03:84  prefixlen 64  scopeid 0x20<link>
        inet6 2001:470:7038:1979:250:56ff:fe03:84  prefixlen 64  scopeid 0x0<global>
        ether 00:50:56:03:00:84  txqueuelen 1000  (Ethernet)
        RX packets 2127919  bytes 360648671 (343.9 MiB)
        RX errors 0  dropped 1  overruns 0  frame 0
        TX packets 704897  bytes 120451110 (114.8 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

veth5dc95c9: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::946a:aeff:febd:79  prefixlen 64  scopeid 0x20<link>
        ether 96:6a:ae:bd:00:79  txqueuelen 0  (Ethernet)
        RX packets 64520  bytes 45258396 (43.1 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 51619  bytes 5532818 (5.2 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

virbr0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 192.168.122.1  netmask 255.255.255.0  broadcast 192.168.122.255
        ether 52:54:00:e2:e4:25  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
```

And on the MySQL server host:

```
root@sotirov:~# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.0.1  netmask 255.255.255.128  broadcast 192.168.0.127
        inet6 2001:470:7038:1979:c6ca:6bb0:3f8c:2f5  prefixlen 64  scopeid 0x0<global>
        inet6 fe80::9eb6:20c:f215:9576  prefixlen 64  scopeid 0x20<link>
        ether d8:50:e6:4b:e4:78  txqueuelen 1000  (Ethernet)
        RX packets 322041226  bytes 209841554320 (195.4 GiB)
        RX errors 0  dropped 17  overruns 0  frame 0
        TX packets 755094716  bytes 1054750639701 (982.3 GiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1  (Local Loopback)
        RX packets 170692305  bytes 40904383500 (38.0 GiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 170692305  bytes 40904383500 (38.0 GiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
```

I do not think the problem is the network masks, because the MySQL server is accessible from anywhere on my network.

Apologies for missing some of those details in the beginning!  
Since you’re comfortable being inside the docker container, and it’s based on CentOS, you can

```
yum install telnet mysql-client
```

to be able to diagnose. Because this works with --network=host, I wonder if somehow the iptables NAT rules are preventing access (don’t ask me how… just trying to think through what is between the container and the destination host). Would you mind posting the output of

```
iptables -L -t nat -nv
```

and, since it might be helpful to see if something like SELinux is stomping on docker somehow:

```
iptables -L -nv
```

I have added a few remote instances on my test boxes with no issue (well, I had to change the grant statements to allow the pmm user to connect from the PMM server, but the error reflected that it was a permission denied rather than a couldn’t connect).

Another place to look is /srv/logs/pmm-managed.log while you’re trying to add the instance, in case there are more details there than in the on-screen message. I think it’s safe to say that if you can connect to the port from inside the docker container then it’s clearly a PMM software issue, and I’ll escalate accordingly. If you could attach the server logs to your Jira ticket right after the failure (PMM → Settings → Diagnostics → Download Server Logs), that will help us see what’s happening and where.
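
For example, you can follow that log from the docker host while reproducing the failure (a sketch; pmm-server is the assumed container name from the documented procedure):

```
# Follow pmm-managed's log inside the PMM Server container while re-adding the instance
docker exec -it pmm-server tail -f /srv/logs/pmm-managed.log
```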

I’m not sure why my code snippets in the previous post are displayed as one line. I’m not used to this editor; sorry about that. So I tried to install telnet in the container and got many errors like “Could not resolve host: mirror.centos.org; Unknown error” as yum tried different mirrors, and it ultimately failed. So I tried the following inside the container:
```
[root@42ebccdb24c6 opt]# ping mirror.centos.org
ping: mirror.centos.org: Name or service not known
[root@42ebccdb24c6 opt]# ping disney.com
ping: disney.com: Name or service not known
```

And then on the host:
```
[root@centos ~]# ping -c 1 mirror.centos.org
PING mirror.centos.org(2604:eb80:1:4::10 (2604:eb80:1:4::10)) 56 data bytes
^CFrom 2001:470:7038:1979::1: icmp_seq=1 Destination unreachable: No route

--- mirror.centos.org ping statistics ---
1 packets transmitted, 0 received, +1 errors, 100% packet loss, time 0ms

[root@centos ~]# ping disney.com
PING disney.com (130.211.198.204) 56(84) bytes of data.
64 bytes from 204.198.211.130.bc.googleusercontent.com (130.211.198.204): icmp_seq=1 ttl=104 time=139 ms
64 bytes from 204.198.211.130.bc.googleusercontent.com (130.211.198.204): icmp_seq=2 ttl=104 time=139 ms
64 bytes from 204.198.211.130.bc.googleusercontent.com (130.211.198.204): icmp_seq=3 ttl=104 time=139 ms
64 bytes from 204.198.211.130.bc.googleusercontent.com (130.211.198.204): icmp_seq=4 ttl=104 time=139 ms
^C
--- disney.com ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 6ms
```
So why is my container not able to resolve hosts? I’m using my network’s internal DNS server, which uses my ISP’s DNS servers and then Google’s as forwarders. The DNS server works just fine, as tested from several other hosts (Linux and Windows) on the same network. I’ll keep digging, but I’m sharing in the meantime in case anyone has ideas. P.S. I removed the code format as it seems to visualize only the first line.
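
One quick way to see which resolver the container is actually using (a sketch, run on the docker host; the container ID is the one shown above):

```
# Show the DNS configuration the container sees
docker exec -it 42ebccdb24c6 cat /etc/resolv.conf
```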


Note that mirror.centos.org resolved to an IPv6 address while disney.com is IPv4 - could there be some issue with IPv6 routing in your setup?

Yeah, I’ll have to check this later today. My network is IPv4 only. Both hostnames resolve properly on the host, but not in the container.

OK. IPv6 routing doesn’t currently work in my IPv4 network. I was promised the problem would be sorted out soon. However, this is an issue I only discovered while trying to install telnet in the container to probe the MySQL server port. I cannot assume that PMM Server is IPv6-only and that this is why I cannot add MySQL instances for monitoring.

I have not had time to write earlier, but I have already solved it. The problem was firewalld, which wasn’t allowing DNS requests from the container. After I fixed this, I was able to add remote MySQL instances without any problem.
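
For anyone hitting the same symptom, the change was along these lines (a sketch; the exact zone and interface names depend on your setup, and only one of the two rules may be needed):

```
# Let traffic from the docker0 bridge through so containers can reach the LAN DNS
firewall-cmd --permanent --zone=trusted --add-interface=docker0
# Or enable masquerading on the zone that holds the outbound interface
firewall-cmd --permanent --zone=public --add-masquerade
firewall-cmd --reload
```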


Sometimes those darn firewalls are too effective :) Glad you were able to figure it out!