pmm-client cannot connect to PMM 3.0

The latest release of Everest has begun to support PMM 3.0, but after deploying MongoDB 8.0.12 with the latest Everest Helm chart, pmm-client cannot connect to my pmm-server.

pmm-server is running on an external host, and Everest is running on Kubernetes (AWS EKS).
pmm-server is behind an nginx reverse proxy with TLS.

Accessing pmm-server from a browser works out of the box with no issues.

When running pmm-admin status --debug from the MongoDB pod:

bash-5.1$ pmm-admin status --debug
DEBUG 2025-12-16 08:14:12.765900331Z: POST /local/Status HTTP/1.1
Host: 127.0.0.1:7777
User-Agent: Go-http-client/1.1
Content-Length: 26
Accept: application/json
Content-Type: application/json
Accept-Encoding: gzip

{"get_network_info":true}

DEBUG 2025-12-16 08:14:12.766818219Z: HTTP/1.1 200 OK
Content-Length: 495
Content-Type: application/json
Date: Tue, 16 Dec 2025 08:14:12 GMT
Grpc-Metadata-Content-Type: application/grpc

{
  "agent_id": "82cb1507-912c-4a77-b123-1778da7a0690",
  "runs_on_node_id": "",
  "node_name": "",
  "server_info": {
    "url": "https://service_token:XXXX@pmm.harrie-cluster.cloud:443/",
    "insecure_tls": true,
    "connected": false,
    "version": "",
    "latency": null,
    "clock_drift": null
  },
  "agents_info": [],
  "config_filepath": "/usr/local/percona/pmm/config/pmm-agent.yaml",
  "agent_version": "3.5.0",
  "connection_uptime": 0
}
DEBUG 2025-12-16 08:14:12.766949106Z: Result: &commands.statusResult{PMMAgentStatus:(*agentlocal.Status)(0xc000137d60), PMMVersion:"3.5.0"}
DEBUG 2025-12-16 08:14:12.766961547Z: Error:
Agent ID : 82cb1507-912c-4a77-b123-1778da7a0690
Node ID :
Node name:

PMM Server:
URL : https://pmm.harrie-cluster.cloud:443/
Version:

PMM Client:
Connected : false
Connection uptime: 0
pmm-admin version: 3.5.0
pmm-agent version: 3.5.0
Agents:

The error message is:
Failed to register pmm-agent on PMM Server: Auth method is not service account token
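
A quick way to narrow this down could be to call the server API directly from the MongoDB pod with the same service token the agent uses. The token below is the placeholder copied from the status output, and the version endpoint path is an assumption for PMM 3:

curl -ki -u "service_token:XXXX" https://pmm.harrie-cluster.cloud/v1/server/version

If this returns the same authentication error, the problem likely sits on the server/proxy side rather than in Everest itself.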

I am running PMM 3.0 on an external host (a single bare-metal server from a hosting company),
and Everest on Kubernetes; MongoDB is also running on the same cluster.

When a database node is registered with PMM, I can see that the node's IP address is the internal one,
for example 192.168.87.124,
not the node's external IP, domain name, or the load balancer's IP address.

I think this is the root problem.
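
Since the server sits behind an nginx TLS reverse proxy, it may also be worth ruling out the proxy dropping the Authorization header or cutting the agent's long-lived connection. A minimal sketch of such a server block, with placeholder ports and certificate paths (an assumption about the setup, not the actual config):

server {
    listen 443 ssl http2;
    server_name pmm.harrie-cluster.cloud;

    # placeholder certificate paths
    ssl_certificate     /etc/nginx/tls/fullchain.pem;
    ssl_certificate_key /etc/nginx/tls/privkey.pem;

    location / {
        # assumption: the PMM container publishes HTTPS on 8443 on the same host
        proxy_pass https://127.0.0.1:8443;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
        # keep the service-token Basic auth header intact
        proxy_set_header Authorization $http_authorization;
        # the agent keeps a long-lived channel open to the server
        proxy_read_timeout 3600s;
        proxy_send_timeout 3600s;
    }
}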

Hello @jjpark,

Have you not added PMM using the Everest option "Add monitoring endpoint"?
Please follow this to add PMM.

If the PMM server is reachable, it will be added, and the only thing you have to do is enable monitoring for whichever cluster you have.

You don't need to do anything manually.

Have you not added PMM using the Everest option "Add monitoring endpoint"?

The answer is: I have already added the monitoring endpoint and enabled monitoring.

Then I have these issues.

Additional info:

A screenshot of pmm-server's Grafana; maybe the node's IP address is the issue.

And the logs from the pmm-client container:

time="2025-12-20T04:25:22.333+00:00" level=info msg=Done. component=local-server
time="2025-12-20T04:25:23.205+00:00" level=error msg="Failed to establish two-way communication channel: context canceled." component=client
time="2025-12-20T04:25:23.206+00:00" level=error msg="Client error: failed to receive message: rpc error: code = Canceled desc = context canceled"
time="2025-12-20T04:25:23.206+00:00" level=warning msg="Failed to read directory '/tmp/pmm': open /tmp/pmm: no such file or directory" component=main
time="2025-12-20T04:25:23.206+00:00" level=info msg=Done. component=main
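
For completeness, the agent configuration can also be inspected directly in the pod; the config path comes from the status output above, while the namespace, pod, and container names are placeholders:

kubectl exec -n <namespace> <mongodb-pod> -c pmm-client -- \
  cat /usr/local/percona/pmm/config/pmm-agent.yaml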

pmm-server runs on bare metal, installed with a custom docker-compose setup.
Everest runs on Kubernetes, installed with Helm.

The monitoring endpoint is registered.
The monitoring option on MongoDB is "on".

If you need more detailed information, I'll be happy to provide it.
Thanks.

This seems to be an issue with the network connection between the bare-metal PMM server and the K8s environment where Everest runs.
Can you make sure the network settings between the PMM server and K8s are in place as required?

The PMM client container is installed in the Mongo pods, and it should be able to reach the PMM server.
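
A simple reachability check can be run from the pmm-client container itself, assuming curl is available in that image (namespace and pod name are placeholders):

kubectl exec -n <namespace> <mongodb-pod> -c pmm-client -- \
  curl -sk -o /dev/null -w '%{http_code}\n' https://pmm.harrie-cluster.cloud/

Getting any HTTP status code back at least confirms the pod can reach the server over TLS; a timeout or TLS error points at the network path rather than authentication.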

Hi there,
recently I tried to set up Percona Everest, including PMM, following the official documentation.

Since the docs mention that PMM 3 is not yet supported, I initially decided to deploy a PMM 2.44 server by setting the pmm.image.tag Helm value.
However, the connection from pmm-clients didn’t work because they were running PMM 3, which doesn’t appear to be backward-compatible with a 2.44 server.

I ran a few tests to try to make this setup work, but in the end I decided to redeploy a PMM 3 server instead.

During this process, I encountered several issues mainly related to volume / filesystem permissions, which I solved by setting the following Helm values:

podSecurityContext:
  fsGroup: 1000
  runAsUser: 1000
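
For reference, one way these values could be applied, assuming they sit at the level shown above in the chart that deploys the PMM server (the release name, chart name, and values-file name below are placeholders):

# pmm-values.yaml (hypothetical file)
podSecurityContext:
  fsGroup: 1000
  runAsUser: 1000

helm upgrade <release> <chart> -n everest-system --reuse-values -f pmm-values.yaml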

I also had to apply a few fixes related to internal ports used by NGINX, since it is not running as root.

Service fix
Map external ports to the internal ones used by the container:

  • External 443 → Internal 8443

  • External 80 → Internal 8080

kubectl patch svc monitoring-service -n everest-system --type='json' -p='[
  {"op": "replace", "path": "/spec/ports/0/targetPort", "value": 8443},
  {"op": "replace", "path": "/spec/ports/1/targetPort", "value": 8080}
]'
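
After the patch, the port mapping can be verified with:

kubectl get svc monitoring-service -n everest-system \
  -o jsonpath='{range .spec.ports[*]}{.port} -> {.targetPort}{"\n"}{end}'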

Readiness probe fix
Tell K3s to check port 8080 instead of 80:

kubectl patch statefulset everest-pmm -n everest-system --type='json' -p='[
  {"op": "replace", "path": "/spec/template/spec/containers/0/readinessProbe/httpGet/port", "value": 8080}
]'
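
The rollout can then be watched until the pod passes the updated readiness probe:

kubectl rollout status statefulset/everest-pmm -n everest-system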

Looking for any advice on a better way to deploy this if I'm doing something wrong, and hoping this can help someone.

A.