Description:
Hi, I have the following situation.
I am running MongoDB with the PSMDB operator (Percona Server for MongoDB Operator) on Kubernetes, while my PMM Server runs on a separate single Linux machine hosted at Hetzner.
When I enable PMM for the MongoDB cluster, the operator successfully creates a service token on the PMM Server,
but when the node running MongoDB registers itself with PMM, the node's address is its internal IP address.
I am using a LoadBalancer service so the cluster is reachable from the internet.
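For reference, PMM is enabled in the PSMDB custom resource roughly like this (a minimal sketch from my setup; the image tag and serverHost value are assumptions about my environment and may look different elsewhere):

spec:
  pmm:
    enabled: true
    image: percona/pmm-client:3.5.0
    serverHost: pmm.harrie-cluster.cloud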
Steps to Reproduce:
1. Run pmm-server on a separate bare-metal machine.
2. Run the PSMDB operator and MongoDB on Kubernetes, exposed through a LoadBalancer service (see the sketch below).
3. Enable PMM monitoring for the MongoDB cluster.
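The replica set is exposed to the internet roughly like this (a sketch only; depending on the operator version the field may be spelled exposeType or type):

spec:
  replsets:
    - name: rs0
      expose:
        enabled: true
        exposeType: LoadBalancer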
Version:
PMM 3.5 (pmm-agent and pmm-admin 3.5.0)
Logs:
Log of the pmm-client container:
time="2025-12-17T01:58:50.694+00:00" level=info msg="Connecting to https://service_token:***@pmm.harrie-cluster.cloud:443/ ..." component=client
time="2025-12-17T01:58:50.789+00:00" level=info msg="Connected to pmm.harrie-cluster.cloud:443." component=client
time="2025-12-17T01:58:50.789+00:00" level=info msg="Establishing two-way communication channel ..." component=client
time="2025-12-17T01:58:55.694+00:00" level=error msg="Failed to establish two-way communication channel: context canceled." component=client
Screenshot of PMM's Nodes page (the node is listed with its internal IP address).
Expected Result:
The node is registered with an address that the PMM Server can reach, and monitoring works out of the box.
Actual Result:
Because the node is registered with a misconfigured address (its internal IP address rather than its public one), the PMM Server cannot communicate with the pmm-client.
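For completeness, the address that actually got registered can be checked against the PMM Server inventory, roughly like this (the credentials here are placeholders, and flags may vary slightly between PMM releases):

pmm-admin inventory list nodes --server-url=https://admin:xxxxx@pmm.harrie-cluster.cloud:443 --server-insecure-tls

The node shows up there with the internal cluster IP instead of the LoadBalancer address.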
Additional Information:
Output of "pmm-admin status --debug" from the pmm-client container:
DEBUG 2025-12-17 02:04:54.285647278Z: POST /local/Status HTTP/1.1
Host: 127.0.0.1:7777
User-Agent: Go-http-client/1.1
Content-Length: 26
Accept: application/json
Content-Type: application/json
Accept-Encoding: gzip

{"get_network_info":true}
DEBUG 2025-12-17 02:04:54.289504068Z: HTTP/1.1 200 OK
Content-Length: 495
Content-Type: application/json
Date: Wed, 17 Dec 2025 02:04:54 GMT
Grpc-Metadata-Content-Type: application/grpc

{
  "agent_id": "ac716163-8733-4e40-9a7d-b6d0c2ed0ece",
  "runs_on_node_id": "",
  "node_name": "",
  "server_info": {
    "url": "https://service_token:xxxxx@pmm.harrie-cluster.cloud:443/",
    "insecure_tls": true,
    "connected": false,
    "version": "",
    "latency": null,
    "clock_drift": null
  },
  "agents_info": [],
  "config_filepath": "/usr/local/percona/pmm/config/pmm-agent.yaml",
  "agent_version": "3.5.0",
  "connection_uptime": 0
}
DEBUG 2025-12-17 02:04:54.28963476Z: Result: &commands.statusResult{PMMAgentStatus:(*agentlocal.Status)(0xc0002bce60), PMMVersion:"3.5.0"}
DEBUG 2025-12-17 02:04:54.28964831Z: Error:
Agent ID : ac716163-8733-4e40-9a7d-b6d0c2ed0ece
Node ID  :
Node name:

PMM Server:
  URL    : https://pmm.harrie-cluster.cloud:443/
  Version:

PMM Client:
  Connected        : false
  Connection uptime: 0
  pmm-admin version: 3.5.0
  pmm-agent version: 3.5.0

Agents:
-
Contents of "/usr/local/percona/pmm/config/pmm-agent.yaml" from the pmm-client container:
id: ac716163-8733-4e40-9a7d-b6d0c2ed0ece
listen-address: 0.0.0.0
listen-port: 7777
server:
  address: pmm.harrie-cluster.cloud:443
  username: service_token
  password: xxxxx
  insecure-tls: true
paths:
  paths_base: /usr/local/percona/pmm
  exporters_base: /usr/local/percona/pmm/exporters
  node_exporter: /usr/local/percona/pmm/exporters/node_exporter
  mysqld_exporter: /usr/local/percona/pmm/exporters/mysqld_exporter
  mongodb_exporter: /usr/local/percona/pmm/exporters/mongodb_exporter
  postgres_exporter: /usr/local/percona/pmm/exporters/postgres_exporter
  proxysql_exporter: /usr/local/percona/pmm/exporters/proxysql_exporter
  rds_exporter: /usr/local/percona/pmm/exporters/rds_exporter
  azure_exporter: /usr/local/percona/pmm/exporters/azure_exporter
  valkey_exporter: /usr/local/percona/pmm/exporters/valkey_exporter
  vmagent: /usr/local/percona/pmm/exporters/vmagent
  nomad: /usr/local/percona/pmm/tools/nomad
  tempdir: /tmp/pmm
  nomad_data_dir: /usr/local/percona/pmm/data/nomad
  pt_summary: /usr/local/percona/pmm/tools/pt-summary
  pt_pg_summary: /usr/local/percona/pmm/tools/pt-pg-summary
  pt_mysql_summary: /usr/local/percona/pmm/tools/pt-mysql-summary
  pt_mongodb_summary: /usr/local/percona/pmm/tools/pt-mongodb-summary
ports:
  min: 30100
  max: 30105
log-level: ""
debug: false
trace: false
loglinescount: 1024
perfschema-refresh-rate: 5
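Note that the node address does not appear in pmm-agent.yaml at all; it is only sent to the PMM Server when the agent registers. As far as I can tell, a workaround would be to force a re-registration from inside the pmm-client container with the external address, roughly like this (a sketch only; 203.0.113.10 and the node name are placeholders, and the exact flags may differ between PMM 3.x releases):

pmm-agent setup \
  --config-file=/usr/local/percona/pmm/config/pmm-agent.yaml \
  --force \
  203.0.113.10 container psmdb-rs0-0

Since the sidecar is managed by the operator, this would be a manual stop-gap at best.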
