Missing mongos metrics on PMM 2.15.1

Hello, I’m trying to upgrade from PMM 2.9.1 to PMM 2.15.1.
I already have PMM 2.15.1 running on my local setup, but it seems that many items for mongos are missing in PMM 2.15.1.

Could you let me know when the mongos metric items will be available?

Hi, have you upgraded the pmm-client? You need to run the same client and server versions in order to get everything working properly.

Thank you for your reply. Yes, I have already upgraded the pmm-client to 2.15.1, but the result is the same.

OK, if the versions match, everything should work. Can you give me an example of which MongoDB metrics are showing up and which are not? Screenshots would help. They are all collected by the same exporter, by the way. Is anything showing up in the journal? You can check with journalctl -u pmm-agent -xe.
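Since the agent may not be running under systemd at all (e.g. inside a container, where journald is usually absent), a small sketch of a fallback chain for finding the agent's logs; the log-file path here is an assumption based on a typical PMM 2 client layout, not taken from your setup:

```shell
check_agent_logs() {
  # Try journald first; inside containers it is often absent or empty.
  if journalctl -u pmm-agent --no-pager 2>/dev/null | grep -q 'pmm-agent'; then
    journalctl -u pmm-agent -xe --no-pager
  elif [ -r /srv/logs/pmm-agent.log ]; then
    # Log-file path is an assumption; adjust to your install.
    cat /srv/logs/pmm-agent.log
  else
    # Last resort: the container runtime keeps the process's stdout/stderr.
    echo "no journal or log file; try 'docker logs <pmm-client-container>'"
  fi
}
check_agent_logs
```

Whichever branch fires, the point is the same: the exporter errors, if any, end up in the pmm-agent process output, not in a separate exporter log.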

I’m running PMM as a container on k8s, and due to the security policy at my workplace I cannot upload screenshots. :frowning:
But here are the logs; I can upload text.

The following logs are from the pmm-client container that is collecting metrics from the mongos.

bash-4.2$ /usr/local/percona/pmm2/exporters/mongodb_exporter --version
mongodb_exporter - MongoDB Prometheus exporter
Version: v0.20.1
Commit: 8b80aefd79020034c351243fdf067a7c4c24c61d
Build date: 2021-03-18T09:09:06+0000
bash-4.2$ pmm-admin --version
ProjectName: pmm-admin
Version: 2.15.1
PMMVersion: 2.15.1
Timestamp: 2021-03-18 09:09:50 (UTC)
FullCommit: c621cd2878edcffce71b6660f57495412f88ac06
bash-4.2$ journalctl -u pmm-agent -xe
No journal files were found.
-- No entries --
bash-4.2$ ps -ef
pmm-age+     1     0  0 02:13 ?        00:00:00 run
pmm-age+    85     1  0 02:13 ?        00:00:00 /usr/local/percona/pmm2/exporters/mongodb_exporter --compatible-mode --mongodb.global-conn-pool --web.listen-address=:42001
pmm-age+    86     1  0 02:13 ?        00:00:03 /usr/local/percona/pmm2/exporters/node_exporter --collector.bonding --collector.buddyinfo --collector.cpu --collector.diskstats --collector.entropy --collector.filefd --collector.
pmm-age+   110     1  0 02:13 ?        00:00:02 /usr/local/percona/pmm2/exporters/vmagent -envflag.enable=true -httpListenAddr= -loggerLevel=INFO -promscrape.config=/tmp/vm_agent/agent_id/395213e3-7a1b-4352-99fd-
pmm-age+   141     0  0 02:21 pts/0    00:00:00 bash
pmm-age+   147   141  0 02:21 pts/0    00:00:00 ps -ef
bash-4.2$ cat /tmp/vm_agent/agent_id/395213e3-7a1b-4352-99fd-dd84b2f7e475/vmagentscrapecfg
global: {}
    - job_name: mongodb_exporter_agent_id_ba652010-1943-4bcb-8471-ec9ba9c040d7_hr-10s
      honor_timestamps: false
      scrape_interval: 10s
      scrape_timeout: 9s
      metrics_path: /metrics
        - targets:
            agent_id: /agent_id/ba652010-1943-4bcb-8471-ec9ba9c040d7
            agent_type: mongodb_exporter
            cluster: system/mongodb-sample-cluster
            instance: /agent_id/ba652010-1943-4bcb-8471-ec9ba9c040d7
            node_id: /node_id/3a2aac33-ed74-4e1c-87d5-6804710b772c
            node_name: mongodb-sample-cluster-mongos-0
            node_type: container
            service_id: /service_id/69f25033-4d65-4b69-b707-3842f614e556
            service_name: mongodb-sample-cluster-mongos-0-mongodb
            service_type: mongodb
        username: pmm
        password: /agent_id/ba652010-1943-4bcb-8471-ec9ba9c040d7
    - job_name: node_exporter_agent_id_c3112585-f59c-4c5f-882a-04498447f1ae_hr-10s
      honor_timestamps: false
            - buddyinfo
            - cpu
            - diskstats
            - filefd
            - filesystem
            - loadavg
            - meminfo
            - meminfo_numa
            - netdev
            - netstat
            - processes
            - standard.go
            - standard.process
            - stat
            - textfile.hr
            - time
            - vmstat
      scrape_interval: 10s
      scrape_timeout: 9s
      metrics_path: /metrics
        - targets:
            agent_id: /agent_id/c3112585-f59c-4c5f-882a-04498447f1ae
            agent_type: node_exporter
            instance: /agent_id/c3112585-f59c-4c5f-882a-04498447f1ae
            node_id: /node_id/3a2aac33-ed74-4e1c-87d5-6804710b772c
            node_name: mongodb-sample-cluster-mongos-0
            node_type: container
        username: pmm
        password: /agent_id/c3112585-f59c-4c5f-882a-04498447f1ae
    - job_name: node_exporter_agent_id_c3112585-f59c-4c5f-882a-04498447f1ae_mr-31s
      honor_timestamps: false
            - hwmon
            - textfile.mr
      scrape_interval: 31s
      scrape_timeout: 10s
      metrics_path: /metrics
        - targets:
            agent_id: /agent_id/c3112585-f59c-4c5f-882a-04498447f1ae
            agent_type: node_exporter
            instance: /agent_id/c3112585-f59c-4c5f-882a-04498447f1ae
            node_id: /node_id/3a2aac33-ed74-4e1c-87d5-6804710b772c
            node_name: mongodb-sample-cluster-mongos-0
            node_type: container
        username: pmm
        password: /agent_id/c3112585-f59c-4c5f-882a-04498447f1ae
    - job_name: node_exporter_agent_id_c3112585-f59c-4c5f-882a-04498447f1ae_lr-2m1s
      honor_timestamps: false
            - bonding
            - entropy
            - textfile.lr
            - uname
      scrape_interval: 121s
      scrape_timeout: 10s
      metrics_path: /metrics
        - targets:
            agent_id: /agent_id/c3112585-f59c-4c5f-882a-04498447f1ae
            agent_type: node_exporter
            instance: /agent_id/c3112585-f59c-4c5f-882a-04498447f1ae
            node_id: /node_id/3a2aac33-ed74-4e1c-87d5-6804710b772c
            node_name: mongodb-sample-cluster-mongos-0
            node_type: container
        username: pmm
        password: /agent_id/c3112585-f59c-4c5f-882a-04498447f1ae
    - job_name: vmagent_agent_id_395213e3-7a1b-4352-99fd-dd84b2f7e475_mr-31s
      honor_timestamps: false
      scrape_interval: 31s
      scrape_timeout: 10s
        - targets:
            agent_id: /agent_id/395213e3-7a1b-4352-99fd-dd84b2f7e475
            agent_type: vmagent
            instance: /agent_id/395213e3-7a1b-4352-99fd-dd84b2f7e475
            node_id: /node_id/3a2aac33-ed74-4e1c-87d5-6804710b772c
            node_name: mongodb-sample-cluster-mongos-0
            node_type: container
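As an aside, the paste above appears to have lost indentation and several keys on the way out of the container. For comparison, a well-formed job stanza in the standard Prometheus/vmagent scrape_configs schema would look roughly like the sketch below; the target address is an assumption (the exporter's --web.listen-address port), and the elided IDs are kept as "...":

```yaml
scrape_configs:
  - job_name: mongodb_exporter_agent_id_..._hr-10s
    honor_timestamps: false
    scrape_interval: 10s
    scrape_timeout: 9s
    metrics_path: /metrics
    basic_auth:
      username: pmm
      password: /agent_id/...        # PMM uses the agent ID as the scrape password
    static_configs:
      - targets: ["127.0.0.1:42001"] # assumed; matches --web.listen-address=:42001
        labels:
          agent_type: mongodb_exporter
          cluster: system/mongodb-sample-cluster
          node_type: container
          service_type: mongodb
```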
bash-4.2$ curl --basic -G -u pmm:/agent_id/ba652010-1943-4bcb-8471-ec9ba9c040d7 http://localhost:42001/metrics
# HELP go_gc_duration_seconds A summary of the pause duration of garbage collection cycles.
# TYPE go_gc_duration_seconds summary
go_gc_duration_seconds{quantile="0"} 2.3819e-05
go_gc_duration_seconds{quantile="0.25"} 2.9402e-05
go_gc_duration_seconds{quantile="0.5"} 3.3851e-05
go_gc_duration_seconds{quantile="0.75"} 4.4999e-05
go_gc_duration_seconds{quantile="1"} 0.000224606
go_gc_duration_seconds_sum 0.001644593
go_gc_duration_seconds_count 34
# HELP go_goroutines Number of goroutines that currently exist.
# TYPE go_goroutines gauge
go_goroutines 10
# HELP go_info Information about the Go environment.
# TYPE go_info gauge
go_info{version="go1.15.7"} 1
# HELP go_memstats_alloc_bytes Number of bytes allocated and still in use.
# TYPE go_memstats_alloc_bytes gauge
go_memstats_alloc_bytes 3.869224e+06
# HELP go_memstats_alloc_bytes_total Total number of bytes allocated, even if freed.
# TYPE go_memstats_alloc_bytes_total counter
go_memstats_alloc_bytes_total 9.5247224e+07
# HELP go_memstats_buck_hash_sys_bytes Number of bytes used by the profiling bucket hash table.
# TYPE go_memstats_buck_hash_sys_bytes gauge
go_memstats_buck_hash_sys_bytes 1.46617e+06
# HELP go_memstats_frees_total Total number of frees.
# TYPE go_memstats_frees_total counter
go_memstats_frees_total 560634
# HELP go_memstats_gc_cpu_fraction The fraction of this program's available CPU time used by the GC since the program started.
# TYPE go_memstats_gc_cpu_fraction gauge
go_memstats_gc_cpu_fraction 8.096755687977828e-06
# HELP go_memstats_gc_sys_bytes Number of bytes used for garbage collection system metadata.
# TYPE go_memstats_gc_sys_bytes gauge
go_memstats_gc_sys_bytes 5.207376e+06
# HELP go_memstats_heap_alloc_bytes Number of heap bytes allocated and still in use.
# TYPE go_memstats_heap_alloc_bytes gauge
go_memstats_heap_alloc_bytes 3.869224e+06
# HELP go_memstats_heap_idle_bytes Number of heap bytes waiting to be used.
# TYPE go_memstats_heap_idle_bytes gauge
go_memstats_heap_idle_bytes 6.0506112e+07
# HELP go_memstats_heap_inuse_bytes Number of heap bytes that are in use.
# TYPE go_memstats_heap_inuse_bytes gauge
go_memstats_heap_inuse_bytes 5.881856e+06
# HELP go_memstats_heap_objects Number of allocated objects.
# TYPE go_memstats_heap_objects gauge
go_memstats_heap_objects 14624
# HELP go_memstats_heap_released_bytes Number of heap bytes released to OS.
# TYPE go_memstats_heap_released_bytes gauge
go_memstats_heap_released_bytes 5.9449344e+07
# HELP go_memstats_heap_sys_bytes Number of heap bytes obtained from system.
# TYPE go_memstats_heap_sys_bytes gauge
go_memstats_heap_sys_bytes 6.6387968e+07
# HELP go_memstats_last_gc_time_seconds Number of seconds since 1970 of last garbage collection.
# TYPE go_memstats_last_gc_time_seconds gauge
go_memstats_last_gc_time_seconds 1.6184536668484185e+09
# HELP go_memstats_lookups_total Total number of pointer lookups.
# TYPE go_memstats_lookups_total counter
go_memstats_lookups_total 0
# HELP go_memstats_mallocs_total Total number of mallocs.
# TYPE go_memstats_mallocs_total counter
go_memstats_mallocs_total 575258
# HELP go_memstats_mcache_inuse_bytes Number of bytes in use by mcache structures.
# TYPE go_memstats_mcache_inuse_bytes gauge
go_memstats_mcache_inuse_bytes 13888
# HELP go_memstats_mcache_sys_bytes Number of bytes used for mcache structures obtained from system.
# TYPE go_memstats_mcache_sys_bytes gauge
go_memstats_mcache_sys_bytes 16384
# HELP go_memstats_mspan_inuse_bytes Number of bytes in use by mspan structures.
# TYPE go_memstats_mspan_inuse_bytes gauge
go_memstats_mspan_inuse_bytes 150960
# HELP go_memstats_mspan_sys_bytes Number of bytes used for mspan structures obtained from system.
# TYPE go_memstats_mspan_sys_bytes gauge
go_memstats_mspan_sys_bytes 180224
# HELP go_memstats_next_gc_bytes Number of heap bytes when next garbage collection will take place.
# TYPE go_memstats_next_gc_bytes gauge
go_memstats_next_gc_bytes 6.390592e+06
# HELP go_memstats_other_sys_bytes Number of bytes used for other system allocations.
# TYPE go_memstats_other_sys_bytes gauge
go_memstats_other_sys_bytes 1.601398e+06
# HELP go_memstats_stack_inuse_bytes Number of bytes in use by the stack allocator.
# TYPE go_memstats_stack_inuse_bytes gauge
go_memstats_stack_inuse_bytes 720896
# HELP go_memstats_stack_sys_bytes Number of bytes obtained from system for stack allocator.
# TYPE go_memstats_stack_sys_bytes gauge
go_memstats_stack_sys_bytes 720896
# HELP go_memstats_sys_bytes Number of bytes obtained from system.
# TYPE go_memstats_sys_bytes gauge
go_memstats_sys_bytes 7.5580416e+07
# HELP go_threads Number of OS threads created.
# TYPE go_threads gauge
go_threads 10
# HELP mongodb_mongod_locks_time_acquiring_global_microseconds_total sum of serverStatus.locks.Global.timeAcquiringMicros.[r|w]
# TYPE mongodb_mongod_locks_time_acquiring_global_microseconds_total gauge
mongodb_mongod_locks_time_acquiring_global_microseconds_total 0
# HELP mongodb_mongod_storage_engine The storage engine used by the MongoDB instance
# TYPE mongodb_mongod_storage_engine gauge
mongodb_mongod_storage_engine{engine="Engine is unavailable"} 1
# HELP mongodb_mongod_wiredtiger_cache_evicted_total wiredtiger cache evicted total
# TYPE mongodb_mongod_wiredtiger_cache_evicted_total gauge
mongodb_mongod_wiredtiger_cache_evicted_total 0
# HELP mongodb_mongos_db_collections_total Total number of collections
# TYPE mongodb_mongos_db_collections_total gauge
mongodb_mongos_db_collections_total{db="admin",shard="rs0"} 1
mongodb_mongos_db_collections_total{db="admin",shard="rs1"} 1
mongodb_mongos_db_collections_total{db="admin",shard="rs2"} 1
mongodb_mongos_db_collections_total{db="config",shard="rs0"} 8
mongodb_mongos_db_collections_total{db="config",shard="rs1"} 7
mongodb_mongos_db_collections_total{db="config",shard="rs2"} 7
# HELP mongodb_mongos_db_data_size_bytes The total size in bytes of the uncompressed data held in this database
# TYPE mongodb_mongos_db_data_size_bytes gauge
mongodb_mongos_db_data_size_bytes{db="admin",shard="rs0"} 568
mongodb_mongos_db_data_size_bytes{db="admin",shard="rs1"} 525
mongodb_mongos_db_data_size_bytes{db="admin",shard="rs2"} 525
mongodb_mongos_db_data_size_bytes{db="config",shard="rs0"} 248770
mongodb_mongos_db_data_size_bytes{db="config",shard="rs1"} 190643
mongodb_mongos_db_data_size_bytes{db="config",shard="rs2"} 189695
# HELP mongodb_mongos_db_index_size_bytes The total size in bytes of all indexes created on this database
# TYPE mongodb_mongos_db_index_size_bytes gauge
mongodb_mongos_db_index_size_bytes{db="admin",shard="rs0"} 72
mongodb_mongos_db_index_size_bytes{db="admin",shard="rs1"} 81
mongodb_mongos_db_index_size_bytes{db="admin",shard="rs2"} 81
mongodb_mongos_db_index_size_bytes{db="config",shard="rs0"} 61637
mongodb_mongos_db_index_size_bytes{db="config",shard="rs1"} 51515
mongodb_mongos_db_index_size_bytes{db="config",shard="rs2"} 50933
# HELP mongodb_mongos_db_indexes_total Contains a count of the total number of indexes across all collections in the database
# TYPE mongodb_mongos_db_indexes_total gauge
mongodb_mongos_db_indexes_total{db="admin",shard="rs0"} 1
mongodb_mongos_db_indexes_total{db="admin",shard="rs1"} 1
mongodb_mongos_db_indexes_total{db="admin",shard="rs2"} 1
mongodb_mongos_db_indexes_total{db="config",shard="rs0"} 10
mongodb_mongos_db_indexes_total{db="config",shard="rs1"} 9
mongodb_mongos_db_indexes_total{db="config",shard="rs2"} 9
# HELP mongodb_mongos_sharding_balancer_enabled Balancer is enabled
# TYPE mongodb_mongos_sharding_balancer_enabled gauge
mongodb_mongos_sharding_balancer_enabled 1
# HELP mongodb_mongos_sharding_changelog_10min_total mongodb_mongos_sharding_changelog_10min_total
# TYPE mongodb_mongos_sharding_changelog_10min_total gauge
mongodb_mongos_sharding_changelog_10min_total{event="moveChunk.commit"} 178
mongodb_mongos_sharding_changelog_10min_total{event="moveChunk.from.success"} 178
mongodb_mongos_sharding_changelog_10min_total{event="moveChunk.start"} 179
mongodb_mongos_sharding_changelog_10min_total{event="moveChunk.to.success"} 179
# HELP mongodb_mongos_sharding_chunks_is_balanced Shards are balanced
# TYPE mongodb_mongos_sharding_chunks_is_balanced gauge
mongodb_mongos_sharding_chunks_is_balanced 0
# HELP mongodb_mongos_sharding_chunks_total Total number of chunks
# TYPE mongodb_mongos_sharding_chunks_total gauge
mongodb_mongos_sharding_chunks_total 1024
# HELP mongodb_mongos_sharding_collections_total Total # of Collections with Sharding enabled
# TYPE mongodb_mongos_sharding_collections_total gauge
mongodb_mongos_sharding_collections_total 1
# HELP mongodb_mongos_sharding_databases_total Total number of sharded databases
# TYPE mongodb_mongos_sharding_databases_total gauge
mongodb_mongos_sharding_databases_total{type="partitioned"} 0
mongodb_mongos_sharding_databases_total{type="unpartitioned"} 0
# HELP mongodb_mongos_sharding_shard_chunks_total Total number of chunks per shard
# TYPE mongodb_mongos_sharding_shard_chunks_total gauge
mongodb_mongos_sharding_shard_chunks_total{shard="rs0"} 819
mongodb_mongos_sharding_shard_chunks_total{shard="rs1"} 103
mongodb_mongos_sharding_shard_chunks_total{shard="rs2"} 102
# HELP mongodb_mongos_sharding_shards_draining_total Total number of drainingshards
# TYPE mongodb_mongos_sharding_shards_draining_total gauge
mongodb_mongos_sharding_shards_draining_total 0
# HELP mongodb_mongos_sharding_shards_total Total number of shards
# TYPE mongodb_mongos_sharding_shards_total gauge
mongodb_mongos_sharding_shards_total 3
# HELP mongodb_up Whether MongoDB is up.
# TYPE mongodb_up gauge
mongodb_up 1
# HELP mongodb_version_info The server version
# TYPE mongodb_version_info gauge
mongodb_version_info{mongodb="server version is unavailable"} 1
# HELP process_cpu_seconds_total Total user and system CPU time spent in seconds.
# TYPE process_cpu_seconds_total counter
process_cpu_seconds_total 1.25
# HELP process_max_fds Maximum number of open file descriptors.
# TYPE process_max_fds gauge
process_max_fds 1.048576e+06
# HELP process_open_fds Number of open file descriptors.
# TYPE process_open_fds gauge
process_open_fds 15
# HELP process_resident_memory_bytes Resident memory size in bytes.
# TYPE process_resident_memory_bytes gauge
process_resident_memory_bytes 2.8520448e+07
# HELP process_start_time_seconds Start time of the process since unix epoch in seconds.
# TYPE process_start_time_seconds gauge
process_start_time_seconds 1.61845282015e+09
# HELP process_virtual_memory_bytes Virtual memory size in bytes.
# TYPE process_virtual_memory_bytes gauge
process_virtual_memory_bytes 7.40253696e+08
# HELP process_virtual_memory_max_bytes Maximum amount of virtual memory available in bytes.
# TYPE process_virtual_memory_max_bytes gauge
process_virtual_memory_max_bytes -1
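To see at a glance which mongos-level series the exporter actually exposes, a dump like the one above can be reduced to distinct metric names. A minimal sketch, shown against a tiny inline sample; in practice you would pipe the real curl output into the same filter:

```shell
# Reduce a Prometheus metrics dump to the distinct mongodb_mongos_* names.
# `metrics` holds a small inline sample standing in for the curl output.
metrics='mongodb_mongos_sharding_shards_total 3
mongodb_mongos_db_collections_total{db="admin",shard="rs0"} 1
mongodb_up 1'
printf '%s\n' "$metrics" \
  | grep -v '^#' \
  | sed 's/[{ ].*//' \
  | grep '^mongodb_mongos_' \
  | sort -u
# → mongodb_mongos_db_collections_total
# → mongodb_mongos_sharding_shards_total
```

Running this against the full dump confirms that the sharding- and db-level mongos series are present, which narrows the problem to series the dashboards expect but the exporter never emits.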

I don’t see anything unusual in the logs. The journal is empty, probably because you are running inside a container. Can you show the docker logs then?
Also, can you tell me specifically the names of the dashboards you are seeing empty? You might also try running the exporter in debug mode to check for errors.

I have checked the docker logs but could not find any issues. The dashboards where I cannot see any data are “QPS of Mongos Service in the MongoDB Cluster Summary” and “All fields in the MongoDB Instances Overview with the specific node name of the mongos”.

And I’m using Percona Server for MongoDB 4.4.4 now.

Here are the mongodb_exporter logs in debug mode for the mongos.

bash-4.2$ /usr/local/percona/pmm2/exporters/mongodb_exporter --compatible-mode --mongodb.global-conn-pool --web.listen-address=:43000 --log.level="debug" --mongodb.uri=mongodb://
DEBU[0000] Compatible mode: true
DEBU[0000] Connection URI: mongodb://
INFO[0000] Starting HTTP server for http://:43000/metrics ...  source="server.go:144"
DEBU[0102] getDiagnosticData result
DEBU[0102] cannot create metric for my state: cannot get replicaset config: (CommandNotFound) no such cmd: replSetGetConfig
DEBU[0102] getting stats for databases: [admin config]
ERRO[0102] cannot get replSetGetStatus: replSetGetStatus is not supported through mongos
DEBU[0102] getDiagnosticData result
ERRO[0102] cannot get replSetGetStatus: replSetGetStatus is not supported through mongos
DEBU[0102] cannot create metric for my state: cannot get replicaset config: (CommandNotFound) no such cmd: replSetGetConfig
DEBU[0102] getting stats for databases: [admin config]

Here is the result of “getDiagnosticData” from the mongos.

Percona Server for MongoDB shell version v4.4.4-6
connecting to: mongodb://
Implicit session: session { "id" : UUID("13452ca6-f605-4d1c-b3e5-375d83b55710") }
Percona Server for MongoDB server version: v4.4.4-6
Welcome to the Percona Server for MongoDB shell.
For interactive help, type "help".
For more comprehensive documentation, see
Questions? Try the support group
The server generated these startup warnings when booting:
        2021-04-16T02:40:19.162+00:00: Access control is not enabled for the database. Read and write access to data and configuration is unrestricted. You can use percona-server-mongodb-enable-auth.sh to fix it
        2021-04-16T02:40:19.162+00:00: While invalid X509 certificates may be used to connect to this server, they will not be considered permissible for authentication
mongos> use admin
switched to db admin
mongos> db.adminCommand({"getDiagnosticData": 1})
        "data" : {

        "ok" : 1,
        "operationTime" : Timestamp(1618547012, 1),
        "$clusterTime" : {
                "clusterTime" : Timestamp(1618547012, 1),
                "signature" : {
                        "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
                        "keyId" : NumberLong(0)

And it seems that getDiagnosticData is not supported by mongos. Please see the following comment in the code.

Thanks for the update. I suggest you open a bug report with this information so that the dev team can take a look: https://jira.percona.com/projects/PMM/issues

Thank you for the guidance. Here is the ticket for this issue.
