No MongoDB metrics while using mongodb_exporter via PMM

Hello Team,

I installed the Percona MongoDB operator and deployed a single-node MongoDB with allowUnsafeConfigurations: true. MongoDB itself works fine, but I am not seeing any MongoDB metrics in PMM. Even with mongodb_exporter deployed as a sidecar, there are no MongoDB-specific metrics. A sample of the metrics being generated is shown below:

# HELP collector_scrape_time_ms Time taken for scrape by collector
# TYPE collector_scrape_time_ms gauge
collector_scrape_time_ms{collector="general",exporter="mongodb"} 1
# HELP go_gc_duration_seconds A summary of the pause duration of garbage collection cycles.
# TYPE go_gc_duration_seconds summary
go_gc_duration_seconds{quantile="0"} 0
go_gc_duration_seconds{quantile="0.25"} 0
go_gc_duration_seconds{quantile="0.5"} 0
go_gc_duration_seconds{quantile="0.75"} 0
go_gc_duration_seconds{quantile="1"} 0
go_gc_duration_seconds_sum 0
go_gc_duration_seconds_count 0
# HELP go_goroutines Number of goroutines that currently exist.
# TYPE go_goroutines gauge
go_goroutines 12
# HELP go_info Information about the Go environment.
# TYPE go_info gauge
go_info{version="go1.18.3"} 1
# HELP go_memstats_alloc_bytes Number of bytes allocated and still in use.
# TYPE go_memstats_alloc_bytes gauge
go_memstats_alloc_bytes 3.054384e+06
# HELP go_memstats_alloc_bytes_total Total number of bytes allocated, even if freed.
# TYPE go_memstats_alloc_bytes_total counter
go_memstats_alloc_bytes_total 3.054384e+06
# HELP go_memstats_buck_hash_sys_bytes Number of bytes used by the profiling bucket hash table.
# TYPE go_memstats_buck_hash_sys_bytes gauge
go_memstats_buck_hash_sys_bytes 1.446674e+06
# HELP go_memstats_frees_total Total number of frees.
# TYPE go_memstats_frees_total counter
go_memstats_frees_total 2158
# HELP go_memstats_gc_sys_bytes Number of bytes used for garbage collection system metadata.
# TYPE go_memstats_gc_sys_bytes gauge
go_memstats_gc_sys_bytes 3.672248e+06
# HELP go_memstats_heap_alloc_bytes Number of heap bytes allocated and still in use.
# TYPE go_memstats_heap_alloc_bytes gauge
go_memstats_heap_alloc_bytes 3.054384e+06
# HELP go_memstats_heap_idle_bytes Number of heap bytes waiting to be used.
# TYPE go_memstats_heap_idle_bytes gauge
go_memstats_heap_idle_bytes 581632
# HELP go_memstats_heap_inuse_bytes Number of heap bytes that are in use.
# TYPE go_memstats_heap_inuse_bytes gauge
go_memstats_heap_inuse_bytes 3.088384e+06
# HELP go_memstats_heap_objects Number of allocated objects.
# TYPE go_memstats_heap_objects gauge
go_memstats_heap_objects 17559
# HELP go_memstats_heap_released_bytes Number of heap bytes released to OS.
# TYPE go_memstats_heap_released_bytes gauge
go_memstats_heap_released_bytes 548864
# HELP go_memstats_heap_sys_bytes Number of heap bytes obtained from system.
# TYPE go_memstats_heap_sys_bytes gauge
go_memstats_heap_sys_bytes 3.670016e+06
# HELP go_memstats_last_gc_time_seconds Number of seconds since 1970 of last garbage collection.
# TYPE go_memstats_last_gc_time_seconds gauge
go_memstats_last_gc_time_seconds 0
# HELP go_memstats_lookups_total Total number of pointer lookups.
# TYPE go_memstats_lookups_total counter
go_memstats_lookups_total 0
# HELP go_memstats_mallocs_total Total number of mallocs.
# TYPE go_memstats_mallocs_total counter
go_memstats_mallocs_total 19717
# HELP go_memstats_mcache_inuse_bytes Number of bytes in use by mcache structures.
# TYPE go_memstats_mcache_inuse_bytes gauge
go_memstats_mcache_inuse_bytes 2400
# HELP go_memstats_mcache_sys_bytes Number of bytes used for mcache structures obtained from system.
# TYPE go_memstats_mcache_sys_bytes gauge
go_memstats_mcache_sys_bytes 15600
# HELP go_memstats_mspan_inuse_bytes Number of bytes in use by mspan structures.
# TYPE go_memstats_mspan_inuse_bytes gauge
go_memstats_mspan_inuse_bytes 45968
# HELP go_memstats_mspan_sys_bytes Number of bytes used for mspan structures obtained from system.
# TYPE go_memstats_mspan_sys_bytes gauge
go_memstats_mspan_sys_bytes 48960
# HELP go_memstats_next_gc_bytes Number of heap bytes when next garbage collection will take place.
# TYPE go_memstats_next_gc_bytes gauge
go_memstats_next_gc_bytes 4.194304e+06
# HELP go_memstats_other_sys_bytes Number of bytes used for other system allocations.
# TYPE go_memstats_other_sys_bytes gauge
go_memstats_other_sys_bytes 797966
# HELP go_memstats_stack_inuse_bytes Number of bytes in use by the stack allocator.
# TYPE go_memstats_stack_inuse_bytes gauge
go_memstats_stack_inuse_bytes 524288
# HELP go_memstats_stack_sys_bytes Number of bytes obtained from system for stack allocator.
# TYPE go_memstats_stack_sys_bytes gauge
go_memstats_stack_sys_bytes 524288
# HELP go_memstats_sys_bytes Number of bytes obtained from system.
# TYPE go_memstats_sys_bytes gauge
go_memstats_sys_bytes 1.0175752e+07
# HELP go_threads Number of OS threads created.
# TYPE go_threads gauge
go_threads 5
# HELP mongodb_up Whether MongoDB is up.
# TYPE mongodb_up gauge
mongodb_up 1
# HELP process_cpu_seconds_total Total user and system CPU time spent in seconds.
# TYPE process_cpu_seconds_total counter
process_cpu_seconds_total 0.04
# HELP process_max_fds Maximum number of open file descriptors.
# TYPE process_max_fds gauge
process_max_fds 1.048576e+06
# HELP process_open_fds Number of open file descriptors.
# TYPE process_open_fds gauge
process_open_fds 12
# HELP process_resident_memory_bytes Resident memory size in bytes.
# TYPE process_resident_memory_bytes gauge
process_resident_memory_bytes 1.4856192e+07
# HELP process_start_time_seconds Start time of the process since unix epoch in seconds.
# TYPE process_start_time_seconds gauge
process_start_time_seconds 1.65942620516e+09
# HELP process_virtual_memory_bytes Virtual memory size in bytes.
# TYPE process_virtual_memory_bytes gauge
process_virtual_memory_bytes 7.35563776e+08
# HELP process_virtual_memory_max_bytes Maximum amount of virtual memory available in bytes.
# TYPE process_virtual_memory_max_bytes gauge
process_virtual_memory_max_bytes 1.8446744073709552e+19
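
(For reference, the exporter can be scraped directly to confirm which series it exposes; only mongodb_up comes back among the MongoDB-specific metrics, matching the output above. A minimal sketch, assuming the default mongodb_exporter port 9216 and a Pod named mongodb-percona-rs0-0 — adjust both to your setup:)

  kubectl -n mongodb port-forward pod/mongodb-percona-rs0-0 9216:9216
  # in another terminal, list only the MongoDB-specific series
  curl -s http://localhost:9216/metrics | grep '^mongodb_'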

Versions used:
Percona MongoDB operator - perconalab/percona-server-mongodb-operator:main
MongoDB - percona/percona-server-mongodb:5.0.9
PMM - percona/pmm-server:2

Can you help me find out why the metrics are not showing up? Also, I am not able to enable the profiler.

Thanks,
Vijay

Hello @vkperumal,

Could you please share your custom resource manifest? cr.yaml or Helm values?

Also, how did you determine that you do not have any metrics in PMM? What do you see in the dashboards or in “inspect” mode?

Hi @Sergey_Pronin

Please find the manifest below:

apiVersion: psmdb.percona.com/v1-13-0
kind: PerconaServerMongoDB
metadata:
  name: mongodb-percona
  namespace: mongodb
spec:
  crVersion: 1.13.0
  image: percona/percona-server-mongodb:5.0.9
  imagePullPolicy: IfNotPresent
  allowUnsafeConfigurations: true
  upgradeOptions:
    apply: Disabled
    schedule: "0 2 * * *"
  secrets:
    users: mongodb-percona-secrets
  # pmm:
  #   enabled: false
  replsets:
  - name: rs0
    size: 1
    configuration: |
      operationProfiling:
        slowOpThresholdMs: 200
        mode: slowOp
        rateLimit: 100
    volumeSpec:
      persistentVolumeClaim:
        accessModes: [ "ReadWriteOnce" ]
        storageClassName: mongodb
        resources:
          requests:
            storage: 10Gi
    expose:
      enabled: false
      exposeType: ClusterIP
    resources:
      limits:
        cpu: "1000m"
        memory: "8Gi"
      requests:
        cpu: "50m"
        memory: "500Mi"
    sidecars:
    - image: percona/mongodb_exporter:0.33
      name: mongodb-exporter
      env:
      - name: EXPORTER_USER
        valueFrom:
          secretKeyRef:
            name: mongodb-exporter-secrets
            key: MONGODB_CLUSTER_MONITOR_USER
      - name: EXPORTER_PASS
        valueFrom:
          secretKeyRef:
            name: mongodb-exporter-secrets
            key: MONGODB_CLUSTER_MONITOR_PASSWORD
      - name: POD_IP
        valueFrom:
          fieldRef:
            fieldPath: status.podIP
      - name: MONGODB_URI
        value: "mongodb://$(EXPORTER_USER):$(EXPORTER_PASS)@$(POD_IP):27017"
      args: ["--discovering-mode", "--compatible-mode", "--log.level=debug", "--mongodb.uri=$(MONGODB_URI)"]
  sharding:
    enabled: true

    configsvrReplSet:
      size: 1
      volumeSpec:
        persistentVolumeClaim:
          accessModes: [ "ReadWriteOnce" ]
          storageClassName: mongodb
          resources:
            requests:
              storage: 3Gi
      resources:
        limits:
          cpu: "500m"
          memory: "4Gi"
        requests:
          cpu: "10m"
          memory: "100Mi"
      expose:
        enabled: false
        exposeType: ClusterIP
    mongos:
      size: 1
      resources:
        limits:
          cpu: "500m"
          memory: "2Gi"
        requests:
          cpu: "10m"
          memory: "100Mi"
      expose:
        exposeType: NodePort
  backup:
    enabled: true
    image: perconalab/percona-server-mongodb-operator:main-backup
    serviceAccountName: percona-mongodb-backup
    resources:
      limits:
        cpu: "300m"
        memory: "0.5G"
      requests:
        cpu: "10m"
        memory: "0.5G"
    storages:
      s3-backup:
        type: s3
        s3:
          bucket: db-backup
          region: ap-south-1
          credentialsSecret: mongodb-s3-secrets
          prefix: "staging-mongodb"
          uploadPartSize: 10485760
          maxUploadParts: 10000
          storageClass: STANDARD
          insecureSkipTLSVerify: false
    pitr:
      enabled: false
#        oplogSpanMin: 10
      compressionType: gzip
      compressionLevel: 6
    tasks:
     - name: daily-backup
       enabled: true
       schedule: "0 0 * * *"
       keep: 5
       storageName: s3-backup
       compressionType: gzip
       compressionLevel: 6

On the dashboard I don't see any metrics.

Thanks,
Vijay

Hmm, I see now.
You are trying to use mongodb_exporter and somehow add it to PMM.

Why don’t you just enable PMM on the operator side? It will deploy a pmm-client sidecar and configure it automatically.

Have you seen this document: Monitoring?

In a nutshell, what you need is the following in the Custom Resource:

  pmm:
    enabled: true
    image: percona/pmm-client:2.27.0
    serverHost: YOUR_PMM_SERVER_ADDRESS_HERE

Also, you will need to set PMM_SERVER_PASSWORD in the secret mongodb-percona-secrets.
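
(A minimal sketch of adding that key to the existing users secret; the password value is a placeholder for your PMM admin password:)

  kubectl -n mongodb patch secret mongodb-percona-secrets --type merge \
    -p '{"stringData":{"PMM_SERVER_PASSWORD":"<your-pmm-admin-password>"}}'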

I also added the MongoDB instance via pmm-server following the doc below, and I can see the agents running, but there are no metrics on the MongoDB overview dashboards.

I also tried your suggestion of adding the pmm-client via the CRD, but I was getting the error below, which is due to a protocol mismatch. I believe adding it via the CRD and adding the instance from pmm-server amount to the same thing.

ERRO[2022-08-16T14:30:42.466+00:00] Failed to establish two-way communication channel: unexpected HTTP status code received from server: 464 (); malformed header: missing HTTP content-type.  component=client

Thanks,
Vijay

@vkperumal the Operator simplifies integration with PMM. You don’t need to configure anything manually or add anything on the pmm-server side.

It works like this:

  1. You have pmm-server installed
  2. You install the Operator and enable pmm as I showed above in the Custom Resource for your DB (make sure you follow the documentation).
  3. You should start seeing the metrics in the pmm-server

If you don’t, then probably something went wrong, and the easiest way to debug it would be to check the logs of the pmm-client container in the database Pod, for example as sketched below.
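
(A sketch; the Pod name follows the <cluster>-<replset>-<index> pattern, so for this cluster it would likely be mongodb-percona-rs0-0:)

  kubectl -n mongodb logs mongodb-percona-rs0-0 -c pmm-client --tail=100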

Please let me know if it helps. If you still have issues, we can jump on a quick Zoom call with you to debug the problem.

@Sergey_Pronin Can we get on a Zoom call? I have tried all the methods and am still not able to get metrics.

Thanks,
Vijay

Did you manage to get this working?

Sorry for the delayed response. I was able to get this working. The issue was with the PMM load balancer: I was pointing the client at it over HTTP, whereas HTTPS should be used.
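
(For anyone hitting the same error: a minimal sketch of the working configuration, assuming the PMM server sits behind a load balancer reachable over HTTPS; the hostname here is a placeholder:)

  pmm:
    enabled: true
    image: percona/pmm-client:2.27.0
    serverHost: pmm.example.com   # must be reachable over HTTPS; plain HTTP in front of PMM breaks the client's two-way channel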