How to collect metrics and ship them to an existing Prometheus/Grafana setup

Hi friends,

We are switching our MongoDB deployment from the Bitnami chart (bitnami/charts/bitnami/mongodb on GitHub) to the Percona operator.
In our Bitnami setup we enabled the metrics exporter through the chart's values.yaml (charts/bitnami/mongodb/values.yaml in the bitnami/charts repo).
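
For context, the metrics part of our Bitnami values looked roughly like the following. This is just a sketch from memory, assuming the chart's standard metrics values; the exact keys may differ between chart versions.

    metrics:
      enabled: true          # runs the chart's mongodb-exporter container alongside mongod
      serviceMonitor:
        enabled: true        # creates a ServiceMonitor that kube-prometheus-stack picks up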

We have an existing Prometheus/Grafana setup installed via kube-prometheus-stack (prometheus-community/helm-charts, charts/kube-prometheus-stack on GitHub).

We are wondering whether it is possible to install a metrics exporter that ships metrics to that stack, so we can view the state of our MongoDB instance in our existing Grafana dashboards.

I'm aware of PMM (Percona Monitoring and Management), but we are unsure whether we can use it with our existing setup; I'm assuming it is an alternative to our kube-prometheus-stack setup.

Reading further into the documentation, I noticed it is possible to define a sidecar. Would the solution here be a sidecar running mongodb_exporter? Is there an example of this somewhere?


    sidecars:
    - image: percona/mongodb_exporter:0.36
      env:
      - name: EXPORTER_USER
        valueFrom:
          secretKeyRef:
            name: psmdb-db-secrets
            key: MONGODB_CLUSTER_MONITOR_USER
      - name: EXPORTER_PASS
        valueFrom:
          secretKeyRef:
            name: psmdb-db-secrets
            key: MONGODB_CLUSTER_MONITOR_PASSWORD
      - name: POD_IP
        valueFrom:
          fieldRef:
            fieldPath: status.podIP
      - name: MONGODB_URI
        value: "mongodb://$(EXPORTER_USER):$(EXPORTER_PASS)@$(POD_IP):27017"
      args: ["--discovering-mode", "--compatible-mode", "--log.level=debug", "--mongodb.uri=$(MONGODB_URI)"]
      name: rs-sidecar-1

I installed the mongodb_exporter as a sidecar, but the metrics are not being picked up by Prometheus (I can't see it in service discovery).

What am i doing wrong here?

Okay, I managed to get this working, I think. I'll leave the solution here in case anyone needs it.

You need to set up the mongodb_exporter as a sidecar, and then you must create a Service and a ServiceMonitor so Prometheus can begin to scrape the data.

Honestly, I wish creating the Service and ServiceMonitor were integrated into this Helm chart; if I were not such a noob at Helm I would make a PR myself.

psmdb-db.values.yaml

    sidecars:
    - image: percona/mongodb_exporter:0.36
      env:
      - name: EXPORTER_USER
        valueFrom:
          secretKeyRef:
            name: psmdb-db-secrets
            key: MONGODB_CLUSTER_MONITOR_USER
      - name: EXPORTER_PASS
        valueFrom:
          secretKeyRef:
            name: psmdb-db-secrets
            key: MONGODB_CLUSTER_MONITOR_PASSWORD
      - name: POD_IP
        valueFrom:
          fieldRef:
            fieldPath: status.podIP
      - name: MONGODB_URI
        value: "mongodb://$(EXPORTER_USER):$(EXPORTER_PASS)@$(POD_IP):27017"
      args: ["--discovering-mode", "--compatible-mode", "--collect-all", "--log.level=debug", "--mongodb.uri=$(MONGODB_URI)"]
      name: metrics

metrics-service.yaml

apiVersion: v1
kind: Service
metadata:
  name: psmdb-metrics
  namespace: mongodb
  labels:
    app: psmdb-metrics
    app.kubernetes.io/instance: psmdb-db
    app.kubernetes.io/component: metrics
  annotations:
    prometheus.io/path: /metrics
    prometheus.io/port: '9216'
    prometheus.io/scrape: 'true'
spec:
  ports:
  - name: http-metrics 
    port: 9216 
    targetPort: 9216
    protocol: TCP
  selector:
    app.kubernetes.io/instance: psmdb-db
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: psmdb-metrics-servicemonitor
  namespace: monitoring
  labels:
    app.kubernetes.io/instance: psmdb-db
    app.kubernetes.io/component: metrics
spec:
  namespaceSelector:
    matchNames:
      - mongodb
  selector:
    matchLabels:
      app.kubernetes.io/instance: psmdb-db
      app.kubernetes.io/component: metrics
  endpoints:
  - port: http-metrics
    interval: 15s


Hi @Kay_Khan, we have integration with PMM. You can deploy the PMM server and enable the PMM client via the CR very easily. Try our product and provide feedback for us. Thanks.
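
The relevant part of the cluster spec looks roughly like this (a sketch; the image tag and serverHost below are placeholders you would replace with your own PMM client version and PMM server Service name):

    pmm:
      enabled: true
      image: percona/pmm-client:2.41.2   # use a client version matching your PMM server
      serverHost: monitoring-service     # Service name of your PMM server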

I understand.

I think you misunderstood my requirements: I wanted the ability to collect metrics and ship them to an existing Prometheus/Grafana setup.

If I'm not mistaken, PMM would be an entirely new deployment, an alternative to our existing kube-prometheus-stack setup.

Hi Kay!

I used your solution and it seems to work. I’m wondering if you can share your entire values file?

Did you only put the sidecar under replsets? Or should it go under sharding as well?

Here’s what I have right now:

nameOverride: mongodb
replsets:
- name: rs0
  size: 3
  resources:
    requests: 
      memory: 4G
      cpu: 2
    limits:
      memory: 4G
      cpu: 2
  volumeSpec:
    pvc:
      storageClassName: kafka # kafka storage class has allow volume expansion
      resources:
        requests:
          storage: 10Gi
  sidecars:
  - image: percona/mongodb_exporter:0.39
    env:
    - name: EXPORTER_USER
      valueFrom:
        secretKeyRef:
          name: internal-mongodb-users
          key: MONGODB_DATABASE_ADMIN_USER
    - name: EXPORTER_PASS
      valueFrom:
        secretKeyRef:
          name: internal-mongodb-users
          key: MONGODB_DATABASE_ADMIN_PASSWORD
    - name: POD_NAME
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: metadata.name
    - name: MONGODB_URI
      value: "mongodb://$(EXPORTER_USER):$(EXPORTER_PASS)@$(POD_NAME)"
    args: ["--discovering-mode", "--compatible-mode", "--collect-all", "--log.level=debug", "--mongodb.uri=$(MONGODB_URI)"]
    name: metrics

sharding:
  enabled: true
  mongos:
    resources:
      requests: 
        memory: 4G
        cpu: 2
      limits:
        memory: 4G
        cpu: 2
    expose:
      servicePerPod: true
  configrs:
    resources:
      requests: 
        memory: 4G
        cpu: 2
      limits:
        memory: 4G
        cpu: 2
    volumeSpec:
      pvc:
        storageClassName: kafka # kafka storage class has allow volume expansion
        resources:
          requests:
            storage: 3Gi
    sidecars:
    - image: percona/mongodb_exporter:0.39
      env:
      - name: EXPORTER_USER
        valueFrom:
          secretKeyRef:
            name: internal-mongodb-users
            key: MONGODB_DATABASE_ADMIN_USER
      - name: EXPORTER_PASS
        valueFrom:
          secretKeyRef:
            name: internal-mongodb-users
            key: MONGODB_DATABASE_ADMIN_PASSWORD
      - name: POD_NAME
        valueFrom:
          fieldRef:
            apiVersion: v1
            fieldPath: metadata.name
      - name: MONGODB_URI
        value: "mongodb://$(EXPORTER_USER):$(EXPORTER_PASS)@$(POD_NAME)"
      args: ["--discovering-mode", "--compatible-mode", "--collect-all", "--log.level=debug", "--mongodb.uri=$(MONGODB_URI)"]
      name: metrics

I'm noticing a high "Replication Lag by Set" in Grafana right now (~1.67 min), so I'm pretty confused about what's up.


I'm also facing a lot of lag when loading the dashboards.

Are there any guides out there for production EKS settings? Like specific resources/configs to set.

I've read online that XFS filesystems are better for MongoDB, so I created this basic StorageClass:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: xfs-storage-class
provisioner: kubernetes.io/aws-ebs
parameters:
  fsType: xfs

I just hooked that up to the Bitnami MongoDB chart.
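
For reference, this is roughly how I pointed the chart at it, assuming the chart's standard persistence values (key names may vary between chart versions):

    persistence:
      enabled: true
      storageClass: xfs-storage-class   # the StorageClass defined above
      size: 10Gi                        # example size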

This post is disheartening: Installing MongoDB in production using Helm - Installation & Upgrades - MongoDB Developer Community Forums

It seems like Percona and Bitnami are deploying similar StatefulSets.

Thanks a ton for this @Kay_Khan, your solution worked perfectly for me. One thing to note for others using this: it's no longer necessary to pass the username and password env vars directly into the connection URI. The exporter will pick up MONGODB_USER and MONGODB_PASSWORD by default, so EXPORTER_USER and EXPORTER_PASS can just be renamed accordingly to simplify the config a bit (see the sketch below).
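
A minimal sketch of that simplified sidecar env block, assuming the exporter reads MONGODB_USER and MONGODB_PASSWORD from the environment as described above; apart from the renamed variables and the URI, this is Kay's example unchanged:

    sidecars:
    - image: percona/mongodb_exporter:0.36
      env:
      - name: MONGODB_USER                 # read by the exporter automatically
        valueFrom:
          secretKeyRef:
            name: psmdb-db-secrets
            key: MONGODB_CLUSTER_MONITOR_USER
      - name: MONGODB_PASSWORD
        valueFrom:
          secretKeyRef:
            name: psmdb-db-secrets
            key: MONGODB_CLUSTER_MONITOR_PASSWORD
      - name: POD_IP
        valueFrom:
          fieldRef:
            fieldPath: status.podIP
      args: ["--discovering-mode", "--compatible-mode", "--collect-all", "--log.level=debug", "--mongodb.uri=mongodb://$(POD_IP):27017"]
      name: metrics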

Cheers

Hey @Domenic_Bove, just saw your post.

It seems like Percona and Bitnami are deploying similar StatefulSets.

Where are you getting this from?

It is true that both Bitnami and the Percona Operator use StatefulSets to deploy MongoDB replica sets, but:

  1. it is just a standard way to run stateful applications in k8s
  2. this is where the similarities end 🙂

To use your storage class with the Operator, just specify it in the volumeSpec section:

      volumeSpec:
        persistentVolumeClaim:
          storageClassName: xfs-storage-class
          resources:
            requests:
              storage: 3Gi