Not getting accurate read latency stats for MongoDB

We are not receiving reliable latency statistics.

MongoDB version : 4.2

OS version: Ubuntu 18.04

MongoDB exporter version: release-0.1x

We run the MongoDB exporter on each node to collect the statistics.

When comparing the MongoDB logs with the Grafana dashboard, we notice a significant discrepancy between the two.

Here are some examples of scenarios we tested on our machine:

  1. I ran a script that reads from one collection, and the log file shows that a single query takes 3 seconds to complete. We ran the script for thirty minutes, but when we checked Grafana, the reported average read latency over those thirty minutes was 1 second.
  2. In a second scenario, we used a new collection and ran a different script. The logs showed each query taking 8 seconds to complete. We ran that script for 30 minutes and Grafana again reported the same result, i.e. 1 second.
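For context on what numbers like these should produce (a sketch, assuming the exporter's counters mirror MongoDB's `serverStatus` `opLatencies` document, where `latency` is cumulative microseconds and `ops` is a cumulative operation count), the average per-operation latency has to be computed as a ratio of deltas between two snapshots, not from the latency counter alone:

```python
# Sketch (not from the original post): deriving average per-op read latency
# from two snapshots of cumulative opLatencies-style counters.
# "latency" is assumed to be cumulative microseconds, "ops" a cumulative count.

def avg_read_latency_seconds(sample_before, sample_after):
    """Average read latency in seconds per operation between two snapshots."""
    dlat = sample_after["reads"]["latency"] - sample_before["reads"]["latency"]
    dops = sample_after["reads"]["ops"] - sample_before["reads"]["ops"]
    if dops == 0:
        return 0.0
    return (dlat / dops) / 1_000_000  # microseconds -> seconds

# Example: 10 queries of ~3 s each contribute 30 s of cumulative latency,
# so the average per-op latency comes out to 3 seconds.
before = {"reads": {"latency": 0, "ops": 0}}
after = {"reads": {"latency": 30_000_000, "ops": 10}}
print(avg_read_latency_seconds(before, after))  # 3.0
```

If a dashboard instead averages only the latency counter's rate, the result is microseconds of latency accrued per second of wall time, which will not match the per-query durations seen in the logs.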

For reference, I am attaching the Grafana query:

```
((avg(rate(mongodb_mongod_op_latencies_latency_total{type="read",mongo_cluster=~"cluster_name"}[5m])) by (instance)) * on (instance) group_right mongodb_mongod_replset_my_name) * on (instance,name,service) group_right mongodb_mongod_replset_member_health{state=~".*",set=~".*",name=~".*"}
```
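One thing worth noting about this query: `rate(mongodb_mongod_op_latencies_latency_total[5m])` on its own yields microseconds of latency accrued per second, not latency per operation. A per-operation form would divide by the rate of the companion operation counter (a sketch, assuming the exporter also exposes `mongodb_mongod_op_latencies_ops_total`; verify the exact metric name on your exporter version):

```
# Hypothetical rewrite: average read latency per operation, in seconds.
# Divides the latency delta (microseconds) by the op-count delta.
(
  rate(mongodb_mongod_op_latencies_latency_total{type="read",mongo_cluster=~"cluster_name"}[5m])
  /
  rate(mongodb_mongod_op_latencies_ops_total{type="read",mongo_cluster=~"cluster_name"}[5m])
) / 1e6
```

A ratio like this stays meaningful regardless of how many operations run per second, whereas the raw latency rate is diluted or inflated by throughput changes.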

First scenario screenshot and log entry:

```
2023-09-07T10:29:14.418+0000 I COMMAND [conn17080] command DB1.MYCOLLECTION appName: "--" command: find { find: "MYCOLLECTION", filter: { my_id: { $in: [ "o5mvkaaaaaDYcYDdaaaanQEaf", "ll_U_caaaaKYWOpdaaaaDehWa" ] } }, runtimeConstants: { localNow: new Date(1694082551331), clusterTime: Timestamp(1694082551, 4) }, shardVersion: [ Timestamp(0, 0), ObjectId('000000000000000000000000') ], lsid: { id: UUID("accd2144-1801-4fe0-8740-6cb34f13be80"), uid: BinData(0, 1B4AF5BE37EC62006B22F80A4B0F72550AB550A900F17793E37E83C43D7E3818) }, $readPreference: { mode: "primaryPreferred" }, $clusterTime: { clusterTime: Timestamp(1694082551, 4), signature: { hash: BinData(0, 4AB494097251BF2D4E34CA0CCD01AD553286A4F2), keyId: 7241545307526266883 } }, $audit: { $impersonatedUsers: [ { user: "test_user", db: "admin" } ], $impersonatedRoles: [ { role: "readWriteAnyDatabase", db: "admin" } ] }, $client: { driver: { name: "PyMongo", version: "3.11.4" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "5.15.0-1041-aws" }, platform: "CPython 3.9.5.final.0", application: { name: "--" }, mongos: { host: "localhost:27017", client: "0.0.0.0:53466", version: "4.2.12" } }, $configServerState: { opTime: { ts: Timestamp(1694082547, 1), t: 60 } }, $db: "DB1" } planSummary: COLLSCAN keysExamined:0 docsExamined:1999999 cursorExhausted:1 numYields:15625 nreturned:0 queryHash:4B1764BD planCacheKey:4B1764BD reslen:405 locks:{ ReplicationStateTransition: { acquireCount: { w: 15626 } }, Global: { acquireCount: { r: 15626 } }, Database: { acquireCount: { r: 15626 } }, Collection: { acquireCount: { r: 15626 } }, Mutex: { acquireCount: { r: 2 } } } storage:{} protocol:op_msg 3089ms
```

Hi @Sanjay_Mange,
Could you please check with the latest versions of PMM and mongodb_exporter? We have fixed this problem.