PMM2 setup on a sharded cluster

Our setup is as follows:

router_01 server01test mongos
router_02 server02test mongos
config_a mongodb01test mongodCFG
config_b mongodb02test mongodCFG
shard01_a mongodb02test mongodSHARD1
shard01_b mongodb03test mongodSHARD1
shard01_c mongodb04test mongodSHARD1
shard02_a mongodb03test mongodSHARD2
shard02_b mongodb04test mongodSHARD2
shard02_c mongodb01test mongodSHARD2
shard03_a mongodb04test mongodSHARD3
shard03_b mongodb01test mongodSHARD3
shard03_c mongodb02test mongodSHARD3

I'm a little confused about how I'm supposed to install this so I can monitor QAN. I've done the following.
# PMM server install:
yum install docker
systemctl start docker
curl -fsSL -O URL -O URL &&
sha256sum -c FILE.sha256 &&
chmod +x ./get-pmm.sh &&
./get-pmm.sh
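As an aside, the checksum verification step works like this in general (a self-contained sketch using a throwaway file, since the real download URL is redacted above):

```shell
# Mimic the verification flow: create a file, record its checksum in
# sha256sum's "<hash>  <filename>" format, then verify it the same way
# as for get-pmm.sh.
echo "demo" > get-pmm-demo.sh
sha256sum get-pmm-demo.sh > get-pmm-demo.sh.sha256
sha256sum -c get-pmm-demo.sh.sha256
```

`sha256sum -c` prints `FILENAME: OK` on success and exits non-zero on a mismatch, which is why it can be chained with `&&` before running the script.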

#configure pmm on all nodes including router
server01test:
sudo yum -y install REPONAME
yum -y install pmm2-client
pmm-admin config --server-insecure-tls --server-url URL
pmm-admin add mongodb --username=test --password=PASSWORD --host=server02test

server02test:
sudo yum -y install REPONAME
yum -y install pmm2-client
pmm-admin config --server-insecure-tls --server-url URL
pmm-admin add mongodb --username=test --password=PASSWORD --host=server02test

mongo01test:
sudo yum -y install REPONAME
yum -y install pmm2-client
pmm-admin config --server-insecure-tls --server-url URL
pmm-admin add mongodb node3 --cluster shard02 mongo01test:97019
pmm-admin add mongodb node2 --cluster shard03 mongo01test:97020

mongo02test:
sudo yum -y install REPONAME
yum -y install pmm2-client
pmm-admin config --server-insecure-tls --server-url URL
pmm-admin add mongodb node1 --cluster shard01 mongo02test:97018
pmm-admin add mongodb node3 --cluster shard03 mongo02test:97020

mongo03test:
sudo yum -y install REPONAME
yum -y install pmm2-client
pmm-admin config --server-insecure-tls --server-url URL
pmm-admin add mongodb node2 --cluster shard01 mongo03test:97019
pmm-admin add mongodb node1 --cluster shard02 mongo03test:97018

mongo04test:
sudo yum -y install REPONAME
yum -y install pmm2-client
pmm-admin config --server-insecure-tls --server-url URL
pmm-admin add mongodb node3 --cluster shard01 mongo04test:97020
pmm-admin add mongodb node2 --cluster shard02 mongo04test:97019
pmm-admin add mongodb node1 --cluster shard03 mongo04test:97018

This seems to get everything working fine. Then, to set up the cluster portion, I installed it on the primary shard member with these commands.

pmm-admin add mongodb shard01 --cluster shard01 mongo02test:97018
pmm-admin add mongodb shard02 --cluster shard02 mongo03test:97018
pmm-admin add mongodb shard03 --cluster shard03 mongo04test:97018

Now, if I try to install it on the secondary, things get complicated: it tells me shard02 is already created, so I wasn't sure if I did this right. Also, in the future we will encrypt the connections between these mongo shards. What is the process for doing what I need once the encryption is in place?

Hi jfarmer, first of all, to have QAN you need to enable the profiler and configure the user with the privileges described on this page: Percona Monitoring and Management

Next, for the pmm-admin add commands, the --cluster argument should have the same value for all the shards that are part of the same cluster. Remember to specify credentials for the user, and just run it once on each server; there is no need to set up the cluster portion separately:

pmm-admin add mongodb --username=mongodb_exporter --password=percona --cluster=test --host=127.0.0.1 --port=27017
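Applied to the layout in this thread, that means one add per mongod, all sharing the same --cluster value. A sketch that just prints the command to run on each shard01 member (hostnames, ports, and credentials are the placeholders used above; substitute your real values):

```shell
# Print the pmm-admin command for each shard01 member; the important part
# is the identical --cluster value across all members of the cluster.
for target in mongo02test:97018 mongo03test:97019 mongo04test:97020; do
  echo "pmm-admin add mongodb --username=mongodb_exporter --password=percona --cluster=test --host=${target%%:*} --port=${target##*:}"
done
```

Each printed line is run locally on the server that hosts that mongod instance.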

For TLS use something like this:

pmm-admin add mongodb --username=mongodb_exporter --password=percona --tls --tls-certificate-key-file=/tmp/test-server.pem --tls-ca-file=/tmp/test-ca.pem --host=mongo02test --port=97018
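A note on the file formats, as a sketch only: --tls-certificate-key-file expects a single PEM containing both the private key and the certificate (the same layout as MongoDB's certificateKeyFile), so if you have them as separate files you can concatenate them. The throwaway self-signed certificate below only illustrates the file shape; it is not a usable server certificate:

```shell
# Generate a throwaway key + self-signed cert, then combine them into the
# single PEM layout that a certificate-key file expects.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=mongo02test" \
  -keyout test-server.key -out test-server.crt
cat test-server.key test-server.crt > test-server.pem
grep -c "BEGIN" test-server.pem   # two PEM blocks: PRIVATE KEY + CERTIFICATE
```

In a real deployment the server certificate would be signed by the CA referenced with --tls-ca-file rather than self-signed.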

Hope that helps

Thank you for the help.

So I read the Percona Monitoring and Management link after I had posted. I went ahead and created the user and enabled profiling. I do have some questions about this, however.

Do I have to do it for each database or just the admin database? Do I have to do it for each collection under that database?

I did this from the router:

db.createRole({
  role: "explainRole",
  privileges: [{
    resource: { db: "admin", collection: "" },
    actions: [ "listIndexes", "listCollections", "dbStats", "dbHash", "collStats", "find" ]
  }],
  roles: []
})

db.getSiblingDB("admin").createUser({
  user: "mongodb_exporter",
  pwd: "PASSWORD",
  roles: [
    { role: "explainRole", db: "admin" },
    { role: "clusterMonitor", db: "admin" },
    { role: "read", db: "local" }
  ]
})

Then I logged into the primary of shard01 and ran this:
mongo --host IP --port PORT
use admin
db.setProfilingLevel(1)

You can enable the profiler globally from mongod.conf. If you are enabling it at runtime, you will need to do it for each db you want profiled.
One more thing: the exporter user has to be created locally on each shard, not on the mongos.
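For the global approach mentioned above, a minimal mongod.conf fragment might look like this (the mode and threshold values are illustrative; check the operationProfiling settings for your MongoDB version):

```yaml
# mongod.conf fragment: profile every database served by this mongod
operationProfiling:
  mode: slowOp            # "all" profiles every operation; slowOp only slow ones
  slowOpThresholdMs: 100  # operations slower than this are written to system.profile
```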

Do I have to create the mongodb_exporter user on each shard's primary, secondary, and arbiter, or just on the primary of each shard?

When I create the role do I have to specify the collection?

Hi, creating the exporter user on the primary is enough, as it will be propagated to the secondaries. Also, when creating a role, you do it on the authentication database you use (normally admin).

OK, last question: is there a way to see query analysis on the routers?

I can see it on the individual shards now, but most people will be running queries through the routers.

IIRC the queries sent via the routers should still be visible, since they are stored in the system.profile collection of each shard. The routers themselves don't have any collections on them.

OK, I've got this all set up and working. It turns out you do have to install them on the individual shards. Thank you for all your help; going to mark your first answer as the solution.

Glad to hear it’s working now!
