MongoDB 5 deployment is running, but pods `deploy-mongo-db-psm-cfg-2` and `deploy-mongo-db-psm-sb0-2` are pending

I was able to deploy MongoDB 5.x using this Helm command:

helm install deploy-mongo-db percona/psmdb-db --namespace default \
  --set "image.repository=percona/percona-server-mongodb" \
  --set "image.tag=5.0.4-3" \
  --set "replsets[0].name=sb0" \
  --set "replsets[0].size=3" \
  --set "secrets.users=deploy-mongodb-secrets" \
  --set "replsets[0].volumeSpec.persistentVolumeClaim.resources.requests.storage=60Gi" \
  --set "replsets[0].resources.requests.memory=2Gi" \
  --set "replsets[0].resources.requests.cpu=1000m" \
  --set "replsets[0].resources.limits.memory=3Gi" \
  --set "replsets[0].resources.limits.cpu=1500m" \
  --set "upgradeOptions.apply=Never" \
  --set "sharding.enabled=true" \
  --set "backup.enabled=true" \
  --set "backup.storages.s3-us-east.type=s3" \
  --set "backup.storages.s3-us-east.s3.credentialsSecret=deploy-mongodb-backup-s3" \
  --set "backup.storages.s3-us-east.s3.bucket=app.deploy.com-backup" \
  --set "backup.storages.s3.s3-us-east.region=us-east-1" \
  --set "backup.storages.s3.s3-us-east.endpointUrl=https://us-east-1.linodeobjects.com" \
  --set "backup.pitr.enabled=true" \
  --set "backup.pitr.oplogSpanMin=15" \
  --set "backup.tasks[0].name=daily-s3-us-west" \
  --set "backup.tasks[0].enabled=true" \
  --set "backup.tasks[0].schedule=0 0 * * *" \
  --set "backup.tasks[0].keep=14" \
  --set "backup.tasks[0].storageName=s3-us-east" \
  --set "backup.tasks[0].compressionType=gzip"

I can log into the MongoDB 5.0.4 cluster and all is well, except that pods deploy-mongo-db-psm-cfg-2 and deploy-mongo-db-psm-sb0-2 are each stuck in Pending with the error below (the same for both):

0/5 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.
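
That message comes from the scheduler's FailedScheduling event; the same details can be checked with standard kubectl commands (the node name below is a placeholder):

kubectl get pods -o wide                              # which pods are Pending and where the others landed
kubectl describe pod deploy-mongo-db-psm-sb0-2        # Events section repeats the FailedScheduling reason
kubectl get nodes                                     # how many nodes the cluster has
kubectl describe node <node-name> | grep -i taints    # shows the master taint on the control-plane nodes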

All the other pods are running successfully. Any thoughts? Thank you!

It looks like you have only 2 worker nodes available instead of 3: the three nodes with the master taint cannot schedule these pods, and the two remaining workers each already host a member of the replica set. The idea is that each member of a replica set "lives" on a separate node (anti-affinity on the hostname).
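
If adding a third worker node isn't possible right away (e.g. a small test cluster), the chart also lets you relax the anti-affinity rule instead. This is only a sketch: the exact value paths are an assumption based on the chart's layout, so confirm them with `helm show values percona/psmdb-db` first, and note that co-locating replica set members on one node is not recommended for production.

# Assumed key paths; verify against "helm show values percona/psmdb-db" for your chart version
helm upgrade deploy-mongo-db percona/psmdb-db --namespace default \
  --reuse-values \
  --set "replsets[0].affinity.antiAffinityTopologyKey=none" \
  --set "sharding.configrs.affinity.antiAffinityTopologyKey=none"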

Thank you! That was indeed the problem. I had discovered that as well, and hadn’t had a chance to update yet.
