Mongo-init Permission denied

I am trying to deploy a PSMDB cluster with the Helm charts (percona/psmdb-operator 13.1 for the operator, percona/psmdb-db 13.0 for the custom resource) to an on-premise k8s cluster (ver 1.20.15).

The operator creates the “psmdb-db-rs0” statefulset, but the mongo-init container in the pod fails:

    ++ id -u
    ++ id -g
    + install -o 2 -g 2 -m 0755 -D /ps-entry.sh /data/db/ps-entry.sh
    ++ id -u
    ++ id -g
    + install -o 2 -g 2 -m 0755 -D /mongodb-healthcheck /data/db/mongodb-healthcheck
    ++ id -u
    ++ id -g
    + install -o 2 -g 2 -m 0755 -D /pbm-entry.sh /opt/percona/pbm-entry.sh
    install: cannot create directory ‘/opt/percona’: Permission denied
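As a side note on the failing step: `install -D` creates every missing parent directory of the target, so the error above is raised while creating /opt/percona itself, which suggests the container user has no write permission on /opt. A small local sketch of that behavior (the /tmp path is hypothetical, purely for illustration):

```shell
# `install -D` creates missing parent directories of the target, so the
# failing step in the mongo-init log is the mkdir of /opt/percona, not
# the copy of pbm-entry.sh itself. /tmp/demo-rootfs is a scratch path.
src=$(mktemp)
install -m 0755 -D "$src" /tmp/demo-rootfs/opt/percona/pbm-entry.sh
ls -ld /tmp/demo-rootfs/opt/percona
```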

I’ve tried different configurations (sharded/non-sharded deployment, etc.), but I get the same error every time. (I deleted the whole namespace between deploys to avoid issues caused by leftover resources.)

I cannot reproduce this error when I deploy 1.12.x; there the mongo-init container finishes successfully.
I am trying to make 1.13 work because I have a different issue in 1.12 that might already be fixed in 1.13.

The simplest config I’ve tried so far:

          backup:
            enabled: false
          sharding:
            enabled: false
          replsets:
            - name: rs0
              size: 3
              volumeSpec:
                pvc:
                  storageClassName: local-storage-ssd
                  resources:
                    requests:
                      storage: 5Gi

Am I missing something in the values that I should set in 1.13 to make it work?


Hey @nTopee ,

do you use the same storage class when you deploy 1.12? Do you use clean PVCs?

Usually permission denied comes from insufficient storage permissions.
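If it turns out to be an ownership problem rather than the storage backend itself, it may also be worth experimenting with the security context the chart applies. A hedged sketch of a values override for psmdb-db (whether these exact keys exist in your chart version, and which uid/gid the image expects, are assumptions — verify against the chart’s values.yaml):

```yaml
# Hypothetical override; key names and the uid/gid are assumptions to check.
replsets:
  - name: rs0
    size: 3
    containerSecurityContext:
      runAsUser: 1001   # assumption: the uid the image runs as
    podSecurityContext:
      fsGroup: 1001     # note: fsGroup only affects mounted volumes, not
                        # paths baked into the image such as /opt
```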

BTW, what issue do you have in 1.12 that you want to be solved in 1.13?


Hi!

Yes, I use the same storage class, but it shouldn’t matter, since the “/opt/percona” path is not on a persistent volume. I tried the deployment with Ceph (rook.io) and TopoLVM PVCs (both in use and working for multiple other deployments in this cluster), but it didn’t help.

When I deploy version 1.12, the operator creates all k8s resources successfully and the statefulset has all three nodes up, but it looks like the operator fails to set up the mongo replica set. These are the logs from the operator:

{"level":"info","ts":1666257956.16132,"logger":"controller_psmdb","msg":"initiating replset","replset":"rs0","pod":"psmdb-db-rs0-0"}
{"level":"error","ts":1666257961.522955,"logger":"controller_psmdb","msg":"failed to reconcile cluster","Request.Namespace":"mongo-cluster","Request.Name":"psmdb-db","replset":"rs0","error":"handleReplsetInit: exec add admin user: command terminated with exit code 252 / Percona Server for MongoDB shell version v5.0.7-6\nconnecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb\nImplicit session: session { \"id\" : UUID(\"28eb59b7-9bbd-487d-8fd0-1179cdab519e\") }\nPercona Server for MongoDB server version: v5.0.7-6\nuncaught exception: Error: couldn't add user: not master

All 3 mongodb pods are stuck in this state:

rs.status()

{
        "ok" : 0,
        "errmsg" : "no replset config has been received",
        "code" : 94,
        "codeName" : "NotYetInitialized"
}
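That rs.status() output confirms that rs.initiate() never succeeded on any member, which would also explain the operator’s “couldn’t add user: not master” error: with no replica set config applied, no node can become primary. For reference, a manual initiation in the mongo shell would look like the fragment below (the hostname is hypothetical, pieced together from the statefulset and namespace names in the logs; normally the operator performs this step itself):

```javascript
// Hypothetical replica set config; the operator normally runs this.
rs.initiate({
  _id: "rs0",
  members: [
    { _id: 0, host: "psmdb-db-rs0-0.psmdb-db-rs0.mongo-cluster.svc.cluster.local:27017" }
  ]
})
```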

Thanks for your answer!
