Is it possible to disable the creation of TLS secrets by the operator?


We use GitOps to deploy our manifests.

Also, we use cert-manager/step-issuer with our own CA to create TLS certificates.

As we are using GitOps, we have no influence over which objects are created first and which later. So I think we run into a kind of “race condition” when deploying our cluster:

Sometimes the operator creates certificates/secrets with its own CA and then starts to deploy the replica set.

“Later”, cert-manager/step-issuer picks up my certificates and overwrites the secrets. (The resulting secrets look like what is described in “Transport encryption (TLS/SSL)” in the Percona Operator for MongoDB docs.)
When this happens, the creation of the cluster by the operator gets stuck, because the secrets change while the operator is still deploying the rest of the cluster. The mongos pods are not created then.

When deploying the certificates first (making sure cert-manager creates the secrets before anything else) and the custom resource afterwards, there is no problem.

But, as just said, this is not possible with GitOps, as we have no influence over the ordering.

What’s the way to go here?

  1. Is there a possibility to disable creation of the secrets by the operator?

  2. We reference the TLS secrets in the cr.yaml under secrets:, like

    kind: PerconaServerMongoDB
    spec:
      secrets:
        ssl: mongodb-default-rdx-tls-external
        sslInternal: mongodb-default-rdx-tls-internal

What is the recommended way to name these TLS secrets? Do the names have to “reference” the name of the CR?
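For illustration, a cert-manager Certificate that produces one of these secrets could look roughly like this (a sketch only; the namespace, issuer name, and DNS names are assumptions about a step-issuer setup, not taken from the thread):

```yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: mongodb-default-rdx-tls-external
  namespace: default                 # assumption: namespace of the CR
spec:
  # secretName must match spec.secrets.ssl in the PerconaServerMongoDB CR
  secretName: mongodb-default-rdx-tls-external
  issuerRef:
    group: certmanager.step.sm       # step-issuer API group
    kind: StepIssuer
    name: step-issuer                # placeholder: your issuer name
  dnsNames:
    - "localhost"                    # placeholder SANs; add your service DNS names
```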

Steps to Reproduce:

Deploy a custom resource for PSMDB. Then create TLS certificates/secrets with the same names used by the operator (overwriting them), or use cert-manager with a third-party issuer to do that.
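The overwrite can also be reproduced with a plain manifest; a minimal sketch, assuming the default cluster name my-cluster-name in the default namespace (the data values are placeholders, not real certificates):

```yaml
# Hypothetical replacement for the operator-generated TLS secret.
# Applying this while the cluster is still initializing reproduces the issue.
apiVersion: v1
kind: Secret
metadata:
  name: my-cluster-name-ssl                      # same name the operator generates
  namespace: default
type: kubernetes.io/tls
data:
  ca.crt: <base64-encoded CA certificate>        # placeholder
  tls.crt: <base64-encoded server certificate>   # placeholder
  tls.key: <base64-encoded private key>          # placeholder
```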


crVersion: 1.14.0

Thanks and kind regards!

Hey @rdxmbr ,

I did the following:

  1. Deployed the default cr.yaml
  2. Copied my-cluster-name-ssl to my-cluster-name-new
  3. Referenced it in cr.yaml:
    ssl: my-cluster-name-new
  4. Applied cr.yaml without waiting for the cluster to be ready
  5. The cluster is not getting into a ready state. There are a bunch of observations:
    a. config server and replica set pods are in ready state:
my-cluster-name-cfg-0                              2/2     Running   0          8m40s
my-cluster-name-cfg-1                              2/2     Running   0          8m17s
my-cluster-name-cfg-2                              2/2     Running   0          7m51s
my-cluster-name-rs0-0                              2/2     Running   0          8m40s
my-cluster-name-rs0-1                              2/2     Running   0          8m13s
my-cluster-name-rs0-2                              2/2     Running   0          7m48s

    b. mongos deployment is not even created
    c. operator pod logs show the following error:

2023-11-20T09:37:19.182Z	ERROR	Reconciler error	{"controller": "psmdb-controller", "object": {"name":"my-cluster-name","namespace":"default"}, "namespace": "default", "name": "my-cluster-name", "reconcileID": "fe6adb10-d63c-4d4f-9d73-3ef69f6ddd86", "error": "reconcile StatefulSet for cfg: failed to run smartUpdate: failed to check active jobs: getting PBM object: create PBM connection to,, get config server connection URI: mongo: no documents in result", "errorVerbose": "reconcile StatefulSet for cfg: failed to run smartUpdate: failed to check active jobs: getting PBM object: create PBM connection to,, get config server connection URI: mongo: no documents in result\*ReconcilePerconaServerMongoDB).Reconcile\n\t/go/src/\*Controller).Reconcile\n\t/go/pkg/mod/\*Controller).reconcileHandler\n\t/go/pkg/mod/\*Controller).processNextWorkItem\n\t/go/pkg/mod/\*Controller).Start.func2.2\n\t/go/pkg/mod/\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1598"}

This problem is only reproducible when you change the certificate in flight, while the cluster is still initializing. If you change the cert after the cluster is ready, it is fine.

@rdxmbr which gitops solution do you use?
AFAIK, Argo CD has sync waves that allow you to define the order of resource creation.
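A rough sketch of how that could look (the wave numbers are arbitrary; resources in lower waves are synced first):

```yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: mongodb-default-rdx-tls-external
  annotations:
    argocd.argoproj.io/sync-wave: "0"   # synced first
---
apiVersion: psmdb.percona.com/v1
kind: PerconaServerMongoDB
metadata:
  name: my-cluster-name
  annotations:
    argocd.argoproj.io/sync-wave: "1"   # applied after the certificates exist
```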

Meanwhile we will look into it and see what can be done.

Thanks, @Sergey_Pronin ,

I am using the GitLab Kubernetes agent (see “Using GitOps with a Kubernetes cluster” in the GitLab docs). Indeed, in my case the certificates are changed while the cluster is initializing.

I am going to have a look at Flux for GitOps (see “Tutorial: Set up Flux for GitOps” in the GitLab docs) next year.
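For what it's worth, Flux can express this ordering with two Kustomizations and dependsOn; a sketch, where the names, path, and intervals are placeholders:

```yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: mongodb-cluster
  namespace: flux-system
spec:
  dependsOn:
    - name: mongodb-certificates   # reconciled only after this Kustomization is ready
  interval: 10m
  path: ./clusters/prod/mongodb    # placeholder path to the PSMDB CR
  prune: true
  sourceRef:
    kind: GitRepository
    name: flux-system
```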

Kind regards!