CrashLoopBackOff when deploying a cluster

Description:

CrashLoopBackOff when deploying the latest version of the Percona Server for MongoDB Operator on Kubernetes.

Steps to Reproduce:

1- Clone the repository:

> git clone https://github.com/percona/percona-server-mongodb-operator.git

> cd percona-server-mongodb-operator

2- Install the operator

> kubectl apply -f deploy/bundle.yaml --server-side

> kubectl get all
NAME                                                          READY   STATUS    RESTARTS      AGE
pod/percona-server-mongodb-operator-84d74cd5f-9fxvc           1/1     Running   0             6s

NAME                                                     READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/percona-server-mongodb-operator          1/1     1            1           8s

NAME                                                                DESIRED   CURRENT   READY   AGE
replicaset.apps/percona-server-mongodb-operator-84d74cd5f           1         1         1       8s

3- Deploy the sample cluster

> kubectl apply -f deploy/cr.yaml

4- The cluster deployment fails

> kubectl get replicaset,pod
NAME                                                                DESIRED   CURRENT   READY   AGE
replicaset.apps/opensearch-operator-controller-manager-64c95dcb4b   1         1         1       113d

NAME                                                          READY   STATUS             RESTARTS      AGE
pod/my-cluster-name-cfg-0                                     1/2     Error              1 (3s ago)    21s
pod/my-cluster-name-rs0-0                                     1/2     CrashLoopBackOff   1 (2s ago)    20s
pod/opensearch-operator-controller-manager-64c95dcb4b-5lzdp   2/2     Running            4 (26d ago)   113s

> kubectl describe pod/my-cluster-name-cfg-0
Name:             my-cluster-name-cfg-0
Namespace:        default
Priority:         0
Service Account:  default
Node:             k8s-homolog-worker-5.ifrn.local/198.18.144.125
Start Time:       Wed, 12 Jun 2024 08:53:22 -0300
Labels:           app.kubernetes.io/component=cfg
                  app.kubernetes.io/instance=my-cluster-name
                  app.kubernetes.io/managed-by=percona-server-mongodb-operator
                  app.kubernetes.io/name=percona-server-mongodb
                  app.kubernetes.io/part-of=percona-server-mongodb
                  app.kubernetes.io/replset=cfg
                  controller-revision-hash=my-cluster-name-cfg-5d6664c796
                  statefulset.kubernetes.io/pod-name=my-cluster-name-cfg-0
Annotations:      cni.projectcalico.org/containerID: c86a05a50ad678715e792dc03401fca9b89f7df27ff569430ec4eec1c2438c0b
                  cni.projectcalico.org/podIP: 10.42.183.140/32
                  cni.projectcalico.org/podIPs: 10.42.183.140/32
                  percona.com/ssl-hash: 437cedec33bae276b535d152b6bc2b8c
                  percona.com/ssl-internal-hash: ec0777efe77141027f8dfe7bd80adaac
Status:           Running
IP:               10.42.183.140
IPs:
  IP:           10.42.183.140
Controlled By:  StatefulSet/my-cluster-name-cfg
Init Containers:
  mongo-init:
    Container ID:  containerd://1e440bcef165857625e51853d08a054e28879a95dc2836e05435cd70b04c6875
    Image:         perconalab/percona-server-mongodb-operator:main
    Image ID:      docker.io/perconalab/percona-server-mongodb-operator@sha256:8bc101dbe497d69f4c42c9733add8e5313ba757672e5ff5b6773070cf48a365a
    Port:          <none>
    Host Port:     <none>
    Command:
      /init-entrypoint.sh
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Wed, 12 Jun 2024 08:53:36 -0300
      Finished:     Wed, 12 Jun 2024 08:53:36 -0300
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     300m
      memory:  500M
    Requests:
      cpu:        300m
      memory:     500M
    Environment:  <none>
    Mounts:
      /data/db from mongod-data (rw)
      /opt/percona from bin (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ntsct (ro)
Containers:
  mongod:
    Container ID:  containerd://64704ce5c777792f4dbe625e08cfed9c1494e664a3bd67e325fdf80ffed4e9a4
    Image:         perconalab/percona-server-mongodb-operator:main-mongod7.0
    Image ID:      docker.io/perconalab/percona-server-mongodb-operator@sha256:8d8220be4c6e9554442a11da2ee6472ad6198654a349665203b9fea676630334
    Port:          27017/TCP
    Host Port:     0/TCP
    Command:
      /opt/percona/ps-entry.sh
    Args:
      --bind_ip_all
      --auth
      --dbpath=/data/db
      --port=27017
      --replSet=cfg
      --storageEngine=wiredTiger
      --relaxPermChecks
      --sslAllowInvalidCertificates
      --clusterAuthMode=x509
      --tlsMode=preferTLS
      --configsvr
      --enableEncryption
      --encryptionKeyFile=/etc/mongodb-encryption/encryption-key
      --wiredTigerCacheSizeGB=0.25
      --wiredTigerIndexPrefixCompression=true
      --quiet
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    132
      Started:      Wed, 12 Jun 2024 08:54:31 -0300
      Finished:     Wed, 12 Jun 2024 08:54:32 -0300
    Ready:          False
    Restart Count:  3
    Limits:
      cpu:     300m
      memory:  500M
    Requests:
      cpu:      300m
      memory:   500M
    Liveness:   exec [/opt/percona/mongodb-healthcheck k8s liveness --ssl --sslInsecure --sslCAFile /etc/mongodb-ssl/ca.crt --sslPEMKeyFile /tmp/tls.pem --startupDelaySeconds 7200] delay=60s timeout=10s period=30s #success=1 #failure=4
    Readiness:  exec [/opt/percona/mongodb-healthcheck k8s readiness --component mongod] delay=10s timeout=2s period=3s #success=1 #failure=3
    Environment Variables from:
      internal-my-cluster-name-users  Secret  Optional: false
    Environment:
      SERVICE_NAME:     my-cluster-name
      NAMESPACE:        default
      MONGODB_PORT:     27017
      MONGODB_REPLSET:  cfg
    Mounts:
      /data/db from mongod-data (rw)
      /etc/mongodb-encryption from my-cluster-name-mongodb-encryption-key (ro)
      /etc/mongodb-secrets from my-cluster-name-mongodb-keyfile (ro)
      /etc/mongodb-ssl from ssl (ro)
      /etc/mongodb-ssl-internal from ssl-internal (ro)
      /etc/users-secret from users-secret-file (rw)
      /opt/percona from bin (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ntsct (ro)
  backup-agent:
    Container ID:  containerd://6203f3b583ea3a4476e13e6a286f1b0acac496e34ce71bb2987b8c2c52f09f32
    Image:         perconalab/percona-server-mongodb-operator:main-backup
    Image ID:      docker.io/perconalab/percona-server-mongodb-operator@sha256:080824dda24f3419c8657fc329f190156e7e04d56fc6794341375e1c0ff5f365
    Port:          <none>
    Host Port:     <none>
    Command:
      /opt/percona/pbm-entry.sh
    Args:
      pbm-agent-entrypoint
    State:          Running
      Started:      Wed, 12 Jun 2024 08:53:39 -0300
    Ready:          True
    Restart Count:  0
    Environment:
      PBM_AGENT_MONGODB_USERNAME:  <set to the key 'MONGODB_BACKUP_USER' in secret 'internal-my-cluster-name-users'>      Optional: false
      PBM_AGENT_MONGODB_PASSWORD:  <set to the key 'MONGODB_BACKUP_PASSWORD' in secret 'internal-my-cluster-name-users'>  Optional: false
      PBM_MONGODB_REPLSET:         cfg
      PBM_MONGODB_PORT:            27017
      PBM_AGENT_SIDECAR:           true
      PBM_AGENT_SIDECAR_SLEEP:     5
      SHARDED:                     TRUE
      POD_NAME:                    my-cluster-name-cfg-0 (v1:metadata.name)
      PBM_MONGODB_URI:             mongodb://$(PBM_AGENT_MONGODB_USERNAME):$(PBM_AGENT_MONGODB_PASSWORD)@$(POD_NAME)
      PBM_AGENT_TLS_ENABLED:       true
    Mounts:
      /data/db from mongod-data (rw)
      /etc/mongodb-ssl from ssl (ro)
      /opt/percona from bin (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ntsct (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  mongod-data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  mongod-data-my-cluster-name-cfg-0
    ReadOnly:   false
  my-cluster-name-mongodb-keyfile:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  my-cluster-name-mongodb-keyfile
    Optional:    false
  bin:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  my-cluster-name-mongodb-encryption-key:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  my-cluster-name-mongodb-encryption-key
    Optional:    false
  ssl:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  my-cluster-name-ssl
    Optional:    false
  ssl-internal:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  my-cluster-name-ssl-internal
    Optional:    true
  users-secret-file:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  internal-my-cluster-name-users
    Optional:    false
  kube-api-access-ntsct:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason                  Age                From                     Message
  ----     ------                  ----               ----                     -------
  Normal   Scheduled               81s                default-scheduler        Successfully assigned default/my-cluster-name-cfg-0 to k8s-homolog-worker-5.ifrn.local
  Normal   SuccessfulAttachVolume  71s                attachdetach-controller  AttachVolume.Attach succeeded for volume "pvc-24edd403-ff3a-454e-b24a-ae437f922e31"
  Normal   Pulling                 69s                kubelet                  Pulling image "perconalab/percona-server-mongodb-operator:main"
  Normal   Pulled                  68s                kubelet                  Successfully pulled image "perconalab/percona-server-mongodb-operator:main" in 857.393954ms (857.451699ms including waiting)
  Normal   Created                 68s                kubelet                  Created container mongo-init
  Normal   Started                 67s                kubelet                  Started container mongo-init
  Normal   Pulled                  66s                kubelet                  Successfully pulled image "perconalab/percona-server-mongodb-operator:main-mongod7.0" in 814.348341ms (814.376774ms including waiting)
  Normal   Pulling                 65s                kubelet                  Pulling image "perconalab/percona-server-mongodb-operator:main-backup"
  Normal   Created                 64s                kubelet                  Created container backup-agent
  Normal   Pulled                  64s                kubelet                  Successfully pulled image "perconalab/percona-server-mongodb-operator:main-backup" in 827.620178ms (827.646302ms including waiting)
  Normal   Started                 64s                kubelet                  Started container backup-agent
  Normal   Pulled                  63s                kubelet                  Successfully pulled image "perconalab/percona-server-mongodb-operator:main-mongod7.0" in 796.238217ms (796.262767ms including waiting)
  Normal   Pulling                 45s (x3 over 66s)  kubelet                  Pulling image "perconalab/percona-server-mongodb-operator:main-mongod7.0"
  Normal   Started                 44s (x3 over 65s)  kubelet                  Started container mongod
  Normal   Created                 44s (x3 over 66s)  kubelet                  Created container mongod
  Normal   Pulled                  44s                kubelet                  Successfully pulled image "perconalab/percona-server-mongodb-operator:main-mongod7.0" in 944.59321ms (944.613035ms including waiting)
  Warning  BackOff                 40s (x5 over 61s)  kubelet                  Back-off restarting failed container mongod in pod my-cluster-name-cfg-0_default(9790f9e4-3161-46ec-8ecf-a0de4e248348)


> kubectl logs pod/my-cluster-name-cfg-0
Defaulted container "mongod" out of: mongod, backup-agent, mongo-init (init)
+ '[' - = - ']'
+ set -- mongod --bind_ip_all --auth --dbpath=/data/db --port=27017 --replSet=cfg --storageEngine=wiredTiger --relaxPermChecks --sslAllowInvalidCertificates --clusterAuthMode=x509 --tlsMode=preferTLS --configsvr --enableEncryption --encryptionKeyFile=/etc/mongodb-encryption/encryption-key --wiredTigerCacheSizeGB=0.25 --wiredTigerIndexPrefixCompression=true --quiet
+ originalArgOne=mongod
+ [[ mongod == mongo* ]]
++ id -u
+ '[' 1001 = 0 ']'
+ [[ mongod == mongo* ]]
+ numa=(numactl --interleave=all)
+ numactl --interleave=all true
+ set -- numactl --interleave=all mongod --bind_ip_all --auth --dbpath=/data/db --port=27017 --replSet=cfg --storageEngine=wiredTiger --relaxPermChecks --sslAllowInvalidCertificates --clusterAuthMode=x509 --tlsMode=preferTLS --configsvr --enableEncryption --encryptionKeyFile=/etc/mongodb-encryption/encryption-key --wiredTigerCacheSizeGB=0.25 --wiredTigerIndexPrefixCompression=true --quiet
++ mongod --version
++ head -1
++ awk '{print $3}'
++ awk -F. '{print $1"."$2}'
+ MONGODB_VERSION=

> kubectl logs pod/my-cluster-name-rs0-0
Defaulted container "mongod" out of: mongod, backup-agent, mongo-init (init)
+ '[' - = - ']'
+ set -- mongod --bind_ip_all --auth --dbpath=/data/db --port=27017 --replSet=rs0 --storageEngine=wiredTiger --relaxPermChecks --sslAllowInvalidCertificates --clusterAuthMode=x509 --tlsMode=preferTLS --shardsvr --enableEncryption --encryptionKeyFile=/etc/mongodb-encryption/encryption-key --wiredTigerCacheSizeGB=0.25 --wiredTigerIndexPrefixCompression=true --quiet
+ originalArgOne=mongod
+ [[ mongod == mongo* ]]
++ id -u
+ '[' 1001 = 0 ']'
+ [[ mongod == mongo* ]]
+ numa=(numactl --interleave=all)
+ numactl --interleave=all true
+ set -- numactl --interleave=all mongod --bind_ip_all --auth --dbpath=/data/db --port=27017 --replSet=rs0 --storageEngine=wiredTiger --relaxPermChecks --sslAllowInvalidCertificates --clusterAuthMode=x509 --tlsMode=preferTLS --shardsvr --enableEncryption --encryptionKeyFile=/etc/mongodb-encryption/encryption-key --wiredTigerCacheSizeGB=0.25 --wiredTigerIndexPrefixCompression=true --quiet
++ mongod --version
++ awk '{print $3}'
++ awk -F. '{print $1"."$2}'
++ head -1
+ MONGODB_VERSION=
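In both logs the entrypoint dies before `mongod --version` prints anything (so `MONGODB_VERSION` ends up empty), and the pod's last state reports exit code 132. Exit codes above 128 encode a fatal signal (signal = exit code − 128); a quick way to decode it in any bash shell:

```shell
# Exit codes > 128 mean the process was killed by a signal: signal = code - 128
code=132
sig=$((code - 128))   # 4
kill -l "$sig"        # prints ILL (SIGILL, illegal instruction) in bash
```

SIGILL from mongod is commonly caused by the node CPU (or the VM's virtual CPU model) lacking instructions the binary was built with; MongoDB 7.0 builds require AVX, which can be checked on the worker node with `grep -m1 avx /proc/cpuinfo` (no output means no AVX). This is a hypothesis to verify on the `k8s-homolog-worker-*` nodes, not a confirmed diagnosis.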

Version:

Latest revision (main branch):
commit 16550593ef4e5cbdaf2f4c2e96330eb713d30853 (HEAD → main, origin/main, origin/HEAD)
Author: Pavel Tankov 4014969+ptankov@users.noreply.github.com
Date: Wed Jun 12 14:57:38 2024 +0300

Kubernetes version: v1.27.13+rke2r1

Logs:

See the logs above.

Expected Result:

The cluster pods reach a Running/Ready state.

Actual Result:

The mongod containers exit with code 132 and the cfg/rs0 pods enter CrashLoopBackOff.

Please don’t use the main branch. The latest supported operator version at this time is 1.16.


Hi @Ivan_Groenewold !

I tested it with stable versions, and the problem was the same.

> git checkout v1.16.0
HEAD is now at 54e1b18d upgrade-consistency-sharded-tls doesn't work on minikube because of AntiAffinity - removing from test suite for minikube

> cat version/version.go
package version

var (
	Version = "1.16.0"
)

Installation:

> kubectl apply -f deploy/bundle.yaml --server-side
customresourcedefinition.apiextensions.k8s.io/perconaservermongodbbackups.psmdb.percona.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/perconaservermongodbrestores.psmdb.percona.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/perconaservermongodbs.psmdb.percona.com serverside-applied
role.rbac.authorization.k8s.io/percona-server-mongodb-operator serverside-applied
serviceaccount/percona-server-mongodb-operator serverside-applied
rolebinding.rbac.authorization.k8s.io/service-account-percona-server-mongodb-operator serverside-applied
deployment.apps/percona-server-mongodb-operator serverside-applied

> kubectl apply -f deploy/cr.yaml
perconaservermongodb.psmdb.percona.com/my-cluster-name created

Error:

> kubectl get pod
NAME                                                      READY   STATUS             RESTARTS      AGE
my-cluster-name-cfg-0                                     1/2     CrashLoopBackOff   4 (40s ago)   2m32s
my-cluster-name-rs0-0                                     1/2     CrashLoopBackOff   4 (36s ago)   2m32s
percona-server-mongodb-operator-657d46f4b5-586d9          1/1     Running            0             3m14s

> kubectl describe pod/my-cluster-name-cfg-0
Name:             my-cluster-name-cfg-0
Namespace:        default
Priority:         0
Service Account:  default
Node:             k8s-homolog-worker-7.ifrn.local/198.18.144.127
Start Time:       Thu, 13 Jun 2024 08:47:53 -0300
Labels:           app.kubernetes.io/component=cfg
                  app.kubernetes.io/instance=my-cluster-name
                  app.kubernetes.io/managed-by=percona-server-mongodb-operator
                  app.kubernetes.io/name=percona-server-mongodb
                  app.kubernetes.io/part-of=percona-server-mongodb
                  app.kubernetes.io/replset=cfg
                  controller-revision-hash=my-cluster-name-cfg-77d7977457
                  statefulset.kubernetes.io/pod-name=my-cluster-name-cfg-0
Annotations:      cni.projectcalico.org/containerID: 05b275d334abd7b191b1de76f9008c749a16788c94fdf770a751e546be15f220
                  cni.projectcalico.org/podIP: 10.42.207.80/32
                  cni.projectcalico.org/podIPs: 10.42.207.80/32
                  percona.com/ssl-hash: 76678878f104a39679f6b645cb3e10af
                  percona.com/ssl-internal-hash: b7f297bdaf7b906c95e92d76a351d963
Status:           Running
IP:               10.42.207.80
IPs:
  IP:           10.42.207.80
Controlled By:  StatefulSet/my-cluster-name-cfg
Init Containers:
  mongo-init:
    Container ID:  containerd://4dc4ef7f20c796d3f1a591feb7eb4c84b04f019876b0aeb4044a212355bfc007
    Image:         percona/percona-server-mongodb-operator:1.16.0
    Image ID:      docker.io/percona/percona-server-mongodb-operator@sha256:e9f7d80be465bbf03bc0b1ba47050561bbfc02a0796dc2dbbef72196e64afd32
    Port:          <none>
    Host Port:     <none>
    Command:
      /init-entrypoint.sh
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Thu, 13 Jun 2024 08:48:06 -0300
      Finished:     Thu, 13 Jun 2024 08:48:06 -0300
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     300m
      memory:  500M
    Requests:
      cpu:        300m
      memory:     500M
    Environment:  <none>
    Mounts:
      /data/db from mongod-data (rw)
      /opt/percona from bin (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rpsl2 (ro)
Containers:
  mongod:
    Container ID:  containerd://c3bcdcb08af474992c7a9d3b6929af5c67624315b4e0ff3f90081925c6585c35
    Image:         percona/percona-server-mongodb:7.0.8-5
    Image ID:      docker.io/percona/percona-server-mongodb@sha256:f81d1353d5497c5be36ee525f742d498ee6e1df9aba9502660c50f0fc98743b6
    Port:          27017/TCP
    Host Port:     0/TCP
    Command:
      /opt/percona/ps-entry.sh
    Args:
      --bind_ip_all
      --auth
      --dbpath=/data/db
      --port=27017
      --replSet=cfg
      --storageEngine=wiredTiger
      --relaxPermChecks
      --sslAllowInvalidCertificates
      --clusterAuthMode=x509
      --tlsMode=preferTLS
      --configsvr
      --enableEncryption
      --encryptionKeyFile=/etc/mongodb-encryption/encryption-key
      --wiredTigerCacheSizeGB=0.25
      --wiredTigerIndexPrefixCompression=true
      --quiet
    State:          Terminated
      Reason:       Error
      Exit Code:    132
      Started:      Thu, 13 Jun 2024 08:51:16 -0300
      Finished:     Thu, 13 Jun 2024 08:51:16 -0300
    Last State:     Terminated
      Reason:       Error
      Exit Code:    132
      Started:      Thu, 13 Jun 2024 08:49:44 -0300
      Finished:     Thu, 13 Jun 2024 08:49:45 -0300
    Ready:          False
    Restart Count:  5
    Limits:
      cpu:     300m
      memory:  500M
    Requests:
      cpu:      300m
      memory:   500M
    Liveness:   exec [/opt/percona/mongodb-healthcheck k8s liveness --ssl --sslInsecure --sslCAFile /etc/mongodb-ssl/ca.crt --sslPEMKeyFile /tmp/tls.pem --startupDelaySeconds 7200] delay=60s timeout=10s period=30s #success=1 #failure=4
    Readiness:  exec [/opt/percona/mongodb-healthcheck k8s readiness --component mongod] delay=10s timeout=2s period=3s #success=1 #failure=3
    Environment Variables from:
      internal-my-cluster-name-users  Secret  Optional: false
    Environment:
      SERVICE_NAME:     my-cluster-name
      NAMESPACE:        default
      MONGODB_PORT:     27017
      MONGODB_REPLSET:  cfg
    Mounts:
      /data/db from mongod-data (rw)
      /etc/mongodb-encryption from my-cluster-name-mongodb-encryption-key (ro)
      /etc/mongodb-secrets from my-cluster-name-mongodb-keyfile (ro)
      /etc/mongodb-ssl from ssl (ro)
      /etc/mongodb-ssl-internal from ssl-internal (ro)
      /etc/users-secret from users-secret-file (rw)
      /opt/percona from bin (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rpsl2 (ro)
  backup-agent:
    Container ID:  containerd://5a59ba988ad5e12063ae3b791ffd38e971de683bb58d223afb6781ef52b62d1c
    Image:         percona/percona-backup-mongodb:2.4.1
    Image ID:      docker.io/percona/percona-backup-mongodb@sha256:a45d277af98090781a6149ccfb99d5bc4431ec53ba3b36ea644332851412a17e
    Port:          <none>
    Host Port:     <none>
    Command:
      /opt/percona/pbm-entry.sh
    Args:
      pbm-agent-entrypoint
    State:          Running
      Started:      Thu, 13 Jun 2024 08:48:10 -0300
    Ready:          True
    Restart Count:  0
    Environment:
      PBM_AGENT_MONGODB_USERNAME:  <set to the key 'MONGODB_BACKUP_USER' in secret 'internal-my-cluster-name-users'>      Optional: false
      PBM_AGENT_MONGODB_PASSWORD:  <set to the key 'MONGODB_BACKUP_PASSWORD' in secret 'internal-my-cluster-name-users'>  Optional: false
      PBM_MONGODB_REPLSET:         cfg
      PBM_MONGODB_PORT:            27017
      PBM_AGENT_SIDECAR:           true
      PBM_AGENT_SIDECAR_SLEEP:     5
      SHARDED:                     TRUE
      POD_NAME:                    my-cluster-name-cfg-0 (v1:metadata.name)
      PBM_MONGODB_URI:             mongodb://$(PBM_AGENT_MONGODB_USERNAME):$(PBM_AGENT_MONGODB_PASSWORD)@$(POD_NAME)
      PBM_AGENT_TLS_ENABLED:       true
    Mounts:
      /data/db from mongod-data (rw)
      /etc/mongodb-ssl from ssl (ro)
      /opt/percona from bin (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rpsl2 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  mongod-data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  mongod-data-my-cluster-name-cfg-0
    ReadOnly:   false
  my-cluster-name-mongodb-keyfile:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  my-cluster-name-mongodb-keyfile
    Optional:    false
  bin:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  my-cluster-name-mongodb-encryption-key:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  my-cluster-name-mongodb-encryption-key
    Optional:    false
  ssl:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  my-cluster-name-ssl
    Optional:    false
  ssl-internal:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  my-cluster-name-ssl-internal
    Optional:    true
  users-secret-file:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  internal-my-cluster-name-users
    Optional:    false
  kube-api-access-rpsl2:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason                  Age                    From                     Message
  ----     ------                  ----                   ----                     -------
  Normal   Scheduled               3m25s                  default-scheduler        Successfully assigned default/my-cluster-name-cfg-0 to k8s-homolog-worker-7.ifrn.local
  Normal   SuccessfulAttachVolume  3m15s                  attachdetach-controller  AttachVolume.Attach succeeded for volume "pvc-24edd403-ff3a-454e-b24a-ae437f922e31"
  Normal   Pulling                 3m13s                  kubelet                  Pulling image "percona/percona-server-mongodb-operator:1.16.0"
  Normal   Pulled                  3m12s                  kubelet                  Successfully pulled image "percona/percona-server-mongodb-operator:1.16.0" in 928.663032ms (928.693529ms including waiting)
  Normal   Created                 3m12s                  kubelet                  Created container mongo-init
  Normal   Started                 3m12s                  kubelet                  Started container mongo-init
  Normal   Pulling                 3m10s                  kubelet                  Pulling image "percona/percona-backup-mongodb:2.4.1"
  Normal   Pulled                  3m10s                  kubelet                  Successfully pulled image "percona/percona-server-mongodb:7.0.8-5" in 794.989398ms (795.009361ms including waiting)
  Normal   Created                 3m9s                   kubelet                  Created container backup-agent
  Normal   Pulled                  3m9s                   kubelet                  Successfully pulled image "percona/percona-backup-mongodb:2.4.1" in 871.363322ms (871.39313ms including waiting)
  Normal   Started                 3m8s                   kubelet                  Started container backup-agent
  Normal   Pulled                  3m7s                   kubelet                  Successfully pulled image "percona/percona-server-mongodb:7.0.8-5" in 788.230857ms (788.250651ms including waiting)
  Normal   Pulling                 2m49s (x3 over 3m11s)  kubelet                  Pulling image "percona/percona-server-mongodb:7.0.8-5"
  Normal   Started                 2m48s (x3 over 3m10s)  kubelet                  Started container mongod
  Normal   Created                 2m48s (x3 over 3m10s)  kubelet                  Created container mongod
  Normal   Pulled                  2m48s                  kubelet                  Successfully pulled image "percona/percona-server-mongodb:7.0.8-5" in 788.019493ms (788.054163ms including waiting)
  Warning  BackOff                 2m44s (x5 over 3m5s)   kubelet                  Back-off restarting failed container mongod in pod my-cluster-name-cfg-0_default(e6fe095a-1e50-4b34-8860-f4cdb543603d)

> kubectl logs pod/my-cluster-name-cfg-0
Defaulted container "mongod" out of: mongod, backup-agent, mongo-init (init)
+ '[' - = - ']'
+ set -- mongod --bind_ip_all --auth --dbpath=/data/db --port=27017 --replSet=cfg --storageEngine=wiredTiger --relaxPermChecks --sslAllowInvalidCertificates --clusterAuthMode=x509 --tlsMode=preferTLS --configsvr --enableEncryption --encryptionKeyFile=/etc/mongodb-encryption/encryption-key --wiredTigerCacheSizeGB=0.25 --wiredTigerIndexPrefixCompression=true --quiet
+ originalArgOne=mongod
+ [[ mongod == mongo* ]]
++ id -u
+ '[' 1001 = 0 ']'
+ [[ mongod == mongo* ]]
+ numa=(numactl --interleave=all)
+ numactl --interleave=all true
+ set -- numactl --interleave=all mongod --bind_ip_all --auth --dbpath=/data/db --port=27017 --replSet=cfg --storageEngine=wiredTiger --relaxPermChecks --sslAllowInvalidCertificates --clusterAuthMode=x509 --tlsMode=preferTLS --configsvr --enableEncryption --encryptionKeyFile=/etc/mongodb-encryption/encryption-key --wiredTigerCacheSizeGB=0.25 --wiredTigerIndexPrefixCompression=true --quiet
++ head -1
++ awk '{print $3}'
++ mongod --version
++ awk -F. '{print $1"."$2}'
+ MONGODB_VERSION=

Did you clean up all previous persistent volumes after switching to v1.16? Make sure to start from scratch. Also, if the problem persists, please share the logs of the operator pod.
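For reference, the operator logs can be pulled by Deployment name, without first looking up the generated pod name (a sketch; adjust the namespace if the operator is not in `default`):

```shell
# Tail the operator's recent logs via its Deployment; kubectl resolves a pod for you
kubectl logs deployment/percona-server-mongodb-operator --tail=100
```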

Hi @Ivan_Groenewold. Thank you for your support.

After removing the cluster and uninstalling the operator, I deleted the PVs/PVCs:

> k get pvc | grep mongo | cut -d ' ' -f1  | xargs kubectl delete pvc
persistentvolumeclaim "mongod-data-minimal-cluster-cfg-0" deleted
persistentvolumeclaim "mongod-data-minimal-cluster-rs0-0" deleted
persistentvolumeclaim "mongod-data-my-cluster-name-cfg-0" deleted
persistentvolumeclaim "mongod-data-my-cluster-name-rs0-0" deleted
> k get pv | grep mongo | cut -d ' ' -f1  | xargs kubectl delete pv
persistentvolume "pvc-24edd403-ff3a-454e-b24a-ae437f922e31" deleted
persistentvolume "pvc-9cf33d85-26d7-4e97-978a-7c3d67dbf931" deleted
persistentvolume "pvc-ab3645fa-d7cc-48cf-97cf-4c046e72e24f" deleted
persistentvolume "pvc-ad30711a-133e-4095-afea-177d8ceaaf5c" deleted
> k get all | grep -i mongo
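As an aside, `kubectl get pvc -o name` already emits one `persistentvolumeclaim/<name>` per line, so the `cut -d ' ' -f1` step can be dropped and the filtering becomes a plain grep (a sketch, demonstrated here on captured output rather than a live cluster):

```shell
# With -o name, the live pipeline would be:
#   kubectl get pvc -o name | grep mongod-data | xargs -r kubectl delete
# The filtering step itself, shown on sample output:
sample='persistentvolumeclaim/mongod-data-my-cluster-name-cfg-0
persistentvolumeclaim/mongod-data-my-cluster-name-rs0-0
persistentvolumeclaim/other-app-data'
printf '%s\n' "$sample" | grep mongod-data
```

`xargs -r` (GNU) skips running `kubectl delete` entirely when the grep matches nothing.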

Install the operator and create the cluster:

> kubectl apply -f deploy/bundle.yaml --server-side
customresourcedefinition.apiextensions.k8s.io/perconaservermongodbbackups.psmdb.percona.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/perconaservermongodbrestores.psmdb.percona.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/perconaservermongodbs.psmdb.percona.com serverside-applied
role.rbac.authorization.k8s.io/percona-server-mongodb-operator serverside-applied
serviceaccount/percona-server-mongodb-operator serverside-applied
rolebinding.rbac.authorization.k8s.io/service-account-percona-server-mongodb-operator serverside-applied
deployment.apps/percona-server-mongodb-operator serverside-applied

> kubectl  apply -f deploy/ssl-secrets.yaml
secret/my-cluster-name-ssl unchanged
secret/my-cluster-name-ssl-internal unchanged

> kubectl apply -f deploy/cr.yaml
perconaservermongodb.psmdb.percona.com/my-cluster-name created

Cluster status:

> kubectl get PerconaServerMongoDB
NAME              ENDPOINT   STATUS   AGE
my-cluster-name              error    92s

> k describe PerconaServerMongoDB
Name:         my-cluster-name
Namespace:    default
Labels:       <none>
Annotations:  <none>
API Version:  psmdb.percona.com/v1
Kind:         PerconaServerMongoDB
Metadata:
  Creation Timestamp:  2024-06-13T14:11:41Z
  Finalizers:
    delete-psmdb-pods-in-order
  Generation:        1
  Resource Version:  129720674
  UID:               3683d248-084c-488c-92bd-07cf8456d155
Spec:
  Backup:
    Enabled:  true
    Image:    percona/percona-backup-mongodb:2.4.1
    Pitr:
      Compression Level:  6
      Compression Type:   gzip
      Enabled:            false
      Oplog Only:         false
  Cr Version:             1.16.0
  Image:                  percona/percona-server-mongodb:7.0.8-5
  Image Pull Policy:      Always
  Pmm:
    Enabled:      false
    Image:        percona/pmm-client:2.41.2
    Server Host:  monitoring-service
  Replsets:
    Affinity:
      Anti Affinity Topology Key:  kubernetes.io/hostname
    Arbiter:
      Affinity:
        Anti Affinity Topology Key:  kubernetes.io/hostname
      Enabled:                       false
      Resources:
        Limits:
          Cpu:     300m
          Memory:  0.5G
        Requests:
          Cpu:     300m
          Memory:  0.5G
      Size:        1
    Expose:
      Enabled:      false
      Expose Type:  ClusterIP
    Name:           rs0
    Nonvoting:
      Affinity:
        Anti Affinity Topology Key:  kubernetes.io/hostname
      Enabled:                       false
      Pod Disruption Budget:
        Max Unavailable:  1
      Resources:
        Limits:
          Cpu:     300m
          Memory:  0.5G
        Requests:
          Cpu:     300m
          Memory:  0.5G
      Size:        3
      Volume Spec:
        Persistent Volume Claim:
          Resources:
            Requests:
              Storage:  3Gi
    Pod Disruption Budget:
      Max Unavailable:  1
    Resources:
      Limits:
        Cpu:     300m
        Memory:  0.5G
      Requests:
        Cpu:     300m
        Memory:  0.5G
    Size:        3
    Volume Spec:
      Persistent Volume Claim:
        Resources:
          Requests:
            Storage:  3Gi
  Secrets:
    Encryption Key:  my-cluster-name-mongodb-encryption-key
    Users:           my-cluster-name-secrets
  Sharding:
    Configsvr Repl Set:
      Affinity:
        Anti Affinity Topology Key:  kubernetes.io/hostname
      Expose:
        Enabled:      false
        Expose Type:  ClusterIP
      Pod Disruption Budget:
        Max Unavailable:  1
      Resources:
        Limits:
          Cpu:     300m
          Memory:  0.5G
        Requests:
          Cpu:     300m
          Memory:  0.5G
      Size:        3
      Volume Spec:
        Persistent Volume Claim:
          Resources:
            Requests:
              Storage:  3Gi
    Enabled:            true
    Mongos:
      Affinity:
        Anti Affinity Topology Key:  kubernetes.io/hostname
      Expose:
        Expose Type:  ClusterIP
      Pod Disruption Budget:
        Max Unavailable:  1
      Resources:
        Limits:
          Cpu:     300m
          Memory:  0.5G
        Requests:
          Cpu:      300m
          Memory:   0.5G
      Size:         3
  Update Strategy:  SmartUpdate
  Upgrade Options:
    Apply:                     disabled
    Schedule:                  0 2 * * *
    Set FCV:                   false
    Version Service Endpoint:  https://check.percona.com
Status:
  Conditions:
    Last Transition Time:  2024-06-13T14:11:52Z
    Message:               TLS secrets handler: "check cert-manager: Internal error occurred: failed calling webhook "webhook.cert-manager.io": failed to call webhook: Post "https://cert-manager-webhook.cert-manager.svc:443/validate?timeout=30s": no endpoints available for service "cert-manager-webhook"". Please create your TLS secret my-cluster-name-ssl manually or setup cert-manager correctly
    Reason:                ErrorReconcile
    Status:                True
    Type:                  error
  Message:                 Error: TLS secrets handler: "check cert-manager: Internal error occurred: failed calling webhook "webhook.cert-manager.io": failed to call webhook: Post "https://cert-manager-webhook.cert-manager.svc:443/validate?timeout=30s": no endpoints available for service "cert-manager-webhook"". Please create your TLS secret my-cluster-name-ssl manually or setup cert-manager correctly
  Ready:                   0
  Size:                    0
  State:                   error
Events:                    <none>
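The status message says the cert-manager webhook has no endpoints backing it. A diagnostic sketch worth running at this point, assuming cert-manager is installed in the usual `cert-manager` namespace:

```shell
# The webhook pod must be Running and Ready for the operator's calls to succeed
kubectl -n cert-manager get pods

# "no endpoints available" means this Endpoints object is empty,
# i.e. no ready pod is backing the cert-manager-webhook Service
kubectl -n cert-manager get endpoints cert-manager-webhook

# Recent webhook logs often show why readiness is failing
kubectl -n cert-manager logs deploy/cert-manager-webhook --tail=50
```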

Secrets:

> k get secret
NAME                                        TYPE                 DATA   AGE
internal-my-cluster-name-users              Opaque               10     6m9s
my-cluster-name-secrets                     Opaque               11     6m9s
my-cluster-name-ssl                         kubernetes.io/tls    3      38s
my-cluster-name-ssl-internal                kubernetes.io/tls    3      38s

Operator logs:

> k logs opensearch-operator-controller-manager-64c95dcb4b-5lzdp
Defaulted container "kube-rbac-proxy" out of: kube-rbac-proxy, operator-controller-manager
Flag --logtostderr has been deprecated, will be removed in a future release, see https://github.com/kubernetes/enhancements/tree/master/keps/sig-instrumentation/2845-deprecate-klog-specific-flags-in-k8s-components
I0315 01:22:32.349460       1 flags.go:64] FLAG: --add-dir-header="false"
I0315 01:22:32.349560       1 flags.go:64] FLAG: --allow-paths="[]"
I0315 01:22:32.349571       1 flags.go:64] FLAG: --alsologtostderr="false"
I0315 01:22:32.349577       1 flags.go:64] FLAG: --auth-header-fields-enabled="false"
I0315 01:22:32.349584       1 flags.go:64] FLAG: --auth-header-groups-field-name="x-remote-groups"
I0315 01:22:32.349592       1 flags.go:64] FLAG: --auth-header-groups-field-separator="|"
I0315 01:22:32.349597       1 flags.go:64] FLAG: --auth-header-user-field-name="x-remote-user"
I0315 01:22:32.349603       1 flags.go:64] FLAG: --auth-token-audiences="[]"
I0315 01:22:32.349611       1 flags.go:64] FLAG: --client-ca-file=""
I0315 01:22:32.349616       1 flags.go:64] FLAG: --config-file=""
I0315 01:22:32.349623       1 flags.go:64] FLAG: --help="false"
I0315 01:22:32.349629       1 flags.go:64] FLAG: --http2-disable="false"
I0315 01:22:32.349635       1 flags.go:64] FLAG: --http2-max-concurrent-streams="100"
I0315 01:22:32.349653       1 flags.go:64] FLAG: --http2-max-size="262144"
I0315 01:22:32.349659       1 flags.go:64] FLAG: --ignore-paths="[]"
I0315 01:22:32.349665       1 flags.go:64] FLAG: --insecure-listen-address=""
I0315 01:22:32.349671       1 flags.go:64] FLAG: --kubeconfig=""
I0315 01:22:32.349676       1 flags.go:64] FLAG: --log-backtrace-at=":0"
I0315 01:22:32.349685       1 flags.go:64] FLAG: --log-dir=""
I0315 01:22:32.349691       1 flags.go:64] FLAG: --log-file=""
I0315 01:22:32.349696       1 flags.go:64] FLAG: --log-file-max-size="1800"
I0315 01:22:32.349702       1 flags.go:64] FLAG: --log-flush-frequency="5s"
I0315 01:22:32.349708       1 flags.go:64] FLAG: --logtostderr="true"
I0315 01:22:32.349714       1 flags.go:64] FLAG: --oidc-ca-file=""
I0315 01:22:32.349720       1 flags.go:64] FLAG: --oidc-clientID=""
I0315 01:22:32.349725       1 flags.go:64] FLAG: --oidc-groups-claim="groups"
I0315 01:22:32.349731       1 flags.go:64] FLAG: --oidc-groups-prefix=""
I0315 01:22:32.349744       1 flags.go:64] FLAG: --oidc-issuer=""
I0315 01:22:32.349750       1 flags.go:64] FLAG: --oidc-sign-alg="[RS256]"
I0315 01:22:32.349765       1 flags.go:64] FLAG: --oidc-username-claim="email"
I0315 01:22:32.349771       1 flags.go:64] FLAG: --one-output="false"
I0315 01:22:32.349776       1 flags.go:64] FLAG: --proxy-endpoints-port="10443"
I0315 01:22:32.349783       1 flags.go:64] FLAG: --secure-listen-address="0.0.0.0:8443"
I0315 01:22:32.349788       1 flags.go:64] FLAG: --skip-headers="false"
I0315 01:22:32.349794       1 flags.go:64] FLAG: --skip-log-headers="false"
I0315 01:22:32.349799       1 flags.go:64] FLAG: --stderrthreshold="2"
I0315 01:22:32.349804       1 flags.go:64] FLAG: --tls-cert-file=""
I0315 01:22:32.349810       1 flags.go:64] FLAG: --tls-cipher-suites="[]"
I0315 01:22:32.349816       1 flags.go:64] FLAG: --tls-min-version="VersionTLS12"
I0315 01:22:32.349822       1 flags.go:64] FLAG: --tls-private-key-file=""
I0315 01:22:32.349827       1 flags.go:64] FLAG: --tls-reload-interval="1m0s"
I0315 01:22:32.349834       1 flags.go:64] FLAG: --upstream="http://127.0.0.1:8080/"
I0315 01:22:32.349840       1 flags.go:64] FLAG: --upstream-ca-file=""
I0315 01:22:32.349845       1 flags.go:64] FLAG: --upstream-client-cert-file=""
I0315 01:22:32.349850       1 flags.go:64] FLAG: --upstream-client-key-file=""
I0315 01:22:32.349856       1 flags.go:64] FLAG: --upstream-force-h2c="false"
I0315 01:22:32.349861       1 flags.go:64] FLAG: --v="10"
I0315 01:22:32.349867       1 flags.go:64] FLAG: --version="false"
I0315 01:22:32.349874       1 flags.go:64] FLAG: --vmodule=""
W0315 01:22:32.352972       1 kube-rbac-proxy.go:155]
==== Deprecation Warning ======================

Insecure listen address will be removed.
Using --insecure-listen-address won't be possible!

The ability to run kube-rbac-proxy without TLS certificates will be removed.
Not using --tls-cert-file and --tls-private-key-file won't be possible!

For more information, please go to https://github.com/brancz/kube-rbac-proxy/issues/187

===============================================


I0315 01:22:32.353010       1 kube-rbac-proxy.go:284] Valid token audiences:
I0315 01:22:32.353079       1 kube-rbac-proxy.go:378] Generating self signed cert as no cert is provided
I0315 01:23:04.986255       1 kube-rbac-proxy.go:490] Starting TCP socket on 0.0.0.0:10443
I0315 01:23:04.995697       1 kube-rbac-proxy.go:442] Starting TCP socket on 0.0.0.0:8443
I0315 01:23:05.026929       1 kube-rbac-proxy.go:497] Listening securely on 0.0.0.0:10443 for proxy endpoints
I0315 01:23:05.026984       1 kube-rbac-proxy.go:449] Listening securely on 0.0.0.0:8443

Cert-Manager logs:

> kubectl -n cert-manager logs cert-manager-594b84b49d-6rrhc
...
I0613 12:06:56.438845       1 conditions.go:192] Found status change for Certificate "my-cluster-name-ca-cert" condition "Ready": "True" -> "False"; setting lastTransitionTime to 2024-06-13 12:06:56.438830211 +0000 UTC m=+7814694.864597690
I0613 12:06:56.439287       1 conditions.go:203] Setting lastTransitionTime for Certificate "my-cluster-name-ca-cert" condition "Issuing" to 2024-06-13 12:06:56.439268976 +0000 UTC m=+7814694.865036494
E0613 12:06:56.439578       1 controller.go:134] "issuer in work queue no longer exists" err="issuer.cert-manager.io \"my-cluster-name-psmdb-ca-issuer\" not found" logger="cert-manager.issuers"
I0613 12:06:56.472762       1 controller.go:162] "re-queuing item due to optimistic locking on resource" logger="cert-manager.certificates-readiness" key="default/my-cluster-name-ca-cert" error="Operation cannot be fulfilled on certificates.cert-manager.io \"my-cluster-name-ca-cert\": the object has been modified; please apply your changes to the latest version and try again"
I0613 12:06:56.472957       1 conditions.go:192] Found status change for Certificate "my-cluster-name-ca-cert" condition "Ready": "True" -> "False"; setting lastTransitionTime to 2024-06-13 12:06:56.472947853 +0000 UTC m=+7814694.898715349
E0613 12:06:56.483830       1 controller.go:134] "issuer in work queue no longer exists" err="issuer.cert-manager.io \"my-cluster-name-psmdb-issuer\" not found" logger="cert-manager.issuers"
I0613 12:06:56.485058       1 conditions.go:192] Found status change for Certificate "my-cluster-name-ssl-internal" condition "Ready": "True" -> "False"; setting lastTransitionTime to 2024-06-13 12:06:56.485049461 +0000 UTC m=+7814694.910816951
I0613 12:06:56.485489       1 conditions.go:203] Setting lastTransitionTime for Certificate "my-cluster-name-ssl-internal" condition "Issuing" to 2024-06-13 12:06:56.485479736 +0000 UTC m=+7814694.911247227
I0613 12:06:56.496297       1 conditions.go:203] Setting lastTransitionTime for Certificate "my-cluster-name-ssl" condition "Issuing" to 2024-06-13 12:06:56.49628389 +0000 UTC m=+7814694.922051361
I0613 12:06:56.496420       1 conditions.go:192] Found status change for Certificate "my-cluster-name-ssl" condition "Ready": "True" -> "False"; setting lastTransitionTime to 2024-06-13 12:06:56.496413361 +0000 UTC m=+7814694.922180855
E0613 12:06:56.508425       1 controller.go:167] "re-queuing item due to error processing" err="Operation cannot be fulfilled on certificates.cert-manager.io \"my-cluster-name-ssl-internal\": StorageError: invalid object, Code: 4, Key: /registry/cert-manager.io/certificates/default/my-cluster-name-ssl-internal, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: e781604b-6981-4eef-82dd-deac73c4015f, UID in object meta: " logger="cert-manager.certificates-readiness" key="default/my-cluster-name-ssl-internal"
E0613 12:06:56.508421       1 controller.go:167] "re-queuing item due to error processing" err="Operation cannot be fulfilled on certificates.cert-manager.io \"my-cluster-name-ssl-internal\": StorageError: invalid object, Code: 4, Key: /registry/cert-manager.io/certificates/default/my-cluster-name-ssl-internal, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: e781604b-6981-4eef-82dd-deac73c4015f, UID in object meta: " logger="cert-manager.certificates-trigger" key="default/my-cluster-name-ssl-internal"
E0613 12:06:56.509588       1 controller.go:167] "re-queuing item due to error processing" err="Operation cannot be fulfilled on certificates.cert-manager.io \"my-cluster-name-ssl\": StorageError: invalid object, Code: 4, Key: /registry/cert-manager.io/certificates/default/my-cluster-name-ssl, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: dcf0e3bc-6e10-474c-a0a4-b0d379f4ac52, UID in object meta: " logger="cert-manager.certificates-trigger" key="default/my-cluster-name-ssl"
E0613 12:06:56.512124       1 controller.go:167] "re-queuing item due to error processing" err="Operation cannot be fulfilled on certificates.cert-manager.io \"my-cluster-name-ssl\": StorageError: invalid object, Code: 4, Key: /registry/cert-manager.io/certificates/default/my-cluster-name-ssl, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: dcf0e3bc-6e10-474c-a0a4-b0d379f4ac52, UID in object meta: " logger="cert-manager.certificates-readiness" key="default/my-cluster-name-ssl"
E0613 12:06:56.732711       1 controller.go:167] "re-queuing item due to error processing" err="Operation cannot be fulfilled on certificates.cert-manager.io \"my-cluster-name-ca-cert\": StorageError: invalid object, Code: 4, Key: /registry/cert-manager.io/certificates/default/my-cluster-name-ca-cert, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 9a05c334-7b86-4285-ae5f-bb68f675e799, UID in object meta: " logger="cert-manager.certificates-key-manager" key="default/my-cluster-name-ca-cert"
E0613 12:07:01.439211       1 controller.go:134] "issuer in work queue no longer exists" err="issuer.cert-manager.io \"my-cluster-name-psmdb-issuer\" not found" logger="cert-manager.issuers"

Cert-Manager version: 1.14.2

Have you created your own certs? I think you might be hitting [K8SPSMDB-1101] - Percona JIRA. If you can, try removing them, or else try operator 1.15 until the issue is fixed.
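If the sample `deploy/ssl-secrets.yaml` secrets were applied earlier (as in the transcript above), one way to act on this advice is to delete them and let the operator/cert-manager issue fresh ones; a sketch using the secret names from the output above:

```shell
# Remove the manually created sample TLS secrets; cert-manager should
# re-issue them on the operator's next reconcile loop
kubectl delete secret my-cluster-name-ssl my-cluster-name-ssl-internal
```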

No. I tried to run the cluster based on the examples in the documentation.

I will test v1.15. Thank you.

Logs from another test using helm to install the operator and create the cluster.

> helm repo add percona https://percona.github.io/percona-helm-charts/
"percona" already exists with the same configuration, skipping


> helm install my-op percona/psmdb-operator
NAME: my-op
LAST DEPLOYED: Thu Jun 13 11:51:40 2024
NAMESPACE: default
STATUS: deployed

> helm install cluster1 percona/psmdb-db
NAME: cluster1
LAST DEPLOYED: Thu Jun 13 11:52:22 2024
NAMESPACE: default
STATUS: deployed


> k get PerconaServerMongoDB
NAME                ENDPOINT   STATUS   AGE
cluster1-psmdb-db              error    20s

> k get psmdb
NAME                ENDPOINT   STATUS   AGE
cluster1-psmdb-db              error    25s

> kubectl describe PerconaServerMongoDB cluster1-psmdb-db
Name:         cluster1-psmdb-db
Namespace:    default
Labels:       app.kubernetes.io/instance=cluster1
              app.kubernetes.io/managed-by=Helm
              app.kubernetes.io/name=psmdb-db
              app.kubernetes.io/version=1.16.0
              helm.sh/chart=psmdb-db-1.16.1
Annotations:  meta.helm.sh/release-name: cluster1
              meta.helm.sh/release-namespace: default
API Version:  psmdb.percona.com/v1
Kind:         PerconaServerMongoDB
Metadata:
  Creation Timestamp:  2024-06-13T14:52:23Z
  Finalizers:
    delete-psmdb-pods-in-order
  Generation:        1
  Resource Version:  129759739
  UID:               1e16b004-4249-4a5d-b4b2-9a55fba477e2
Spec:
  Backup:
    Enabled:  true
    Image:    percona/percona-backup-mongodb:2.4.1
    Pitr:
      Enabled:        false
  Cr Version:         1.16.0
  Image:              percona/percona-server-mongodb:7.0.8-5
  Image Pull Policy:  Always
  Multi Cluster:
    Enabled:  false
  Pause:      false
  Pmm:
    Enabled:      false
    Image:        percona/pmm-client:2.41.2
    Server Host:  monitoring-service
  Replsets:
    Affinity:
      Anti Affinity Topology Key:  kubernetes.io/hostname
    Arbiter:
      Affinity:
        Anti Affinity Topology Key:  kubernetes.io/hostname
      Enabled:                       false
      Size:                          1
    Expose:
      Enabled:      false
      Expose Type:  ClusterIP
    Name:           rs0
    Nonvoting:
      Affinity:
        Anti Affinity Topology Key:  kubernetes.io/hostname
      Enabled:                       false
      Pod Disruption Budget:
        Max Unavailable:  1
      Resources:
        Limits:
          Cpu:     300m
          Memory:  0.5G
        Requests:
          Cpu:     300m
          Memory:  0.5G
      Size:        3
      Volume Spec:
        Persistent Volume Claim:
          Resources:
            Requests:
              Storage:  3Gi
    Pod Disruption Budget:
      Max Unavailable:  1
    Resources:
      Limits:
        Cpu:     300m
        Memory:  0.5G
      Requests:
        Cpu:     300m
        Memory:  0.5G
    Size:        3
    Volume Spec:
      Persistent Volume Claim:
        Resources:
          Requests:
            Storage:  3Gi
  Secrets:
    Users:  cluster1-psmdb-db-secrets
  Sharding:
    Balancer:
      Enabled:  true
    Configsvr Repl Set:
      Affinity:
        Anti Affinity Topology Key:  kubernetes.io/hostname
      Expose:
        Enabled:      false
        Expose Type:  ClusterIP
      Pod Disruption Budget:
        Max Unavailable:  1
      Resources:
        Limits:
          Cpu:     300m
          Memory:  0.5G
        Requests:
          Cpu:     300m
          Memory:  0.5G
      Size:        3
      Volume Spec:
        Persistent Volume Claim:
          Resources:
            Requests:
              Storage:  3Gi
    Enabled:            true
    Mongos:
      Affinity:
        Anti Affinity Topology Key:  kubernetes.io/hostname
      Expose:
        Expose Type:  ClusterIP
      Pod Disruption Budget:
        Max Unavailable:  1
      Resources:
        Limits:
          Cpu:     300m
          Memory:  0.5G
        Requests:
          Cpu:     300m
          Memory:  0.5G
      Size:        2
  Unmanaged:       false
  Unsafe Flags:
    Backup If Unhealthy:       false
    Mongos Size:               false
    Replset Size:              false
    Termination Grace Period:  false
    Tls:                       false
  Update Strategy:             SmartUpdate
  Upgrade Options:
    Apply:                     disabled
    Schedule:                  0 2 * * *
    Set FCV:                   false
    Version Service Endpoint:  https://check.percona.com
Status:
  Conditions:
    Last Transition Time:  2024-06-13T14:52:23Z
    Message:               TLS secrets handler: "check cert-manager: Internal error occurred: failed calling webhook "webhook.cert-manager.io": failed to call webhook: Post "https://cert-manager-webhook.cert-manager.svc:443/validate?timeout=30s": no endpoints available for service "cert-manager-webhook"". Please create your TLS secret cluster1-psmdb-db-ssl manually or setup cert-manager correctly
    Reason:                ErrorReconcile
    Status:                True
    Type:                  error
  Message:                 Error: TLS secrets handler: "check cert-manager: Internal error occurred: failed calling webhook "webhook.cert-manager.io": failed to call webhook: Post "https://cert-manager-webhook.cert-manager.svc:443/validate?timeout=30s": no endpoints available for service "cert-manager-webhook"". Please create your TLS secret cluster1-psmdb-db-ssl manually or setup cert-manager correctly
  Ready:                   0
  Size:                    0
  State:                   error
Events:                    <none>

Operator logs (from helm test):

> k logs my-op-psmdb-operator-79fcc8f698-5wpbl
2024-06-13T14:51:45.146Z	INFO	setup	Manager starting up	{"gitCommit": "54e1b18dd9dac8e0ed5929bb2c91318cd6829a48", "gitBranch": "release-1-16-0", "goVersion": "go1.22.3", "os": "linux", "arch": "amd64"}
2024-06-13T14:51:45.331Z	INFO	server version	{"platform": "kubernetes", "version": "v1.27.14+rke2r1"}
2024-06-13T14:51:45.344Z	INFO	starting server	{"name": "health probe", "addr": "[::]:8081"}
I0613 14:51:45.345138       1 leaderelection.go:250] attempting to acquire leader lease default/08db0feb.percona.com...
2024-06-13T14:51:45.344Z	INFO	controller-runtime.metrics	Starting metrics server
2024-06-13T14:51:45.345Z	INFO	controller-runtime.metrics	Serving metrics server	{"bindAddress": ":8080", "secure": false}
I0613 14:52:02.398897       1 leaderelection.go:260] successfully acquired lease default/08db0feb.percona.com
2024-06-13T14:52:02.399Z	INFO	Starting EventSource	{"controller": "psmdb-controller", "source": "kind source: *v1.PerconaServerMongoDB"}
2024-06-13T14:52:02.399Z	INFO	Starting Controller	{"controller": "psmdb-controller"}
2024-06-13T14:52:02.399Z	INFO	Starting EventSource	{"controller": "psmdbbackup-controller", "source": "kind source: *v1.PerconaServerMongoDBBackup"}
2024-06-13T14:52:02.399Z	INFO	Starting EventSource	{"controller": "psmdbbackup-controller", "source": "kind source: *v1.Pod"}
2024-06-13T14:52:02.399Z	INFO	Starting Controller	{"controller": "psmdbbackup-controller"}
2024-06-13T14:52:02.399Z	INFO	Starting EventSource	{"controller": "psmdbrestore-controller", "source": "kind source: *v1.PerconaServerMongoDBRestore"}
2024-06-13T14:52:02.399Z	INFO	Starting EventSource	{"controller": "psmdbrestore-controller", "source": "kind source: *v1.Pod"}
2024-06-13T14:52:02.399Z	INFO	Starting Controller	{"controller": "psmdbrestore-controller"}
2024-06-13T14:52:02.695Z	INFO	Starting workers	{"controller": "psmdbbackup-controller", "worker count": 1}
2024-06-13T14:52:02.703Z	INFO	Starting workers	{"controller": "psmdb-controller", "worker count": 1}
2024-06-13T14:52:02.710Z	INFO	Starting workers	{"controller": "psmdbrestore-controller", "worker count": 1}
2024-06-13T14:52:24.057Z	ERROR	Reconciler error	{"controller": "psmdb-controller", "object": {"name":"cluster1-psmdb-db","namespace":"default"}, "namespace": "default", "name": "cluster1-psmdb-db", "reconcileID": "909f05f4-dd0c-4450-bad4-74b83aa59862", "error": "TLS secrets handler: \"check cert-manager: Internal error occurred: failed calling webhook \"webhook.cert-manager.io\": failed to call webhook: Post \"https://cert-manager-webhook.cert-manager.svc:443/validate?timeout=30s\": no endpoints available for service \"cert-manager-webhook\"\". Please create your TLS secret cluster1-psmdb-db-ssl manually or setup cert-manager correctly", "errorVerbose": "TLS secrets handler: \"check cert-manager: Internal error occurred: failed calling webhook \"webhook.cert-manager.io\": failed to call webhook: Post \"https://cert-manager-webhook.cert-manager.svc:443/validate?timeout=30s\": no endpoints available for service \"cert-manager-webhook\"\". Please create your TLS secret cluster1-psmdb-db-ssl manually or setup cert-manager 
correctly\ngithub.com/percona/percona-server-mongodb-operator/pkg/controller/perconaservermongodb.(*ReconcilePerconaServerMongoDB).Reconcile\n\t/go/src/github.com/percona/percona-server-mongodb-operator/pkg/controller/perconaservermongodb/psmdb_controller.go:370\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.18.1/pkg/internal/controller/controller.go:114\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.18.1/pkg/internal/controller/controller.go:311\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.18.1/pkg/internal/controller/controller.go:261\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.18.1/pkg/internal/controller/controller.go:222\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1695"}
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.18.1/pkg/internal/controller/controller.go:324
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.18.1/pkg/internal/controller/controller.go:261
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.18.1/pkg/internal/controller/controller.go:222
...

Hi @Welkson,

I see several problems here. First, we do not officially support RKE2; we do not test our operator on this distribution. As you know, different distributions can have specific differences, so it may work, but we can't guarantee it. The list of supported platforms you can find here.
Also, I am worried about this part of the log:

It seems the entrypoint can't get the PSMDB version, and you need to understand why (connect and execute mongod --version manually).
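The manual check suggested above can be done with `kubectl exec`; a sketch assuming the pod name from the earlier output and the `mongod` container name used in the operator's StatefulSets:

```shell
# Print the server version from inside the database container
kubectl exec -it my-cluster-name-rs0-0 -c mongod -- mongod --version
```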

The second issue is cert-manager:
As I can see, you did not clean up all objects from the previous run, and you still have e.g. old certs/secrets and maybe an issuer. Please delete the cluster and confirm it:

❯ kubectl get secrets
❯ kubectl get certificates
❯ kubectl get issuers
❯ kubectl get pvc
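If those listings show leftovers, they can be deleted by name. A cleanup sketch using the object names visible earlier in this thread (adjust to what the listings actually return):

```shell
# Delete the cluster first so the operator stops recreating objects
kubectl delete psmdb my-cluster-name

# Remove leftover cert-manager objects and TLS secrets
kubectl delete certificate my-cluster-name-ca-cert my-cluster-name-ssl my-cluster-name-ssl-internal
kubectl delete issuer my-cluster-name-psmdb-ca-issuer my-cluster-name-psmdb-issuer
kubectl delete secret my-cluster-name-ssl my-cluster-name-ssl-internal

# Review PVCs before deleting the ones belonging to the old cluster
kubectl get pvc
```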

When you use cert-manager, you should not create certificates manually. Please check the doc,
and how to deploy the operator and cluster.

Hi @Slava_Sarzhan!

I use Rancher (RKE2) and Longhorn (CSI). I did some more tests removing the cluster objects (pvc, issuers, certificates, secrets) as suggested, but was still unsuccessful. I'll study more. I appreciate the help.

I found a problem in my cluster (Longhorn broken on one node) and fixed it.

I repeated my tests based on the Mongo operator samples for Minikube, and it worked.

kubectl apply --server-side -f https://raw.githubusercontent.com/percona/percona-server-mongodb-operator/v1.15.0/deploy/bundle.yaml

Custom cr-minimal:

apiVersion: psmdb.percona.com/v1
kind: PerconaServerMongoDB
metadata:
  name: minimal-cluster
spec:
  crVersion: 1.15.0
  image: percona/percona-server-mongodb:6.0.9-7
  allowUnsafeConfigurations: true
  upgradeOptions:
    apply: disabled
    schedule: "0 2 * * *"
  secrets:
    users: minimal-cluster
  replsets:
  - name: rs0
    size: 1
    volumeSpec:
      persistentVolumeClaim:
        storageClassName: longhorn
        resources:
          requests:
            storage: 3Gi

  sharding:
    enabled: true

    configsvrReplSet:
      size: 1
      volumeSpec:
        persistentVolumeClaim:
          storageClassName: longhorn
          resources:
            requests:
              storage: 3Gi

    mongos:
      size: 1
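Since the manifest pins `storageClassName: longhorn`, it is worth confirming that the StorageClass actually exists before applying:

```shell
# PVCs will stay Pending if this StorageClass is missing or unhealthy
kubectl get storageclass longhorn
```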

Apply manifest:

kubectl apply -f cr-minimal-longhorn.yaml

Cluster and pod status:

> kubectl get psmdb
NAME              ENDPOINT                                           STATUS   AGE
minimal-cluster   minimal-cluster-mongos.default.svc.cluster.local   ready    4m57s

> kubectl get pod
NAME                                                      READY   STATUS    RESTARTS       AGE
minimal-cluster-cfg-0                                     1/1     Running   0              46s
minimal-cluster-mongos-0                                  1/1     Running   0              36s
minimal-cluster-rs0-0                                     1/1     Running   0              36s
percona-server-mongodb-operator-7df8b6dc4c-rhlwl          1/1     Running   0              4m13s

Thanks for your help @Ivan_Groenewold @Slava_Sarzhan
