MongoDB cluster in restart loop

Hello,

I have a MongoDB cluster (3 nodes, no arbiter) set up.
I installed the Percona operator on an OpenShift cluster with 3 worker nodes.
The OpenShift nodes run as VMs.

After the installation, the host running the VMs was shut down.
After powering the OpenShift cluster back up, the MongoDB instances are stuck in a restart loop:

```
2023-02-27T15:30:27.898Z        ERROR   controller_psmdb        failed to reconcile cluster     {"Request.Namespace": "mongodb1", "Request.Name": "cluster2", "replset": "rs0", "error": "force reconfig to recover: write mongo config: replSetReconfig: (NotWritablePrimary) New config is rejected :: caused by :: replSetReconfig should only be run on a writable PRIMARY. Current state REMOVED;", "errorVerbose": "(NotWritablePrimary) New config is rejected :: caused by :: replSetReconfig should only be run on a writable PRIMARY. Current state REMOVED;\nreplSetReconfig\ngithub.com/percona/percona-server-mongodb-operator/pkg/psmdb/mongo.WriteConfig\n\t/go/src/github.com/percona/percona-server-mongodb-operator/pkg/psmdb/mongo/mongo.go:248\ngithub.com/percona/percona-server-mongodb-operator/pkg/controller/perconaservermongodb.(*ReconcilePerconaServerMongoDB).recoverReplsetNoPrimary\n\t/go/src/github.com/percona/percona-server-mongodb-operator/pkg/controller/perconaservermongodb/mgo.go:773\ngithub.com/percona/percona-server-mongodb-operator/pkg/controller/perconaservermongodb.(*ReconcilePerconaServerMongoDB).reconcileCluster\n\t/go/src/github.com/percona/percona-server-mongodb-operator/pkg/controller/perconaservermongodb/mgo.go:90\ngithub.com/percona/percona-server-mongodb-operator/pkg/controller/perconaservermongodb.(*ReconcilePerconaServerMongoDB).Reconcile\n\t/go/src/github.com/percona/percona-server-mongodb-operator/pkg/controller/perconaservermongodb/psmdb_controller.go:499\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile\n\t/go/src/github.com/percona/percona-server-mongodb-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:114\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/src/github.com/percona/percona-server-mongodb-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:311\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/src/github.com/percona/percona-server-mongodb-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:266\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2\n\t/go/src/github.com/percona/percona-server-mongodb-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:227\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1571\nwrite mongo 
config\ngithub.com/percona/percona-server-mongodb-operator/pkg/controller/perconaservermongodb.(*ReconcilePerconaServerMongoDB).recoverReplsetNoPrimary\n\t/go/src/github.com/percona/percona-server-mongodb-operator/pkg/controller/perconaservermongodb/mgo.go:774\ngithub.com/percona/percona-server-mongodb-operator/pkg/controller/perconaservermongodb.(*ReconcilePerconaServerMongoDB).reconcileCluster\n\t/go/src/github.com/percona/percona-server-mongodb-operator/pkg/controller/perconaservermongodb/mgo.go:90\ngithub.com/percona/percona-server-mongodb-operator/pkg/controller/perconaservermongodb.(*ReconcilePerconaServerMongoDB).Reconcile\n\t/go/src/github.com/percona/percona-server-mongodb-operator/pkg/controller/perconaservermongodb/psmdb_controller.go:499\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile\n\t/go/src/github.com/percona/percona-server-mongodb-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:114\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/src/github.com/percona/percona-server-mongodb-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:311\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/src/github.com/percona/percona-server-mongodb-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:266\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2\n\t/go/src/github.com/percona/percona-server-mongodb-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:227\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1571\nforce reconfig to recover\ngithub.com/percona/percona-server-mongodb-operator/pkg/controller/perconaservermongodb.(*ReconcilePerconaServerMongoDB).reconcileCluster\n\t/go/src/github.com/percona/percona-server-mongodb-operator/pkg/controller/perconaservermongodb/mgo.go:91\ngithub.com/percona/percona-server-mongodb-operator/pkg/controller/perconaservermongodb.(*ReconcilePerconaServerMongoDB).Reconcile\n\t/go/src/github.com/percona/percona-server-mongodb-operator/pkg/controller/perconaservermongodb/psmdb_controller.go:499\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile\n\t/go/src/github.com/percona/percona-server-mongodb-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:114\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/src/github.com/percona/percona-server-mongodb-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:311\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/src/github.com/percona/percona-server-mongodb-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:266\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2\n\t/go/src/github.com/percona/percona-server-mongodb-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:227\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1571"}
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile
        /go/src/github.com/percona/percona-server-mongodb-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:114
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler
        /go/src/github.com/percona/percona-server-mongodb-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:311
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem
        /go/src/github.com/percona/percona-server-mongodb-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:266
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2
        /go/src/github.com/percona/percona-server-mongodb-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:227
{"t":{"$date":"2023-02-27T15:50:12.832+00:00"},"s":"I",  "c":"-",        "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"ReadConcernMajorityNotAvailableYet: Read concern majority reads are currently not possible.","nextWakeupMillis":1400}}

Any idea about the problem? Thanks.

Hey @patdung100_percona ,

Can you please show the state of the psmdb object and the Pods? What is happening with them?
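
For example, something along these lines should capture that (namespace `mongodb1` and cluster name `cluster2` taken from your post; the `psmdb` short name, the label selector, and the `mongod` container name are assumptions based on the operator's usual defaults):

```
# Full psmdb custom resource, including its status and conditions
oc get psmdb cluster2 -n mongodb1 -o yaml

# Replica set Pods and where they are scheduled
oc get pods -n mongodb1 -l app.kubernetes.io/instance=cluster2 -o wide

# Logs of a restarting member, including the previous container run
oc logs cluster2-rs0-0 -c mongod -n mongodb1 --previous
```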

@Sergey_Pronin

The first day was OK. When I shut down and restarted the OpenShift cluster on the second day, I saw lots of messages like:

```
{"t":{"$date":"2023-02-27T15:50:12.832+00:00"},"s":"I", "c":"-", "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"ReadConcernMajorityNotAvailableYet: Read concern majority reads are currently not possible.","nextWakeupMillis":1400}}
```

It looks stabilized from the third day onward:

```
NAME                                               READY   STATUS    RESTARTS        AGE
cluster2-rs0-0                                     2/2     Running   119 (75m ago)   3d18h
cluster2-rs0-1                                     2/2     Running   119 (75m ago)   3d18h
cluster2-rs0-2                                     2/2     Running   116             3d18h
percona-server-mongodb-operator-6cf4b7c6bd-m668r   1/1     Running   33 (76m ago)    3d19h
```
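
To double-check that a PRIMARY was actually elected again, a check along these lines should work (a sketch that assumes the 5.0 image still ships the legacy `mongo` shell, the default `mongod` container name, and the `clusterAdmin` password from the users Secret; the TLS flags may need adjusting for your setup, and `$CLUSTER_ADMIN_PASSWORD` is a placeholder):

```
# Print the replica set members and their states as seen from cluster2-rs0-0
oc exec -it cluster2-rs0-0 -c mongod -n mongodb1 -- \
  mongo --tls --tlsAllowInvalidCertificates \
    -u clusterAdmin -p "$CLUSTER_ADMIN_PASSWORD" --authenticationDatabase admin \
    --quiet --eval 'rs.status().members.forEach(function (m) { print(m.name, m.stateStr); })'
```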


By the way, the SAN certificate problem visible in the status below should be solved with the 1.14 deploy/cr.yaml (a way to inspect the current certificate's SANs is sketched right after the YAML):

```
$ oc get PerconaServerMongoDB
NAME       ENDPOINT                                  STATUS   AGE
cluster2   cluster2-rs0.mongodb1.svc.cluster.local   error    3d21h

$ oc get PerconaServerMongoDB/cluster2 -o yaml
apiVersion: psmdb.percona.com/v1
kind: PerconaServerMongoDB
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"psmdb.percona.com/v1","kind":"PerconaServerMongoDB","metadata":{"annotations":{},"finalizers":["delete-psmdb-pods-in-order"],"name":"cluster2","namespace":"mongodb1"},"spec":{"allowUnsafeConfigurations":false,"backup":{"enabled":true,"image":"perconalab/percona-server-mongodb-operator:main-backup","pitr":{"compressionLevel":6,"compressionType":"gzip","enabled":false},"serviceAccountName":"percona-server-mongodb-operator"},"crVersion":"1.13.0","image":"percona/percona-server-mongodb:5.0.11-10","imagePullPolicy":"IfNotPresent","initContainerSecurityContext":{"fsGroup":1000730000,"readOnlyRootFilesystem":false,"runAsGroup":1000730000,"runAsNonRoot":true,"runAsUser":1000730000},"platform":"openshift","pmm":{"enabled":false,"image":"percona/pmm-client:2.30.0","serverHost":"monitoring-service"},"replsets":[{"affinity":{"antiAffinityTopologyKey":"kubernetes.io/hostname"},"arbiter":{"affinity":{"antiAffinityTopologyKey":"kubernetes.io/hostname"},"enabled":false,"size":1},"expose":{"enabled":true,"exposeType":"ClusterIP"},"name":"rs0","nonvoting":{"affinity":{"antiAffinityTopologyKey":"kubernetes.io/hostname"},"enabled":false,"podDisruptionBudget":{"maxUnavailable":1},"resources":{"limits":{"cpu":"300m","memory":"0.5G"},"requests":{"cpu":"300m","memory":"0.5G"}},"size":3,"volumeSpec":{"persistentVolumeClaim":{"resources":{"requests":{"storage":"3Gi"}}}}},"podDisruptionBudget":{"maxUnavailable":1},"resources":{"limits":{"memory":"1G"},"requests":{"cpu":"300m","memory":"0.5G"}},"size":3,"volumeSpec":{"persistentVolumeClaim":{"resources":{"requests":{"storage":"3Gi"}},"storageClassName":"nfs-client"},"podSecurityContext":{"fsGroup":1000730000,"runAsGroup":1000730000,"runAsNonRoot":true,"runAsUser":1000730000}}}],"secrets":{"encryptionKey":"cluster2-encryption-key","users":"cluster2-user-secrets"},"sharding":{"configsvrReplSet":{"affinity":{"antiAffinityTopologyKey":"kubernetes.io/hostname"},"expose":{"enabled":false,"exposeType":"ClusterIP"},"podDisruptionBudget":{"maxUnavailable":1},"resources":{"limits":{"cpu":"300m","memory":"0.5G"},"requests":{"cpu":"300m","memory":"0.5G"}},"size":3,"volumeSpec":{"persistentVolumeClaim":{"resources":{"requests":{"storage":"3Gi"}}}}},"enabled":false,"mongos":{"affinity":{"antiAffinityTopologyKey":"kubernetes.io/hostname"},"expose":{"exposeType":"ClusterIP"},"podDisruptionBudget":{"maxUnavailable":1},"resources":{"limits":{"cpu":"300m","memory":"0.5G"},"requests":{"cpu":"300m","memory":"0.5G"}},"size":3}},"updateStrategy":"SmartUpdate","upgradeOptions":{"apply":"disabled","schedule":"0 2 * * *","setFCV":false,"versionServiceEndpoint":"https://check.percona.com"}}}
  creationTimestamp: "2023-02-25T19:49:27Z"
  finalizers:
  - delete-psmdb-pods-in-order
  generation: 1
  name: cluster2
  namespace: mongodb1
  resourceVersion: "4435997"
  uid: 1c7cbacc-e083-49c6-8b45-c946adce8930
spec:
  allowUnsafeConfigurations: false
  backup:
    enabled: true
    image: perconalab/percona-server-mongodb-operator:main-backup
    pitr:
      compressionLevel: 6
      compressionType: gzip
      enabled: false
    serviceAccountName: percona-server-mongodb-operator
  crVersion: 1.13.0
  image: percona/percona-server-mongodb:5.0.11-10
  imagePullPolicy: IfNotPresent
  platform: openshift
  pmm:
    enabled: false
    image: percona/pmm-client:2.30.0
    serverHost: monitoring-service
  replsets:
  - affinity:
      antiAffinityTopologyKey: kubernetes.io/hostname
    arbiter:
      affinity:
        antiAffinityTopologyKey: kubernetes.io/hostname
      enabled: false
      size: 1
    expose:
      enabled: true
      exposeType: ClusterIP
    name: rs0
    nonvoting:
      affinity:
        antiAffinityTopologyKey: kubernetes.io/hostname
      enabled: false
      podDisruptionBudget:
        maxUnavailable: 1
      resources:
        limits:
          cpu: 300m
          memory: 0.5G
        requests:
          cpu: 300m
          memory: 0.5G
      size: 3
      volumeSpec:
        persistentVolumeClaim:
          resources:
            requests:
              storage: 3Gi
    podDisruptionBudget:
      maxUnavailable: 1
    resources:
      limits:
        memory: 1G
      requests:
        cpu: 300m
        memory: 0.5G
    size: 3
    volumeSpec:
      persistentVolumeClaim:
        resources:
          requests:
            storage: 3Gi
        storageClassName: nfs-client
  secrets:
    encryptionKey: cluster2-encryption-key
    users: cluster2-user-secrets
  sharding:
    configsvrReplSet:
      affinity:
        antiAffinityTopologyKey: kubernetes.io/hostname
      expose:
        enabled: false
        exposeType: ClusterIP
      podDisruptionBudget:
        maxUnavailable: 1
      resources:
        limits:
          cpu: 300m
          memory: 0.5G
        requests:
          cpu: 300m
          memory: 0.5G
      size: 3
      volumeSpec:
        persistentVolumeClaim:
          resources:
            requests:
              storage: 3Gi
    enabled: false
    mongos:
      affinity:
        antiAffinityTopologyKey: kubernetes.io/hostname
      expose:
        exposeType: ClusterIP
      podDisruptionBudget:
        maxUnavailable: 1
      resources:
        limits:
          cpu: 300m
          memory: 0.5G
        requests:
          cpu: 300m
          memory: 0.5G
      size: 3
  updateStrategy: SmartUpdate
  upgradeOptions:
    apply: disabled
    schedule: 0 2 * * *
    setFCV: false
    versionServiceEndpoint: https://check.percona.com
status:
  conditions:
  - lastTransitionTime: "2023-03-01T16:52:33Z"
    message: 'create pbm object: create PBM connection to cluster2-rs0-1.cluster2-rs0.mongodb1.svc.cluster.local:27017,cluster2-rs0-0.cluster2-rs0.mongodb1.svc.cluster.local:27017,cluster2-rs0-2.cluster2-rs0.mongodb1.svc.cluster.local:27017:
      create mongo connection: mongo ping: server selection error: server selection
      timeout, current topology: { Type: ReplicaSetNoPrimary, Servers: [{ Addr: 10.51.32.240:27017,
      Type: Unknown, Last error: connection() error occured during connection handshake:
      x509: cannot validate certificate for 10.51.32.240 because it doesn''t contain
      any IP SANs }, { Addr: 10.51.231.54:27017, Type: Unknown, Last error: connection()
      error occured during connection handshake: x509: cannot validate certificate
      for 10.51.231.54 because it doesn''t contain any IP SANs }, { Addr: 10.51.140.17:27017,
      Type: Unknown, Last error: connection() error occured during connection handshake:
      x509: cannot validate certificate for 10.51.140.17 because it doesn''t contain
      any IP SANs }, ] }'
    reason: ErrorReconcile
    status: "True"
    type: error
  - lastTransitionTime: "2023-03-01T16:52:33Z"
    status: "True"
    type: ready
  - lastTransitionTime: "2023-03-01T16:53:03Z"
    message: 'create pbm object: create PBM connection to cluster2-rs0-2.cluster2-rs0.mongodb1.svc.cluster.local:27017,cluster2-rs0-1.cluster2-rs0.mongodb1.svc.cluster.local:27017,cluster2-rs0-0.cluster2-rs0.mongodb1.svc.cluster.local:27017:
      create mongo connection: mongo ping: server selection error: server selection
      timeout, current topology: { Type: ReplicaSetNoPrimary, Servers: [{ Addr: 10.51.32.240:27017,
      Type: Unknown, Last error: connection() error occured during connection handshake:
      x509: cannot validate certificate for 10.51.32.240 because it doesn''t contain
      any IP SANs }, { Addr: 10.51.231.54:27017, Type: Unknown, Last error: connection()
      error occured during connection handshake: x509: cannot validate certificate
      for 10.51.231.54 because it doesn''t contain any IP SANs }, { Addr: 10.51.140.17:27017,
      Type: Unknown, Last error: connection() error occured during connection handshake:
      x509: cannot validate certificate for 10.51.140.17 because it doesn''t contain
      any IP SANs }, ] }'
    reason: ErrorReconcile
    status: "True"
    type: error
  - lastTransitionTime: "2023-03-01T16:53:03Z"
    status: "True"
    type: ready
  - lastTransitionTime: "2023-03-01T16:53:34Z"
    message: 'create pbm object: create PBM connection to cluster2-rs0-2.cluster2-rs0.mongodb1.svc.cluster.local:27017,cluster2-rs0-1.cluster2-rs0.mongodb1.svc.cluster.local:27017,cluster2-rs0-0.cluster2-rs0.mongodb1.svc.cluster.local:27017:
      create mongo connection: mongo ping: server selection error: server selection
      timeout, current topology: { Type: ReplicaSetNoPrimary, Servers: [{ Addr: 10.51.32.240:27017,
      Type: Unknown, Last error: connection() error occured during connection handshake:
      x509: cannot validate certificate for 10.51.32.240 because it doesn''t contain
      any IP SANs }, { Addr: 10.51.231.54:27017, Type: Unknown, Last error: connection()
      error occured during connection handshake: x509: cannot validate certificate
      for 10.51.231.54 because it doesn''t contain any IP SANs }, { Addr: 10.51.140.17:27017,
      Type: Unknown, Last error: connection() error occured during connection handshake:
      x509: cannot validate certificate for 10.51.140.17 because it doesn''t contain
      any IP SANs }, ] }'
    reason: ErrorReconcile
    status: "True"
    type: error
  - lastTransitionTime: "2023-03-01T16:53:34Z"
    status: "True"
    type: ready
  - lastTransitionTime: "2023-03-01T16:54:35Z"
    message: 'create pbm object: create PBM connection to cluster2-rs0-2.cluster2-rs0.mongodb1.svc.cluster.local:27017,cluster2-rs0-1.cluster2-rs0.mongodb1.svc.cluster.local:27017,cluster2-rs0-0.cluster2-rs0.mongodb1.svc.cluster.local:27017:
      create mongo connection: mongo ping: server selection error: server selection
      timeout, current topology: { Type: ReplicaSetNoPrimary, Servers: [{ Addr: 10.51.32.240:27017,
      Type: Unknown, Last error: connection() error occured during connection handshake:
      x509: cannot validate certificate for 10.51.32.240 because it doesn''t contain
      any IP SANs }, { Addr: 10.51.231.54:27017, Type: Unknown, Last error: connection()
      error occured during connection handshake: x509: cannot validate certificate
      for 10.51.231.54 because it doesn''t contain any IP SANs }, { Addr: 10.51.140.17:27017,
      Type: Unknown, Last error: connection() error occured during connection handshake:
      x509: cannot validate certificate for 10.51.140.17 because it doesn''t contain
      any IP SANs }, ] }'
    reason: ErrorReconcile
    status: "True"
    type: error
  - lastTransitionTime: "2023-03-01T16:54:36Z"
    status: "True"
    type: ready
  - lastTransitionTime: "2023-03-01T16:55:06Z"
    message: 'create pbm object: create PBM connection to cluster2-rs0-2.cluster2-rs0.mongodb1.svc.cluster.local:27017,cluster2-rs0-1.cluster2-rs0.mongodb1.svc.cluster.local:27017,cluster2-rs0-0.cluster2-rs0.mongodb1.svc.cluster.local:27017:
      create mongo connection: mongo ping: server selection error: server selection
      timeout, current topology: { Type: ReplicaSetNoPrimary, Servers: [{ Addr: 10.51.32.240:27017,
      Type: Unknown, Last error: connection() error occured during connection handshake:
      x509: cannot validate certificate for 10.51.32.240 because it doesn''t contain
      any IP SANs }, { Addr: 10.51.231.54:27017, Type: Unknown, Last error: connection()
      error occured during connection handshake: x509: cannot validate certificate
      for 10.51.231.54 because it doesn''t contain any IP SANs }, { Addr: 10.51.140.17:27017,
      Type: Unknown, Last error: connection() error occured during connection handshake:
      x509: cannot validate certificate for 10.51.140.17 because it doesn''t contain
      any IP SANs }, ] }'
    reason: ErrorReconcile
    status: "True"
    type: error
  - lastTransitionTime: "2023-03-01T16:55:06Z"
    status: "True"
    type: ready
  - lastTransitionTime: "2023-03-01T16:55:37Z"
    message: 'create pbm object: create PBM connection to cluster2-rs0-2.cluster2-rs0.mongodb1.svc.cluster.local:27017,cluster2-rs0-1.cluster2-rs0.mongodb1.svc.cluster.local:27017,cluster2-rs0-0.cluster2-rs0.mongodb1.svc.cluster.local:27017:
      create mongo connection: mongo ping: server selection error: server selection
      timeout, current topology: { Type: ReplicaSetNoPrimary, Servers: [{ Addr: 10.51.32.240:27017,
      Type: Unknown, Last error: connection() error occured during connection handshake:
      x509: cannot validate certificate for 10.51.32.240 because it doesn''t contain
      any IP SANs }, { Addr: 10.51.231.54:27017, Type: Unknown, Last error: connection()
      error occured during connection handshake: x509: cannot validate certificate
      for 10.51.231.54 because it doesn''t contain any IP SANs }, { Addr: 10.51.140.17:27017,
      Type: Unknown, Last error: connection() error occured during connection handshake:
      x509: cannot validate certificate for 10.51.140.17 because it doesn''t contain
      any IP SANs }, ] }'
    reason: ErrorReconcile
    status: "True"
    type: error
  - lastTransitionTime: "2023-03-01T16:55:38Z"
    status: "True"
    type: ready
  - lastTransitionTime: "2023-03-01T16:56:09Z"
    message: 'create pbm object: create PBM connection to cluster2-rs0-2.cluster2-rs0.mongodb1.svc.cluster.local:27017,cluster2-rs0-1.cluster2-rs0.mongodb1.svc.cluster.local:27017,cluster2-rs0-0.cluster2-rs0.mongodb1.svc.cluster.local:27017:
      create mongo connection: mongo ping: server selection error: server selection
      timeout, current topology: { Type: ReplicaSetNoPrimary, Servers: [{ Addr: 10.51.32.240:27017,
      Type: Unknown, Last error: connection() error occured during connection handshake:
      x509: cannot validate certificate for 10.51.32.240 because it doesn''t contain
      any IP SANs }, { Addr: 10.51.231.54:27017, Type: Unknown, Last error: connection()
      error occured during connection handshake: x509: cannot validate certificate
      for 10.51.231.54 because it doesn''t contain any IP SANs }, { Addr: 10.51.140.17:27017,
      Type: Unknown, Last error: connection() error occured during connection handshake:
      x509: cannot validate certificate for 10.51.140.17 because it doesn''t contain
      any IP SANs }, ] }'
    reason: ErrorReconcile
    status: "True"
    type: error
  - lastTransitionTime: "2023-03-01T16:56:09Z"
    status: "True"
    type: ready
  - lastTransitionTime: "2023-03-01T16:56:40Z"
    message: 'create pbm object: create PBM connection to cluster2-rs0-2.cluster2-rs0.mongodb1.svc.cluster.local:27017,cluster2-rs0-1.cluster2-rs0.mongodb1.svc.cluster.local:27017,cluster2-rs0-0.cluster2-rs0.mongodb1.svc.cluster.local:27017:
      create mongo connection: mongo ping: server selection error: server selection
      timeout, current topology: { Type: ReplicaSetNoPrimary, Servers: [{ Addr: 10.51.32.240:27017,
      Type: Unknown, Last error: connection() error occured during connection handshake:
      x509: cannot validate certificate for 10.51.32.240 because it doesn''t contain
      any IP SANs }, { Addr: 10.51.231.54:27017, Type: Unknown, Last error: connection()
      error occured during connection handshake: x509: cannot validate certificate
      for 10.51.231.54 because it doesn''t contain any IP SANs }, { Addr: 10.51.140.17:27017,
      Type: Unknown, Last error: connection() error occured during connection handshake:
      x509: cannot validate certificate for 10.51.140.17 because it doesn''t contain
      any IP SANs }, ] }'
    reason: ErrorReconcile
    status: "True"
    type: error
  - lastTransitionTime: "2023-03-01T16:57:10Z"
    status: "True"
    type: ready
  - lastTransitionTime: "2023-03-01T16:57:40Z"
    message: 'create pbm object: create PBM connection to cluster2-rs0-2.cluster2-rs0.mongodb1.svc.cluster.local:27017,cluster2-rs0-1.cluster2-rs0.mongodb1.svc.cluster.local:27017,cluster2-rs0-0.cluster2-rs0.mongodb1.svc.cluster.local:27017:
      create mongo connection: mongo ping: server selection error: server selection
      timeout, current topology: { Type: ReplicaSetNoPrimary, Servers: [{ Addr: 10.51.32.240:27017,
      Type: Unknown, Last error: connection() error occured during connection handshake:
      x509: cannot validate certificate for 10.51.32.240 because it doesn''t contain
      any IP SANs }, { Addr: 10.51.231.54:27017, Type: Unknown, Last error: connection()
      error occured during connection handshake: x509: cannot validate certificate
      for 10.51.231.54 because it doesn''t contain any IP SANs }, { Addr: 10.51.140.17:27017,
      Type: Unknown, Last error: connection() error occured during connection handshake:
      x509: cannot validate certificate for 10.51.140.17 because it doesn''t contain
      any IP SANs }, ] }'
    reason: ErrorReconcile
    status: "True"
    type: error
  - lastTransitionTime: "2023-03-01T16:58:11Z"
    status: "True"
    type: ready
  - lastTransitionTime: "2023-03-01T16:58:41Z"
    message: 'create pbm object: create PBM connection to cluster2-rs0-2.cluster2-rs0.mongodb1.svc.cluster.local:27017,cluster2-rs0-1.cluster2-rs0.mongodb1.svc.cluster.local:27017,cluster2-rs0-0.cluster2-rs0.mongodb1.svc.cluster.local:27017:
      create mongo connection: mongo ping: server selection error: server selection
      timeout, current topology: { Type: ReplicaSetNoPrimary, Servers: [{ Addr: 10.51.32.240:27017,
      Type: Unknown, Last error: connection() error occured during connection handshake:
      x509: cannot validate certificate for 10.51.32.240 because it doesn''t contain
      any IP SANs }, { Addr: 10.51.231.54:27017, Type: Unknown, Last error: connection()
      error occured during connection handshake: x509: cannot validate certificate
      for 10.51.231.54 because it doesn''t contain any IP SANs }, { Addr: 10.51.140.17:27017,
      Type: Unknown, Last error: connection() error occured during connection handshake:
      x509: cannot validate certificate for 10.51.140.17 because it doesn''t contain
      any IP SANs }, ] }'
    reason: ErrorReconcile
    status: "True"
    type: error
  - lastTransitionTime: "2023-03-01T16:59:11Z"
    status: "True"
    type: ready
  host: cluster2-rs0.mongodb1.svc.cluster.local
  mongoImage: percona/percona-server-mongodb:5.0.11-10
  mongoVersion: 5.0.11-10
  observedGeneration: 1
  ready: 3
  replsets:
    rs0:
      initialized: true
      ready: 3
      size: 3
      status: ready
  size: 3
  state: ready
```
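
For reference, one way to check which SANs the generated certificate currently contains (a sketch that assumes the operator's usual `cluster2-ssl` Secret name and the standard `tls.crt` key):

```
# Dump the server certificate from the TLS Secret and show its SANs
oc get secret cluster2-ssl -n mongodb1 -o jsonpath='{.data.tls\.crt}' \
  | base64 -d \
  | openssl x509 -noout -text \
  | grep -A1 'Subject Alternative Name'
```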