Manually generated certificates cannot be applied in 1.16.0

Hi, I'm testing on AWS EKS with operator 1.16.0.

I've run into an issue where the operator keeps overwriting manually generated certificates.

My certificates are stored in AWS Secrets Manager, and the SSL & internal SSL Secrets are created by ExternalSecret resources.
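For context, the Secrets are produced by ExternalSecret resources roughly like the following (a minimal sketch; the SecretStore name and the Secrets Manager key/property names are placeholders, not the actual values from my cluster):

```yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: mongo44-main-ssl
  namespace: mongodb
spec:
  refreshInterval: 1h
  secretStoreRef:
    # placeholder store name; the real cluster uses its own ClusterSecretStore
    name: aws-secrets-manager
    kind: ClusterSecretStore
  target:
    name: mongo44-main-ssl
    template:
      type: kubernetes.io/tls
  data:
    # one AWS Secrets Manager entry (placeholder key) holding all three PEM values
    - secretKey: ca.crt
      remoteRef:
        key: alpha/mongo44-main-tls
        property: ca.crt
    - secretKey: tls.crt
      remoteRef:
        key: alpha/mongo44-main-tls
        property: tls.crt
    - secretKey: tls.key
      remoteRef:
        key: alpha/mongo44-main-tls
        property: tls.key
```

An equivalent ExternalSecret with target name mongo44-main-ssl-internal produces the internal SSL Secret.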

Secrets:

apiVersion: v1
data:
  ca.crt: ++++++++
  tls.crt: ++++++++
  tls.key: ++++++++
immutable: false
kind: Secret
metadata:
  annotations:
    reconcile.external-secrets.io/data-hash: ec498208a4babd7a0b95e0afa4514c2b
  creationTimestamp: '2024-06-04T08:23:29Z'
  labels:
    reconcile.external-secrets.io/created-by: f03b3d87b482d2c50649c6280414ab91
  name: mongo44-main-ssl-internal
  namespace: mongodb
  ownerReferences:
    - apiVersion: external-secrets.io/v1beta1
      blockOwnerDeletion: true
      controller: true
      kind: ExternalSecret
      name: mongo44-main-internal-ssl
      uid: e886ca36-8f83-418b-9cf9-b7536f2a1fea
  resourceVersion: '313531644'
  uid: 43f39be8-1759-4c19-b1ba-f77d23a2abd4
type: kubernetes.io/tls
---
apiVersion: v1
data:
  ca.crt: ++++++++
  tls.crt: ++++++++
  tls.key: ++++++++
immutable: false
kind: Secret
metadata:
  annotations:
    reconcile.external-secrets.io/data-hash: b1c240872906b635dbaa1f4736fc15a9
  creationTimestamp: '2024-06-04T08:23:13Z'
  labels:
    reconcile.external-secrets.io/created-by: 07e635490539f840f7aff1572316b1d6
  name: mongo44-main-ssl
  namespace: mongodb
  ownerReferences:
    - apiVersion: external-secrets.io/v1beta1
      blockOwnerDeletion: true
      controller: true
      kind: ExternalSecret
      name: mongo44-main-ssl
      uid: 5b67e9d0-1e76-45d8-80aa-258720049b98
  resourceVersion: '313531265'
  uid: d1e08019-e5ec-4d9c-b1ad-5da819700802
type: kubernetes.io/tls

After the two Secrets above are created, I apply the PerconaServerMongoDB CR below.

CR:

apiVersion: psmdb.percona.com/v1
kind: PerconaServerMongoDB
metadata:
  name: mongo44-main
  labels:
    app.kubernetes.io/name: mongo44-main
  finalizers: null
spec:
  crVersion: 1.16.0
  unmanaged: false
  updateStrategy: SmartUpdate
  clusterServiceDNSMode: ServiceMesh
  
  image: "percona/percona-server-mongodb:4.4.24-23"
  imagePullPolicy: "IfNotPresent"

  ## secrets
  secrets:
    ssl: mongo44-main-ssl
    sslInternal : mongo44-main-ssl-internal
    users: psmdb-users

  ## upgrade
  upgradeOptions:
    versionServiceEndpoint: https://check.percona.com
    apply: disabled
    schedule: 0 2 * * *
    setFCV: false

  ## tls
  tls: 
    mode: preferTLS

  ## backup
  backup:
    enabled: false
    image: "percona/percona-backup-mongodb:2.4.1"
    storages:
      s3-ap-northeast-2:
        s3:
          bucket: database-backup-kr-alpha
          prefix: scheduled/mongo44-main
          region: ap-northeast-2
        type: s3
    pitr:
      enabled: false
    tasks:
      - compressionLevel: 6
        compressionType: gzip
        enabled: true
        keep: 365
        name: daily-s3-ap-northeast-2
        schedule: 30 19 * * *
        storageName: s3-ap-northeast-2
        type: physical

  ## replica set
  replsets:
    - name: shard01
      size: 3
      configuration: |
        net:
          tls:
            allowConnectionsWithoutCertificates: true
            allowInvalidHostnames: true
            allowInvalidCertificates: true
        replication:
          replSetName: "alpha-mongodb-test44-shard01"
        security:
          enableEncryption: false
        systemLog:
          verbosity: 0
        setParameter:
          diagnosticDataCollectionDirectoryPath: "/data/db/diagnostic-mongod.data"
          diagnosticDataCollectionPeriodMillis: 10000
          maxIndexBuildMemoryUsageMegabytes: 250
      serviceAccountName: mongodb
      affinity:
        advanced:
          podAntiAffinity:
            preferredDuringSchedulingIgnoredDuringExecution:
              - podAffinityTerm:
                  labelSelector:
                    matchLabels:
                      app.kubernetes.io/instance: mongo44-main
                      app.kubernetes.io/name: percona-server-mongodb
                      app.kubernetes.io/replset: shard01
                  topologyKey: failure-domain.beta.kubernetes.io/zone
                weight: 100
              - podAffinityTerm:
                  labelSelector:
                    matchLabels:
                      app.kubernetes.io/instance: mongo44-main
                      app.kubernetes.io/name: percona-server-mongodb
                  topologyKey: kubernetes.io/hostname
                weight: 1
            requiredDuringSchedulingIgnoredDuringExecution:
              - labelSelector:
                  matchLabels:
                    app.kubernetes.io/instance: mongo44-main
                    app.kubernetes.io/name: percona-server-mongodb
                    app.kubernetes.io/replset: shard01
                topologyKey: kubernetes.io/hostname
      nodeSelector:
        group: mongodb
      tolerations:
        - effect: NoSchedule
          key: group
          operator: Equal
          value: mongodb
      livenessProbe:
        failureThreshold: 20
        initialDelaySeconds: 300
        periodSeconds: 120
        startupDelaySeconds: 7200
        timeoutSeconds: 10
      readinessProbe:
        failureThreshold: 10
        initialDelaySeconds: 60
        periodSeconds: 10
        successThreshold: 1
        timeoutSeconds: 10
      storage:
        wiredTiger:
          collectionConfig:
            blockCompressor: snappy
          engineConfig:
            cacheSizeRatio: 0.6
            directoryForIndexes: false
            journalCompressor: snappy
          indexConfig:
            prefixCompression: true
      podDisruptionBudget:
        maxUnavailable: 1
      expose:
        enabled: true
        exposeType: LoadBalancer
        serviceAnnotations: 
          service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
          service.beta.kubernetes.io/aws-load-balancer-type: external
          service.beta.kubernetes.io/aws-load-balancer-security-groups: "eks-default,alpha-mongodb"
      resources:
        limits:
          cpu: 8
          memory: 64Gi
        requests:
          cpu: 100m
      volumeSpec:
        persistentVolumeClaim:
          resources:
            requests:
              storage: 200Gi
          storageClassName: gp3-xfs
      nonvoting:
        enabled: false
        size: 1
        affinity:
      arbiter:
        enabled: false
        size: 1
        affinity: null
    - name: shard02
      size: 3
      configuration: |
        net:
          tls:
            allowConnectionsWithoutCertificates: true
            allowInvalidHostnames: true
            allowInvalidCertificates: true
        replication:
          replSetName: "alpha-mongodb-test44-shard02"
        security:
          enableEncryption: false
        systemLog:
          verbosity: 0
        setParameter:
          diagnosticDataCollectionDirectoryPath: "/data/db/diagnostic-mongod.data"
          diagnosticDataCollectionPeriodMillis: 10000
          maxIndexBuildMemoryUsageMegabytes: 250
      serviceAccountName: mongodb
      affinity:
        advanced:
          podAntiAffinity:
            preferredDuringSchedulingIgnoredDuringExecution:
              - podAffinityTerm:
                  labelSelector:
                    matchLabels:
                      app.kubernetes.io/instance: mongo44-main
                      app.kubernetes.io/name: percona-server-mongodb
                      app.kubernetes.io/replset: shard02
                  topologyKey: failure-domain.beta.kubernetes.io/zone
                weight: 100
              - podAffinityTerm:
                  labelSelector:
                    matchLabels:
                      app.kubernetes.io/instance: mongo44-main
                      app.kubernetes.io/name: percona-server-mongodb
                  topologyKey: kubernetes.io/hostname
                weight: 1
            requiredDuringSchedulingIgnoredDuringExecution:
              - labelSelector:
                  matchLabels:
                    app.kubernetes.io/instance: mongo44-main
                    app.kubernetes.io/name: percona-server-mongodb
                    app.kubernetes.io/replset: shard02
                topologyKey: kubernetes.io/hostname
      nodeSelector:
        group: mongodb
      tolerations:
        - effect: NoSchedule
          key: group
          operator: Equal
          value: mongodb
      livenessProbe:
        failureThreshold: 20
        initialDelaySeconds: 300
        periodSeconds: 120
        startupDelaySeconds: 7200
        timeoutSeconds: 10
      readinessProbe:
        failureThreshold: 10
        initialDelaySeconds: 60
        periodSeconds: 10
        successThreshold: 1
        timeoutSeconds: 10
      storage:
        wiredTiger:
          collectionConfig:
            blockCompressor: snappy
          engineConfig:
            cacheSizeRatio: 0.6
            directoryForIndexes: false
            journalCompressor: snappy
          indexConfig:
            prefixCompression: true
      podDisruptionBudget:
        maxUnavailable: 1
      expose:
        enabled: true
        exposeType: LoadBalancer
        serviceAnnotations: 
          service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
          service.beta.kubernetes.io/aws-load-balancer-type: external
          service.beta.kubernetes.io/aws-load-balancer-security-groups: "eks-default,alpha-mongodb"
      resources:
        limits:
          cpu: 8
          memory: 64Gi
        requests:
          cpu: 100m
      volumeSpec:
        persistentVolumeClaim:
          resources:
            requests:
              storage: 200Gi
          storageClassName: gp3-xfs
      nonvoting:
        enabled: false
        size: 1
        affinity:
      arbiter:
        enabled: false
        size: 1
        affinity: null
  
  ## shard cluster
  sharding:
    enabled: true
    balancer:
      enabled: true
    configsvrReplSet:
      size: 3
      configuration: |
        net:
          tls:
            allowConnectionsWithoutCertificates: true
            allowInvalidHostnames: true
            allowInvalidCertificates: true
        security:
          enableEncryption: false
        systemLog:
          verbosity: 0
        replication:
          replSetName: "alpha-mongodb-test44-config"
        setParameter:
          diagnosticDataCollectionDirectoryPath: "/data/db/diagnostic-mongod.data"
          diagnosticDataCollectionPeriodMillis: 10000
      serviceAccountName: mongodb
      affinity:
        advanced:
          podAntiAffinity:
            preferredDuringSchedulingIgnoredDuringExecution:
              - podAffinityTerm:
                  labelSelector:
                    matchLabels:
                      app.kubernetes.io/instance: mongo44-main
                      app.kubernetes.io/name: percona-server-mongodb
                      app.kubernetes.io/replset: cfg
                  topologyKey: failure-domain.beta.kubernetes.io/zone
                weight: 100
              - podAffinityTerm:
                  labelSelector:
                    matchLabels:
                      app.kubernetes.io/instance: mongo44-main
                      app.kubernetes.io/name: percona-server-mongodb
                  topologyKey: kubernetes.io/hostname
                weight: 1
            requiredDuringSchedulingIgnoredDuringExecution:
              - labelSelector:
                  matchLabels:
                    app.kubernetes.io/instance: mongo44-main
                    app.kubernetes.io/name: percona-server-mongodb
                    app.kubernetes.io/replset: cfg
                topologyKey: kubernetes.io/hostname
      nodeSelector:
        group: mongodb
      tolerations:
        - effect: NoSchedule
          key: group
          operator: Equal
          value: mongodb
      livenessProbe:
        failureThreshold: 20
        initialDelaySeconds: 300
        periodSeconds: 120
        startupDelaySeconds: 7200
        timeoutSeconds: 10
      readinessProbe:
        failureThreshold: 10
        initialDelaySeconds: 60
        periodSeconds: 10
        successThreshold: 1
        timeoutSeconds: 10
      podDisruptionBudget:
        maxUnavailable: 1
      expose:
        enabled: true
        exposeType: LoadBalancer
        serviceAnnotations: 
          service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
          service.beta.kubernetes.io/aws-load-balancer-type: external
          service.beta.kubernetes.io/aws-load-balancer-security-groups: "eks-default,alpha-mongodb"
      resources:
        limits:
          memory: 2Gi
      volumeSpec:
        persistentVolumeClaim:
          resources:
            requests:
              storage: 50Gi
          storageClassName: gp3-xfs
    mongos:
      size: 3
      configuration: |
        net:
          tls:
            allowConnectionsWithoutCertificates: true
            allowInvalidHostnames: true
            allowInvalidCertificates: true
          compression :
            compressors : "disabled"
        systemLog:
          verbosity: 0
        setParameter:
          diagnosticDataCollectionDirectoryPath: "/data/db/diagnostic-mongos.data"
          diagnosticDataCollectionPeriodMillis: 10000
          taskExecutorPoolSize: 1
      serviceAccountName: mongodb
      affinity:
        advanced:
          podAntiAffinity:
            preferredDuringSchedulingIgnoredDuringExecution:
              - podAffinityTerm:
                  labelSelector:
                    matchLabels:
                      app.kubernetes.io/component: mongos
                      app.kubernetes.io/instance: mongo44-main
                      app.kubernetes.io/name: percona-server-mongodb
                  topologyKey: failure-domain.beta.kubernetes.io/zone
                weight: 100
              - podAffinityTerm:
                  labelSelector:
                    matchLabels:
                      app.kubernetes.io/instance: mongo44-main
                      app.kubernetes.io/name: percona-server-mongodb
                  topologyKey: kubernetes.io/hostname
                weight: 1
            requiredDuringSchedulingIgnoredDuringExecution:
              - labelSelector:
                  matchLabels:
                    app.kubernetes.io/component: mongos
                    app.kubernetes.io/instance: mongo44-main
                    app.kubernetes.io/name: percona-server-mongodb
                topologyKey: kubernetes.io/hostname
      nodeSelector:
        group: mongodb
      tolerations:
        - effect: NoSchedule
          key: group
          operator: Equal
          value: mongodb
      livenessProbe:
        failureThreshold: 20
        initialDelaySeconds: 300
        periodSeconds: 120
        startupDelaySeconds: 7200
        timeoutSeconds: 10
      readinessProbe:
        failureThreshold: 10
        initialDelaySeconds: 60
        periodSeconds: 10
        successThreshold: 1
        timeoutSeconds: 10
      podDisruptionBudget:
        maxUnavailable: 1
      resources:
        limits:
          memory: 2Gi
      expose:
        exposeType: LoadBalancer
        servicePerPod: true
        serviceAnnotations: 
          service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
          service.beta.kubernetes.io/aws-load-balancer-type: external
          service.beta.kubernetes.io/aws-load-balancer-security-groups: "eks-default,alpha-mongodb"

At this point the operator creates a new certificate and overwrites the existing SSL & internal SSL Secrets. This behavior seems to occur only in 1.16.0.

2024-05-31T10:32:17.774Z	INFO	createSSLByCertManager	updating cert-manager certificates	{"controller": "psmdb-controller", "object": {"name":"mongo44-main","namespace":"mongodb"}, "namespace": "mongodb", "name": "mongo44-main", "reconcileID": "5aa30853-9d2b-40f9-b345-4efe0c713f62"}
2024-05-31T10:32:17.774Z	INFO	Creating old secrets	{"controller": "psmdb-controller", "object": {"name":"mongo44-main","namespace":"mongodb"}, "namespace": "mongodb", "name": "mongo44-main", "reconcileID": "5aa30853-9d2b-40f9-b345-4efe0c713f62"}
2024-05-31T10:32:17.800Z	INFO	applying new certificates	{"controller": "psmdb-controller", "object": {"name":"mongo44-main","namespace":"mongodb"}, "namespace": "mongodb", "name": "mongo44-main", "reconcileID": "5aa30853-9d2b-40f9-b345-4efe0c713f62"}
2024-05-31T10:32:19.854Z	INFO	migrating new ca	{"controller": "psmdb-controller", "object": {"name":"mongo44-main","namespace":"mongodb"}, "namespace": "mongodb", "name": "mongo44-main", "reconcileID": "5aa30853-9d2b-40f9-b345-4efe0c713f62"}
2024-05-31T10:32:19.889Z	INFO	new ca is already in secret, deleting old secret	{"controller": "psmdb-controller", "object": {"name":"mongo44-main","namespace":"mongodb"}, "namespace": "mongodb", "name": "mongo44-main", "reconcileID": "5aa30853-9d2b-40f9-b345-4efe0c713f62"}
2024-05-31T10:32:19.907Z	INFO	new ca is already in secret, deleting old secret	{"controller": "psmdb-controller", "object": {"name":"mongo44-main","namespace":"mongodb"}, "namespace": "mongodb", "name": "mongo44-main", "reconcileID": "5aa30853-9d2b-40f9-b345-4efe0c713f62"}
2024-05-31T10:32:20.001Z	INFO	StatefulSet is changed, starting smart update	{"controller": "psmdb-controller", "object": {"name":"mongo44-main","namespace":"mongodb"}, "namespace": "mongodb", "name": "mongo44-main", "reconcileID": "5aa30853-9d2b-40f9-b345-4efe0c713f62", "name": "mongo44-main-cfg"}

The early-return check below has disappeared in 1.16.0, and I suspect this is why the existing Secrets are ignored and new certificates are applied over them.

# 1.15.0, pkg/controller/perconaservermongodb/ssl.go
func (r *ReconcilePerconaServerMongoDB) reconcileSSL(ctx context.Context, cr *api.PerconaServerMongoDB) error {
    ...
    // If both the SSL and internal SSL Secrets already exist, keep them as-is.
    if errSecret == nil && errInternalSecret == nil {
        return nil
    } else if errSecret != nil && !k8serr.IsNotFound(errSecret) {
        ...

The new spec.tls handling in 1.16.0 seems to always generate a certificate via cert-manager. I've tried changing unsafeFlags and tested various combinations of settings, but the operator always overwrites the existing Secrets.
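For reference, the tls variants I tested were along these lines (field names per my reading of the 1.16.0 CR; treat this as an illustrative sketch rather than an authoritative list of fields):

```yaml
spec:
  tls:
    mode: preferTLS
    # tried both true and false here, same result either way
    allowInvalidCertificates: true
```

None of these combinations stopped the operator from regenerating the Secrets.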

P.S. Setting tls.allowInvalidCertificates gives the same result whether it is true or false.

Same issue here.
After upgrading to operator 1.16.0, it started renewing the Secrets automatically using cert-manager.
How can we keep the old SSL Secrets after upgrading to the new operator?


Actually, the cluster went into initializing status after that…

2024-06-05T10:15:05.205Z ERROR failed to reconcile cluster {"controller": "psmdb-controller", "object": {"name":"mongodb-stage","namespace":"mongodb"}, "namespace": "mongodb", "name": "mongodb-stage", "reconcileID": "cec2b38b-3aef-4056-b89a-a930302522ef", "replset": "rs0", "error": "dial: ping mongo: server selection error: context deadline exceeded, current topology: { Type: ReplicaSetNoPrimary, Servers: [{

And all replicas are in Type: RSGhost state.

Hi, we have created [K8SPSMDB-1101] - Percona JIRA to track this. We are currently investigating the issue and will have an update shortly.
