Security: enableEncryption is always activated when using the psmdb-db Helm chart

Background
When using the psmdb-db Helm chart, enableEncryption is always activated.
Details:
Helm chart version: 1.12.4
Operator version: 1.12.0 (deployed with Helm)

In the rs0 pod description you can see the following (note the --enableEncryption argument):

Containers:
  mongod:
    Container ID:  docker://77968f06e8759c2bf0cd70001ce08593aee14bfff7ec25a49042d10791cec1db
    Image:         749425658711.dkr.ecr.us-east-1.amazonaws.com/docker.io/percona/percona-server-mongodb:4.4.15-15
    Image ID:      docker-pullable://749425658711.dkr.ecr.us-east-1.amazonaws.com/docker.io/percona/percona-server-mongodb@sha256:f768890c0a22cee1e50cd485e70dcf79294a467694051b8261c9d7ed20f9e046
    Port:          27017/TCP
    Host Port:     0/TCP
    Command:
      /data/db/ps-entry.sh
    Args:
      --bind_ip_all
      --auth
      --dbpath=/data/db
      --port=27017
      --replSet=rs0
      --storageEngine=wiredTiger
      --relaxPermChecks
      --sslAllowInvalidCertificates
      --clusterAuthMode=keyFile
      --keyFile=/etc/mongodb-secrets/mongodb-key
      --shardsvr
      --slowms=0
      --profile=1
      --enableEncryption
      --encryptionKeyFile=/etc/mongodb-encryption/encryption-key
      --wiredTigerCacheSizeGB=13.47
      --wiredTigerCollectionBlockCompressor=snappy
      --wiredTigerJournalCompressor=snappy
      --wiredTigerIndexPrefixCompression=true
      --config=/etc/mongodb-config/mongod.conf

Note: looking at the operator code in percona-server-mongodb-operator/pkg/psmdb/statefulset.go

func isEncryptionEnabled(cr *api.PerconaServerMongoDB, replset *api.ReplsetSpec) (bool, error) {
	if cr.CompareVersion("1.12.0") >= 0 {
		enabled, err := replset.Configuration.IsEncryptionEnabled()
		// ...
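The CompareVersion guard above suggests that unless the custom resource declares crVersion 1.12.0 or later, the security section of the replset configuration is never even consulted. A hedged sketch of the CR fragment that should satisfy that check (field names as in the PSMDB CRD; the cluster name is illustrative):

```yaml
apiVersion: psmdb.percona.com/v1
kind: PerconaServerMongoDB
metadata:
  name: mongodb-raw-cluster
spec:
  # Without this field the CompareVersion("1.12.0") check above fails,
  # so IsEncryptionEnabled() is never called for the replset configuration.
  crVersion: 1.12.0
```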

we can see that the replset configuration section should be able to control this via the security field. Somehow this does not happen, even after editing the generated psmdb YAML to add crVersion: 1.12.0 (which is also missing from the chart output; another Helm chart bug).

The workaround was to create a PSMDB manifest directly (not via the Helm values) that includes the mongod.security section with the enableEncryption field.
Note that I had to add the crVersion field as well.
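For reference, a minimal sketch of the workaround manifest, assuming the rest of the chart-generated CR is kept as-is (the cluster name mirrors the Helm values below and is illustrative):

```yaml
apiVersion: psmdb.percona.com/v1
kind: PerconaServerMongoDB
metadata:
  name: mongodb-raw-cluster
spec:
  crVersion: 1.12.0   # had to be added manually; missing from the chart output
  mongod:
    security:
      enableEncryption: false
```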

Below are the simulation details.

To reproduce, you can use the following Helm values:

# Default values for psmdb-cluster.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

# Platform type: kubernetes, openshift
platform: kubernetes

# Cluster DNS Suffix
clusterServiceDNSSuffix: svc.cluster.local
# clusterServiceDNSMode: "Internal"

fullnameOverride: mongodb-raw-cluster
finalizers:
## Set this if you want that operator deletes the primary pod last
  - delete-psmdb-pods-in-order
## Set this if you want to delete database persistent volumes on cluster deletion
  - delete-psmdb-pvc

pause: false
unmanaged: false
allowUnsafeConfigurations: true
multiCluster:
  enabled: false
  # DNSSuffix: svc.clusterset.local
updateStrategy: Never
upgradeOptions:
  versionServiceEndpoint: https://check.percona.com
  apply: 5.0-recommended
  schedule: "0 2 * * *"
  setFCV: false

image:
  repository: percona/percona-server-mongodb
  tag: 4.4.15-15

imagePullPolicy: Always
# imagePullSecrets: []
# tls:
#   # 90 days in hours
#   certValidityDuration: 2160h
secrets:
  users: mongodb-cluster-users

  # encryptionKey: mongodb-cluster-encryption-key
  # Remove this to disable at-rest encryption by the service.

  # If you set users secret here, it will not be constructed from the values at the
  # bottom of this file, but the operator will use existing one or generate random values
  # users: my-cluster-name-secrets
  # encryptionKey: my-cluster-name-mongodb-encryption-key

pmm:
  enabled: true
  image:
    repository: percona/pmm-client
    tag: 2.29.1
  serverHost: pmm-monitoring-service.percona-monitoring.svc.cluster.local

replsets:
  - name: rs0
    size: 1
    configuration: |
      security:
        enableEncryption: false
      operationProfiling:
        mode: slowOp
        slowOpSampleRate: 0.01
        slowOpThresholdMs: 100
      setParameter:
        ttlMonitorSleepSecs: 60
        wiredTigerConcurrentReadTransactions: 128
        wiredTigerConcurrentWriteTransactions: 128
      systemLog:
        verbosity: 1

    # runtimeClassName: image-rc
    storage:
      engine: wiredTiger
      wiredTiger:
        engineConfig:
          cacheSizeRatio: 0.5
          directoryForIndexes: false
          journalCompressor: snappy
        collectionConfig:
          blockCompressor: snappy
        indexConfig:
          prefixCompression: true


    podDisruptionBudget:
      maxUnavailable: 1
    expose:
      enabled: true
      exposeType: ClusterIP
      # loadBalancerSourceRanges:
      #   - 10.0.0.0/8
      # serviceAnnotations:
      #   service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    nonvoting:
      enabled: false
      size: 0

    arbiter:
      enabled: false
      size: 0
    volumeSpec:
      pvc:
        storageClassName: "gp3"
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 20Gi


  - name: rs1
    size: 1
    configuration: |
      security:
        enableEncryption: false
      operationProfiling:
        mode: slowOp
        slowOpSampleRate: 0.01
        slowOpThresholdMs: 100
      setParameter:
        ttlMonitorSleepSecs: 60
        wiredTigerConcurrentReadTransactions: 128
        wiredTigerConcurrentWriteTransactions: 128
      systemLog:
        verbosity: 1

    storage:
      engine: wiredTiger
      wiredTiger:
        engineConfig:
          cacheSizeRatio: 0.5
          directoryForIndexes: false
          journalCompressor: snappy
        collectionConfig:
          blockCompressor: snappy
        indexConfig:
          prefixCompression: true


    podDisruptionBudget:
      maxUnavailable: 1
    expose:
      enabled: true
      exposeType: ClusterIP
      # loadBalancerSourceRanges:
      #   - 10.0.0.0/8
      # serviceAnnotations:
      #   service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    nonvoting:
      enabled: false
      size: 0

    arbiter:
      enabled: false
      size: 0

    volumeSpec:

      pvc:
        storageClassName: "gp3"
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 20Gi

sharding:
  enabled: true

  configrs:
    size: 1

    configuration: |
      operationProfiling:
        mode: slowOp
      systemLog:
        verbosity: 1


    expose:
      enabled: true
      exposeType: ClusterIP

    resources:
      limits:
        cpu: 2048m
        memory: 2G
      requests:
        cpu: 1024m
        memory: 1G
    volumeSpec:

      pvc:
        storageClassName: gp3
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 20Gi

  mongos:
    size: 1
    configuration: |
      systemLog:
        verbosity: 1

    expose:
       exposeType: ClusterIP
       servicePerPod: true
       loadBalancerSourceRanges:
         - 10.0.0.0/8


backup:
  enabled: false
  image:
    repository: percona/percona-backup-mongodb
    tag: 1.8.1
  serviceAccountName: percona-server-mongodb-operator

  storages:
    s3-us-east:
      s3:
        bucket: rawdb-backup-data-bucket-qa
        credentialsSecret: mongodb-cluster-backup-s3
        prefix: data/pbm/backup
        region: us-east-1
      type: s3
  tasks:
    - compressionLevel: 6
      compressionType: gzip
      enabled: true
      keep: 4
      name: s3-us-east
      schedule: 1 1 * * *
      storageName: s3-us-east

  pitr:
    enabled: false
    oplogSpanMin: 10
    compressionType: gzip
    compressionLevel: 6