Hello Team,
we have adjusted the configuration of our clusters according to the release notes for v1.12.
However, some of the settings that were moved do not work as we expected.
The release notes say: "The spec.mongod section is removed from the Custom Resource configuration. Starting from now, mongod options should be passed to Replica Sets using spec.replsets.[].configuration key…"
For example, we moved operationProfiling from spec.mongod to spec.replsets[0].configuration, but without success: the desired operation profiling settings are not applied.
Using spec.mongod as before, everything works as expected.
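In short, this is the change we made, trimmed to the relevant keys (the full rendered manifest follows further down):

Old (works):

  spec:
    mongod:
      operationProfiling:
        mode: slowOp
        slowOpThresholdMs: 100
        rateLimit: 100

New (no effect for us):

  spec:
    replsets:
      - name: rs0
        configuration: |
          operationProfiling:
            mode: slowOp
            slowOpThresholdMs: 100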
If we take a look at the v1.12 CRDs in the Helm Chart Repo, we see that the spec.mongod section is still there and has not been removed as advertised.
What is the right way to configure these settings now?
Here is a rendered version of one of our clusters:
apiVersion: psmdb.percona.com/v1-12-0
kind: PerconaServerMongoDB
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"psmdb.percona.com/v1-12-0","kind":"PerconaServerMongoDB"}
  name: test-psmdb-db
  finalizers:
    - delete-psmdb-pods-in-order
    - delete-psmdb-pvc
spec:
  pause: false
  unmanaged: false
  image: "percona/percona-server-mongodb:4.4.8-9"
  imagePullPolicy: "Always"
  multiCluster:
    enabled: false
  secrets:
    users: test-psmdb-db-secrets
    encryptionKey: test-psmdb-db-mongodb-encryption-key
  updateStrategy: SmartUpdate
  upgradeOptions:
    versionServiceEndpoint: https://check.percona.com
    apply: 4.4-recommended
    schedule: 0 2 * * *
    setFCV: false
  pmm:
    enabled: false
    image: "percona/pmm-client:2.28.0"
    serverHost: monitoring-service
  replsets:
    - name: rs0
      size: 3
      configuration: |
        operationProfiling:
          mode: slowOp
          slowOpThresholdMs: 100
      affinity:
        antiAffinityTopologyKey: kubernetes.io/hostname
      nodeSelector:
        dedicated: database
      tolerations:
        - effect: NoSchedule
          key: dedicated
          operator: Equal
          value: database
      livenessProbe:
        failureThreshold: 4
        initialDelaySeconds: 60
        periodSeconds: 30
        startupDelaySeconds: 7200
        successThreshold: 1
        timeoutSeconds: 5
      readinessProbe:
        failureThreshold: 8
        initialDelaySeconds: 10
        periodSeconds: 5
        successThreshold: 1
        timeoutSeconds: 5
      storage:
        engine: wiredTiger
        inMemory:
          engineConfig:
            inMemorySizeRatio: 0.5
        wiredTiger:
          collectionConfig:
            blockCompressor: snappy
          engineConfig:
            cacheSizeRatio: 0.5
            directoryForIndexes: false
            journalCompressor: snappy
          indexConfig:
            prefixCompression: true
      podDisruptionBudget:
        maxUnavailable: 1
      expose:
        enabled: false
        exposeType: ClusterIP
      nonvoting:
        enabled: false
        size: 3
        affinity:
          antiAffinityTopologyKey: kubernetes.io/hostname
        podDisruptionBudget:
          maxUnavailable: 1
      resources:
        limits:
          cpu: 300m
          memory: 0.5G
        requests:
          cpu: 300m
          memory: 0.5G
      volumeSpec:
        persistentVolumeClaim:
          resources:
            requests:
              storage: 10Gi
  mongod:
    setParameter:
      ttlMonitorSleepSecs: 60
      wiredTigerConcurrentReadTransactions: 128
      wiredTigerConcurrentWriteTransactions: 128
    storage:
      engine: wiredTiger
      inMemory:
        engineConfig:
          inMemorySizeRatio: 0.9
      wiredTiger:
        engineConfig:
          cacheSizeRatio: 0.5
          directoryForIndexes: false
          journalCompressor: snappy
        collectionConfig:
          blockCompressor: snappy
        indexConfig:
          prefixCompression: true
    operationProfiling:
      mode: slowOp
      slowOpThresholdMs: 100
      rateLimit: 100
  backup:
    enabled: true
    image: "percona/percona-server-mongodb-operator:1.11.0-backup"
    serviceAccountName: percona-server-mongodb-operator
    storages:
      s11-dev-backup:
        s3:
          bucket: psmdb-backup
          credentialsSecret: test-psmdb-db-backup-secret
          endpointUrl: https://xxx
          region: us-east-1
        type: s3
    pitr:
      enabled: true
    tasks:
      - compressionType: gzip
        enabled: true
        keep: 7
        name: daily-dev
        schedule: 0 18 * * *
        storageName: s11-dev-backup
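For completeness: based on our reading of the release notes, we would expect the remaining spec.mongod options to be expressed as plain mongod configuration file options under the same configuration key, roughly like the sketch below. This is only our assumption; in particular, we are not sure how the operator-specific ratio options (cacheSizeRatio, inMemorySizeRatio) are supposed to translate, so we left them out here.

  spec:
    replsets:
      - name: rs0
        configuration: |
          operationProfiling:
            mode: slowOp
            slowOpThresholdMs: 100
          setParameter:
            ttlMonitorSleepSecs: 60
            wiredTigerConcurrentReadTransactions: 128
            wiredTigerConcurrentWriteTransactions: 128
          storage:
            wiredTiger:
              collectionConfig:
                blockCompressor: snappy
              engineConfig:
                journalCompressor: snappy
              indexConfig:
                prefixCompression: true

Is that the intended approach in v1.12, or is there a different mechanism for these options?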
We would be very happy to be enlightened.
best regards,
Ricardo