Description:
I have deployed 3 replicas of the MongoDB operator in a specific namespace, and I have deployed a Percona MongoDB replica set of size 3.
My requirement is that I have different clusters in different regions, let's say Chennai and Hyderabad. I want to deploy only slaves (secondaries) in Hyderabad and the master (primary) plus slaves in Chennai, so that I have cross-DC replication.
How can I achieve this?
I used the Helm charts to deploy the operator and the database.
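For reference, my current understanding from the cross-site replication page in the Percona docs is that the Hyderabad side would have to run as a separate, unmanaged cluster, roughly along these lines (only a sketch of what I think is needed; the size and the LoadBalancer expose type are my assumptions):

# hyderabad side (replica site): the operator deploys the pods but does not
# initiate or reconfigure the replica set itself
unmanaged: true
updateStrategy: OnDelete        # the docs appear to require OnDelete for unmanaged clusters
upgradeOptions:
  apply: Never
replsets:
  - name: rs0                   # must match the replset name used in chennai
    size: 2
    expose:
      enabled: true             # members must be reachable from the chennai cluster
      exposeType: LoadBalancer  # assumption; NodePort or an ingress may also work

I assume the users/keyfile/TLS secrets would also need to be copied from the Chennai cluster so that both sides share the same credentials, but I have not verified this.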
Steps to Reproduce:
1. Deploy the psmdb-operator Helm chart with the operator values file below.
2. Deploy the database Helm chart with the database values file below.
3. Try to spread the resulting replica set across the Chennai and Hyderabad clusters as described above.
Version:
1.15.0
Logs:
Here is my values file for psmdb-operator.
# Default values for psmdb-operator.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

replicaCount: 3

image:
  repository: personal-repo/percona-mongo/percona-server-mongodb-operator
  tag: 1.15.0
  pullPolicy: IfNotPresent

# disableTelemetry: according to
# https://docs.percona.com/percona-operator-for-mongodb/telemetry.html
# this is how you can disable telemetry collection
# default is false which means telemetry will be collected
disableTelemetry: false

# set if you want to specify a namespace to watch
# defaults to `.Release.namespace` if left blank
# watchNamespace:

# set if operator should be deployed in cluster wide mode. defaults to false
# watchAllNamespaces: false
watchAllNamespaces: true

# rbac: settings for deployer RBAC creation
rbac:
  # rbac.create: if false RBAC resources should be in place
  create: true

# serviceAccount: settings for Service Accounts used by the deployer
serviceAccount:
  # serviceAccount.create: Whether to create the Service Accounts or not
  create: true

podAnnotations: {}
  # prometheus.io/scrape: "true"
  # prometheus.io/port: "8080"

podSecurityContext: {}
  # runAsNonRoot: true
  # runAsUser: 2
  # runAsGroup: 2
  # fsGroup: 2
  # fsGroupChangePolicy: "OnRootMismatch"

securityContext: {}
  # allowPrivilegeEscalation: false
  # capabilities:
  #   drop:
  #   - ALL
  # seccompProfile:
  #   type: RuntimeDefault

# set if you want to use a different operator name
# defaults to `percona-server-mongodb-operator`
# operatorName:

imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""

env:
  resyncPeriod: 5s

resources: {}
  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases chances charts run on environments with little
  # resources, such as Minikube. If you do want to specify resources, uncomment the following
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  # limits:
  #   cpu: 100m
  #   memory: 128Mi
  # requests:
  #   cpu: 100m
  #   memory: 128Mi

nodeSelector: {}
tolerations: []

# affinity: {}
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - topologyKey: kubernetes.io/hostname

# logStructured: false
# logLevel: "INFO"
logStructured: true
logLevel: "INFO"
Here is my values file for percona-mongodb.
finalizers:
  - delete-psmdb-pods-in-order

nameOverride: ""
fullnameOverride: ""

crVersion: 1.15.0
pause: false
unmanaged: false
allowUnsafeConfigurations: false

multiCluster:
  enabled: false

updateStrategy: SmartUpdate
upgradeOptions:
  versionServiceEndpoint: https://check.percona.com
  apply: disabled
  schedule: "0 2 * * *"
  setFCV: false

image:
  repository: personal-repo/percona-mongo/percona-server-mongodb
  tag: 6.0.9-7
imagePullPolicy: IfNotPresent

secrets: {}

pmm:
  enabled: false
  image:
    repository: percona/pmm-client
    tag: 2.39.0
  serverHost: monitoring-service

replsets:
  - name: rs0
    size: 3
    affinity:
      antiAffinityTopologyKey: "kubernetes.io/hostname"
    podDisruptionBudget:
      maxUnavailable: 1
    expose:
      enabled: false
      exposeType: ClusterIP
    resources:
      limits:
        cpu: "300m"
        memory: "0.5G"
      requests:
        cpu: "300m"
        memory: "0.5G"
    volumeSpec:
      pvc:
        storageClassName: hdd-jbod-lvm-ext4-0
        resources:
          requests:
            storage: 3Gi
    nonvoting:
      enabled: false
      size: 3
      affinity:
        antiAffinityTopologyKey: "kubernetes.io/hostname"
      podDisruptionBudget:
        maxUnavailable: 1
      resources:
        limits:
          cpu: "300m"
          memory: "0.5G"
        requests:
          cpu: "300m"
          memory: "0.5G"
      volumeSpec:
        pvc:
          storageClassName: hdd-jbod-lvm-ext4-0
          resources:
            requests:
              storage: 3Gi
    arbiter:
      enabled: false
      size: 1
      affinity:
        antiAffinityTopologyKey: "kubernetes.io/hostname"

sharding:
  enabled: false
  balancer:
    enabled: false
  configrs:
    size: 3
    affinity:
      antiAffinityTopologyKey: "kubernetes.io/hostname"
    podDisruptionBudget:
      maxUnavailable: 1
    expose:
      enabled: false
      exposeType: ClusterIP
    resources:
      limits:
        cpu: "300m"
        memory: "0.5G"
      requests:
        cpu: "300m"
        memory: "0.5G"
    volumeSpec:
      pvc:
        storageClassName: hdd-jbod-lvm-ext4-0
        resources:
          requests:
            storage: 3Gi
  mongos:
    size: 2
    affinity:
      antiAffinityTopologyKey: "kubernetes.io/hostname"
    podDisruptionBudget:
      maxUnavailable: 1
    resources:
      limits:
        cpu: "300m"
        memory: "0.5G"
      requests:
        cpu: "300m"
        memory: "0.5G"
    expose:
      exposeType: ClusterIP

backup:
  enabled: false
  image:
    repository: personal-repo/percona-mongo/percona-backup-mongodb
    tag: 2.3.0
  serviceAccountName: percona-server-mongodb-operator
  storages:
  pitr:
    enabled: false
    oplogOnly: false
  tasks:
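If it helps, on the Chennai (main) side I was planning to extend the rs0 entry above with the externalNodes field from the CR reference, something like this (the hostnames are placeholders for whatever the Hyderabad services resolve to, and the votes/priority values are my guesses):

replsets:
  - name: rs0
    size: 3
    expose:
      enabled: true                         # so the hyderabad members can reach chennai as well
      exposeType: LoadBalancer              # assumption, same as on the hyderabad side
    externalNodes:
      - host: rs0-0.hyderabad.example.com   # placeholder hostname
        port: 27017
        votes: 1
        priority: 0                         # my guess: keep the primary in chennai
      - host: rs0-1.hyderabad.example.com   # placeholder hostname
        port: 27017
        votes: 1
        priority: 0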
Expected Result:
I want a single replica set of 5 MongoDB members, of which 1 master (primary) and 2 slaves are in the Chennai region, and the remaining 2 slaves are in the Hyderabad region.
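Put differently, the member layout I am after is the following (purely an illustration, not an operator field; the votes/priority values are my guesses for pinning the primary to Chennai while keeping an odd number of voters):

# desired rs0 topology: the chennai members come from the managed
# statefulset, the hyderabad members would be the externalNodes above
- host: rs0-0.chennai       # intended primary
  votes: 1
  priority: 2
- host: rs0-1.chennai
  votes: 1
  priority: 1
- host: rs0-2.chennai
  votes: 1
  priority: 1
- host: rs0-0.hyderabad
  votes: 1
  priority: 0               # priority 0 members can never be elected primary
- host: rs0-1.hyderabad
  votes: 1
  priority: 0               # priority 0 members can never be elected primary

With five voting members there is still an odd number of votes even though the two sites are uneven.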
Actual Result:
Additional Information: