Hi Team,
I am trying to enable cross-site replication with an MCS setup on GKE.
Requirement: GKE MCS cross-site replication setup
- Main region (us-east1): rs0 replica set with 3 pods
- Replica region (us-central1): 2 unmanaged pods to be added as secondary members of the main region's rs0 replica set
Steps followed:
- Enabled MCS in GKE:
gcloud container hub multi-cluster-services describe
createTime: '2022-05-30T08:55:42.656444146Z'
membershipStates:
  projects/560836504570/locations/global/memberships/gkecentra1:
    state:
      code: OK
      description: Firewall successfully updated
      updateTime: '2022-06-01T10:24:16.023974778Z'
  projects/560836504570/locations/global/memberships/gkeeast1:
    state:
      code: OK
      description: Firewall successfully updated
      updateTime: '2022-06-01T10:31:50.648908280Z'
name: projects/gcp-clouddbgcp-nprd-69586/locations/global/features/multiclusterservicediscovery
resourceState:
  state: ACTIVE
spec: {}
updateTime: '2022-06-01T11:44:28.102809607Z'
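For context, MCS discovery works by pairing a ServiceExport in the exporting cluster with an automatically created ServiceImport (the gke-mcs-* services visible further down) in the importing clusters; the exported service then resolves as <service>.<namespace>.svc.clusterset.local across the fleet. The operator creates these exports itself when multiCluster.enabled is true, but a hand-written one is a minimal sketch like this (the name and namespace here are illustrative):
apiVersion: net.gke.io/v1
kind: ServiceExport
metadata:
  # must match the name and namespace of the Service being exported
  name: mainmg-rs0
  namespace: psmdb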
- Downloaded Percona Operator version 1.12 and updated cr.yaml on the main region (us-east1) as below:
apiVersion: psmdb.percona.com/v1-12-0
kind: PerconaServerMongoDB
metadata:
  name: mainmg
  finalizers:
    - delete-psmdb-pods-in-order
spec:
  crVersion: 1.12.0
  image: percona/percona-server-mongodb:5.0.7-6
  imagePullPolicy: Always
  allowUnsafeConfigurations: false
  updateStrategy: SmartUpdate
  multiCluster:
    enabled: true
    DNSSuffix: svc.clusterset.local
  upgradeOptions:
    versionServiceEndpoint: https://check.percona.com
    apply: 5.0-recommended
    schedule: "0 2 * * *"
    setFCV: false
  secrets:
    users: mainmg-secrets
    encryptionKey: mainmg-mongodb-encryption-key
  pmm:
    enabled: false
    image: percona/pmm-client:2.27.0
    serverHost: monitoring-service
  replsets:
    - name: rs0
      size: 3
      # externalNodes:
      # - host: 34.124.76.90
      # - host: 34.124.76.91
      #   port: 27017
      #   votes: 0
      #   priority: 0
      # - host: 34.124.76.92
      # for more configuration fields refer to https://docs.mongodb.com/manual/reference/configuration-options/
      affinity:
        antiAffinityTopologyKey: "kubernetes.io/hostname"
      podDisruptionBudget:
        maxUnavailable: 1
      expose:
        enabled: true
        exposeType: LoadBalancer
      resources:
        limits:
          cpu: "300m"
          memory: "0.5G"
        requests:
          cpu: "300m"
          memory: "0.5G"
      volumeSpec:
        persistentVolumeClaim:
          resources:
            requests:
              storage: 3Gi
  sharding:
    enabled: false
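The commented externalNodes block above is where the replica-region members get listed. Since MCS is enabled, a sketch of that block using clusterset DNS names instead of public IPs could look like the one below (the replmg-rs0-* service names and the psmdb namespace are assumptions; note also that mongod only accepts members whose replSetName matches, so the replica-region replset would need to be named rs0 as well for its pods to join rs0):
  replsets:
    - name: rs0
      size: 3
      externalNodes:
        # hostnames assume the replica-region per-pod services are
        # exported via MCS in namespace "psmdb" (illustrative values)
        - host: replmg-rs0-0.psmdb.svc.clusterset.local
          port: 27017
          votes: 0
          priority: 0
        - host: replmg-rs0-1.psmdb.svc.clusterset.local
          port: 27017
          votes: 0
          priority: 0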
- Took a backup of the secrets on the main region:
kubectl get secret mainmg-secrets -o yaml > repl.yaml
kubectl get secret mainmg-ssl -o yaml > repl1.yaml
kubectl get secret mainmg-ssl-internal -o yaml > repl2.yaml
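Note that the exported YAML still carries cluster-specific metadata (uid, resourceVersion, creationTimestamp, ownerReferences) that is best stripped before applying it to the other cluster; a sketch for the first secret, assuming yq v4 is available:
kubectl get secret mainmg-secrets -o yaml \
  | yq 'del(.metadata.uid, .metadata.resourceVersion, .metadata.creationTimestamp, .metadata.ownerReferences)' \
  > repl.yaml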
- Switched to the replica region:
kubectl config use-context gke_gcp-clouddbgcp-nprd-69586_us-east1_gkecntral1
- Applied the secrets on the replica region:
kubectl apply -f repl.yaml
kubectl apply -f repl1.yaml
kubectl apply -f repl2.yaml
- Created the operator pod.
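(For reference, a sketch of the usual deployment command; the path below assumes the default layout of the v1.12 operator repository:)
kubectl apply -f deploy/bundle.yaml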
- Created the replica cluster with unmanaged settings as below:
apiVersion: psmdb.percona.com/v1-12-0
kind: PerconaServerMongoDB
metadata:
  name: replmg
  finalizers:
    - delete-psmdb-pods-in-order
spec:
  unmanaged: true
  crVersion: 1.12.0
  image: percona/percona-server-mongodb:5.0.7-6
  imagePullPolicy: Always
  allowUnsafeConfigurations: false
  updateStrategy: OnDelete
  multiCluster:
    enabled: true
    DNSSuffix: svc.clusterset.local
  upgradeOptions:
    versionServiceEndpoint: https://check.percona.com
    apply: 5.0-recommended
    schedule: "0 2 * * *"
    setFCV: false
  secrets:
    users: mainmg-secrets
    encryptionKey: mainmg-mongodb-encryption-key
  pmm:
    enabled: false
    image: percona/pmm-client:2.27.0
    serverHost: monitoring-service
  replsets:
    - name: rs1
      size: 2
      affinity:
        antiAffinityTopologyKey: "kubernetes.io/hostname"
      podDisruptionBudget:
        maxUnavailable: 1
      expose:
        enabled: true
        exposeType: LoadBalancer
      resources:
        limits:
          cpu: "300m"
          memory: "0.5G"
        requests:
          cpu: "300m"
          memory: "0.5G"
      volumeSpec:
        persistentVolumeClaim:
          resources:
            requests:
              storage: 3Gi
  sharding:
    enabled: false
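Before wiring the two sites together, it is worth confirming that the replica-region services actually resolve from the main region over MCS; a quick check from a throwaway pod (the service name and namespace are illustrative):
kubectl run -it --rm mcs-check --image=busybox --restart=Never -- \
  nslookup replmg-rs1-0.psmdb.svc.clusterset.local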
- kubectl get all on the main region shows:
kubectl get all
NAME                                                    READY   STATUS    RESTARTS   AGE
pod/mainmg-rs0-0                                        1/1     Running   0          19m
pod/mainmg-rs0-1                                        1/1     Running   0          19m
pod/mainmg-rs0-2                                        1/1     Running   0          18m
pod/percona-server-mongodb-operator-665cd69f9b-xcs5p    1/1     Running   0          103m

NAME                         TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)           AGE
service/gke-mcs-2565p6eccm   ClusterIP      10.32.9.1      <none>           27017/TCP         16m
service/gke-mcs-2tborsotni   ClusterIP      10.32.6.100    <none>           27017/TCP         17m
service/gke-mcs-856vteaqek   ClusterIP      10.32.8.197    <none>           27017/TCP         16m
service/kubernetes           ClusterIP      10.32.0.1      <none>           443/TCP           117m
service/mainmg-rs0           ClusterIP      None           <none>           27017/TCP         19m
service/mainmg-rs0-0         LoadBalancer   10.32.4.50     35.237.55.229    27017:30605/TCP   19m
service/mainmg-rs0-1         LoadBalancer   10.32.3.72     34.139.150.107   27017:32603/TCP   19m
service/mainmg-rs0-2         LoadBalancer   10.32.12.171   35.227.100.45    27017:30554/TCP   18m

NAME                                              READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/percona-server-mongodb-operator   1/1     1            1           103m

NAME                                                         DESIRED   CURRENT   READY   AGE
replicaset.apps/percona-server-mongodb-operator-665cd69f9b   1         1         1       103m

NAME                          READY   AGE
statefulset.apps/mainmg-rs0   3/3     19m
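For later verification, once external members are added, the membership rs0 actually sees can be listed from the main region's primary; a sketch using the cluster-admin credentials from the users secret (key names per the operator's defaults; TLS options may be needed depending on settings):
ADMIN_USER=$(kubectl get secret mainmg-secrets -o jsonpath='{.data.MONGODB_CLUSTER_ADMIN_USER}' | base64 -d)
ADMIN_PASS=$(kubectl get secret mainmg-secrets -o jsonpath='{.data.MONGODB_CLUSTER_ADMIN_PASSWORD}' | base64 -d)
kubectl exec -it mainmg-rs0-0 -- mongo admin -u "$ADMIN_USER" -p "$ADMIN_PASS" \
  --eval 'rs.status().members.forEach(function(m) { print(m.name, m.stateStr) })'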
How can the unmanaged pods from the second region be added as secondary members of the rs0 replica set?
@Sergey_Pronin Can you please suggest what changes are needed to achieve the above?
Reference documents used:
- Percona Operator for MongoDB and Kubernetes MCS: The Story of One Improvement - Percona Database Performance Blog
- Using Workload Identity | Kubernetes Engine Documentation | Google Cloud
- Configuring multi-cluster Services | Kubernetes Engine Documentation | Google Cloud
- Set up Percona Server for MongoDB cross-site replication
Regards,
Adithya