Cross-site replication with MCS setup on GKE: issues

Hi Team,

I am trying to enable cross-site replication with an MCS setup on GKE.

Requirement:

GKE MCS cross-site replication setup

Main region (us-east1): rs0 replica set with 3 pods
Replica region (us-central1): 2 unmanaged pods which should be added as secondary members of the rs0 replica set from the main region

I followed these steps:

  1. Enabled MCS in GKE:
gcloud container hub multi-cluster-services describe
createTime: '2022-05-30T08:55:42.656444146Z'
membershipStates:
  projects/560836504570/locations/global/memberships/gkecentra1:
    state:
      code: OK
      description: Firewall successfully updated
      updateTime: '2022-06-01T10:24:16.023974778Z'
  projects/560836504570/locations/global/memberships/gkeeast1:
    state:
      code: OK
      description: Firewall successfully updated
      updateTime: '2022-06-01T10:31:50.648908280Z'
name: projects/gcp-clouddbgcp-nprd-69586/locations/global/features/multiclusterservicediscovery
resourceState:
  state: ACTIVE
spec: {}
updateTime: '2022-06-01T11:44:28.102809607Z'
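
For reference, MCS was enabled on the fleet roughly like this (a sketch; the project ID below is a placeholder):

gcloud container hub multi-cluster-services enable --project <PROJECT_ID>
# confirm both clusters are registered to the fleet
gcloud container hub memberships list --project <PROJECT_ID>
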
  2. Downloaded the Percona Operator 1.12 version and updated cr.yaml on the main region (us-east1) as below:
apiVersion: psmdb.percona.com/v1-12-0
kind: PerconaServerMongoDB
metadata:
  name: mainmg
  finalizers:
    - delete-psmdb-pods-in-order
spec:
  crVersion: 1.12.0
  image: percona/percona-server-mongodb:5.0.7-6
  imagePullPolicy: Always
  allowUnsafeConfigurations: false
  updateStrategy: SmartUpdate
  multiCluster:
    enabled: true
    DNSSuffix: svc.clusterset.local
  upgradeOptions:
    versionServiceEndpoint: https://check.percona.com
    apply: 5.0-recommended
    schedule: "0 2 * * *"
    setFCV: false
  secrets:
    users: mainmg-secrets
    encryptionKey: mainmg-mongodb-encryption-key
  pmm:
    enabled: false
    image: percona/pmm-client:2.27.0
    serverHost: monitoring-service
  replsets:
  - name: rs0
    size: 3
#    externalNodes:
#    - host: 34.124.76.90
#    - host: 34.124.76.91
#      port: 27017
#      votes: 0
#      priority: 0
#    - host: 34.124.76.92
#    # for more configuration fields refer to https://docs.mongodb.com/manual/reference/configuration-options/
    affinity:
      antiAffinityTopologyKey: "kubernetes.io/hostname"
    podDisruptionBudget:
      maxUnavailable: 1
    expose:
      enabled: true
      exposeType: LoadBalancer
    resources:
      limits:
        cpu: "300m"
        memory: "0.5G"
      requests:
        cpu: "300m"
        memory: "0.5G"
    volumeSpec:
      persistentVolumeClaim:
        resources:
          requests:
            storage: 3Gi

  sharding:
    enabled: false
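
Applied the manifest on the main cluster (assuming it is saved as cr.yaml):

kubectl apply -f cr.yaml
# wait until the cluster reports ready before moving on
kubectl get psmdb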

Backed up the secrets from the main region:

kubectl get secret mainmg-secrets -o yaml > repl.yaml
kubectl get secret mainmg-ssl -o yaml > repl1.yaml
kubectl get secret mainmg-ssl-internal -o yaml > repl2.yaml
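
(Side note: the exported Secrets still carry metadata tied to the source cluster; one way to strip it before re-applying, a sketch assuming yq v4 is installed:)

for f in repl.yaml repl1.yaml repl2.yaml; do
  # drop server-generated fields so the apply in the other cluster is clean
  yq -i 'del(.metadata.uid) | del(.metadata.resourceVersion) | del(.metadata.creationTimestamp) | del(.metadata.selfLink)' "$f"
done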

Switched to the replica region:
kubectl config use-context gke_gcp-clouddbgcp-nprd-69586_us-east1_gkecntral1

Applied the secrets on the replica region:
kubectl apply -f repl.yaml
kubectl apply -f repl1.yaml
kubectl apply -f repl2.yaml

Created the operator pod.

Created the replica cluster with unmanaged settings as below:

apiVersion: psmdb.percona.com/v1-12-0
kind: PerconaServerMongoDB
metadata:
  name: replmg
  finalizers:
    - delete-psmdb-pods-in-order
spec:
  unmanaged: true
  crVersion: 1.12.0
  image: percona/percona-server-mongodb:5.0.7-6
  imagePullPolicy: Always
  allowUnsafeConfigurations: false
  updateStrategy: OnDelete
  multiCluster:
    enabled: true
    DNSSuffix: svc.clusterset.local
  upgradeOptions:
    versionServiceEndpoint: https://check.percona.com
    apply: 5.0-recommended
    schedule: "0 2 * * *"
    setFCV: false
  secrets:
    users: mainmg-secrets
    encryptionKey: mainmg-mongodb-encryption-key
  pmm:
    enabled: false
    image: percona/pmm-client:2.27.0
    serverHost: monitoring-service
  replsets:
  - name: rs1
    size: 2
    affinity:
      antiAffinityTopologyKey: "kubernetes.io/hostname"
    podDisruptionBudget:
      maxUnavailable: 1
    expose:
      enabled: true
      exposeType: LoadBalancer
    resources:
      limits:
        cpu: "300m"
        memory: "0.5G"
      requests:
        cpu: "300m"
        memory: "0.5G"
    volumeSpec:
      persistentVolumeClaim:
        resources:
          requests:
            storage: 3Gi

  sharding:
    enabled: false
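
To verify the MCS wiring after the replica cluster comes up, the ServiceExport and ServiceImport objects can be listed (the net.gke.io API group is GKE's MCS implementation):

kubectl get serviceexports.net.gke.io
kubectl get serviceimports.net.gke.io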

kubectl get all shows the following on the main region:

kubectl get all
NAME                                                   READY   STATUS    RESTARTS   AGE
pod/mainmg-rs0-0                                       1/1     Running   0          19m
pod/mainmg-rs0-1                                       1/1     Running   0          19m
pod/mainmg-rs0-2                                       1/1     Running   0          18m
pod/percona-server-mongodb-operator-665cd69f9b-xcs5p   1/1     Running   0          103m

NAME                         TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)           AGE
service/gke-mcs-2565p6eccm   ClusterIP      10.32.9.1      <none>           27017/TCP         16m
service/gke-mcs-2tborsotni   ClusterIP      10.32.6.100    <none>           27017/TCP         17m
service/gke-mcs-856vteaqek   ClusterIP      10.32.8.197    <none>           27017/TCP         16m
service/kubernetes           ClusterIP      10.32.0.1      <none>           443/TCP           117m
service/mainmg-rs0           ClusterIP      None           <none>           27017/TCP         19m
service/mainmg-rs0-0         LoadBalancer   10.32.4.50     35.237.55.229    27017:30605/TCP   19m
service/mainmg-rs0-1         LoadBalancer   10.32.3.72     34.139.150.107   27017:32603/TCP   19m
service/mainmg-rs0-2         LoadBalancer   10.32.12.171   35.227.100.45    27017:30554/TCP   18m

NAME                                              READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/percona-server-mongodb-operator   1/1     1            1           103m

NAME                                                         DESIRED   CURRENT   READY   AGE
replicaset.apps/percona-server-mongodb-operator-665cd69f9b   1         1         1       103m

NAME                          READY   AGE
statefulset.apps/mainmg-rs0   3/3     19m
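
To confirm that clusterset DNS resolves from the main region, a throwaway pod can be used (a sketch; <NAMESPACE> stands for the namespace the replica cluster runs in):

kubectl run -it --rm dns-test --image=busybox:1.35 --restart=Never -- \
  nslookup repl1mg-rs1-0.<NAMESPACE>.svc.clusterset.local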

How do I add the unmanaged pods from the second region as secondary members of the rs0 replica set?

@Sergey_Pronin Can you please suggest what changes are needed to achieve the above?

Reference documents used:

Percona Operator for MongoDB and Kubernetes MCS: The Story of One Improvement - Percona Database Performance Blog
Using Workload Identity | Kubernetes Engine Documentation | Google Cloud
Configuring multi-cluster Services | Kubernetes Engine Documentation | Google Cloud
Set up Percona Server for MongoDB cross-site replication

Regards,
Adithya


From the replica region:

kubectl get all
NAME                                                   READY   STATUS    RESTARTS   AGE
pod/percona-server-mongodb-operator-665cd69f9b-wbkdr   1/1     Running   0          64m
pod/repl1mg-rs1-0                                      1/1     Running   0          9m22s
pod/repl1mg-rs1-1                                      1/1     Running   0          8m34s
pod/repl1mg-rs1-2                                      1/1     Running   0          7m56s

NAME                         TYPE           CLUSTER-IP    EXTERNAL-IP      PORT(S)           AGE
service/gke-mcs-2565p6eccm   ClusterIP      10.48.12.30   <none>           27017/TCP         23m
service/gke-mcs-2tborsotni   ClusterIP      10.48.5.58    <none>           27017/TCP         24m
service/gke-mcs-856vteaqek   ClusterIP      10.48.3.8     <none>           27017/TCP         23m
service/gke-mcs-9c6oejcget   ClusterIP      10.48.3.127   <none>           27017/TCP         3m53s
service/gke-mcs-ndm0buedi4   ClusterIP      10.48.3.129   <none>           27017/TCP         5m39s
service/gke-mcs-u7i10fdp5h   ClusterIP      10.48.0.179   <none>           27017/TCP         2m10s
service/kubernetes           ClusterIP      10.48.0.1     <none>           443/TCP           133m
service/repl1mg-rs1          ClusterIP      None          <none>           27017/TCP         9m21s
service/repl1mg-rs1-0        LoadBalancer   10.48.1.223   35.225.186.238   27017:30873/TCP   9m18s
service/repl1mg-rs1-1        LoadBalancer   10.48.5.184   34.136.50.251    27017:32318/TCP   8m33s
service/repl1mg-rs1-2        LoadBalancer   10.48.1.71    34.122.169.58    27017:30929/TCP   7m53s

NAME                                              READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/percona-server-mongodb-operator   1/1     1            1           64m

NAME                                                         DESIRED   CURRENT   READY   AGE
replicaset.apps/percona-server-mongodb-operator-665cd69f9b   1         1         1       64m

NAME                           READY   AGE
statefulset.apps/repl1mg-rs1   3/3     9m24s
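
The gke-mcs-* ClusterIP services in the output above are the derived Services that GKE creates for each ServiceImport; they can be mapped back to their source services (a sketch, assuming the main region's per-pod service mainmg-rs0-0 has been exported):

kubectl get serviceimports.net.gke.io
kubectl describe serviceimports.net.gke.io mainmg-rs0-0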


Hey @Adithya,

Thanks for trying out MCS with our Operator.

Once you have MCS configured and enabled, our Operator will automatically create all the necessary ServiceExport objects. You can query them with kubectl get serviceexport.
If MCS is in place, GKE will create the corresponding ServiceImport objects.

You need to use these ServiceImport names in the externalNodes section on the main site.
So in your main region you will have something like this:

  - name: rs0
    size: 3
    externalNodes:
    - host: my-dr-site-rs0-0.<NAMESPACE>.svc.clusterset.local
      priority: 0
      votes: 0
    - host: my-dr-site-rs0-1.<NAMESPACE>.svc.clusterset.local
      priority: 0
      votes: 0
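
Once applied, you can check from the main site that the external members joined as secondaries (a sketch; the clusterAdmin password comes from the mainmg-secrets Secret):

kubectl exec -it mainmg-rs0-0 -c mongod -- \
  mongo -u clusterAdmin -p '<CLUSTER_ADMIN_PASSWORD>' --authenticationDatabase admin \
  --eval 'rs.status().members.forEach(function(m) { print(m.name, m.stateStr) })'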

Please let me know if it helps.
