Unable to create 2 replica sets in the same namespace with a single operator

Hi Team,

My CR YAML looks like this:

apiVersion: psmdb.percona.com/v1-9-0
kind: PerconaServerMongoDB
metadata:
  name: cdbmg1
spec:
  crVersion: 1.9.0
  image: percona/percona-server-mongodb:4.4.6-8
  imagePullPolicy: Always
  allowUnsafeConfigurations: false
  updateStrategy: SmartUpdate
  secrets:
    users: cdbmg1-secrets
  pmm:
    enabled: true
    image: percona/pmm-client:2.18.0
    serverHost: 3.142.242.229
  replsets:

  - name: rs1
    size: 3
    configuration: |
      operationProfiling:
        mode: slowOp
      systemLog:
        verbosity: 1
    storage:
      engine: wiredTiger
      inMemory:
        engineConfig:
          inMemorySizeRatio: 0.9
      wiredTiger:
        engineConfig:
          cacheSizeRatio: 0.5
          directoryForIndexes: false
          journalCompressor: snappy
        collectionConfig:
          blockCompressor: snappy
        indexConfig:
          prefixCompression: true
    affinity:
      antiAffinityTopologyKey: "kubernetes.io/hostname"
    podDisruptionBudget:
      maxUnavailable: 1
    expose:
      enabled: true
      exposeType: LoadBalancer
      serviceAnnotations:
        service.beta.kubernetes.io/aws-load-balancer-name: "cdbdns"
        service.beta.kubernetes.io/aws-load-balancer-type: nlb-ip

    arbiter:
      enabled: false
      size: 1
      affinity:
        antiAffinityTopologyKey: "kubernetes.io/hostname"
    resources:
      limits:
        cpu: "400m"
        memory: "1G"
      requests:
        cpu: "400m"
        memory: "1G"
    volumeSpec:
      persistentVolumeClaim:
        storageClassName: gp3
        resources:
          requests:
            storage: 100Gi

  sharding:
    enabled: false


  mongod:
    net:
      port: 27048
      hostPort: 0
    security:
      redactClientLogData: false
      enableEncryption: true
      encryptionKeySecret: cdbmg1-mongodb-encryption-key
      encryptionCipherMode: AES256-CBC
    setParameter:
      ttlMonitorSleepSecs: 60
      wiredTigerConcurrentReadTransactions: 128
      wiredTigerConcurrentWriteTransactions: 128
    storage:
      engine: wiredTiger
      inMemory:
        engineConfig:
          inMemorySizeRatio: 0.9
      wiredTiger:
        engineConfig:
          cacheSizeRatio: 0.5
          directoryForIndexes: false
          journalCompressor: snappy
        collectionConfig:
          blockCompressor: snappy
        indexConfig:
          prefixCompression: true
    operationProfiling:
      mode: slowOp
      slowOpThresholdMs: 100
      rateLimit: 100

  backup:
    enabled: false
    restartOnFailure: true
    image: percona/percona-server-mongodb-operator:1.9.0-backup
    serviceAccountName: percona-server-mongodb-operator
#    resources:
#      limits:
#        cpu: "300m"
#        memory: "0.5G"
#      requests:
#        cpu: "300m"
#        memory: "0.5G"
    storages:
#      s3-us-west:
#        type: s3
#        s3:
#          bucket: S3-BACKUP-BUCKET-NAME-HERE
#          credentialsSecret: cdbmg1-backup-s3
#          region: us-west-2
#      minio:
#        type: s3
#        s3:
#          bucket: MINIO-BACKUP-BUCKET-NAME-HERE
#          region: us-east-1
#          credentialsSecret: cdbmg1-backup-minio
#          endpointUrl: http://minio.psmdb.svc.cluster.local:9000/minio/
    pitr:
      enabled: false
    tasks:
#      - name: daily-s3-us-west
#        enabled: true
#        schedule: "0 0 * * *"
#        keep: 3
#        storageName: s3-us-west
#        compressionType: gzip
#      - name: weekly-s3-us-west
#        enabled: false
#        schedule: "0 0 * * 0"
#        keep: 5
#        storageName: s3-us-west
#        compressionType: gzip

I have changed the rs name / cluster name / secrets / port number.
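
Concretely, compared to the default deploy/cr.yaml, the fields that differ are roughly these (values taken from the manifest above):

metadata:
  name: cdbmg1              # cluster name
spec:
  secrets:
    users: cdbmg1-secrets   # per-cluster users secret
  replsets:
  - name: rs1               # replica set name
  mongod:
    net:
      port: 27048           # non-default port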

The kubectl get all output looks like below:

NAME                                                  READY   STATUS      RESTARTS   AGE
pod/cdbmg-backup-daily-s3-us-east-1630933020-gj57l    0/1     Completed   0          3m55s
pod/cdbmg-backup-daily-s3-us-east-1630933080-8pmh6    0/1     Completed   0          2m55s
pod/cdbmg-backup-daily-s3-us-east-1630933140-pfl74    0/1     Completed   0          114s
pod/cdbmg-rs0-0                                       3/3     Running     0          50m
pod/cdbmg-rs0-1                                       3/3     Running     0          51m
pod/cdbmg-rs0-2                                       3/3     Running     0          52m
pod/cdbmg1-rs1-0                                      2/2     Running     0          3m3s
pod/cdbmg1-rs1-1                                      2/2     Running     0          2m35s
pod/cdbmg1-rs1-2                                      2/2     Running     0          2m5s
pod/percona-server-mongodb-operator-d859b69b6-4cpgp   1/1     Running     0          26h

NAME                   TYPE           CLUSTER-IP       EXTERNAL-IP                                                               PORT(S)           AGE
service/cdbmg-rs0-0    LoadBalancer   10.100.82.130    ac02d362e918041629981694862d1168-443129676.us-east-2.elb.amazonaws.com    27017:30259/TCP   26h
service/cdbmg-rs0-1    LoadBalancer   10.100.192.81    aef9661f378a641e38f2827f8893488a-1601847388.us-east-2.elb.amazonaws.com   27017:30641/TCP   26h
service/cdbmg-rs0-2    LoadBalancer   10.100.18.223    a46c45790e8a14ad5ad43b8e6b78fc5c-1245864235.us-east-2.elb.amazonaws.com   27017:30491/TCP   26h
service/cdbmg1-rs1-0   LoadBalancer   10.100.185.114   a690bc22ab3664ddeb14c1b6c956ec8e-2044634471.us-east-2.elb.amazonaws.com   27048:30503/TCP   2m46s
service/cdbmg1-rs1-1   LoadBalancer   10.100.78.85     ad093872b1d6f4b3e97125a889edeecb-421291523.us-east-2.elb.amazonaws.com    27048:32337/TCP   2m28s
service/cdbmg1-rs1-2   LoadBalancer   10.100.19.61     ac4bae392f2a84137bb31826da6c7cda-1089171940.us-east-2.elb.amazonaws.com   27048:30025/TCP   106s

NAME                                              READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/percona-server-mongodb-operator   1/1     1            1           26h

NAME                                                        DESIRED   CURRENT   READY   AGE
replicaset.apps/percona-server-mongodb-operator-d859b69b6   1         1         1       26h

NAME                          READY   AGE
statefulset.apps/cdbmg-rs0    3/3     26h
statefulset.apps/cdbmg1-rs1   3/3     3m3s

NAME                                                 COMPLETIONS   DURATION   AGE
job.batch/cdbmg-backup-daily-s3-us-east-1630933020   1/1           2s         3m55s
job.batch/cdbmg-backup-daily-s3-us-east-1630933080   1/1           2s         2m55s
job.batch/cdbmg-backup-daily-s3-us-east-1630933140   1/1           2s         114s

NAME                                          SCHEDULE      SUSPEND   ACTIVE   LAST SCHEDULE   AGE
cronjob.batch/cdbmg-backup-daily-s3-us-east   * */2 * * *   False     0        2m2s            52m

cdbmg-rs0-0 / cdbmg-rs0-1 / cdbmg-rs0-2 with the default port 27017 work fine.

cdbmg1-rs1-0 / cdbmg1-rs1-1 / cdbmg1-rs1-2 with port 27048 are not working as expected.

When I check the pod logs, all three pods show the errors below.
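
The logs were pulled with a command along these lines (the mongod container name is an assumption on my part):

kubectl logs cdbmg1-rs1-0 -c mongod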

{"t":{"$date":"2021-09-06T13:00:15.000+00:00"},"s":"D1", "c":"-",        "id":23074,   "ctx":"ftdc","msg":"User assertion","attr":{"error":"NotYetInitialized: no replset config has been received","file":"src/mongo/db/repl/repl_set_get_status_cmd.cpp","line":56}}
{"t":{"$date":"2021-09-06T13:00:16.000+00:00"},"s":"D1", "c":"-",        "id":23074,   "ctx":"ftdc","msg":"User assertion","attr":{"error":"NotYetInitialized: no replset config has been received","file":"src/mongo/db/repl/repl_set_get_status_cmd.cpp","line":56}}
{"t":{"$date":"2021-09-06T13:00:16.173+00:00"},"s":"I",  "c":"NETWORK",  "id":22943,   "ctx":"listener","msg":"Connection accepted","attr":{"remote":"192.168.117.177:34604","connectionId":24,"connectionCount":1}}
{"t":{"$date":"2021-09-06T13:00:16.174+00:00"},"s":"I",  "c":"NETWORK",  "id":22944,   "ctx":"conn24","msg":"Connection ended","attr":{"remote":"192.168.117.177:34604","connectionId":24,"connectionCount":0}}
{"t":{"$date":"2021-09-06T13:00:17.000+00:00"},"s":"D1", "c":"-",        "id":23074,   "ctx":"ftdc","msg":"User assertion","attr":{"error":"NotYetInitialized: no replset config has been received","file":"src/mongo/db/repl/repl_set_get_status_cmd.cpp","line":56}}
{"t":{"$date":"2021-09-06T13:00:18.000+00:00"},"s":"D1", "c":"-",        "id":23074,   "ctx":"ftdc","msg":"User assertion","attr":{"error":"NotYetInitialized: no replset config has been received","file":"src/mongo/db/repl/repl_set_get_status_cmd.cpp","line":56}}
{"t":{"$date":"2021-09-06T13:00:19.000+00:00"},"s":"D1", "c":"-",        "id":23074,   "ctx":"ftdc","msg":"User assertion","attr":{"error":"NotYetInitialized: no replset config has been received","file":"src/mongo/db/repl/repl_set_get_status_cmd.cpp","line":56}}
{"t":{"$date":"2021-09-06T13:00:19.173+00:00"},"s":"I",  "c":"NETWORK",  "id":22943,   "ctx":"listener","msg":"Connection accepted","attr":{"remote":"192.168.117.177:34634","connectionId":25,"connectionCount":1}}
{"t":{"$date":"2021-09-06T13:00:19.174+00:00"},"s":"I",  "c":"NETWORK",  "id":22944,   "ctx":"conn25","msg":"Connection ended","attr":{"remote":"192.168.117.177:34634","connectionId":25,"connectionCount":0}}
{"t":{"$date":"2021-09-06T13:00:20.007+00:00"},"s":"D1", "c":"-",        "id":23074,   "ctx":"ftdc","msg":"User assertion","attr":{"error":"NotYetInitialized: no replset config has been received","file":"src/mongo/db/repl/repl_set_get_status_cmd.cpp","line":56}}
{"t":{"$date":"2021-09-06T13:00:20.463+00:00"},"s":"I",  "c":"NETWORK",  "id":22943,   "ctx":"listener","msg":"Connection accepted","attr":{"remote":"192.168.173.57:54199","connectionId":26,"connectionCount":1}}
{"t":{"$date":"2021-09-06T13:00:20.464+00:00"},"s":"I",  "c":"NETWORK",  "id":22944,   "ctx":"conn26","msg":"Connection ended","attr":{"remote":"192.168.173.57:54199","connectionId":26,"connectionCount":0}}
{"t":{"$date":"2021-09-06T13:00:21.010+00:00"},"s":"D1", "c":"-",        "id":23074,   "ctx":"ftdc","msg":"User assertion","attr":{"error":"NotYetInitialized: no replset config has been received","file":"src/mongo/db/repl/repl_set_get_status_cmd.cpp","line":56}}
{"t":{"$date":"2021-09-06T13:00:21.500+00:00"},"s":"I",  "c":"NETWORK",  "id":22943,   "ctx":"listener","msg":"Connection accepted","attr":{"remote":"192.168.148.232:31938","connectionId":27,"connectionCount":1}}
{"t":{"$date":"2021-09-06T13:00:21.500+00:00"},"s":"I",  "c":"NETWORK",  "id":22944,   "ctx":"conn27","msg":"Connection ended","attr":{"remote":"192.168.148.232:31938","connectionId":27,"connectionCount":0}}
{"t":{"$date":"2021-09-06T13:00:22.001+00:00"},"s":"D1", "c":"-",        "id":23074,   "ctx":"ftdc","msg":"User assertion","attr":{"error":"NotYetInitialized: no replset config has been received","file":"src/mongo/db/repl/repl_set_get_status_cmd.cpp","line":56}}
{"t":{"$date":"2021-09-06T13:00:22.173+00:00"},"s":"I",  "c":"NETWORK",  "id":22943,   "ctx":"listener","msg":"Connection accepted","attr":{"remote":"192.168.117.177:34686","connectionId":28,"connectionCount":1}}
{"t":{"$date":"2021-09-06T13:00:22.174+00:00"},"s":"I",  "c":"NETWORK",  "id":22944,   "ctx":"conn28","msg":"Connection ended","attr":{"remote":"192.168.117.177:34686","connectionId":28,"connectionCount":0}}

Are there any changes to be made in the configuration file? Please suggest.

Regards,
Adithya

Hey @Adithya ,

I have not tried to reproduce it yet, but is there any reason to use a different port?

Hi @Sergey_Pronin, we want to use a different port instead of the default port (27017) from Mongo / Percona, to meet security standards in our organization.

Also, as we are planning to run multiple replica sets in the same namespace, we wanted to isolate the Mongo deployments with different ports.

~ Adithya

hi @Sergey_Pronin ,

Like @Adithya mentioned, we wanted to check with you whether using any port other than 27017 is allowed and feasible,
and what steps we would additionally have to follow in order to use non-default ports.

Thanks in advance,
Sangeetha

@Adithya - I tried that too, but I think it's not working. You have to look at the operator, including the cluster, as a whole deployment artifact. There is a lot going on, and you can't just create an additional psmdb.

By the way, running multiple operators/clusters in different namespaces is a piece of cake, and the routing can be done very easily (see the sketch at the end of this post).

I have this running in PROD with Operator 1.7 and Operator 1.9.

PS: at the latest when you have sharding turned on, you have to go with different namespaces.
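
For example, a per-namespace setup can be sketched like this, repeated once per cluster (the namespace name is hypothetical; paths assume the standard operator repo layout):

kubectl create namespace mongo-a
kubectl apply -f deploy/bundle.yaml -n mongo-a
kubectl apply -f deploy/cr.yaml -n mongo-a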

@jamoser Thanks for testing this scenario.

Yes, running multiple operators/clusters in different namespaces works fine.
So the operator/deployment works only with the default port 27017, then!

@Sergey_Pronin Can this be added as an enhancement in upcoming versions of the Mongo operator, to allow a custom port defined in a ConfigMap?

~Adithya

When we use the same port 27017 for 2 replica sets in the same namespace, below is the error, even after changing the cluster name / rs name / secrets:

{"t":{"$date":"2021-09-08T18:04:30.533+00:00"},"s":"I",  "c":"-",        "id":20700,   "ctx":"conn74","msg":"note: not profiling because db went away for namespace","attr":{"namespace":"admin.$cmd"}}
{"t":{"$date":"2021-09-08T18:04:30.533+00:00"},"s":"D1", "c":"QUERY",    "id":22790,   "ctx":"conn74","msg":"Received interrupt request for unknown op","attr":{"opId":10539}}
{"t":{"$date":"2021-09-08T18:04:31.000+00:00"},"s":"D1", "c":"-",        "id":23074,   "ctx":"ftdc","msg":"User assertion","attr":{"error":"NotYetInitialized: no replset config has been received","file":"src/mongo/db/repl/repl_set_get_status_cmd.cpp","line":56}}
{"t":{"$date":"2021-09-08T18:04:31.033+00:00"},"s":"I",  "c":"COMMAND",  "id":51803,   "ctx":"conn74","msg":"Slow query","attr":{"type":"command","ns":"admin.$cmd","command":{"isMaster":1,"$db":"admin"},"numYields":0,"reslen":382,"locks":{"ReplicationStateTransition":{"acquireCount":{"w":2}},"Global":{"acquireCount":{"r":2}},"Database":{"acquireCount":{"r":2}},"Collection":{"acquireCount":{"r":2}},"Mutex":{"acquireCount":{"r":2}}},"protocol":"op_msg","durationMillis":0}}
{"t":{"$date":"2021-09-08T18:04:31.033+00:00"},"s":"I",  "c":"-",        "id":20700,   "ctx":"conn74","msg":"note: not profiling because db went away for namespace","attr":{"namespace":"admin.$cmd"}}
{"t":{"$date":"2021-09-08T18:04:31.033+00:00"},"s":"D1", "c":"QUERY",    "id":22790,   "ctx":"conn74","msg":"Received interrupt request for unknown op","attr":{"opId":10602}}

cdb-rs0-0                                         3/3     Running     0          51m
cdb-rs0-1                                         3/3     Running     0          176m
cdb-rs0-2                                         3/3     Running     0          166m
cdb-rs0-3                                         3/3     Running     0          152m
cdb-rs0-4                                         3/3     Running     0          131m
cdb2-rs2-0                                        3/3     Running     1          7m8s
cdb2-rs2-1                                        3/3     Running     1          6m34s
cdb2-rs2-2                                        3/3     Running     1          6m11s

@Sergey_Pronin In the old community post Single percona-server-mongodb-operator deployment watching all namespaces for CRD PerconaServerMongoDB I saw you mentioned it's possible to run multiple clusters with a single operator in a single namespace. Let me know if it works with the latest version of the operator, and how we can achieve it.
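
If I read that post correctly, the cluster-wide mode hinges on the operator's WATCH_NAMESPACE environment variable in deploy/operator.yaml, something like the below (the empty value meaning "watch all namespaces" is my assumption from that post):

env:
- name: WATCH_NAMESPACE
  value: ""   # empty = watch all namespaces (assumption)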

~Adithya

Hey all,

So,

  1. It is possible to run two clusters in the same namespace. It works for me if I use the same port for exposure (not an issue for k8s). In 1.9.0 there were some issues with exposing replica set nodes, so I encourage you to try deploying from the main branch; 1.10 is coming.

  2. I tried to use the mongod.port section, and indeed it does not work. It is quite a legacy part and I'm not sure if it ever worked. I have created this ticket for future improvement: [K8SPSMDB-557] Allow user to change the port for MongoDB deployment - Percona JIRA
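
For anyone following along, a minimal second-cluster CR for the same namespace could look roughly like this (the names are hypothetical; the manifest is trimmed to the essentials, with no port override so the default 27017 is used):

apiVersion: psmdb.percona.com/v1-9-0
kind: PerconaServerMongoDB
metadata:
  name: cdbmg2                # must differ from the first cluster
spec:
  crVersion: 1.9.0
  image: percona/percona-server-mongodb:4.4.6-8
  secrets:
    users: cdbmg2-secrets     # per-cluster users secret
  replsets:
  - name: rs2                 # must differ from the first cluster's rs name
    size: 3
    volumeSpec:
      persistentVolumeClaim:
        resources:
          requests:
            storage: 100Gi
  # no mongod.net.port override: keep the default 27017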

Sure @Sergey_Pronin

Thanks, running multiple Mongo environments with the same port 27017 works fine.
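
As a quick sanity check that both clusters come up, something like this should do (psmdb being the short name registered by the CRD):

kubectl get psmdb
kubectl get statefulsets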
