Can't authenticate with default users

Hello everyone.

I installed an HA MongoDB database on a k3s cluster using the Percona Operator. I edited the cr manifest as follows.

apiVersion: psmdb.percona.com/v1-7-0
kind: PerconaServerMongoDB
metadata:
  name: my-cluster-name
spec:
  crVersion: 1.7.0
  image: percona/percona-server-mongodb:4.4.3-5
  imagePullPolicy: Always
  allowUnsafeConfigurations: false
  updateStrategy: SmartUpdate
  upgradeOptions:
    versionServiceEndpoint: https://check.percona.com
    apply: recommended
    schedule: "0 2 * * *"
  securityContext:
    fsGroup: 1234  
  secrets:
    users: my-cluster-name-secrets
  pmm:
    enabled: false
    image: percona/pmm-client:2.12.0
    serverHost: monitoring-service
  replsets:

  - name: rs0
    size: 3
    affinity:
      antiAffinityTopologyKey: "kubernetes.io/hostname"
    podDisruptionBudget:
      maxUnavailable: 1
    resources:
      limits:
        cpu: "300m"
        memory: "0.5G"
      requests:
        cpu: "300m"
        memory: "0.5G"
    volumeSpec:
      persistentVolumeClaim:
        resources:
          requests:
            storage: 1Gi

  sharding:
    enabled: false

    configsvrReplSet:
      size: 3
      affinity:
        antiAffinityTopologyKey: "kubernetes.io/hostname"
      podDisruptionBudget:
        maxUnavailable: 1
      resources:
        limits:
          cpu: "300m"
          memory: "0.5G"
        requests:
          cpu: "300m"
          memory: "0.5G"
      volumeSpec:
        persistentVolumeClaim:
          resources:
            requests:
              storage: 1Gi            

    mongos:
      size: 3
      affinity:
        antiAffinityTopologyKey: "kubernetes.io/hostname"
      podDisruptionBudget:
        maxUnavailable: 1
      resources:
        limits:
          cpu: "300m"
          memory: "0.5G"
        requests:
          cpu: "300m"
          memory: "0.5G"
      expose:
        exposeType: ClusterIP

  mongod:
    net:
      port: 27017
      hostPort: 0
    security:
      redactClientLogData: false
      enableEncryption: true
      encryptionKeySecret: my-cluster-name-mongodb-encryption-key
      encryptionCipherMode: AES256-CBC
    setParameter:
      ttlMonitorSleepSecs: 60
      wiredTigerConcurrentReadTransactions: 128
      wiredTigerConcurrentWriteTransactions: 128
    storage:
      engine: wiredTiger
      inMemory:
        engineConfig:
          inMemorySizeRatio: 0.9
      wiredTiger:
        engineConfig:
          cacheSizeRatio: 0.5
          directoryForIndexes: false
          journalCompressor: snappy
        collectionConfig:
          blockCompressor: snappy
        indexConfig:
          prefixCompression: true
    operationProfiling:
      mode: slowOp
      slowOpThresholdMs: 100
      rateLimit: 100

Apart from this, for the installation process I stuck to the official documentation: Install Percona Server for MongoDB on Kubernetes.

However, when I tried to log in as specified at the bottom of the documentation I linked, I received an authentication error.

mongo "mongodb://userAdmin:userAdmin123456@my-cluster-name-rs0.database.svc.cluster.local/admin?ssl=false"

I tried with user clusterAdmin and password clusterAdmin123456 and got the same error.

mongo --host "mongodb://userAdmin:userAdmin123456@my-cluster-name-rs0.database.svc.cluster.local/admin?ssl=false"

I verified that the service my-cluster-name-rs0.database.svc.cluster.local does exist and is bound to the right endpoints.

Moreover, the login succeeds with

mongo --host my-cluster-name-rs0.database.svc.cluster.local:27017

but of course I’m unable to perform all the operations which require authentication.

I also exec'd into one of the statefulset pods and tried to log in from there. Again, the login succeeds with

mongo

but doesn’t succeed with

mongo -u userAdmin -p userAdmin123456

Same for clusterAdmin.

I noticed this message in the logs of one of the statefulset pods, signaling an SSL certificate validation failure on a connection from the operator pod.

{"t":{"$date":"2022-04-22T09:10:25.781+00:00"},"s":"W",  "c":"NETWORK",  "id":23235,   "ctx":"conn2175","msg":"SSL peer certificate validation failed","attr":{"reason":"certificate signature failure"}}

Have you got any thoughts about this?
Thank you in advance.

Other contextual information:

  • k3s v. 1.22.7
  • Percona Server for MongoDB v. 4.4.3-5 (Operator v. 1.7.0)

Hello @Gloria_Pedemonte ,

  1. 1.7.0 is quite old. Is there any reason why you are using this version? The current latest is 1.11, and 1.12 is coming soon (this week).

  2. By default the Operator generates random passwords for the default users. Our example deploy/secrets.yaml is not applied by the Operator automatically. Please let me know if you created this Secret manually. If not, check the corresponding Secret object and get the password from there.

kubectl get secret YOUR-SECRET -o yaml

Then decode the MONGODB_USER_ADMIN_PASSWORD:

echo PASS | base64 --decode
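To put the two steps together, here is a minimal sketch. The secret name my-cluster-name-secrets comes from the CR above and the database namespace from the service FQDN in the question; the base64 value in the runnable line is an illustrative encoding of the documentation's sample password, not a real credential.

```shell
# Fetch and decode the userAdmin password in one step (needs a live cluster;
# secret name and namespace are assumptions taken from the thread above):
#   kubectl get secret my-cluster-name-secrets -n database \
#     -o jsonpath='{.data.MONGODB_USER_ADMIN_PASSWORD}' | base64 --decode
#
# The decode step on its own, with a sample base64 encoding of "userAdmin123456":
echo 'dXNlckFkbWluMTIzNDU2' | base64 --decode
```

If the decoded password contains characters that are special in a URI (such as @, :, or /), remember to percent-encode it before placing it in the mongodb:// connection string.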


Hello @spronin ,
thanks for your answer. After re-applying the secrets.yaml file it worked. I'll try to switch to a more recent cr version, thanks for pointing that out.
