Percona Server for MongoDB rejects clusterAuthMode keyFile

I am trying to do a 'drop-in' replacement of MongoDB with Percona Server for MongoDB. I have a working three-node MongoDB cluster with authentication enabled in Google Kubernetes Engine.

Authentication uses a keyfile for inter-node communication. When I swap MongoDB (4.2.3) for Percona Server for MongoDB (4.2.3), mongod fails to start with the following error:

2021-03-09T13:42:52.929+0000 I ACCESS [main] error opening file: /etc/secrets-volume/mongodb-keyfile: bad file

What is the difference between MongoDB and Percona's server that makes the file invalid?

Here is my GKE stateful set yaml:

apiVersion: v1
kind: Service
metadata:
 name: mongoserver
 labels:
   name: mongoserver
spec:
 ports:
 - port: 27017
   targetPort: 27017
 clusterIP: None
 selector:
   role: mongo

---

apiVersion: apps/v1
kind: StatefulSet
metadata:
 name: mongoserver
spec:
 serviceName: "mongoserver"
 replicas: 3
 selector:
  matchLabels:
    role: mongo
 template:
   metadata:
     labels:
       role: mongo
       environment: test
   spec:
     terminationGracePeriodSeconds: 10
     containers:
       - name: mongo
         image: percona/percona-server-mongodb:4.2.3
         command:
           - mongod
           - "--bind_ip_all"
           - "--auth"
           - "--replSet"
           - rs0
           - "--clusterAuthMode"
           - "keyFile"
           - "--keyFile"
           - "/etc/secrets-volume/mongodb-keyfile"
           - "--setParameter"
           - "authenticationMechanisms=SCRAM-SHA-1"
           - "--wiredTigerCacheSizeGB"
           - "0.25"
         ports:
           - containerPort: 27017
         resources:
           requests:
             cpu: "5m"
             memory: 350Mi
         volumeMounts:
           - name: mongo-key
             mountPath: "/etc/secrets-volume"
             readOnly: true
           - name: mongo-persistent-storage
             mountPath: /data/db
       - name: mongo-sidecar
         image: cvallance/mongo-k8s-sidecar@sha256:cd62d32db488fbf78dfbaef020edd7fc09ee4d3fe5d50cc0579e747e8232c77f
         env:
           - name: MONGO_SIDECAR_POD_LABELS
             value: "role=mongo,environment=test"
           - name: MONGODB_USERNAME
             value: notauser
           - name: MONGODB_PASSWORD
             value: notapassword
           - name: MONGODB_DATABASE
             value: admin
         resources:
           requests:
             cpu: "2m"
     volumes:
     - name: mongo-key
       secret:
         defaultMode: 0400
         secretName: mongo-key
 volumeClaimTemplates:
 - metadata:
     name: mongo-persistent-storage
     annotations:
       volume.beta.kubernetes.io/storage-class: "slow"
   spec:
     accessModes: [ "ReadWriteOnce" ]
     resources:
       requests:
         storage: 1Gi

After investigation I found that Percona Server for MongoDB runs in the container as the mongod user, whereas the official MongoDB image runs the server as root. Not running as root makes access to mounted volumes a bigger challenge: the secret is projected with defaultMode: 0400 and owned by root, so the mongod user cannot open the keyfile, which is why mongod reports it as a bad file.
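One way to confirm this yourself (the pod and container names below come from the manifest above; the commands are plain kubectl) is to check the effective user and the mounted file's ownership inside a running pod:

```shell
# Which user does the Percona container run as?
kubectl exec mongoserver-0 -c mongo -- id

# Inspect ownership and mode of the mounted keyfile.
# With defaultMode: 0400 the file is root-owned and readable by root only.
kubectl exec mongoserver-0 -c mongo -- ls -l /etc/secrets-volume/
```

If `id` reports a non-root user while `ls -l` shows the keyfile as `-r-------- root root`, the process simply has no permission to read it.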

My workaround (while I consider the implications of running as root in the container) is to add this to the pod 'spec' section of the StatefulSet YAML, at the same level as the 'containers' key:

securityContext:
  runAsUser: 0
  runAsGroup: 0
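An alternative that avoids running the whole pod as root is to copy the keyfile into an emptyDir with an init container and chown it there, then point mongod's --keyFile at the copy. This is a sketch, not tested against this cluster: the volume name keyfile-copy, the busybox image, and the UID/GID 1001 for the mongod user in the Percona image are assumptions; verify the real UID with `id` in the container first.

```yaml
initContainers:
  - name: copy-keyfile
    image: busybox:1.32          # assumed helper image; runs as root by default
    command:
      - sh
      - -c
      - |
        # Copy the root-owned secret to a writable volume and hand it to mongod's user
        cp /etc/secrets-volume/mongodb-keyfile /keyfile/mongodb-keyfile
        chown 1001:1001 /keyfile/mongodb-keyfile   # assumed mongod UID/GID
        chmod 400 /keyfile/mongodb-keyfile          # mongod rejects group/world-readable keyfiles
    volumeMounts:
      - name: mongo-key
        mountPath: /etc/secrets-volume
        readOnly: true
      - name: keyfile-copy       # hypothetical emptyDir, add it under 'volumes'
        mountPath: /keyfile
```

With this in place the mongo container mounts keyfile-copy as well and starts with --keyFile /keyfile/mongodb-keyfile, so the Percona container can stay non-root.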
