Percona MongoDB on EKS with EFS: the second StatefulSet instance fails to access EFS

Description:

When using EFS storage, every MongoDB StatefulSet instance runs with the same security context user/group (1000/1000), which does not change from one instance to the next.
The first MongoDB StatefulSet instance (user/group 1000/1000) creates a PVC, which provisions an EFS access point with user/group 1000/1000, and starts up successfully because the instance's user/group matches the access point's.
The second MongoDB StatefulSet instance (also user/group 1000/1000) creates a PVC, which provisions an EFS access point with user/group 1001/1001, and fails to start because the instance's user/group no longer matches the access point's.
Do you have an idea of how to use incremental user/group IDs in the MongoDB StatefulSet instances?
Does Percona MongoDB work with an EFS file system?
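For illustration, the dynamically provisioned StorageClass that produces this behaviour looks roughly like the following (a sketch only, assuming the aws-efs-csi-driver with access-point provisioning and a GID range; the file system ID is a placeholder):

```yaml
# Sketch of an EFS StorageClass that hands out a new access point per PVC.
# With gidRangeStart/gidRangeEnd the driver assigns the next free GID from the
# range to each new access point (1000, then 1001, ...), while every MongoDB
# pod keeps running as 1000/1000.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com
parameters:
  provisioningMode: efs-ap
  fileSystemId: fs-XXXXXXXX     # placeholder EFS file system ID
  directoryPerms: "700"
  gidRangeStart: "1000"         # first access point gets 1000
  gidRangeEnd: "2000"           # the next ones get 1001, 1002, ...
```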

Steps to Reproduce:

Create an EKS Cluster
Create an EFS file system
Create a StorageClass that points to the EFS file system
Deploy Percona Server for MongoDB using that StorageClass (a CR sketch follows this list)
The first MongoDB instance starts successfully with user/group 1000/1000
The second instance fails with user/group 1000/1000 as it needs user/group 1001/1001.
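The StorageClass is then referenced from the PerconaServerMongoDB custom resource, roughly as below (a sketch trimmed to the storage-relevant fields; the cluster name, image tag, and sizes are illustrative):

```yaml
# Sketch: pointing the replica set's data volume at the EFS StorageClass.
# Other fields (secrets, exposure, backups, ...) are omitted or left at defaults.
apiVersion: psmdb.percona.com/v1
kind: PerconaServerMongoDB
metadata:
  name: my-cluster                                  # illustrative name
spec:
  crVersion: 1.16.2
  image: percona/percona-server-mongodb:7.0.8-5     # illustrative tag
  replsets:
    - name: rs0
      size: 3
      volumeSpec:
        persistentVolumeClaim:
          storageClassName: efs-sc                  # the EFS StorageClass above
          resources:
            requests:
              storage: 3Gi
```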

Version:

psmdb-operator:1.16.2
psmdb-db:1.16.2

Logs:

[1723303181:183107][1:0x7f7e346e7080], connection: [WT_VERB_DEFAULT][ERROR]: __posix_open_file, 815: /data/db/key.db/WiredTiger.wt: handle-open: open: File exists
[1723303181:187974][1:0x7f7e346e7080], connection: [WT_VERB_BLOCK][NOTICE]: unexpected file WiredTiger.wt found, renamed to WiredTiger.wt.6
[1723303181:193852][1:0x7f7e346e7080], connection: [WT_VERB_DEFAULT][ERROR]: __posix_open_file, 815: /data/db/key.db/WiredTiger.wt: handle-open: open: Operation not permitted
[1723303181:223974][1:0x7f7e346e7080], connection: [WT_VERB_DEFAULT][ERROR]: __posix_open_file, 815: /data/db/key.db/WiredTiger.wt: handle-open: open: File exists
[1723303181:230089][1:0x7f7e346e7080], connection: [WT_VERB_BLOCK][NOTICE]: unexpected file WiredTiger.wt found, renamed to WiredTiger.wt.7
[1723303181:235565][1:0x7f7e346e7080], connection: [WT_VERB_DEFAULT][ERROR]: __posix_open_file, 815: /data/db/key.db/WiredTiger.wt: handle-open: open: Operation not permitted
[1723303181:267552][1:0x7f7e346e7080], connection: [WT_VERB_DEFAULT][ERROR]: __posix_open_file, 815: /data/db/key.db/WiredTiger.wt: handle-open: open: File exists
[1723303181:273529][1:0x7f7e346e7080], connection: [WT_VERB_BLOCK][NOTICE]: unexpected file WiredTiger.wt found, renamed to WiredTiger.wt.8
[1723303181:277818][1:0x7f7e346e7080], connection: [WT_VERB_DEFAULT][ERROR]: __posix_open_file, 815: /data/db/key.db/WiredTiger.wt: handle-open: open: Operation not permitted

Expected Result:

Each instance should be able to access the file system. The first instance has the right container security context user/group (1000/1000), but the second instance would need 1001/1001 instead of 1000/1000 to successfully access its EFS access point.

Actual Result:

The second instance keeps the 1000/1000 security context (instead of the 1001/1001 it would need, and 1002/1002 for the third) and fails to start with "Operation not permitted" errors on the EFS volume.
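As far as I can tell, the security context is defined once per replica set in the custom resource and is applied to every pod of the StatefulSet, so incremental per-instance IDs cannot be expressed there; a sketch, assuming the replset-level podSecurityContext/containerSecurityContext options:

```yaml
# Sketch: the replset-level security context is part of the shared pod template,
# so every replica of the StatefulSet gets the same IDs; there is no per-pod increment.
replsets:
  - name: rs0
    size: 3
    podSecurityContext:
      fsGroup: 1001              # one value for all replicas
    containerSecurityContext:
      runAsUser: 1001            # one value for all replicas
      runAsGroup: 1001
```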

Maybe I should use EBS instead of EFS?

I recreated the setup with a fixed user/group ID of 1001/1001, using the EFS uid/gid parameters when creating the StorageClass instead of providing a range of user/group IDs (the gidRangeStart/gidRangeEnd parameters), and it's working.
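For reference, the fixed-ID StorageClass looks roughly like this (a sketch with a placeholder file system ID, assuming the aws-efs-csi-driver's uid/gid parameters):

```yaml
# Sketch: pin the POSIX user of every provisioned access point with uid/gid
# instead of letting the driver pick the next GID from a range.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: efs-sc-fixed
provisioner: efs.csi.aws.com
parameters:
  provisioningMode: efs-ap
  fileSystemId: fs-XXXXXXXX   # placeholder EFS file system ID
  directoryPerms: "700"
  uid: "1001"                 # every access point is owned by 1001/1001
  gid: "1001"
```

This would also explain why it works even though the pods still run as 1000/1000: an access point with a configured POSIX user enforces that identity for all file operations, regardless of the client's own IDs.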

@lucasbrazi06 glad that you got it working!

Is there any particular reason why you are using EFS rather than EBS?