Issue with PBM for sharded cluster

Hello, I encountered an error with PBM. I have two shard replica sets and one config server. When I run `pbm status`, I get the following:


  • rsconfig/otvme061s:27019: pbm-agent v1.4.1 OK
  • rsconfig/otvme063s:27019: pbm-agent v1.4.1 OK
  • RS2/otvme061s:27017: pbm-agent v1.4.1 OK
  • RS2/otvme062s:27017: pbm-agent v1.4.1 OK
  • RS1/otvme063s:27017: pbm-agent v1.4.1 OK
  • RS1/otvme064s:27017: pbm-agent v1.4.1 OK

So all looks good,
but in the backups section I get this:

FS /mnt
2021-05-18T14:56:19Z 0.00B [ERROR: get file 2021-05-18T14:56:19Z_rsconfig.dump.s2: no such file] [2021-05-18T14:56:39]

What should I do? It seems my issue occurred because I didn't create the filesystem. Where should I set this up? I followed all the Percona docs and ran every step one by one, but I didn't see anything about this.
Thank you!



The details are a little thin at the moment, but it sounds like you might not have configured a shared remote storage (an object store, or a shared filesystem mounted at exactly the same path on all servers) yet.
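For the shared-filesystem option, the storage configuration is a small YAML file applied with `pbm config --file`. A minimal sketch, assuming the mount point is `/mnt` (the path shown in your backup listing; adjust to your setup):

```yaml
# pbm_config.yaml - filesystem-type remote storage for PBM.
# The path must be the SAME shared mount on every server running pbm-agent.
storage:
  type: filesystem
  filesystem:
    path: /mnt
```

Apply it once from any node with `pbm config --file pbm_config.yaml`; the agents pick up the change from the config replica set.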



Thank you for the answer!!
I understand the issue and I'm trying to get a filesystem shared and mounted at exactly the same path on all my servers, but I haven't succeeded yet. Do you have a tutorial or something on making it work with Percona?
Thanks for your time.


If you’ve mounted the same shared filesystem on all the servers at the same path then that should address the error situation I was thinking of.

Which server did the following error happen on?

2021-05-18T14:56:19Z 0.00B [ERROR: get file 2021-05-18T14:56:19Z_rsconfig.dump.s2: no such file]

Does this server have a file “2021-05-18T14:56:19Z_rsconfig.dump.s2” in the configured filesystem backup path (/mnt/?) but it reported that “no such file” error nonetheless?
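One quick way to compare: run a small check like the sketch below on every server (the path and file name here are the ones from your error output; treat them as placeholders for your actual values). If the mount is truly shared, the result should be identical everywhere.

```shell
#!/bin/sh
# check_backup_file: report whether a given backup artifact exists under
# the configured storage path. Run it on every server in the cluster;
# a shared mount should give the same answer on all of them.
check_backup_file() {
  path="$1"   # configured storage path, e.g. /mnt
  file="$2"   # artifact name from the error message
  if [ -f "$path/$file" ]; then
    echo "present: $path/$file"
  else
    echo "missing: $path/$file"
  fi
}

# Example invocation with the values from the error above:
# check_backup_file /mnt "2021-05-18T14:56:19Z_rsconfig.dump.s2"
```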


Hello, thanks. I think I solved my previous issue, but now I get this error:
ERROR with storage: storage: no init file, attempt to create failed: create destination file </data/.pbm.init>: open /data/.pbm.init: permission denied.
It seems I have a problem with the storage, but I inserted the config with the `pbm config` command, I have the remote filesystem mounted on the same local path everywhere, and I also set up PBM user authentication. The problem occurs on all my servers.


“pbm.init” is an empty test file written into the remote store that will be used by status-checking functions to quickly test that the storage is up.

I'm guessing the /data directory is neither owned by the user running pbm-agent, nor has write permission on it been granted to 'other' users, so the write of pbm.init failed the last time the storage config was synced.

Please check the unix directory permissions carefully. Getting a shared filesystem identically mounted, with the correct permissions, on every server takes a fair amount of unix admin work. This is why I recommend an object store as the remote backup storage. I understand that people don't want to set up a bucket and configure its connection credentials just to test PBM, but setting up a shared filesystem is probably harder.
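As a quick sanity check, you can probe writability the same way the agent's ".pbm.init" write would. This is a minimal sketch; the probe file name and the suggestion to run it as the pbm-agent service user (often the mongod user, but that depends on your install) are assumptions, so adjust to your setup.

```shell
#!/bin/sh
# check_writable: probe whether a directory is writable by the current
# user, mimicking pbm-agent's ".pbm.init" test-file write.
check_writable() {
  dir="$1"
  if touch "$dir/.pbm.init.probe" 2>/dev/null; then
    rm -f "$dir/.pbm.init.probe"
    echo "ok: $dir is writable"
  else
    echo "FAIL: cannot write to $dir (check ownership and permissions)" >&2
    return 1
  fi
}

# Run this on every server AS THE USER THE pbm-agent SERVICE RUNS AS, e.g.:
#   sudo -u mongod sh -c '. ./check.sh && check_writable /data'
```

If it fails, `chown`/`chmod` the storage directory so that user can write to it, then re-run `pbm config` to resync.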
