Trying to delete backups from PBM, but it fails with a "no such file or directory" error.



As shown above, we set up backups using remote filesystem server storage. Everything works fine except deleting old backups. What should I do to fix this issue?

Thanks in advance.

Hi Cora.
The PBM list of backups is not refreshed against the remote files on every operation; it is cached in a PBM control collection. It seems the remote files were already removed, and PBM has a bug here: the delete-backup subcommand should treat missing files as a warning at most, not an error.
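If you want to see what PBM has cached, you can look at the control collection directly. A minimal sketch, assuming the metadata lives in the admin database's pbmBackups collection (field names may differ between PBM versions):

# print the name and status of each cached backup entry
mongo admin --quiet --eval 'db.pbmBackups.find({}, { name: 1, status: 1, _id: 0 }).forEach(printjson)'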
Could you please open a new ticket at jira.percona.com in the https://jira.percona.com/projects/PBM/ project, and include that screenshot, the version number, etc.?
Thanks,
Akira

For something you can do now: the following command will resync the list of backups based on what is found in the remote storage:

pbm config --force-resync
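Once the resync completes, pbm list should show only the backups that actually exist in the remote storage:

# verify the refreshed backup list against the remote storage
pbm list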

Thanks for the reply. You mentioned "it seems the remote files were already removed", but that is not what I am facing. The backup files are still there, and the delete-backup command does not remove the history files at all; it does nothing except return the error message, so the behaviour is not what you described.

Also, after using --force-resync, all the metadata in the pbmBackups collection is lost. I checked the database in mongo and found that the command renamed the collection, which is also not what I want.
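For reference, this is roughly how I checked (a quick sketch; the exact collection names may differ by version):

# list the PBM control collections in the admin database
mongo admin --quiet --eval 'printjson(db.getCollectionNames().filter(function (c) { return c.indexOf("pbm") === 0; }))'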

This sounds as though the configured storage is no longer pointing to the place it once did, or the credentials (e.g. an AWS secret key) are no longer working.
What is the storage configuration in this case?
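For reference, a remote-filesystem storage configuration normally looks something like the following (the path here is just a placeholder):

# config.yaml, applied with: pbm config --file config.yaml
storage:
  type: filesystem
  filesystem:
    path: /mnt/backups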

We set up an NFS server and mount the remote filesystem disk locally to back up the DB. Are there any limitations on remote filesystem server storage? Here are the screenshots of the mounts on three machines: three different NFS locations, but the same mount point.


The remote directory mounted at the same path must be the same remote filesystem directory, i.e. in this case it should be one NFS location mounted on three different servers at the same mount point, because there is only one copy of the data for the replica set, whether it is single-node, triple-node, or larger.
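For example, with a single NFS export (the server name and paths below are made up for illustration), every node's /etc/fstab would carry the same entry:

# same NFS export mounted at the same path on all three replica set nodes
nfs-server:/exports/pbm-backups  /mnt/backups  nfs  defaults  0 0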
I think what has happened so far is that the files were saved to (to pick one as an example) location 47, but the pbm-agent node that won the "race" to take the lock for the later commands (pbm delete-backup and pbm config --force-resync) was one of the nodes mounting the other NFS locations, 48 or 49.
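If you want to confirm which node took the lock, you can inspect the lock document directly; a rough sketch, assuming the lock is kept in the admin database's pbmLock control collection (the name may differ in your PBM version):

# show the current PBM operation lock, including the replica set and node that hold it
mongo admin --quiet --eval 'db.pbmLock.find().forEach(printjson)'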