Hi, I will have a lot of MongoDB instances provisioned by the operator. In the MongoDB CR, the backup.storages section mentions bucket, but in the operator code there is another field, prefix.
So I wonder whether one S3 bucket can be shared between multiple MongoDB clusters to store their backups, with the path prefixed by the CR name, for example? I would rather not create one bucket for each database.
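For example, I imagined something like this, with one shared bucket and the CR name as the prefix (the storage name, bucket, secret, and prefix values below are made up, and I don't know yet whether prefix really works as an object-key prefix inside the bucket):

  backup:
    storages:
      s3-shared:
        type: s3
        s3:
          bucket: shared-mongo-backups        # one bucket for all clusters
          prefix: my-cluster-name             # e.g. the CR name, to keep each cluster's backups separate
          region: us-east-1
          credentialsSecret: my-cluster-s3-secret
          endpointUrl: https://s3.amazonaws.com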
Very happy to see your post, because I have the same need.
After testing, I found that what you need can indeed be achieved, but the operator will run into problems when deleting S3 backups. You need to comment out the delete-backup finalizer in backup.yaml so that the operator does not delete the backup data in S3 when you delete the backup resource.
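To show what I mean, here is roughly what my backup.yaml looks like with that finalizer commented out (backup1 and my-cluster-name are placeholders, and the finalizer name may differ in newer operator releases):

  apiVersion: psmdb.percona.com/v1
  kind: PerconaServerMongoDBBackup
  metadata:
    name: backup1
    # finalizers:
    #   - delete-backup      # commented out so the operator does not try to delete the data in S3
  spec:
    clusterName: my-cluster-name
    storageName: s3-backup   # matches the storage name in the cr.yaml example below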
cr.yaml example:
  backup:
    storages:
      s3-backup:
        type: s3
        s3:
          bucket: < clusterName >
          region: < s3.region >
          credentialsSecret: < s3.credentialsSecret >
          endpointUrl: < s3.endpointUrl >/< s3.bucketName >/< directoryName >
If finalizers.delete-backup is not commented out in backup.yaml, the consequence of this configuration is that the operator gets stuck when deleting the backup, so the deletion fails.
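If a backup object is already stuck in Terminating because of that finalizer, clearing the finalizer by hand lets the deletion finish (backup1 is a placeholder name):

  kubectl patch perconaservermongodbbackup backup1 --type=merge -p '{"metadata":{"finalizers":[]}}'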
Yes, the storage used in the S3 bucket will keep growing. This is a compromise solution, because at the moment none of the PXC/PostgreSQL/MongoDB operators supports this way of working in a friendly manner.
You could raise a feature request with Percona about this usage requirement.
Finally, I look forward to Percona getting better and better.