Hello Surendra,
Thank you for your question and the follow-up details that you provided.
Based on your original backup command, it looks like you are using Percona Server for MongoDB, specifically the hot backup feature with the “Streaming Hot Backups to a Remote Destination” option.
As noted here: https://www.percona.com/doc/percona-server-for-mongodb/LATEST/hot-backup.html
“This feature was implemented in Percona Server for MongoDB 4.2.1-1. In this release, this feature has the EXPERIMENTAL status.”
Currently, in this experimental release, our implementation does not use the Multipart Upload API; instead it uploads the entire backup with a single PUT operation.
Your backup attempt therefore hits the AWS S3 maximum size of 5 GB for a single PUT operation, as noted here:
https://docs.aws.amazon.com/AmazonS3/latest/dev/UploadingObjects.html
“Upload an object in a single operation using the AWS SDKs, REST API, or AWS CLI—With a single PUT operation, you can upload objects up to 5 GB in size.”
As you noted in your response above, the Multipart Upload API could be used instead, which would allow up to 10,000 parts per upload and a maximum object size of 5 TB.
Note: each part must be between 5 MB and 5 GB in size, so an individual part is still subject to the same 5 GB limit that applies to the single PUT operation in the current implementation.
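To put those limits in concrete terms, here is a quick sketch of the arithmetic (assuming the documented S3 multipart limits of 10,000 parts and a 5 TB maximum object size):

```shell
# With at most 10,000 parts, uploading a 5 TiB object requires each part
# to be at least object_size / 10,000 bytes (rounded up).
max_parts=10000
object_bytes=$((5 * 1024 * 1024 * 1024 * 1024))           # 5 TiB
min_part=$(( (object_bytes + max_parts - 1) / max_parts )) # ceiling division
echo "Minimum part size for a 5 TiB upload: $min_part bytes"  # ~525 MiB
```

So even a maximum-size 5 TB upload needs parts of only about 525 MiB, well within the 5 MB to 5 GB per-part range.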
We will enter this as a feature request. We can see how this would be very helpful to many users; however, we cannot guarantee when it will be available.
Workaround:
In the meantime, a potential workaround is to back up to a local file system and then use an OS-level tool to perform the multipart upload.
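As one illustration of that workaround (the backup directory, archive name, and bucket below are placeholders; the AWS CLI, if that is the tool you choose, switches to multipart uploads automatically for files above its multipart_threshold, 8 MB by default):

```shell
# 1. Take the hot backup to a local directory, then archive it.
#    /data/backup and the archive name are placeholders.
mongo --eval 'db.adminCommand({createBackup: 1, backupDir: "/data/backup"})'
tar -czf /data/backup.tar.gz -C /data backup

# 2. Upload with an OS-level tool; the AWS CLI performs the multipart
#    upload for large files on its own, so the 5 GB single-PUT limit
#    does not apply.
aws s3 cp /data/backup.tar.gz s3://my-bucket/backups/backup.tar.gz
```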
Thank you for your question, and we will continue working to add functionality to our tools.
Regards,
Kimberly
MongoDB Tech Lead - Percona