So previously we were using mongodump for backups with MongoDB 3.6.
We just upgraded to Percona 4.0 and are now using “hot backup”, i.e. db.runCommand({createBackup …
In both cases we’ve been gzipping the backups.
Any ideas why the “hot backup” is nearly double the file size? Is there any way to compress it even further? FYI, the command we’re using is “tar -zcvf”, and with mongodump it was “--gzip”.
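For reference, a minimal sketch of the two approaches as described above (the paths, archive names, and dates are placeholders, not the actual setup):

```
# Logical backup with mongodump; --gzip compresses each dumped collection
mongodump --gzip --out /backups/dump-2019-01-01

# Binary hot backup (Percona Server for MongoDB), then tar + gzip the result
mongo --eval 'db.adminCommand({createBackup: 1, backupDir: "/backups/hot-2019-01-01"})'
tar -zcvf /backups/hot-2019-01-01.tar.gz /backups/hot-2019-01-01
```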
I believe the compression method you use doesn’t matter much here. The key difference between mongodump and hot backup is this:

When you back up with mongodump you get a logical dump of the documents only; the indexes and key objects are recreated later, when you restore, and you can’t use the backup until that restore has finished. With hot backup you just untar the archive and start an instance on that directory, in other words a “ready to use” backup. So with hot backup your downtime on the restore side of a failover is reduced, but the backup takes nearly the same size as your data directory (data files plus indexes), which is why it’s larger than a mongodump. You can check this blog post discussing the same: https://www.percona.com/blog/2018/04/06/free-fast-mongodb-hot-backup-with-percona-server-for-mongodb/
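A minimal restore sketch showing the difference, assuming the archives were created as in the earlier example (paths again are placeholders):

```
# Hot backup restore: just unpack and point mongod at the directory
tar -xzvf /backups/hot-2019-01-01.tar.gz -C /var/lib/
mongod --dbpath /var/lib/hot-2019-01-01 --port 27017

# mongodump restore: a full mongorestore is needed, and indexes are
# rebuilt during this step, which is where the extra downtime comes from
mongorestore --gzip /backups/dump-2019-01-01
```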
You may want to update this part in the post, “Binary-level backups can take a bit more space on disk”, since it seems to take nearly double the space, which could be a major concern for larger DBs.