Hello team,
Thanks for this awesome tool. I have a few questions below.
1. How does this tool take backups, and how is this different from mongodump?
2. According to the online docs, PBM seems to be more lightweight than mongodump. How is this achieved?
3. Does this tool have any maximum database size limit? For example, can we take a backup of a 5 TB cluster with this tool?
I tried to back up a 50 GB database generated by YCSB onto an HDD.
Server configuration: 3-node PSS replica set, MongoDB 4.2.5, CentOS 7; each node with 4 cores, 24 GB RAM, and an HDD drive.
The backup of the 50 GB took around 2 hours to local disk.
4. Can we take backups of individual databases or individual collections?
5. Can we run backups or restores over multiple threads, so that backup or restore is faster?
Hi Gowtham.
To answer 1. and 2. together: it's the same as mongodump in that it is a ‘logical’ record backup rather than a ‘physical’ file backup system. It is different from mongodump in that it has agents running 1-to-1 with each mongod node (so it is not as lightweight as mongodump in terms of administration; at a minimum it requires installing the pbm-agent processes and configuring the location of the remote storage). The backup process (or the restore process) is done by one pbm-agent per replica set, so there is parallelism. mongodump can't do that: in a cluster you must fetch via a mongos node, so that becomes a bottleneck of one process. Another way of saying it: PBM will go N times faster, where N is the number of shards, assuming there isn't another bottleneck such as the write speed of the remote storage. So PBM is like a performant mongodump for clusters. You can also use it for non-sharded replica sets, but it won't be any faster than mongodump in that case.
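For concreteness, here is a rough sketch of what that setup looks like, with placeholder connection strings and a local filesystem storage path (the exact flags and config keys may differ slightly between versions, so check the PBM docs for your release; normally pbm-agent runs as a systemd service rather than in the foreground):

```
# One pbm-agent runs next to every mongod node, connecting to its local mongod
# (placeholder user/password/port shown here).
export PBM_MONGODB_URI="mongodb://pbmuser:secretpwd@localhost:27017/"
pbm-agent &

# Tell PBM where backups should be written, once, from any host with the pbm CLI.
cat > /tmp/pbm_storage.yaml <<EOF
storage:
  type: filesystem
  filesystem:
    path: /data/local_backups
EOF
pbm config --file /tmp/pbm_storage.yaml --mongodb-uri "$PBM_MONGODB_URI"

# A backup (or restore) is then triggered with a single command; one agent per
# replica set does the work, so shards are backed up in parallel.
pbm backup --mongodb-uri "$PBM_MONGODB_URI"
```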
3. There isn’t a theoretical size limit, but all backup systems have the practical limit of write speed. If you have, say, 2TB of data but the remote storage where you want to save backups will only accept data at 50MB/s, that will take around 11hrs, which is obviously not good enough. The reason I mention 50MB/s: we had a bug where the gzip compression library we used was limiting throughput to approximately that speed. Sorry about that. The bug is PBM-431. It’s been fixed in development and will be released in v1.2.0. With the PBM-431 fix in, the compression writes at least 250MB/s (it can be as high as 1GB/s if using the faster but slightly less compact options). But the next question, once the compression bottleneck is fixed, is how fast the remote storage is.
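As a quick back-of-envelope check of those numbers (plain arithmetic, nothing PBM-specific):

```
# time = data size / sustained write speed
# 2 TB at  50 MB/s -> 2,000,000 MB /  50 MB/s = 40,000 s ≈ 11 hours
# 2 TB at 250 MB/s -> 2,000,000 MB / 250 MB/s =  8,000 s ≈ 2.2 hours
echo "$(( 2 * 1000 * 1000 / 50 / 3600 )) hours"   # prints "11 hours"
```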
4. No. This is under consideration. It clashes with another potential feature, which is to allow physical file backups as well as logical record-style backups. Can I ask which is more important to you? Being able to back up and restore only a subset of databases and collections, or having a backup that finishes, say, twice as fast? (There's no guarantee a physical backup would be that much faster; that's just a ballpark idea.)
5. I think you’re being caught out by PBM-431 at the moment, so this probably stops being a worry once you have v1.2.0. But to answer the question as asked: using multiple threads is possible with the “s2” compression option, or the slower “pgzip” option.
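Assuming the option keeps its current name in v1.2.0, selecting the parallel compressors would look something like this (treat the exact values as illustrative until the release notes confirm them):

```
# s2 is the fastest parallel option but compresses a bit less;
# pgzip is a parallel gzip, slower than s2 but with smaller output.
pbm backup --compression=s2
pbm backup --compression=pgzip
```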
To preemptively answer another question: when will v1.2.0 be published? Not for at least a couple of weeks. There are two more features being included in it, and they will take some time. If you want a head start on trying it without the PBM-431 ~50MB/s bottleneck, though, you can build the dev version, with PBM-431 and PBM-447 completed, from this commit.
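If you do want to try the dev build, the steps are roughly the following (assuming a recent Go toolchain and the repo's usual cmd/ layout; the repo README has the authoritative build instructions):

```
git clone https://github.com/percona/percona-backup-mongodb.git
cd percona-backup-mongodb
# git checkout <commit>    # the commit linked above
go build -o pbm ./cmd/pbm
go build -o pbm-agent ./cmd/pbm-agent
```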