I’m trying to use a single S3 remote for pbm backups with two different replica sets, cfg and sb0. I created two separate clusters with the same topology, one for the cfg replica set, the other for sb0. When I run `pbm config --force-resync` on the cfg cluster, it gives an incompatibility warning:

> Backup doesn't match current cluster topology - it has different replica set names. Extra shards in the backup will cause this, for a simple example. The extra/unknown replica set names found in the backup are: sb0
When running the same command on the sb0 replica set, I get the same error, except that the extra/unknown replica set is cfg.
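For context, both clusters point at the same storage config, roughly like this (bucket, region, prefix, and credentials are placeholders, not my real values):

```yaml
# pbm-storage.yaml — applied to BOTH clusters via `pbm config --file pbm-storage.yaml`
# (bucket/region/prefix/credentials below are illustrative placeholders)
storage:
  type: s3
  s3:
    region: us-east-1
    bucket: my-pbm-bucket
    prefix: pbm/backups        # same prefix for both clusters
    credentials:
      access-key-id: <redacted>
      secret-access-key: <redacted>
```

Since both clusters resync against the same bucket and prefix, each one finds the other's backup metadata there.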
So… how do I get the pbm-agents to know about all SIX running agents, not just the three in the replica set in question? The nodes are all on the same network and can see each other, but `pbm status` on any given replica set only reports the agents it knows about from the connection string in `PBM_MONGODB_URI`. Is there a way to bridge multiple replica sets in pbm so that when I run the resync command, it sees BOTH sb0 and cfg?
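To illustrate, the agents on each side are started with a URI scoped to their own replica set only (the hosts and credentials below are illustrative, not my actual values):

```shell
# cfg nodes — agents here only discover the three cfg members (illustrative hosts/credentials)
export PBM_MONGODB_URI="mongodb://pbm:pass@10.255.0.1:27017,10.255.0.4:27017,10.0.0.4:27017/?replicaSet=cfg"

# sb0 nodes — likewise only see the sb0 members (hosts are made up for this example)
export PBM_MONGODB_URI="mongodb://pbm:pass@10.255.0.2:27017,10.255.0.5:27017,10.0.0.5:27017/?replicaSet=sb0"
```

So each set of agents is connected to one replica set and has no view of the other.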
(Below is an example of `pbm status` on the cfg replica set:)
- cfg/10.255.0.1:27017: pbm-agent v2.0.0 OK
- cfg/10.255.0.4:27017: pbm-agent v2.0.0 OK
- cfg/10.0.0.4:27017: pbm-agent v2.0.0 OK
Would love any guidance. Thanks!