A direct connection cannot be made if multiple hosts are specified

Hello,
So I want to try out Percona Backup for MongoDB. Just for testing purposes, I created three Docker containers for a MongoDB cluster (with a replica set configured) and then added the percona/percona-backup-mongodb:latest image. But I am stuck on the connection string (PBM_MONGODB_URI): what should I put there? When I list all three containers in the connection string, I get this error:

 Exit: connect to the node: connect: create mongo client: a direct connection cannot be made if multiple hosts are specified

and when I configure it with only the primary container, I get:

dbrs:
  - dbrs/mongo1:27017: pbm-agent v1.8.1 OK
  - dbrs/mongo2:27017: pbm-agent NOT FOUND FAILED status:
  - dbrs/mongo3:27017: pbm-agent NOT FOUND FAILED status:

this is my docker-compose file:

version: '3.8'

services:
  mongo1:
    container_name: mongo1
    image: mongo:4.4
    volumes:
      - ./scripts/rs-init.sh:/scripts/rs-init.sh
      - ./scripts/init.js:/scripts/init.js
    networks:
      - mongo-network
    ports:
      - 27017:27017
    depends_on:
      - mongo2
      - mongo3
    links:
      - mongo2
      - mongo3
    restart: always
    entrypoint: [ "/usr/bin/mongod", "--bind_ip_all", "--replSet", "dbrs" ]

  mongo2:
    container_name: mongo2
    image: mongo:4.4
    networks:
      - mongo-network
    ports:
      - 27018:27017
    restart: always
    entrypoint: [ "/usr/bin/mongod", "--bind_ip_all", "--replSet", "dbrs" ]
  mongo3:
    container_name: mongo3
    image: mongo:4.4
    networks:
      - mongo-network
    ports:
      - 27019:27017
    restart: always
    entrypoint: [ "/usr/bin/mongod", "--bind_ip_all", "--replSet", "dbrs" ]
  percona:
    image: percona/percona-backup-mongodb:latest
    container_name: percona
    environment:
      PBM_MONGODB_URI: "mongodb://pbmuser:secretpwd@mongo1:27017/?authSource=admin&replSetName=dbrs"
    networks:
      - mongo-network
    volumes:
      - ./pbm_config.yaml:/tmp/pbm_config.yaml
      - /app
      #- ./pbm-set.sh:/tmp/pbm-set.sh
    depends_on:
      - mongo1
      - mongo2
      - mongo3
networks:
  mongo-network:
    driver: bridge

and this is the script for the replica set configuration:

#!/bin/bash

DELAY=25

mongo <<EOF
var config = {
    "_id": "dbrs",
    "version": 1,
    "members": [
        {
            "_id": 1,
            "host": "mongo1:27017",
            "priority": 2
        },
        {
            "_id": 2,
            "host": "mongo2:27017",
            "priority": 1
        },
        {
            "_id": 3,
            "host": "mongo3:27017",
            "priority": 1
        }
    ]
};
rs.initiate(config, { force: true });
EOF

echo "****** Waiting for ${DELAY} seconds for replicaset configuration to be applied ******"

sleep $DELAY

mongo < /scripts/init.js

Any solutions?


Hi @ADOUNI_Rania,

In PBM_MONGODB_URI you have to specify a single host which the pbm-agent will serve (take backups from / restore to). PBM uses a direct connection to this host.
The error comes from the underlying mongo driver: it doesn't "know" which host the direct connection should be established to.

In other words, for each node (mongod) you have to run a separate pbm-agent with a single-host URI pointing to that node. If you try to run a single pbm-agent instance for many nodes, it won't work.
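
For illustration (reusing the credentials from your compose file; adjust as needed), each agent's URI points at exactly one mongod, e.g. for the mongo2 node:

PBM_MONGODB_URI="mongodb://pbmuser:secretpwd@mongo2:27017/?authSource=admin&replSetName=dbrs"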

Hello,
So do you think I should run two more Percona containers in my docker-compose file, each with a different PBM_MONGODB_URI? Or is there another solution?

Yes, you are missing two agents, for mongo2 and mongo3. A sketch of how they could look in your compose file is below.

Please have a look at the PBM architecture to better understand what runs where and why.
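
As a rough sketch (the service and container names pbm-agent-2 and pbm-agent-3 are just placeholders; adapt to your setup), the two extra agents could look like this in docker-compose:

  pbm-agent-2:
    image: percona/percona-backup-mongodb:latest
    container_name: pbm-agent-2
    environment:
      PBM_MONGODB_URI: "mongodb://pbmuser:secretpwd@mongo2:27017/?authSource=admin&replSetName=dbrs"
    networks:
      - mongo-network
    volumes:
      - ./pbm_config.yaml:/tmp/pbm_config.yaml
    depends_on:
      - mongo2

  pbm-agent-3:
    image: percona/percona-backup-mongodb:latest
    container_name: pbm-agent-3
    environment:
      PBM_MONGODB_URI: "mongodb://pbmuser:secretpwd@mongo3:27017/?authSource=admin&replSetName=dbrs"
    networks:
      - mongo-network
    volumes:
      - ./pbm_config.yaml:/tmp/pbm_config.yaml
    depends_on:
      - mongo3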


Thanks @Dmytro_Zghoba!
Now my three pbm-agents are OK, but I am facing an error with pbm backup:

 2022-09-09T12:06:46Z 0.00B <logical> [ERROR: get file 2022-09-09T12:06:46Z_dbrs.dump.s2: no such file] [2022-09-09T12:06:51Z]

This is my pbm_config.yaml file:

storage:
  type: filesystem
  filesystem:
    path: /backup

and under the /backup folder I can't find anything!
pbm list:

Backup snapshots:
  2022-09-09T12:05:06Z <logical> [complete: 2022-09-09T12:05:11Z]
  2022-09-09T12:06:46Z <logical> [complete: 2022-09-09T12:06:51Z]

PITR <off>:

Where do you see 2022-09-09T12:06:46Z 0.00B <logical> [ERROR: get file 2022-09-09T12:06:46Z_dbrs.dump.s2: no such file] [2022-09-09T12:06:51Z]?

And where do you see the error-free pbm list output?
I'm surprised to see results with and without an error for the same backup at the same time.

To understand what the PBM agents are doing, run: pbm logs -s D -t 1000

Could you please share the output here?


This is the output of pbm list and pbm status:

[root@a94fb810770b backup]# pbm list
Backup snapshots:
  2022-09-09T13:27:17Z <logical> [complete: 2022-09-09T13:27:22Z]

PITR <off>:
[root@a94fb810770b backup]# pbm status
Cluster:
========
dbrs:
  - dbrs/mongo1:27017: pbm-agent v1.8.1 OK
  - dbrs/mongo2:27017: pbm-agent v1.8.1 OK
  - dbrs/mongo3:27017: pbm-agent v1.8.1 OK


PITR incremental backup:
========================
Status [OFF]

Currently running:
==================
(none)

Backups:
========
FS  /backup
  Snapshots:
    2022-09-09T13:27:17Z 0.00B <logical> [ERROR: get file 2022-09-09T13:27:17Z_dbrs.dump.s2: no such file] [2022-09-09T13:27:22Z]

And the pbm-agent logs:

[root@a94fb810770b backup]# pbm logs -s D -t 1000
2022-09-09T13:25:22Z I [dbrs/mongo3:27017] pbm-agent:
Version:   1.8.1
Platform:  linux/amd64
GitCommit: e3f8f2c535b3a8b69a54a86fc9271d3e6de5ff80
GitBranch: release-1.8.1
BuildTime: 2022-07-04_08:16_UTC
GoVersion: go1.16.9
2022-09-09T13:25:22Z I [dbrs/mongo3:27017] starting PITR routine
2022-09-09T13:25:22Z I [dbrs/mongo2:27017] starting PITR routine
2022-09-09T13:25:22Z I [dbrs/mongo2:27017] pbm-agent:
Version:   1.8.1
Platform:  linux/amd64
GitCommit: e3f8f2c535b3a8b69a54a86fc9271d3e6de5ff80
GitBranch: release-1.8.1
BuildTime: 2022-07-04_08:16_UTC
GoVersion: go1.16.9
2022-09-09T13:25:22Z I [dbrs/mongo3:27017] node: dbrs/mongo3:27017
2022-09-09T13:25:22Z I [dbrs/mongo3:27017] listening for the commands
2022-09-09T13:25:22Z I [dbrs/mongo2:27017] node: dbrs/mongo2:27017
2022-09-09T13:25:22Z I [dbrs/mongo2:27017] listening for the commands
2022-09-09T13:25:22Z I [dbrs/mongo1:27017] starting PITR routine
2022-09-09T13:25:22Z I [dbrs/mongo1:27017] pbm-agent:
Version:   1.8.1
Platform:  linux/amd64
GitCommit: e3f8f2c535b3a8b69a54a86fc9271d3e6de5ff80
GitBranch: release-1.8.1
BuildTime: 2022-07-04_08:16_UTC
GoVersion: go1.16.9
2022-09-09T13:25:22Z I [dbrs/mongo1:27017] node: dbrs/mongo1:27017
2022-09-09T13:25:22Z I [dbrs/mongo1:27017] listening for the commands
2022-09-09T13:25:27Z W [dbrs/mongo3:27017] [agentCheckup] get current storage status: query mongo: mongo: no documents in result
2022-09-09T13:25:27Z W [dbrs/mongo2:27017] [agentCheckup] get current storage status: query mongo: mongo: no documents in result
2022-09-09T13:25:27Z E [dbrs/mongo3:27017] [agentCheckup] check storage connection: unable to get storage: get config: get: mongo: no documents in result
2022-09-09T13:25:27Z E [dbrs/mongo2:27017] [agentCheckup] check storage connection: unable to get storage: get config: get: mongo: no documents in result
2022-09-09T13:25:27Z W [dbrs/mongo1:27017] [agentCheckup] get current storage status: query mongo: mongo: no documents in result
2022-09-09T13:25:27Z E [dbrs/mongo1:27017] [agentCheckup] check storage connection: unable to get storage: get config: get: mongo: no documents in result
2022-09-09T13:25:32Z E [dbrs/mongo2:27017] [agentCheckup] check storage connection: unable to get storage: get config: get: mongo: no documents in result
2022-09-09T13:25:32Z E [dbrs/mongo3:27017] [agentCheckup] check storage connection: unable to get storage: get config: get: mongo: no documents in result
2022-09-09T13:25:32Z E [dbrs/mongo1:27017] [agentCheckup] check storage connection: unable to get storage: get config: get: mongo: no documents in result
2022-09-09T13:25:37Z E [dbrs/mongo2:27017] [agentCheckup] check storage connection: unable to get storage: get config: get: mongo: no documents in result
2022-09-09T13:25:37Z E [dbrs/mongo3:27017] [agentCheckup] check storage connection: unable to get storage: get config: get: mongo: no documents in result
2022-09-09T13:25:37Z E [dbrs/mongo1:27017] [agentCheckup] check storage connection: unable to get storage: get config: get: mongo: no documents in result
2022-09-09T13:25:42Z E [dbrs/mongo2:27017] [agentCheckup] check storage connection: unable to get storage: get config: get: mongo: no documents in result
2022-09-09T13:25:42Z E [dbrs/mongo3:27017] [agentCheckup] check storage connection: unable to get storage: get config: get: mongo: no documents in result
2022-09-09T13:25:42Z E [dbrs/mongo1:27017] [agentCheckup] check storage connection: unable to get storage: get config: get: mongo: no documents in result
2022-09-09T13:25:47Z E [dbrs/mongo3:27017] [agentCheckup] check storage connection: unable to get storage: get config: get: mongo: no documents in result
2022-09-09T13:25:47Z E [dbrs/mongo2:27017] [agentCheckup] check storage connection: unable to get storage: get config: get: mongo: no documents in result
2022-09-09T13:25:47Z E [dbrs/mongo1:27017] [agentCheckup] check storage connection: unable to get storage: get config: get: mongo: no documents in result
2022-09-09T13:25:52Z E [dbrs/mongo3:27017] [agentCheckup] check storage connection: unable to get storage: get config: get: mongo: no documents in result
2022-09-09T13:25:52Z E [dbrs/mongo2:27017] [agentCheckup] check storage connection: unable to get storage: get config: get: mongo: no documents in result
2022-09-09T13:25:52Z E [dbrs/mongo1:27017] [agentCheckup] check storage connection: unable to get storage: get config: get: mongo: no documents in result
2022-09-09T13:25:57Z E [dbrs/mongo3:27017] [agentCheckup] check storage connection: unable to get storage: get config: get: mongo: no documents in result
2022-09-09T13:25:57Z E [dbrs/mongo2:27017] [agentCheckup] check storage connection: unable to get storage: get config: get: mongo: no documents in result
2022-09-09T13:25:57Z E [dbrs/mongo1:27017] [agentCheckup] check storage connection: unable to get storage: get config: get: mongo: no documents in result
2022-09-09T13:26:02Z I [dbrs/mongo3:27017] got command resync <ts: 1662729961>
2022-09-09T13:26:02Z I [dbrs/mongo3:27017] got epoch {1662729957 6}
2022-09-09T13:26:02Z I [dbrs/mongo3:27017] [resync] started
2022-09-09T13:26:02Z D [dbrs/mongo3:27017] [resync] got physical restores list: 0
2022-09-09T13:26:02Z D [dbrs/mongo3:27017] [resync] got backups list: 0
2022-09-09T13:26:02Z I [dbrs/mongo2:27017] got command resync <ts: 1662729961>
2022-09-09T13:26:02Z I [dbrs/mongo2:27017] got epoch {1662729957 6}
2022-09-09T13:26:02Z I [dbrs/mongo3:27017] [resync] succeed
2022-09-09T13:26:02Z D [dbrs/mongo2:27017] [resync] lock not acquired
2022-09-09T13:26:02Z D [dbrs/mongo3:27017] [resync] epoch set to {1662729962 14}
2022-09-09T13:26:02Z I [dbrs/mongo1:27017] got command resync <ts: 1662729961>
2022-09-09T13:26:02Z I [dbrs/mongo1:27017] got epoch {1662729962 14}
2022-09-09T13:26:02Z D [dbrs/mongo1:27017] [resync] get lock: duplicate operation: 631b3ee9472345a1e4a3808d [Resync storage]
2022-09-09T13:26:02Z D [dbrs/mongo1:27017] [resync] lock not acquired
2022-09-09T13:27:18Z I [dbrs/mongo3:27017] got command backup [name: 2022-09-09T13:27:17Z, compression: s2 (level: default)] <ts: 1662730037>
2022-09-09T13:27:18Z I [dbrs/mongo3:27017] got epoch {1662730027 3}
2022-09-09T13:27:18Z I [dbrs/mongo2:27017] got command backup [name: 2022-09-09T13:27:17Z, compression: s2 (level: default)] <ts: 1662730037>
2022-09-09T13:27:18Z I [dbrs/mongo2:27017] got epoch {1662730027 3}
2022-09-09T13:27:18Z I [dbrs/mongo1:27017] got command backup [name: 2022-09-09T13:27:17Z, compression: s2 (level: default)] <ts: 1662730037>
2022-09-09T13:27:18Z I [dbrs/mongo1:27017] got epoch {1662730027 3}
2022-09-09T13:27:18Z D [dbrs/mongo1:27017] [backup/2022-09-09T13:27:17Z] init backup meta
2022-09-09T13:27:18Z D [dbrs/mongo1:27017] [backup/2022-09-09T13:27:17Z] nomination list for dbrs: [[mongo3:27017 mongo2:27017] [mongo1:27017]]
2022-09-09T13:27:18Z D [dbrs/mongo1:27017] [backup/2022-09-09T13:27:17Z] nomination dbrs, set candidates [mongo3:27017 mongo2:27017]
2022-09-09T13:27:18Z I [dbrs/mongo3:27017] [backup/2022-09-09T13:27:17Z] backup started
2022-09-09T13:27:18Z D [dbrs/mongo2:27017] [backup/2022-09-09T13:27:17Z] skip after nomination, probably started by another node
2022-09-09T13:27:18Z D [dbrs/mongo1:27017] [backup/2022-09-09T13:27:17Z] skip after nomination, probably started by another node
2022-09-09T13:27:21Z D [dbrs/mongo3:27017] [backup/2022-09-09T13:27:17Z] wait for tmp users {1662730041 5}
2022-09-09T13:27:22Z I [dbrs/mongo3:27017] [backup/2022-09-09T13:27:17Z] mongodump finished, waiting for the oplog
2022-09-09T13:27:23Z D [dbrs/mongo1:27017] [backup/2022-09-09T13:27:17Z] bcp nomination: dbrs won by mongo3:27017
2022-09-09T13:27:25Z D [dbrs/mongo3:27017] [backup/2022-09-09T13:27:17Z] set oplog span to {1662730038 20} / {1662730042 3}
2022-09-09T13:27:25Z I [dbrs/mongo3:27017] [backup/2022-09-09T13:27:17Z] dropping tmp collections
2022-09-09T13:27:25Z D [dbrs/mongo3:27017] [backup/2022-09-09T13:27:17Z] epoch set to {1662730045 5}
2022-09-09T13:27:27Z I [dbrs/mongo3:27017] [backup/2022-09-09T13:27:17Z] backup finished
2022-09-09T13:27:27Z D [dbrs/mongo3:27017] [backup/2022-09-09T13:27:17Z] releasing lock


I think I know what is going on.

Every PBM agent in a cluster should share the same storage. During a backup (if node priority is not configured), there is a nomination process on each replica set/shard to choose which of its nodes will perform the backup. In your case, dbrs/mongo3:27017 did the backup. If I'm not wrong, you will find your backup inside the PBM container for the mongo3 node:
docker exec $CONTAINER_ID ls /backup

If that's the case, just share the same storage (a Docker volume in your case) between all PBM agent containers.

Also, when you run pbm status, the pbm CLI needs access to the storage, so it's better to run the command inside one of the PBM containers, e.g. docker exec $CONTAINER_ID pbm status
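
A minimal sketch of sharing the storage in docker-compose (only the relevant keys shown; the volume name pbm-backups is just a placeholder, and the mount path must match the filesystem path in pbm_config.yaml):

services:
  percona:
    volumes:
      - pbm-backups:/backup    # same named volume mounted in every pbm-agent container
  pbm-agent-2:
    volumes:
      - pbm-backups:/backup
  pbm-agent-3:
    volumes:
      - pbm-backups:/backup

volumes:
  pbm-backups: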


Hello Dmytro,
I appreciate your assistance; everything now works flawlessly. However, I have one question: can PBM be used with a standalone MongoDB instance?


No, for that use mongo-tools. PBM requires the oplog, which is not available on a standalone node. Also, on a standalone you cannot use PITR (it requires the oplog), even with mongo-tools.

I suggest deploying at least a single-node replica set. You will then have an oplog, so it will be possible to capture the changes made between the start and completion of a backup and apply them.
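
A minimal sketch of turning a standalone into a single-node replica set (the name rs0 and the paths are just examples):

# start mongod with a replica set name
mongod --replSet rs0 --dbpath /data/db --bind_ip_all

# then initiate a one-member replica set
mongo --eval 'rs.initiate({_id: "rs0", members: [{_id: 0, host: "localhost:27017"}]})'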
