PBM restore issue check mongod binary: run: exec: "mongod": executable file not found in $PATH. stderr:

Hi team,

While restoring MongoDB we are facing the below issue.

MongoDB version: 4.4.18
PBM agent: v2.3.0

[root@drp00-mongodb01-test ec2-user]# pbm restore 2024-11-12T05:50:46Z   --mongodb-uri="mongodb://admin:wzweda-123@drp00-mongodb01-test.dev.wzwedacloud.info:27017,drp00-mongodb02-test.dev.wzwedacloud.info:27017,drp00-mongodb03-test.dev.wzwedacloud.info:27017/?replicaSet=rs1" -w



Starting restore 2024-11-14T04:56:40.152000314Z from '2024-11-12T05:50:46Z'...Error: node.drp00-mongodb02-test.dev.wzwedacloud.info:27017 failed: 1731560201:check mongod binary: run: exec: "mongod": executable file not found in $PATH. stderr:

  - Restore on replicaset "rs1" in state: error: node.drp00-mongodb02-test.dev…info:27017 failed: 1731560201:check mongod binary: run: exec: "mongod": executable file not found in $PATH. stderr:

PBM agent logs

2024-11-14T07:46:11Z I [rs1/drp00-mongodb02-test.dev.info:27017] got command restore [name: 2024-11-14T07:46:09.847573555Z, snapshot: 2024-11-12T05:50:46Z] <ts: 1731570369>
2024-11-14T07:46:11Z I [rs1/drp00-mongodb02-test.dev.info:27017] got command restore [name: 2024-11-14T07:46:09.847573555Z, snapshot: 2024-11-12T05:50:46Z] <ts: 1731570369>
2024-11-14T07:46:11Z I [rs1/drp00-mongodb02-test.devwzweda.info:27017] got epoch {1731416608 73}
2024-11-14T07:46:11Z I [rs1/drp00-mongodb02-test.devwzweda.info:27017] got epoch {1731416608 73}
2024-11-14T07:46:11Z I [rs1/drp00-mongodb02-test.dev.wzweda.info:27017] [restore/2024-11-14T07:46:09.847573555Z] backup: 2024-11-12T05:50:46Z
2024-11-14T07:46:11Z I [rs1/drp00-mongodb02-test.dev.wzweda.info:27017] [restore/2024-11-14T07:46:09.847573555Z] backup: 2024-11-12T05:50:46Z
2024-11-14T07:46:11Z I [rs1/drp00-mongodb02-test.dev.wzweda.info:27017] [restore/2024-11-14T07:46:09.847573555Z] recovery started
2024-11-14T07:46:11Z I [rs1/drp00-mongodb02-test.dev.wzweda.info:27017] [restore/2024-11-14T07:46:09.847573555Z] recovery started
2024-11-14T07:46:11Z E [rs1/drp00-mongodb02-test.dev.wzweda.info:27017] [restore/2024-11-14T07:46:09.847573555Z] restore: check mongod binary: run: exec: "mongod": executable file not found in $PATH. stderr:
2024-11-14T07:46:11Z I [rs1/drp00-mongodb02-test.dev.wzweda.info:27017] [restore/2024-11-14T07:46:09.847573555Z] moving to state starting
2024-11-14T07:46:11Z I [rs1/drp00-mongodb02-test.dev.wzweda.info:27017] [restore/2024-11-14T07:46:09.847573555Z] waiting for cluster
2024-11-14T07:46:16Z E [rs1/drp00-mongodb03-test.dev.wzweda.info:27017] [restore/2024-11-14T07:46:09.847573555Z] restore: move to running state: wait for cluster: cluster failed: 1731570370:check mongod binary: run: exec: "mongod": executable file not found in $PATH. stderr:
2024-11-14T07:46:16Z E [rs1/drp00-mongodb02-test.dev.wzweda.info:27017] [restore/2024-11-14T07:46:09.847573555Z] restore: move to running state: wait for cluster: cluster failed: 1731570370:check mongod binary: run: exec: "mongod": executable file not found in $PATH. stderr:
[root@drp00-mongodb01-test ec2-user]#

Hi Prince,
How did you install mongod? Are you using a mongod binary in a custom path, or did you install it via rpm/apt packages? Is this a new server, or are you restoring to the same server where the backup was taken? Is it a physical backup restore?

2024-11-14T07:46:16Z E [rs1/drp00-mongodb03-test.dev.wzweda.info:27017] [restore/2024-11-14T07:46:09.847573555Z] restore: move to running state: wait for cluster: cluster failed: 1731570370:check mongod binary: run: exec: "mongod": executable file not found in $PATH. stderr:

Try adding the directory that contains the mongod binary to the $PATH variable and retry the restore:
export PATH=$PATH:/path/to/mongod
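
Note that $PATH has to be set in the environment of the pbm-agent process itself, not just in your interactive shell. A minimal sketch, assuming the agent is started manually and that /usr/bin and the connection URI below are only placeholders for your real values:

# Run these in the same shell/container where pbm-agent is started.
# First check whether that environment can see mongod at all:
which mongod || echo "mongod is not in PATH for this environment"

# Add the real directory that contains the mongod binary,
# then (re)start pbm-agent so it picks up the change:
export PATH=$PATH:/usr/bin
export PBM_MONGODB_URI="mongodb://<user>:<password>@localhost:27017/?authSource=admin&replicaSet=rs1"
pbm-agent &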

Regards,
Vinodh Guruji

Hi Vinodh

We are currently using MongoDB version 4.4.18 within a Docker container, with the default installation of mongod. We are planning to migrate to a new server and have encountered issues while attempting to restore our backups.

We have tried various types of restore processes—physical, logical, and incremental—but unfortunately, we are facing the same errors across all methods.

Here are the steps we followed for the restore:

  1. Installed the PBM agent on all MongoDB servers.
  2. Started the PBM agent on the primary and two secondary containers.
  3. Checked the backup status using pbm status <mongourl>.
  4. Based on the latest physical backup ID, we attempted to restore using the command pbm restore <backup ID> <mongoURL> from the primary MongoDB server.

I would like to clarify whether we should run the restore command from inside the primary container or from outside the primary container on the MongoDB server.

Thank you for your assistance. Looking forward to your guidance on this matter.

Hi Vinodh,

I tested the restore from inside the primary container.

`[root@e121a6e41ab3 ~]# pbm status --mongodb-uri="mongodb://admin:wajneon-123@localhost:27017/?authSource=admin&replicaSet=rs1"
Cluster:
========
rs1:
  - rs1/drp00-mongodb01-test.dev.wajneoncloud.info:27017 [S]: pbm-agent v2.3.0 OK
  - rs1/drp00-mongodb02-test.dev.wajneoncloud.info:27017 [S]: pbm-agent v2.3.0 OK
  - rs1/drp00-mongodb03-test.dev.wajneoncloud.info:27017 [P]: pbm-agent v2.3.0 OK


PITR incremental backup:
========================
Status [OFF]

Currently running:
==================
(none)

Backups:
========
S3 us-west-1 s3:/ps/incremental/
  Snapshots:
    2024-11-12T05:50:46Z 1.08GB <incremental, base> [restore_to_time: 2024-11-12T05:50:48Z]`

This is the pbm status from inside the container.

While restoring, I got the following errors:

`[root@e121a6e41ab3 ~]# pbm restore 2024-11-12T05:50:46Z --mongodb-uri="mongodb://admin:wajneon-123@localhost:27017/?authSource=admin&replicaSet=rs1"
Starting restore 2024-11-21T06:04:34.475769414Z from '2024-11-12T05:50:46Z'......Error: node.drp00-mongodb01-test.dev.wajneoncloud.info:27017 failed: 1732169075:check mongod binary: run: exec: "mongod": executable file not found in $PATH. stderr:
- Restore on replicaset "rs1" in state: error: node.drp00-mongodb01-test.dev.wajneoncloud.info:27017 failed: 1732169075:check mongod binary: run: exec: "mongod": executable file not found in $PATH. stderr:
[root@e121a6e41ab3 ~]# pbm restore 2024-11-12T05:50:46Z --mongodb-uri="mongodb://admin:wajneon-123@localhost:27017/?authSource=admin&replicaSet=rs1"
Starting restore 2024-11-21T06:05:00.742076254Z from '2024-11-12T05:50:46Z'......Error: node.drp00-mongodb02-test.dev.wajneoncloud.info:27017 failed: 1732169101:check mongod binary: run: exec: "mongod": executable file not found in $PATH. stderr:
- Restore on replicaset "rs1" in state: error: node.drp00-mongodb02-test.dev.wajneoncloud.info:27017 failed: 1732169101:check mongod binary: run: exec: "mongod": executable file not found in $PATH. stderr:
[root@e121a6e41ab3 ~]# pbm restore 2024-11-12T05:50:46Z --mongodb-uri="mongodb://admin:wajneon-123@localhost:27017/?authSource=admin&replicaSet=rs1"
Starting restore 2024-11-21T06:05:13.626193532Z from '2024-11-12T05:50:46Z'......Error: node.drp00-mongodb01-test.dev.wajneoncloud.info:27017 failed: 1732169114:check mongod binary: run: exec: "mongod": executable file not found in $PATH. stderr:
- Restore on replicaset "rs1" in state: error: node.drp00-mongodb01-test.dev.wajneoncloud.info:27017 failed: 1732169114:check mongod binary: run: exec: "mongod": executable file not found in $PATH. stderr:

Container logs:

{"t":{"$date":"2024-11-21T06:19:19.065+00:00"},"s":"I",  "c":"STORAGE",  "id":22430,   "ctx":"WTCheckpointThread","msg":"WiredTiger message","attr":{"message":"[1732169959:65978][1:0x7f534456c700], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 11346, snapshot max: 11346 snapshot count: 0, oldest timestamp: (1732169954, 2) , meta checkpoint timestamp: (1732169959, 2) base write gen: 116581"}}
{"t":{"$date":"2024-11-21T06:19:56.860+00:00"},"s":"I",  "c":"NETWORK",  "id":22943,   "ctx":"listener","msg":"Connection accepted","attr":{"remote":""connectionId":79,"connectionCount":27}}
{"t":{"$date":"2024-11-21T06:19:56.860+00:00"},"s":"I",  "c":"NETWORK",  "id":22943,   "ctx":"listener","msg":"Connection accepted","attr":{"remote":"1","connectionId":80,"connectionCount":28}}
{"t":{"$date":"2024-11-21T06:19:56.861+00:00"},"s":"I",  "c":"NETWORK",  "id":51800,   "ctx":"conn80","msg":"client metadata","attr":{"remote":"10.","client":"conn80","doc":{"application":{"name":"pbm-ctl"},"driver":{"name":"mongo-go-driver","version":"v1.12.0"},"os":{"type":"linux","architecture":"amd64"},"platform":"go1.19"}}}
{"t":{"$date":"2024-11-21T06:19:56.861+00:00"},"s":"I",  "c":"NETWORK",  "id":51800,   "ctx":"conn79","msg":"client metadata","attr":{"remote":"1,"client":"conn79","doc":{"application":{"name":"pbm-ctl"},"driver":{"name":"mongo-go-driver","version":"v1.12.0"},"os":{"type":"linux","architecture":"amd64"},"platform":"go1.19"}}}
{"t":{"$date":"2024-11-21T06:19:58.253+00:00"},"s":"I",  "c":"-",        "id":20883,   "ctx":"conn79","msg":"Interrupted operation as its client disconnected","attr":{"opId":35433}}
{"t":{"$date":"2024-11-21T06:19:58.253+00:00"},"s":"I",  "c":"NETWORK",  "id":22944,   "ctx":"conn80","msg":"Connection ended","attr":{"remote":"10.wajneon:46808","connectionId":80,"connectionCount":27}}
{"t":{"$date":"2024-11-21T06:19:58.253+00:00"},"s":"I",  "c":"NETWORK",  "id":22944,   "ctx":"conn79","msg":"Connection ended","attr":{"remote":"10.wajneon:46794","connectionId":79,"connectionCount":26}}
{"t":{"$date":"2024-11-21T06:20:19.091+00:00"},"s":"I",  "c":"STORAGE",  "id":22430,   "ctx":"WTCheckpointThread","msg":"WiredTiger message","attr":{"message":"[1732170019:91852][1:0x7f534456c700], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 12234, snapshot max: 12234 snapshot count: 0, oldest timestamp: (1732170013, 5) , meta checkpoint timestamp: (1732170018, 5) base write gen: 116581"}}

Hi, first of all I would like to point out that you are using an unsupported version of MongoDB. You should upgrade ASAP. The PBM version is also quite old. That being said, make sure pbm-agent runs under the same user ID as mongod. If you still have problems, please share more details about how you set up your containers (Dockerfile) and also the pbm-agent configuration.
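
A quick way to compare the two users, run wherever both processes live (e.g. inside the container):

# List the users mongod and pbm-agent run as; they should match,
# otherwise the agent may not be able to access mongod's files and binary
ps -eo user,pid,comm | grep -E 'mongod|pbm-agent'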

Hi Ivan,

I wanted to discuss my migration scenario with you. Currently, my MongoDB cluster is running on a CentOS instance, and I plan to migrate to a new Oracle instance. My goal is to perform a backup and restore without any data loss.

For example, I have 100GB of data, and my cluster is actively receiving data during the migration. Therefore, I need a way to migrate without losing any data.

I’m considering using incremental backups for this process. Once the new cluster is active, I will proceed with the MongoDB version upgrade.
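
The flow I have in mind with incremental backups is roughly the following (commands as I understand them from the PBM documentation, so please correct me if this is wrong):

# On the source cluster: take the base incremental backup first
pbm backup --type=incremental --base

# Then take follow-up increments that only copy changed data
pbm backup --type=incremental

# On the target cluster: restore the latest increment by name;
# PBM applies the whole chain back to the base backup
pbm restore <backup_name>

My understanding is that incremental restores are physical restores under the hood, so every pbm-agent still needs to be able to find and start mongod, which seems to be exactly what the error above complains about.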

Sample docker-compose file:

version: '2.2'
services:
  mongors1:
    container_name: mongors1
    image: percona/percona-server-mongodb:4.4.18
    command: mongod --auth --keyFile=/data/key/authKey.key --replSet rs1 --dbpath /data/db --port 27017 -enableEncryption --encryptionKeyFile=/data/key/mongodb.key  --encryptionCipherMode=AES256-GCM --setParameter replWriterThreadCount=64 --setParameter enableFlowControl=false --setParameter maxSessions=1000000 --setParameter logicalSessionRefreshMillis=120000 --setParameter localLogicalSessionTimeoutMinutes=15 --setParameter enableTimeoutOfInactiveSessionCursors=true --setParameter cursorTimeoutMillis=180000 --setParameter minNumChunksForSessionsCollection=20480
    ports:
      - 27017:27017
    expose:
      - "27017"
    restart: unless-stopped
    volumes:
      - /mongo_data/data:/data/db
    ulimits:
      nproc:
        soft: 64000
        hard: 64000
      nofile:
        soft: 65535
        hard: 65535
      memlock:
        soft: -1
        hard: -1
    mem_limit: 12g
    cpus: 3
    logging:
      driver: "json-file"
      options:
        max-size: "100m"
        max-file: "10"

For the upgrade I suggest a rolling approach as described in Upgrade Process of Percona Server for MongoDB (Replica Set and Shard Cluster).
I suggest you open a new thread about that to avoid mixing subjects. You haven't shown the pbm-agent container yet, nor the PBM configuration.
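
For reference, since physical and incremental restores need the agent to see mongod's binary, user, and dbpath, one approach with a setup like yours is to run pbm-agent inside the same container as mongod. A rough sketch only (the container name and credentials come from your compose file, the rest is an assumption about your environment):

# Copy or install the pbm-agent binary into the mongors1 container first,
# then start it next to mongod, as the same user mongod runs as
# ("-u mongodb" assumes the image default; adjust if yours differs):
docker exec -d -u mongodb mongors1 bash -c '
  export PBM_MONGODB_URI="mongodb://admin:<password>@localhost:27017/?authSource=admin&replicaSet=rs1"
  pbm-agent >> /tmp/pbm-agent.log 2>&1
'

But please do share your actual pbm-agent setup and the output of pbm config --list so we can see how it is wired up.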