Percona backup works but restore does not on a MongoDB sharded cluster

Hi, we have pbm-agent set up on 3 config nodes and 3 shard servers, plus 2 mongos routers, all running on Amazon Linux 2023.

We have 3 config nodes running pbm-agent:

cfgsvr1.snb.internal

cfgsvr2.snb.internal

cfgsvr3.snb.internal

We have 3 shard nodes running pbm-agent:

shardsvr1.snb.internal

shardsvr2.snb.internal

shardsvr3.snb.internal

We have pbm-agent configured as follows (the hostname in the URI matches each node):

cat /etc/sysconfig/pbm-agent
PBM_MONGODB_URI="mongodb://pbmuser:*****@cfgsvr2.snb.internal:27019/admin?replicaSet=cfgRS&authSource=admin&tls=true&tlsCAFile=/data/mongo/config/ca-chainv4.cert.pem"

We have also configured a pbm wrapper:

cat /usr/local/bin/pbm-wrapper
#!/bin/bash
export PBM_MONGODB_URI="mongodb://pbmuser:******@cfgsvr1.snb.internal:27019,cfgsvr2.snb.internal:27019,cfgsvr3.snb.internal:27019/admin?replicaSet=cfgRS&tls=true&tlsCAFile=/data/mongo/config/ca-chainv4.cert.pem"
exec /usr/bin/pbm “$@”

From the mongos router, backups are taken by running the command below:

docker run --rm -v /etc/mongo/ssl/ca-chainv4.cert.pem:/ca-chainv4.pem:ro percona/percona-backup-mongodb:2.13.0 pbm backup --mongodb-uri="mongodb://pbmuser:*****@cfgsvr1.snb.internal:27019,cfgsvr2.snb.internal:27019,cfgsvr3.snb.internal:27019/?authSource=admin&replicaSet=cfgRS"

so it looks like backups are reaching S3:

pbm-wrapper status

Cluster:

shardRS:
  • shardsvr1.snb.internal:27018 [S]: pbm-agent [v2.13.0] OK
  • shardsvr2.snb.internal:27018 [P]: pbm-agent [v2.13.0] OK
  • shardsvr3.snb.internal:27018 [S]: pbm-agent [v2.13.0] OK
cfgRS:
  • cfgsvr1.snb.internal:27019 [S]: pbm-agent [v2.13.0] OK
  • cfgsvr2.snb.internal:27019 [P]: pbm-agent [v2.13.0] OK
  • cfgsvr3.snb.internal:27019 [S]: pbm-agent [v2.13.0] OK

PITR incremental backup:

Status [OFF]

Currently running:

(none)

Backups:

Main storage:

Type: S3
Region: us-east-1
Path: s3://snb-f12-int-s3-backups/mongo-shard-config
Snapshots:
NAME SIZE TYPE PROFILE SEL BASE RESTORE TIME STATUS

2026-04-22T08:10:01Z 452.71KB logical no no 2026-04-22T08:10:06 done
2026-04-22T00:10:02Z 443.43KB logical no no 2026-04-22T00:10:06 done
2026-04-21T16:10:01Z 155.55KB logical no no 2026-04-21T16:10:05 done
2026-04-21T08:10:01Z 276.39KB logical no no 2026-04-21T08:10:05 done
2026-04-21T00:10:01Z 11.14MB logical no no 2026-04-21T00:10:06 done
2026-04-20T16:10:02Z 11.14MB logical no no 2026-04-20T16:10:06 done
2026-04-20T08:10:01Z 11.13MB logical no no 2026-04-20T08:10:06 done
2026-04-20T00:10:01Z 11.39MB logical no no 2026-04-20T00:10:05 done
2026-04-19T16:10:01Z 11.38MB logical no no 2026-04-19T16:10:06 done
2026-04-19T08:10:02Z 11.38MB logical no no 2026-04-19T08:10:06 done
2026-04-19T00:10:01Z 11.10MB logical no no 2026-04-19T00:10:06 done
2026-04-18T16:10:01Z 11.09MB logical no no 2026-04-18T16:10:05 done
2026-04-18T08:10:01Z 11.09MB logical no no 2026-04-18T08:10:06 done
2026-04-18T00:10:01Z 11.08MB logical no no 2026-04-18T00:10:06 done
2026-04-17T16:10:01Z 11.34MB logical no no 2026-04-17T16:10:06 done
2026-04-17T08:10:01Z 11.34MB logical no no 2026-04-17T08:10:06 done
2026-04-17T00:10:01Z 11.06MB logical no no 2026-04-17T00:10:05 done
2026-04-16T16:10:01Z 11.05MB logical no no 2026-04-16T16:10:06 done
2026-04-16T08:10:01Z 10.87MB logical no no 2026-04-16T08:10:06 done
2026-04-16T00:10:01Z 11.13MB logical no no 2026-04-16T00:10:06 done
2026-04-15T16:10:01Z 10.86MB logical no no 2026-04-15T16:10:05 done
2026-04-15T08:10:01Z 11.12MB logical no no 2026-04-15T08:10:06 done
2026-04-15T00:10:01Z 10.85MB logical no no 2026-04-15T00:10:06 done
2026-04-14T16:10:01Z 10.84MB logical no no 2026-04-14T16:10:05 done
2026-04-14T08:10:01Z 10.83MB logical no no 2026-04-14T08:10:05 done
2026-04-14T00:10:02Z 11.10MB logical no no 2026-04-14T00:10:09 done
2026-04-13T16:10:01Z 10.82MB logical no no 2026-04-13T16:10:08 done
2026-04-13T08:10:01Z 11.08MB logical no no 2026-04-13T08:10:07 done
2026-04-13T00:10:01Z 11.06MB logical no no 2026-04-13T00:10:06 done
2026-04-12T16:10:01Z 11.06MB logical no no 2026-04-12T16:10:05 done
2026-04-12T08:10:01Z 11.06MB logical no no 2026-04-12T08:10:06 done
2026-04-12T00:10:01Z 11.04MB logical no no 2026-04-12T00:10:06 done
2026-04-11T16:10:01Z 11.03MB logical no no 2026-04-11T16:10:05 done
2026-04-11T08:10:01Z 11.04MB logical no no 2026-04-11T08:10:06 done
2026-04-11T00:10:01Z 11.02MB logical no no 2026-04-11T00:10:06 done
2026-04-10T16:10:01Z 11.02MB logical no no 2026-04-10T16:10:06 done
2026-04-10T14:18:57Z 11.02MB logical no no 2026-04-10T14:19:01 done
2026-04-10T05:07:01Z 11.01MB logical no no 2026-04-10T05:07:06 done
2026-04-08T12:16:29Z 11.01MB logical no no 2026-04-08T12:16:35 done
PITR chunks [39.69MB]:
2026-04-22T00:10:07 - 2026-04-22T12:08:33
2026-04-21T08:10:02 - 2026-04-21T08:10:05 (no base snapshot)
2026-04-08T12:16:36 - 2026-04-21T02:58:48

We tried restoring the Percona backup onto a fresh MongoDB sharded cluster by running:

docker run --rm -v /etc/mongo/ssl/ca-chainv4.cert.pem:/ca-chainv4.pem:ro percona/percona-backup-mongodb:2.12.0 pbm restore 2026-04-21T00:10:01Z --num-parallel-collections 16 --num-insertion-workers-per-collection 8 --mongodb-uri="mongodb://pbmuser:******@cfgsvr1.snb.internal:27019,cfgsvr2.snb.internal:27019,cfgsvr3.snb.internal:27019/?authSource=admin&replicaSet=cfgRS&tls=true&tlsCAFile=/ca-chainv4.pem"

from the mongos router.

It errors on the shardsvr nodes and skips many collections during the restore:

Apr 22 12:22:01 shardsvr2.snb.internal pbm-agent[1911]: 2026-04-22T12:22:01.682+0000 archive prelude audit.elements.versions.vlad3
Apr 22 12:22:01 shardsvr2.snb.internal pbm-agent[1911]: 2026-04-22T12:22:01.682+0000 archive format version "0.1"
Apr 22 12:22:01 shardsvr2.snb.internal pbm-agent[1911]: 2026-04-22T12:22:01.682+0000 archive server version "8.0.20"
Apr 22 12:22:01 shardsvr2.snb.internal pbm-agent[1911]: 2026-04-22T12:22:01.682+0000 archive tool version "2.13.0"
Apr 22 12:22:01 shardsvr2.snb.internal pbm-agent[1911]: 2026-04-22T12:22:01.695+0000 preparing collections to restore from
Apr 22 12:22:01 shardsvr2.snb.internal pbm-agent[1911]: 2026-04-22T12:22:01.695+0000 skipping restoring audit.elements.versions.altest, it is not included
Apr 22 12:22:01 shardsvr2.snb.internal pbm-agent[1911]: 2026-04-22T12:22:01.695+0000 skipping restoring audit.elements.versions.altest metadata, it is not included
Apr 22 12:22:01 shardsvr2.snb.internal pbm-agent[1911]: 2026-04-22T12:22:01.695+0000 skipping restoring audit.elements.versions.au13262208260521, it is not included
Apr 22 12:22:01 shardsvr2.snb.internal pbm-agent[1911]: 2026-04-22T12:22:01.696+0000 skipping restoring audit.elements.versions.au13262208260521 metadata, it is not>
Apr 22 12:22:01 shardsvr2.snb.internal pbm-agent[1911]: 2026-04-22T12:22:01.696+0000 skipping restoring audit.elements.versions.authtesting01, it is not included
Apr 22 12:22:01 shardsvr2.snb.internal pbm-agent[1911]: 2026-04-22T12:22:01.696+0000 skipping restoring audit.elements.versions.authtesting01 metadata, it is not in>
Apr 22 12:22:01 shardsvr2.snb.internal pbm-agent[1911]: 2026-04-22T12:22:01.696+0000 skipping restoring audit.elements.versions.autoapi2, it is not included
Apr 22 12:22:01 shardsvr2.snb.internal pbm-agent[1911]: 2026-04-22T12:22:01.696+0000 skipping restoring audit.elements.versions.autoapi2 metadata, it is not included
Apr 22 12:22:01 shardsvr2.snb.internal pbm-agent[1911]: 2026-04-22T12:22:01.696+0000 skipping restoring audit.elements.versions.autotestingf12, it is not included
Apr 22 12:22:01 shardsvr2.snb.internal pbm-agent[1911]: 2026-04-22T12:22:01.696+0000 skipping restoring audit.elements.versions.autotestingf12 metadata, it is not i>
Apr 22 12:22:01 shardsvr2.snb.internal pbm-agent[1911]: 2026-04-22T12:22:01.696+0000 skipping restoring audit.elements.versions.ayf12, it is not included
Apr 22 12:22:01 shardsvr2.snb.internal pbm-agent[1911]: 2026-04-22T12:22:01.696+0000 skipping restoring audit.elements.versions.ayf12 metadata, it is not included
Apr 22 12:22:01 shardsvr2.snb.internal pbm-agent[1911]: 2026-04-22T12:22:01.696+0000 skipping restoring audit.elements.versions.catoki, it is not included
Apr 22 12:22:01 shardsvr2.snb.internal pbm-agent[1911]: 2026-04-22T12:22:01.696+0000 skipping restoring audit.elements.versions.catoki metadata, it is not included
Apr 22 12:22:01 shardsvr2.snb.internal pbm-agent[1911]: 2026-04-22T12:22:01.696+0000 skipping restoring audit.elements.versions.chiswashere, it is not included
Apr 22 12:22:01 shardsvr2.snb.internal pbm-agent[1911]: 2026-04-22T12:22:01.696+0000 skipping restoring audit.elements.versions.chiswashere metadata, it is not incl>
Apr 22 12:22:01 shardsvr2.snb.internal pbm-agent[1911]: 2026-04-22T12:22:01.696+0000 skipping restoring audit.elements.versions.chriswasheretoo, it is not included
Apr 22 12:22:01 shardsvr2.snb.internal pbm-agent[1911]: 2026-04-22T12:22:01.696+0000 skipping restoring audit.elements.versions.chriswasheretoo metadata, it is not >
Apr 22 12:22:01 shardsvr2.snb.internal pbm-agent[1911]: 2026-04-22T12:22:01.696+0000 skipping restoring audit.elements.versions.devops, it is not included
Apr 22 12:22:01 shardsvr2.snb.internal pbm-agent[1911]: 2026-04-22T12:22:01.696+0000 skipping restoring audit.elements.versions.devops metadata, it is not included
Apr 22 12:22:01 shardsvr2.snb.internal pbm-agent[1911]: 2026-04-22T12:22:01.696+0000 skipping restoring audit.elements.versions.devopstesting, it is not included
Apr 22 12:22:01 shardsvr2.snb.internal pbm-agent[1911]: 2026-04-22T12:22:01.696+0000 skipping restoring audit.elements.versions.devopstesting metadata, it is not in>
Apr 22 12:22:01 shardsvr2.snb.internal pbm-agent[1911]: 2026-04-22T12:22:01.696+0000 skipping restoring audit.elements.versions.djf12, it is not included
Apr 22 12:22:01 shardsvr2.snb.internal pbm-agent[1911]: 2026-04-22T12:22:01.696+0000 skipping restoring audit.elements.versions.djf12 metadata, it is not included
Apr 22 12:22:01 shardsvr2.snb.internal pbm-agent[1911]: 2026-04-22T12:22:01.696+0000 skipping restoring audit.elements.versions.djf12t1, it is not included
Apr 22 12:22:01 shardsvr2.snb.internal pbm-agent[1911]: 2026-04-22T12:22:01.696+0000 skipping restoring audit.elements.versions.djf12t1 metadata, it is not included
Apr 22 12:22:01 shardsvr2.snb.internal pbm-agent[1911]: 2026-04-22T12:22:01.696+0000 skipping restoring audit.elements.versions.djf12t2, it is not included
Apr 22 12:22:01 shardsvr2.snb.internal pbm-agent[1911]: 2026-04-22T12:22:01.696+0000 skipping restoring audit.elements.versions.djf12t2 metadata, it is not included
Apr 22 12:22:01 shardsvr2.snb.internal pbm-agent[1911]: 2026-04-22T12:22:01.696+0000 skipping restoring audit.elements.versions.djf12t3, it is not included
Apr 22 12:22:01 shardsvr2.snb.internal pbm-agent[1911]: 2026-04-22T12:22:01.696+0000 skipping restoring audit.elements.versions.djf12t3 metadata, it is not included

On the cfgsvr nodes the log reads as follows:

Apr 22 12:22:07 cfgsvr2.snb.internal pbm-agent[1900]: 2026-04-22T12:22:07.000+0000 I [restore/2026-04-22T12:21:49.955068681Z] starting oplog replay
Apr 22 12:22:07 cfgsvr2.snb.internal pbm-agent[1900]: 2026-04-22T12:22:07.000+0000 D [restore/2026-04-22T12:21:49.955068681Z] + applying {cfgRS 2026-04-18T00:10:01Z/cfgRS/>
Apr 22 12:22:07 cfgsvr2.snb.internal pbm-agent[1900]: 2026-04-22T12:22:07.000+0000 I [restore/2026-04-22T12:21:49.955068681Z] oplog replay finished on {1776471006 6}
Apr 22 12:22:07 cfgsvr2.snb.internal pbm-agent[1900]: 2026-04-22T12:22:07.000+0000 D [restore/2026-04-22T12:21:49.955068681Z] building indexes up
Apr 22 12:22:07 cfgsvr2.snb.internal pbm-agent[1900]: 2026-04-22T12:22:07.000+0000 D [restore/2026-04-22T12:21:49.955068681Z] skip restore indexes for "admin.pbmRUsers"
Apr 22 12:22:07 cfgsvr2.snb.internal pbm-agent[1900]: 2026-04-22T12:22:07.000+0000 D [restore/2026-04-22T12:21:49.955068681Z] skip restore indexes for "admin.system.users"
Apr 22 12:22:07 cfgsvr2.snb.internal pbm-agent[1900]: 2026-04-22T12:22:07.000+0000 D [restore/2026-04-22T12:21:49.955068681Z] skip restore indexes for "admin.pbmAgents"
Apr 22 12:22:07 cfgsvr2.snb.internal pbm-agent[1900]: 2026-04-22T12:22:07.000+0000 D [restore/2026-04-22T12:21:49.955068681Z] skip restore indexes for "admin.pbmBackups"
Apr 22 12:22:07 cfgsvr2.snb.internal pbm-agent[1900]: 2026-04-22T12:22:07.000+0000 D [restore/2026-04-22T12:21:49.955068681Z] skip restore indexes for "admin.pbmConfig"
Apr 22 12:22:07 cfgsvr2.snb.internal pbm-agent[1900]: 2026-04-22T12:22:07.000+0000 D [restore/2026-04-22T12:21:49.955068681Z] skip restore indexes for "admin.pbmPITRChunks"
Apr 22 12:22:07 cfgsvr2.snb.internal pbm-agent[1900]: 2026-04-22T12:22:07.000+0000 D [restore/2026-04-22T12:21:49.955068681Z] skip restore indexes for "admin.system.roles"
Apr 22 12:22:07 cfgsvr2.snb.internal pbm-agent[1900]: 2026-04-22T12:22:07.000+0000 D [restore/2026-04-22T12:21:49.955068681Z] skip restore indexes for "admin.pbmPITR"
Apr 22 12:22:07 cfgsvr2.snb.internal pbm-agent[1900]: 2026-04-22T12:22:07.000+0000 D [restore/2026-04-22T12:21:49.955068681Z] skip restore indexes for "admin.pbmRRoles"
Apr 22 12:22:07 cfgsvr2.snb.internal pbm-agent[1900]: 2026-04-22T12:22:07.000+0000 D [restore/2026-04-22T12:21:49.955068681Z] skip restore indexes for "admin.pbmRestores"
Apr 22 12:22:07 cfgsvr2.snb.internal pbm-agent[1900]: 2026-04-22T12:22:07.000+0000 D [restore/2026-04-22T12:21:49.955068681Z] skip restore indexes for "admin.pbmCmd"
Apr 22 12:22:07 cfgsvr2.snb.internal pbm-agent[1900]: 2026-04-22T12:22:07.000+0000 D [restore/2026-04-22T12:21:49.955068681Z] skip restore indexes for "admin.pbmLockOp"
Apr 22 12:22:07 cfgsvr2.snb.internal pbm-agent[1900]: 2026-04-22T12:22:07.000+0000 D [restore/2026-04-22T12:21:49.955068681Z] skip restore indexes for "admin.pbmLog"
Apr 22 12:22:07 cfgsvr2.snb.internal pbm-agent[1900]: 2026-04-22T12:22:07.000+0000 D [restore/2026-04-22T12:21:49.955068681Z] skip restore indexes for "admin.pbmOpLog"
Apr 22 12:22:07 cfgsvr2.snb.internal pbm-agent[1900]: 2026-04-22T12:22:07.000+0000 D [restore/2026-04-22T12:21:49.955068681Z] skip restore indexes for "admin.system.versio>
Apr 22 12:22:07 cfgsvr2.snb.internal pbm-agent[1900]: 2026-04-22T12:22:07.000+0000 D [restore/2026-04-22T12:21:49.955068681Z] skip restore indexes for "admin.pbmLock"
Apr 22 12:22:07 cfgsvr2.snb.internal pbm-agent[1900]: 2026-04-22T12:22:07.000+0000 D [restore/2026-04-22T12:21:49.955068681Z] skip restore indexes for "config.shards"
Apr 22 12:22:07 cfgsvr2.snb.internal pbm-agent[1900]: 2026-04-22T12:22:07.000+0000 D [restore/2026-04-22T12:21:49.955068681Z] skip restore indexes for "config.tags"
Apr 22 12:22:07 cfgsvr2.snb.internal pbm-agent[1900]: 2026-04-22T12:22:07.000+0000 D [restore/2026-04-22T12:21:49.955068681Z] skip restore indexes for "config.version"
Apr 22 12:22:07 cfgsvr2.snb.internal pbm-agent[1900]: 2026-04-22T12:22:07.000+0000 D [restore/2026-04-22T12:21:49.955068681Z] skip restore indexes for "config.chunks"
Apr 22 12:22:07 cfgsvr2.snb.internal pbm-agent[1900]: 2026-04-22T12:22:07.000+0000 D [restore/2026-04-22T12:21:49.955068681Z] skip restore indexes for "config.collections"
Apr 22 12:22:07 cfgsvr2.snb.internal pbm-agent[1900]: 2026-04-22T12:22:07.000+0000 D [restore/2026-04-22T12:21:49.955068681Z] skip restore indexes for "config.databases"
Apr 22 12:22:07 cfgsvr2.snb.internal pbm-agent[1900]: 2026-04-22T12:22:07.000+0000 D [restore/2026-04-22T12:21:49.955068681Z] skip restore indexes for "config.settings"
Apr 22 12:22:15 cfgsvr2.snb.internal pbm-agent[1900]: 2026-04-22T12:22:15.000+0000 D [restore/2026-04-22T12:21:49.955068681Z] epoch set to {1776860535 4}
Apr 22 12:22:15 cfgsvr2.snb.internal pbm-agent[1900]: 2026-04-22T12:22:15.000+0000 I [restore/2026-04-22T12:21:49.955068681Z] recovery successfully finished

So the restore is not successful.

Hi, restoring into a new, fresh cluster requires some extra steps. Please check Restoring into a cluster / replica set with a different name - Percona Backup for MongoDB.

I also got a bunch of errors like the ones below:

[ec2-user@cfgsvr3 ~]$ pbm-wrapper config --force-resync
Storage resync started
[ec2-user@cfgsvr3 ~]$ journalctl -u pbm status
Failed to add match 'status': Invalid argument
[ec2-user@cfgsvr3 ~]$ journalctl -u pbm-agent status
Failed to add match 'status': Invalid argument
[ec2-user@cfgsvr3 ~]$ journalctl -u pbm-agent -f
Apr 21 05:00:34 cfgsvr3.snb.internal pbm-agent[1902]: 2026-04-21T05:00:34.000+0000 I got epoch {1776746874 1}
Apr 21 05:00:34 cfgsvr3.snb.internal pbm-agent[1902]: 2026-04-21T05:00:34.000+0000 I [resync] started
Apr 21 05:00:34 cfgsvr3.snb.internal pbm-agent[1902]: 2026-04-21T05:00:34.000+0000 E [resync] resync: reinit storage: delete init file: delete 'snb-f12-int-s3-backups/.pbm.init' file from S3: operation error S3: DeleteObject, https response error StatusCode: 403, RequestID: MX0WZMBV37AJ8FR5, HostID: SqJRI0+2kot7cm+7dPsJWZrYgx0Q50+T276KXioNiii/eHdl4geapN0CbS9Ql2Bj9wGwl0sE1FpFPR75ViLtaQs2dq0ahbJ+, api error AccessDenied: User: arn:aws:sts::311150721637:assumed-role/snb-f12-int-iamrole-mongo/i-05fbcd800483c5266 is not authorized to perform: s3:DeleteObject on resource: "arn:aws:s3:::snb-f12-int-s3-backups/mongo-shard-config/.pbm.init" because no identity-based policy allows the s3:DeleteObject action
Apr 21 05:00:38 cfgsvr3.snb.internal pbm-agent[1902]: 2026-04-21T05:00:38.000+0000 D [pitr] waiting pitr nomination
Apr 21 05:00:40 cfgsvr3.snb.internal pbm-agent[1902]: 2026-04-21T05:00:40.000+0000 D [pitr] skip after pitr nomination, probably started by another node
Apr 21 05:00:55 cfgsvr3.snb.internal pbm-agent[1902]: 2026-04-21T05:00:55.000+0000 D [pitr] waiting for cluster ready status
Apr 21 05:01:13 cfgsvr3.snb.internal pbm-agent[1902]: 2026-04-21T05:01:13.000+0000 D [pitr] waiting pitr nomination
Apr 21 05:01:15 cfgsvr3.snb.internal pbm-agent[1902]: 2026-04-21T05:01:15.000+0000 D [pitr] start_catchup
Apr 21 05:01:15 cfgsvr3.snb.internal pbm-agent[1902]: 2026-04-21T05:01:15.000+0000 D [pitr] setting RS error status for err: catchup: get last backup: no backup found. full backup is required to start PITR
Apr 21 05:01:15 cfgsvr3.snb.internal pbm-agent[1902]: 2026-04-21T05:01:15.000+0000 E [pitr] init: catchup: get last backup: no backup found. full backup is required to start PITR
Apr 21 05:01:30 cfgsvr3.snb.internal pbm-agent[1902]: 2026-04-21T05:01:30.000+0000 D [pitr] waiting for cluster ready status

When I deleted .pbm.init from S3 and re-ran pbm-wrapper config --force-resync, it pointed me at the old backups again. I then tried the restore through the mongos router and got the errors shown above. What else am I missing?

Why am I getting these messages as well?

Apr 22 12:22:01 shardsvr2.snb.internal pbm-agent[1911]: 2026-04-22T12:22:01.695+0000 preparing collections to restore from
Apr 22 12:22:01 shardsvr2.snb.internal pbm-agent[1911]: 2026-04-22T12:22:01.695+0000 skipping restoring audit.elements.versions.altest, it is not included
Apr 22 12:22:01 shardsvr2.snb.internal pbm-agent[1911]: 2026-04-22T12:22:01.695+0000 skipping restoring audit.elements.versions.altest metadata, it is not included
Apr 22 12:22:01 shardsvr2.snb.internal pbm-agent[1911]: 2026-04-22T12:22:01.695+0000 skipping restoring audit.elements.versions.au13262208260521, it is not included

Is anything excluded when restoring the full data set?

It seems the storage user is missing some permissions; check what's needed here: Remote backup storage overview - Percona Backup for MongoDB.

Also, if this is not real S3 storage, you should use the MinIO driver for S3 emulation instead.
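The resync log above shows the assumed role being denied s3:DeleteObject on the .pbm.init object, so at minimum that action is missing. Here's a minimal identity-policy sketch, using the bucket and prefix from this thread (the exact action list should be checked against the storage overview page, and the file path here is just for illustration):

```shell
# Write a candidate IAM policy for the PBM storage role; adjust bucket,
# prefix, and actions to your environment before attaching it.
cat > /tmp/pbm-s3-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::snb-f12-int-s3-backups"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::snb-f12-int-s3-backups/mongo-shard-config/*"
    }
  ]
}
EOF
grep -q '"s3:DeleteObject"' /tmp/pbm-s3-policy.json && echo "policy includes s3:DeleteObject"
```

Note the standard IAM split: ListBucket applies to the bucket ARN, while the object-level actions apply to the prefix.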

This is not an S3 permission issue, as the data is fully backed up when running:

docker run --rm -v /etc/mongo/ssl/ca-chainv4.cert.pem:/ca-chainv4.pem:ro percona/percona-backup-mongodb:2.13.0 pbm backup --mongodb-uri="mongodb://pbmuser:*****@cfgsvr1.snb.internal:27019,cfgsvr2.snb.internal:27019,cfgsvr3.snb.internal:27019/?authSource=admin&replicaSet=cfgRS"

aws s3 ls s3://snb-f12-int-s3-backups/mongo-shard-config/2026-04-20T16:10:02Z/shardRS/
PRE oplog/
2026-04-20 16:10:07 129 admin.pbmRRoles.zst
2026-04-20 16:10:07 1717 admin.pbmRUsers.zst
2026-04-20 16:10:07 129 admin.system.roles.zst
2026-04-20 16:10:07 1715 admin.system.users.zst
2026-04-20 16:10:07 249 admin.system.version.zst
2026-04-20 16:10:08 85684 audit.elements.versions.altest.zst
2026-04-20 16:10:08 70214 audit.elements.versions.au13262208260521.zst
2026-04-20 16:10:09 87665 audit.elements.versions.authtesting01.zst
2026-04-20 16:10:12 88241 audit.elements.versions.autoapi2.zst
2026-04-20 16:10:08 1380603 audit.elements.versions.autotestingf12.zst
2026-04-20 16:10:10 157857 audit.elements.versions.ayf12.zst
2026-04-20 16:10:09 74377 audit.elements.versions.catoki.zst
2026-04-20 16:10:09 90794 audit.elements.versions.chiswashere.zst
2026-04-20 16:10:12 88605 audit.elements.versions.chriswasheretoo.zst
2026-04-20 16:10:10 194645 audit.elements.versions.devops.zst
2026-04-20 16:10:10 140511 audit.elements.versions.devopstesting.zst
2026-04-20 16:10:09 87332 audit.elements.versions.djf12.zst
2026-04-20 16:10:08 134272 audit.elements.versions.djf12t1.zst
2026-04-20 16:10:10 174397 audit.elements.versions.djf12t2.zst
2026-04-20 16:10:09 79607 audit.elements.versions.djf12t3.zst
2026-04-20 16:10:09 81623 audit.elements.versions.djf12t4.zst
2026-04-20 16:10:10 241958 audit.elements.versions.dnyf12.zst
2026-04-20 16:10:08 73890 audit.elements.versions.dnyf12013.zst
2026-04-20 16:10:09 75250 audit.elements.versions.dnyf12dj.zst
2026-04-20 16:10:11 3359116 audit.elements.versions.docdb1.zst
2026-04-20 16:10:08 88380 audit.elements.versions.invqa.zst
2026-04-20 16:10:11 9264 audit.elements.versions.links.au13262208260521.zst
2026-04-20 16:10:12 451 audit.elements.versions.links.autoapi2.zst
2026-04-20 16:10:10 18374 audit.elements.versions.links.ayf12.zst
2026-04-20 16:10:09 9291 audit.elements.versions.links.catoki.zst
2026-04-20 16:10:12 456 audit.elements.versions.links.chiswashere.zst
2026-04-20 16:10:10 441 audit.elements.versions.links.chriswasheretoo.zst
2026-04-20 16:10:08 10648 audit.elements.versions.links.devops.zst
2026-04-20 16:10:08 2946 audit.elements.versions.links.devopstesting.zst
2026-04-20 16:10:10 449 audit.elements.versions.links.djf12.zst
2026-04-20 16:10:08 1621 audit.elements.versions.links.docdb1.zst
2026-04-20 16:10:10 460 audit.elements.versions.links.invqa.zst
2026-04-20 16:10:07 389824 audit.elements.versions.links.pkiautomation.zst
2026-04-20 16:10:10 464 audit.elements.versions.links.pkimanual2.zst
2026-04-20 16:10:08 1106 audit.elements.versions.links.public.zst
2026-04-20 16:10:08 10205 audit.elements.versions.links.rdsupgrade.zst
2026-04-20 16:10:09 10691 audit.elements.versions.links.rmqtest.zst
2026-04-20 16:10:08 9625 audit.elements.versions.links.vlad3.zst
2026-04-20 16:10:12 2679734 audit.elements.versions.pkiautomation.zst
2026-04-20 16:10:09 87799 audit.elements.versions.pkimanual2.zst
2026-04-20 16:10:09 41000 audit.elements.versions.public.zst
2026-04-20 16:10:10 75577 audit.elements.versions.rb1.zst
2026-04-20 16:10:11 74665 audit.elements.versions.rdsupgrade.zst
2026-04-20 16:10:10 75627 audit.elements.versions.rmqtest.zst
2026-04-20 16:10:11 96345 audit.elements.versions.semantic.zst
2026-04-20 16:10:11 81519 audit.elements.versions.sz1.zst
2026-04-20 16:10:09 77612 audit.elements.versions.vlad3.zst
2026-04-20 16:10:12 94715 meta.pbm
2026-04-20 16:10:12 169187 metadata.json

which looks right, but the restore is broken. I need some help with that.

This follows the Percona documentation, but should the restore be run from the mongos router or from the nodes that have pbm installed?

Sorry, I just noticed your pbm-agent connection URI is not correct. Each pbm-agent should always point to its own local mongod. The pbm client that triggers the restore then points to the config servers. More info: Configure authentication in MongoDB - Percona Backup for MongoDB.
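To make the distinction concrete, the two URIs would look roughly like this, using hosts from this thread (passwords masked; treat this as a sketch, not your exact config):

```shell
# pbm-agent on each node: its own local mongod only
# (e.g. /etc/sysconfig/pbm-agent on shardsvr1)
PBM_MONGODB_URI="mongodb://pbmuser:****@shardsvr1.snb.internal:27018/?authSource=admin&tls=true&tlsCAFile=/data/mongo/config/ca-chainv4.cert.pem"

# pbm CLI (or your wrapper): the config server replica set, all members listed
PBM_MONGODB_URI="mongodb://pbmuser:****@cfgsvr1.snb.internal:27019,cfgsvr2.snb.internal:27019,cfgsvr3.snb.internal:27019/?authSource=admin&replicaSet=cfgRS&tls=true&tlsCAFile=/data/mongo/config/ca-chainv4.cert.pem"
```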

Hi Ivan,

It is pointing to its own localhost/hostname only.

for cfgsvr1

cat /etc/sysconfig/pbm-agent
PBM_MONGODB_URI="mongodb://pbmuser:*****@cfgsvr1.snb.internal:27019/admin?replicaSet=cfgRS&authSource=admin&tls=true&tlsCAFile=/data/mongo/config/ca-chainv4.cert.pem"

for cfgsvr2

cat /etc/sysconfig/pbm-agent
PBM_MONGODB_URI="mongodb://pbmuser:***@cfgsvr2.snb.internal:27019/admin?replicaSet=cfgRS&authSource=admin&tls=true&tlsCAFile=/data/mongo/config/ca-chainv4.cert.pem"

for cfgsvr3

cat /etc/sysconfig/pbm-agent
PBM_MONGODB_URI="mongodb://pbmuser:*****@cfgsvr3.snb.internal:27019/admin?replicaSet=cfgRS&authSource=admin&tls=true&tlsCAFile=/data/mongo/config/ca-chainv4.cert.pem"

for shard nodes

for shardsvr1

cat /etc/sysconfig/pbm-agent
PBM_MONGODB_URI="mongodb://pbmuser:****@shardsvr1.snb.internal:27018/admin?replicaSet=shardRS&authSource=admin&tls=true&tlsCAFile=/data/mongo/config/ca-chainv4.cert.pem"

for shardsvr2

cat /etc/sysconfig/pbm-agent
PBM_MONGODB_URI="mongodb://pbmuser:****@shardsvr2.snb.internal:27018/admin?replicaSet=shardRS&authSource=admin&tls=true&tlsCAFile=/data/mongo/config/ca-chainv4.cert.pem"

for shardsvr3

cat /etc/sysconfig/pbm-agent
PBM_MONGODB_URI="mongodb://pbmuser:****@shardsvr3.snb.internal:27018/admin?replicaSet=shardRS&authSource=admin&tls=true&tlsCAFile=/data/mongo/config/ca-chainv4.cert.pem"

So it is configured; the main issue is the pbm restore, and how we can debug it.

Please let me know if you want more information

Sorry, but that is still not right. Here's an example of how it should look:

PBM_MONGODB_URI="mongodb://pbmuser:****@shardsvr3.snb.internal:27018/?authSource=admin&tls=true&tlsCAFile=/data/mongo/config/ca-chainv4.cert.pem"

Basically, don't include the replicaSet part, and don't point pbm-agent at any specific db, or you will get these sorts of errors.
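If it helps, the transformation can be sketched with sed; the function name and sample password below are hypothetical, but the input is one of the URIs from this thread:

```shell
# Hypothetical helper: drop the replicaSet parameter and the /admin db path
# from a PBM_MONGODB_URI value, per the advice above.
fix_uri() {
  printf '%s\n' "$1" \
    | sed -e 's/replicaSet=[^&]*&//' \
          -e 's/&replicaSet=[^&]*//' \
          -e 's|/admin?|/?|'
}

fix_uri 'mongodb://pbmuser:PASS@shardsvr3.snb.internal:27018/admin?replicaSet=shardRS&authSource=admin&tls=true&tlsCAFile=/data/mongo/config/ca-chainv4.cert.pem'
# → mongodb://pbmuser:PASS@shardsvr3.snb.internal:27018/?authSource=admin&tls=true&tlsCAFile=/data/mongo/config/ca-chainv4.cert.pem
```

Run this against each /etc/sysconfig/pbm-agent value and restart the agents afterwards.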

Let me check on this and get back.

Hi Ivan,

Thanks for your help and letting me know the issue.

The restore is working now after removing the replicaSet parameter.

Great! Thanks for letting me know.