[PBM] Restoration does not put back the data

Hello Team,
I have an issue restoring data from a backup taken with Percona Backup for MongoDB (PBM).

My setup:

  • 6 nodes
  • 2 shards
  • sharded database

I backed up my whole database. This is the document count before the backup:

use generated
db["generated.1"].countDocuments()
80

This is the status of pbm

pbm status
Cluster:
configsvr:
  - 172.16.24.59:27018 [S]: pbm-agent [v2.9.0] OK
  - 172.16.24.146:27018 [S]: pbm-agent [v2.9.0] OK
  - 172.16.24.113:27018 [P]: pbm-agent [v2.9.0] OK
  - 172.16.24.34:27018 [S]: pbm-agent [v2.9.0] OK
  - 172.16.24.218:27018 [S]: pbm-agent [v2.9.0] OK
  - 172.16.24.158:27018 [S]: pbm-agent [v2.9.0] OK
shard0:
  - 172.16.24.59:27019 [S]: pbm-agent [v2.9.0] OK
  - 172.16.24.146:27019 [S]: pbm-agent [v2.9.0] OK
  - 172.16.24.113:27019 [P]: pbm-agent [v2.9.0] OK
shard1:
  - 172.16.24.34:27019 [S]: pbm-agent [v2.9.0] OK
  - 172.16.24.218:27019 [P]: pbm-agent [v2.9.0] OK
  - 172.16.24.158:27019 [S]: pbm-agent [v2.9.0] OK


PITR incremental backup:
========================
Status [ON]

Currently running:
==================
(none)

Backups:
========
S3 fr-par https://myS3//percona-backup-mongo
  Snapshots:
    2025-04-07T08:55:46Z 353.78KB <logical> [restore_to_time: 2025-04-07T08:55:50Z]
    2025-04-07T08:50:18Z 351.10KB <logical> [restore_to_time: 2025-04-07T08:50:22Z]
  PITR chunks [777.96KB]:
    2025-04-07T08:50:23Z - 2025-04-07T08:55:41Z

I then tried to restore my previous backup.

First, I dropped the generated database:

use generated
db.dropDatabase()
pbm config --set pitr.enabled=false
pbm restore '2025-04-07T08:55:46Z'

I noticed that the database was restored without the data. Please find the logs attached; they only show index restoration and oplog replay.

Any idea what is happening here?

Please find the log below:

2025-04-07T09:03:57Z I [shard1/172.16.24.34:27019] [restore/2025-04-07T09:03:57.013916851Z] backup: 2025-04-07T08:55:46Z
2025-04-07T09:03:57Z I [shard1/172.16.24.34:27019] [restore/2025-04-07T09:03:57.013916851Z] recovery started
2025-04-07T09:03:57Z I [shard1/172.16.24.34:27019] [restore/2025-04-07T09:03:57.013916851Z] This node is not the primary. Check pbm agent on the primary for restore progress
2025-04-07T09:03:58Z I [shard0/172.16.24.113:27019] [restore/2025-04-07T09:03:57.013916851Z] moving to state running
2025-04-07T09:03:58Z I [configsvr/172.16.24.113:27018] [restore/2025-04-07T09:03:57.013916851Z] moving to state running
2025-04-07T09:03:58Z I [shard1/172.16.24.218:27019] [restore/2025-04-07T09:03:57.013916851Z] moving to state running
2025-04-07T09:04:01Z I [shard1/172.16.24.218:27019] [restore/2025-04-07T09:03:57.013916851Z] restoring users and roles
2025-04-07T09:04:01Z I [shard1/172.16.24.218:27019] [restore/2025-04-07T09:03:57.013916851Z] moving to state dumpDone
2025-04-07T09:04:02Z I [shard0/172.16.24.113:27019] [restore/2025-04-07T09:03:57.013916851Z] restoring users and roles
2025-04-07T09:04:02Z I [shard0/172.16.24.113:27019] [restore/2025-04-07T09:03:57.013916851Z] moving to state dumpDone
2025-04-07T09:04:03Z I [configsvr/172.16.24.113:27018] [restore/2025-04-07T09:03:57.013916851Z] restoring users and roles
2025-04-07T09:04:03Z I [configsvr/172.16.24.113:27018] [restore/2025-04-07T09:03:57.013916851Z] moving to state dumpDone
2025-04-07T09:04:05Z I [shard1/172.16.24.218:27019] [restore/2025-04-07T09:03:57.013916851Z] starting oplog replay
2025-04-07T09:04:05Z I [shard1/172.16.24.218:27019] [restore/2025-04-07T09:03:57.013916851Z] oplog replay finished on {1744016150 5}
2025-04-07T09:04:05Z I [shard1/172.16.24.218:27019] [restore/2025-04-07T09:03:57.013916851Z] restoring indexes for metadata.ObjectGroup: _tenant_1, _ops_1, _glpd_1, _id_
2025-04-07T09:04:05Z I [shard1/172.16.24.218:27019] [restore/2025-04-07T09:03:57.013916851Z] restoring indexes for metadata.Unit: _tenant_1, _ops_1, _glpd_1, _id_
2025-04-07T09:04:05Z I [shard1/172.16.24.218:27019] [restore/2025-04-07T09:03:57.013916851Z] restoring indexes for report.PurgeObjectGroup: processId_1__tenant_1__metadata.id_1, processId_1__tenant_1
2025-04-07T09:04:05Z I [shard1/172.16.24.218:27019] [restore/2025-04-07T09:03:57.013916851Z] restoring indexes for report.PurgeUnit: processId_1__tenant_1, processId_1__tenant_1__metadata.id_1, processId_1__tenant_1__metadata.status_1
2025-04-07T09:04:05Z I [shard1/172.16.24.218:27019] [restore/2025-04-07T09:03:57.013916851Z] restoring indexes for report.BulkUpdateUnitMetadataReport: processId_1__tenant_1_statusId_1, processId_1__tenant_1_id_1
2025-04-07T09:04:05Z I [shard1/172.16.24.218:27019] [restore/2025-04-07T09:03:57.013916851Z] restoring indexes for report.EliminationActionUnit: processId_1__tenant_1, processId_1__tenant_1__metadata.id_1__metadata.type_1
2025-04-07T09:04:05Z I [shard1/172.16.24.218:27019] [restore/2025-04-07T09:03:57.013916851Z] restoring indexes for report.ExtractedMetadata: processId_1_tenant_1_id_1
2025-04-07T09:04:05Z I [shard1/172.16.24.218:27019] [restore/2025-04-07T09:03:57.013916851Z] restoring indexes for report.PreservationReport: processId_1__tenant_1, processId_1__tenant_1_id_1, processId_1__tenant_1_status_1
2025-04-07T09:04:05Z I [shard1/172.16.24.218:27019] [restore/2025-04-07T09:03:57.013916851Z] restoring indexes for report.TransferReplyUnit: processId_1__tenant_1, processId_1__tenant_1__metadata.id_1
2025-04-07T09:04:05Z I [shard1/172.16.24.218:27019] [restore/2025-04-07T09:03:57.013916851Z] restoring indexes for report.AuditObjectGroup: processId_1__tenant_1, processId_1__tenant_1__metadata.id_1, processId_1__tenant_1__metadata.status_1
2025-04-07T09:04:05Z I [shard1/172.16.24.218:27019] [restore/2025-04-07T09:03:57.013916851Z] restoring indexes for report.DeleteGotVersionsReport: processId_1__tenant_1
2025-04-07T09:04:05Z I [shard1/172.16.24.218:27019] [restore/2025-04-07T09:03:57.013916851Z] restoring indexes for report.EvidenceAuditReport: processId_1__tenant_1__metadata.status_1, processId_1__tenant_1, processId_1__tenant_1__metadata.id_1
2025-04-07T09:04:05Z I [shard1/172.16.24.218:27019] [restore/2025-04-07T09:03:57.013916851Z] restoring indexes for report.InvalidUnits: processId_1__tenant_1, processId_1__tenant_1__metadata.id_1
2025-04-07T09:04:05Z I [shard1/172.16.24.218:27019] [restore/2025-04-07T09:03:57.013916851Z] restoring indexes for admin.system.roles: role_1_db_1
2025-04-07T09:04:05Z I [shard1/172.16.24.218:27019] [restore/2025-04-07T09:03:57.013916851Z] restoring indexes for admin.system.users: user_1_db_1
2025-04-07T09:04:05Z I [shard1/172.16.24.218:27019] [restore/2025-04-07T09:03:57.013916851Z] restoring indexes for generated.generated.1: _id_hashed
2025-04-07T09:04:05Z I [shard1/172.16.24.218:27019] [restore/2025-04-07T09:03:57.013916851Z] restoring indexes for generated.generated.3: _id_hashed
2025-04-07T09:04:05Z I [shard0/172.16.24.113:27019] [restore/2025-04-07T09:03:57.013916851Z] starting oplog replay
2025-04-07T09:04:05Z I [shard1/172.16.24.218:27019] [restore/2025-04-07T09:03:57.013916851Z] restoring indexes for generated.generated.4: _id_hashed
2025-04-07T09:04:05Z I [shard1/172.16.24.218:27019] [restore/2025-04-07T09:03:57.013916851Z] restoring indexes for generated.generated.6: _id_hashed
2025-04-07T09:04:05Z I [shard1/172.16.24.218:27019] [restore/2025-04-07T09:03:57.013916851Z] restoring indexes for generated.generated.9: _id_
2025-04-07T09:04:05Z I [shard0/172.16.24.113:27019] [restore/2025-04-07T09:03:57.013916851Z] oplog replay finished on {1744016150 5}
2025-04-07T09:04:05Z I [shard1/172.16.24.218:27019] [restore/2025-04-07T09:03:57.013916851Z] restoring indexes for generated.generated.10: _id_hashed
2025-04-07T09:04:05Z I [shard0/172.16.24.113:27019] [restore/2025-04-07T09:03:57.013916851Z] restoring indexes for admin.system.roles: role_1_db_1
2025-04-07T09:04:05Z I [shard0/172.16.24.113:27019] [restore/2025-04-07T09:03:57.013916851Z] restoring indexes for admin.system.users: user_1_db_1
2025-04-07T09:04:05Z I [shard0/172.16.24.113:27019] [restore/2025-04-07T09:03:57.013916851Z] restoring indexes for generated.generated.6: _id_hashed
2025-04-07T09:04:05Z I [shard1/172.16.24.218:27019] [restore/2025-04-07T09:03:57.013916851Z] restoring indexes for generated.generated.2: _id_
2025-04-07T09:04:05Z I [shard1/172.16.24.218:27019] [restore/2025-04-07T09:03:57.013916851Z] restoring indexes for generated.generated.5: _id_hashed
2025-04-07T09:04:05Z I [shard0/172.16.24.113:27019] [restore/2025-04-07T09:03:57.013916851Z] restoring indexes for generated.generated.7: _id_hashed
2025-04-07T09:04:05Z I [configsvr/172.16.24.113:27018] [restore/2025-04-07T09:03:57.013916851Z] starting oplog replay
2025-04-07T09:04:05Z I [shard1/172.16.24.218:27019] [restore/2025-04-07T09:03:57.013916851Z] restoring indexes for generated.generated.7: _id_hashed
2025-04-07T09:04:05Z I [shard0/172.16.24.113:27019] [restore/2025-04-07T09:03:57.013916851Z] restoring indexes for generated.generated.9: _id_
2025-04-07T09:04:05Z I [configsvr/172.16.24.113:27018] [restore/2025-04-07T09:03:57.013916851Z] oplog replay finished on {1744016150 5}
2025-04-07T09:04:05Z I [shard0/172.16.24.113:27019] [restore/2025-04-07T09:03:57.013916851Z] restoring indexes for generated.generated.1: _id_hashed
2025-04-07T09:04:05Z I [shard1/172.16.24.218:27019] [restore/2025-04-07T09:03:57.013916851Z] restoring indexes for generated.generated.8: _id_hashed
2025-04-07T09:04:05Z I [configsvr/172.16.24.113:27018] [restore/2025-04-07T09:03:57.013916851Z] restoring indexes for admin.system.roles: role_1_db_1
2025-04-07T09:04:05Z I [configsvr/172.16.24.113:27018] [restore/2025-04-07T09:03:57.013916851Z] restoring indexes for admin.system.users: user_1_db_1
2025-04-07T09:04:05Z I [configsvr/172.16.24.113:27018] [restore/2025-04-07T09:03:57.013916851Z] restoring indexes for admin.pbmLockOp: replset_1_type_1
2025-04-07T09:04:05Z I [shard0/172.16.24.113:27019] [restore/2025-04-07T09:03:57.013916851Z] restoring indexes for generated.generated.10: _id_hashed
2025-04-07T09:04:05Z I [shard1/172.16.24.218:27019] [restore/2025-04-07T09:03:57.013916851Z] restoring indexes for identity.Certificate: Certificate_1
2025-04-07T09:04:05Z I [shard1/172.16.24.218:27019] [restore/2025-04-07T09:03:57.013916851Z] restoring indexes for identity.PersonalCertificate: Hash_1
2025-04-07T09:04:05Z I [configsvr/172.16.24.113:27018] [restore/2025-04-07T09:03:57.013916851Z] restoring indexes for admin.pbmLock: replset_1
2025-04-07T09:04:05Z I [shard1/172.16.24.218:27019] [restore/2025-04-07T09:03:57.013916851Z] restoring indexes for logbook.LogbookLifeCycleObjectGroup: _id_hashed, _tenant_1__lastPersistedDate_1
2025-04-07T09:04:05Z I [configsvr/172.16.24.113:27018] [restore/2025-04-07T09:03:57.013916851Z] restoring indexes for admin.pbmOpLog: opid_1_replset_1
2025-04-07T09:04:05Z I [shard1/172.16.24.218:27019] [restore/2025-04-07T09:03:57.013916851Z] restoring indexes for logbook.LogbookLifeCycleObjectGroupInProcess: _id_hashed
2025-04-07T09:04:05Z I [shard1/172.16.24.218:27019] [restore/2025-04-07T09:03:57.013916851Z] restoring indexes for logbook.LogbookLifeCycleUnit: _id_hashed, _tenant_1__lastPersistedDate_1
2025-04-07T09:04:06Z I [shard0/172.16.24.113:27019] [restore/2025-04-07T09:03:57.013916851Z] restoring indexes for generated.generated.3: _id_hashed
2025-04-07T09:04:06Z I [shard1/172.16.24.218:27019] [restore/2025-04-07T09:03:57.013916851Z] restoring indexes for logbook.LogbookLifeCycleUnitInProcess: _id_hashed
2025-04-07T09:04:06Z I [shard1/172.16.24.218:27019] [restore/2025-04-07T09:03:57.013916851Z] restoring indexes for logbook.LogbookOperation: _id_hashed, _tenant_1_evDateTime_1_evTypeProc_1_events.evDateTime_1_events.outDetail_1, _tenant_1_events.evType_1, _tenant_1_evIdProc_1, _lastPersistedDate_1__tenant_1
2025-04-07T09:04:06Z I [configsvr/172.16.24.113:27018] [restore/2025-04-07T09:03:57.013916851Z] restoring indexes for admin.pbmPITRChunks: rs_1_start_ts_1_end_ts_1, start_ts_1_end_ts_1
2025-04-07T09:04:06Z I [shard1/172.16.24.218:27019] [restore/2025-04-07T09:03:57.013916851Z] restoring indexes for masterdata.AccessionRegisterDetail: OriginatingAgency_1, SubmissionAgency_1, Opc_1, Opi_1, OriginatingAgency_1_Opi_1__tenant_1, _tenant_1
2025-04-07T09:04:06Z I [shard1/172.16.24.218:27019] [restore/2025-04-07T09:03:57.013916851Z] restoring indexes for masterdata.ArchiveUnitProfile: _tenant_1_Identifier_1
2025-04-07T09:04:06Z I [configsvr/172.16.24.113:27018] [restore/2025-04-07T09:03:57.013916851Z] restoring indexes for admin.pbmBackups: name_1, start_ts_1_status_1
2025-04-07T09:04:06Z I [shard1/172.16.24.218:27019] [restore/2025-04-07T09:03:57.013916851Z] restoring indexes for masterdata.FileFormat: PUID_1
2025-04-07T09:04:06Z I [shard0/172.16.24.113:27019] [restore/2025-04-07T09:03:57.013916851Z] restoring indexes for generated.generated.5: _id_hashed
2025-04-07T09:04:06Z I [shard1/172.16.24.218:27019] [restore/2025-04-07T09:03:57.013916851Z] restoring indexes for masterdata.ManagementContract: _tenant_1_Identifier_1
2025-04-07T09:04:06Z I [configsvr/172.16.24.113:27018] [restore/2025-04-07T09:03:57.013916851Z] restoring indexes for config.shards: host_1
2025-04-07T09:04:06Z I [shard1/172.16.24.218:27019] [restore/2025-04-07T09:03:57.013916851Z] restoring indexes for masterdata.Agencies: _tenant_1_Identifier_1
2025-04-07T09:04:06Z I [configsvr/172.16.24.113:27018] [restore/2025-04-07T09:03:57.013916851Z] restoring indexes for config.tags: ns_1_min_1, ns_1_tag_1
2025-04-07T09:04:06Z I [shard1/172.16.24.218:27019] [restore/2025-04-07T09:03:57.013916851Z] restoring indexes for masterdata.Context: Identifier_1
2025-04-07T09:04:06Z I [shard1/172.16.24.218:27019] [restore/2025-04-07T09:03:57.013916851Z] restoring indexes for masterdata.PreservationScenario: _tenant_1_Identifier_1
2025-04-07T09:04:06Z I [configsvr/172.16.24.113:27018] [restore/2025-04-07T09:03:57.013916851Z] restoring indexes for config.chunks: uuid_1_shard_1_min_1, uuid_1_lastmod_1, uuid_1_min_1
2025-04-07T09:04:06Z I [shard1/172.16.24.218:27019] [restore/2025-04-07T09:03:57.013916851Z] restoring indexes for masterdata.SecurityProfile: Name_1, Identifier_1
2025-04-07T09:04:06Z I [shard1/172.16.24.218:27019] [restore/2025-04-07T09:03:57.013916851Z] restoring indexes for masterdata.VitamSequence: Name_1, Name_1__tenant_1
2025-04-07T09:04:06Z I [shard1/172.16.24.218:27019] [restore/2025-04-07T09:03:57.013916851Z] restoring indexes for masterdata.AccessionRegisterSummary: _tenant_1_OriginatingAgency_1, _tenant_1
2025-04-07T09:04:06Z I [shard0/172.16.24.113:27019] [restore/2025-04-07T09:03:57.013916851Z] restoring indexes for generated.generated.2: _id_hashed
2025-04-07T09:04:06Z I [shard1/172.16.24.218:27019] [restore/2025-04-07T09:03:57.013916851Z] restoring indexes for masterdata.FileRules: RuleId_1, RuleType_1, _tenant_1_RuleId_1, _tenant_1
2025-04-07T09:04:06Z I [shard1/172.16.24.218:27019] [restore/2025-04-07T09:03:57.013916851Z] restoring indexes for masterdata.Griffin: _tenant_1_Identifier_1
2025-04-07T09:04:06Z I [shard1/172.16.24.218:27019] [restore/2025-04-07T09:03:57.013916851Z] restoring indexes for masterdata.IngestContract: _tenant_1_Identifier_1
2025-04-07T09:04:06Z I [shard1/172.16.24.218:27019] [restore/2025-04-07T09:03:57.013916851Z] restoring indexes for masterdata.Ontology: _tenant_1_Identifier_1, _tenant_1_Collections_1
2025-04-07T09:04:06Z I [shard1/172.16.24.218:27019] [restore/2025-04-07T09:03:57.013916851Z] restoring indexes for masterdata.Profile: _tenant_1_Identifier_1
2025-04-07T09:04:06Z I [shard0/172.16.24.113:27019] [restore/2025-04-07T09:03:57.013916851Z] restoring indexes for generated.generated.4: _id_hashed
2025-04-07T09:04:06Z I [shard1/172.16.24.218:27019] [restore/2025-04-07T09:03:57.013916851Z] recovery successfully finished
2025-04-07T09:04:06Z I [shard0/172.16.24.113:27019] [restore/2025-04-07T09:03:57.013916851Z] restoring indexes for generated.generated.8: _id_
2025-04-07T09:04:06Z I [shard0/172.16.24.113:27019] [restore/2025-04-07T09:03:57.013916851Z] restoring indexes for logbook.LogbookLifeCycleUnitInProcess: _id_hashed
2025-04-07T09:04:06Z I [shard0/172.16.24.113:27019] [restore/2025-04-07T09:03:57.013916851Z] restoring indexes for logbook.LogbookOperation: _tenant_1_evDateTime_1_evTypeProc_1_events.evDateTime_1_events.outDetail_1, _tenant_1_events.evType_1, _tenant_1_evIdProc_1, _lastPersistedDate_1__tenant_1, _id_
2025-04-07T09:04:06Z I [shard0/172.16.24.113:27019] [restore/2025-04-07T09:03:57.013916851Z] restoring indexes for logbook.LogbookLifeCycleObjectGroup: _id_hashed, _tenant_1__lastPersistedDate_1
2025-04-07T09:04:06Z I [shard0/172.16.24.113:27019] [restore/2025-04-07T09:03:57.013916851Z] restoring indexes for logbook.LogbookLifeCycleObjectGroupInProcess: _id_hashed
2025-04-07T09:04:06Z I [shard0/172.16.24.113:27019] [restore/2025-04-07T09:03:57.013916851Z] restoring indexes for logbook.LogbookLifeCycleUnit: _id_hashed, _tenant_1__lastPersistedDate_1
2025-04-07T09:04:06Z I [shard0/172.16.24.113:27019] [restore/2025-04-07T09:03:57.013916851Z] restoring indexes for metadata.ObjectGroup: _tenant_1, _ops_1, _glpd_1, _id_
2025-04-07T09:04:06Z I [shard0/172.16.24.113:27019] [restore/2025-04-07T09:03:57.013916851Z] restoring indexes for metadata.Snapshot: _tenant_1_Name_1
2025-04-07T09:04:06Z I [shard0/172.16.24.113:27019] [restore/2025-04-07T09:03:57.013916851Z] restoring indexes for metadata.Unit: _glpd_1, _id_hashed, _tenant_1, _ops_1
2025-04-07T09:04:06Z I [shard0/172.16.24.113:27019] [restore/2025-04-07T09:03:57.013916851Z] recovery successfully finished
2025-04-07T09:04:07Z I [configsvr/172.16.24.113:27018] [restore/2025-04-07T09:03:57.013916851Z] recovery successfully finished

@Mouhamadou_Diallo I hope you followed the steps/process described in the manual - Restore from a logical backup - Percona Backup for MongoDB. Please verify!

Was the backup taken with the same MongoDB and PBM versions, or do they differ?

Did you also check the backup logs?

Have you verified that the data from the mongos (router) node and the individual shard nodes shows similar info?

You can also inspect an individual backup/restore using pbm describe-backup and pbm describe-restore to get more insight into the target backup.

Hello,

This is my sh.status(). It seems OK for autosplit and balancer:


---

autosplit

{ 'Currently enabled': 'no' }

---

balancer

{
  'Currently running': 'no',
  'Failed balancer rounds in last 5 attempts': 0,
  'Migration Results for the last 24 hours': { '512': 'Success' },
  'Currently enabled': 'no'
}

Was the backup taken with the same MongoDB and PBM versions, or do they differ?

Yes, it is the same cluster and the same versions; the restore was done on the same cluster.

Did you also check the backup logs?

Yes, the backup was checked. You can find the describe-backup and describe-restore output below:


pbm describe-backup 2025-04-07T08:55:46Z

name: "2025-04-07T08:55:46Z"
opid: 67f393125e5e42713b08ab99
type: logical
last_write_time: "2025-04-07T08:55:50Z"
last_transition_time: "2025-04-07T08:56:07Z"
mongodb_version: 5.0.14
fcv: "5.0"
pbm_version: 2.9.0
status: done
size_h: 353.8 KiB
replsets:
- name: shard0
  status: done
  node: 172.16.24.146:27019
  last_write_time: "2025-04-07T08:55:50Z"
  last_transition_time: "2025-04-07T08:56:02Z"
- name: shard1
  status: done
  node: 172.16.24.158:27019
  last_write_time: "2025-04-07T08:55:50Z"
  last_transition_time: "2025-04-07T08:56:02Z"
- name: configsvr
  status: done
  node: 172.16.24.59:27018
  last_write_time: "2025-04-07T08:55:54Z"
  last_transition_time: "2025-04-07T08:56:05Z"
  configsvr: true


pbm describe-restore 2025-04-07T13:49:32.536137124Z

name: "2025-04-07T13:49:32.536137124Z"
opid: 67f3d7ec86ab4268e6281e1a
backup: "2025-04-07T08:55:46Z"
type: logical
status: done
start: "2025-04-07T13:49:33Z"
finish: "2025-04-07T13:49:41Z"
last_transition_time: "2025-04-07T13:49:41Z"
replsets:
- name: shard1
  status: done
  last_transition_time: "2025-04-07T13:49:41Z"
- name: shard0
  status: done
  last_transition_time: "2025-04-07T13:49:40Z"
- name: configsvr
  status: done
  last_transition_time: "2025-04-07T13:49:40Z"

Hello @anil.joshi,
Any insight here? I'm stuck with this.
Thanks,
M.D

Hi,

The shared logs from the restore procedure seem OK.
Log entries for the restored collections are not present in these logs because PBM uses an external tool for the logical restore, which writes its output to stderr. I suppose the provided logs were obtained with the pbm logs command.

To see the restored collections in the logs, please copy the agent's stderr output.
Alternatively, it is possible to redirect the pbm-agent log to a file, as explained here: Logging options - Percona Backup for MongoDB.

Those logs will contain the list of restored collections, including the number of restored documents in each collection. Once you have those, please share them here again.

Hello,

I started over and did all the steps again. Please find the data below:

Check pbm status

pbm status

Cluster:
========
configsvr:
 - 172.16.24.29:27018 [S]: pbm-agent [v2.9.0] OK
 - 172.16.24.74:27018 [S]: pbm-agent [v2.9.0] OK
 - 172.16.24.228:27018 [P]: pbm-agent [v2.9.0] OK
 - 172.16.24.232:27018 [S]: pbm-agent [v2.9.0] OK
 - 172.16.24.183:27018 [S]: pbm-agent [v2.9.0] OK
 - 172.16.24.182:27018 [S]: pbm-agent [v2.9.0] OK
shard0:
 - 172.16.24.29:27019 [P]: pbm-agent [v2.9.0] OK
 - 172.16.24.74:27019 [S]: pbm-agent [v2.9.0] OK
 - 172.16.24.228:27019 [S]: pbm-agent [v2.9.0] OK
shard1:
 - 172.16.24.232:27019 [S]: pbm-agent [v2.9.0] OK
 - 172.16.24.183:27019 [S]: pbm-agent [v2.9.0] OK
 - 172.16.24.182:27019 [P]: pbm-agent [v2.9.0] OK


PITR incremental backup:
========================
Status [ON]

Currently running:
==================
(none)

Backups:
========
S3 fr-par https://my-s3//percona-backup-mongo
 Snapshots:
   2025-05-27T11:41:26Z 331.70KB <logical> [restore_to_time: 2025-05-27T11:41:32Z]
   2025-05-27T11:27:37Z 220.96KB <logical> [restore_to_time: 2025-05-27T11:27:41Z]
   2025-05-15T12:15:30Z 427.95KB <logical> [restore_to_time: 2025-05-15T12:15:33Z]
   2025-05-15T12:12:16Z 413.49KB <logical> [restore_to_time: 2025-05-15T12:12:20Z]
   2025-04-07T08:55:46Z 353.78KB <logical> [restore_to_time: 2025-04-07T08:55:50Z]
   2025-04-07T08:50:18Z 351.10KB <logical> [restore_to_time: 2025-04-07T08:50:22Z]
 PITR chunks [3.14MB]:
   2025-05-27T11:27:42Z - 2025-05-27T11:41:26Z
   2025-05-27T11:27:37Z - 2025-05-27T11:27:41Z (no base snapshot)
   2025-05-15T12:15:31Z - 2025-05-15T12:15:33Z (no base snapshot)
   2025-05-15T12:12:21Z - 2025-05-15T12:14:34Z
   2025-04-07T08:50:23Z - 2025-04-07T09:01:11Z

Deactivate PITR

pbm config --set pitr.enabled=false

pbm status
.....
PITR incremental backup:
========================
Status [OFF]
.....
# Drop database generated
use generated
db.dropDatabase()

# switch to admin
show dbs
admin         1.52 MiB
config        4.80 MiB

Launching recovery

pbm restore '2025-05-27T11:41:26Z'

# switch to admin
show dbs
admin         1.52 MiB
config        4.80 MiB
generated   220.00 KiB

use generated
show collections # this is empty

Describe backup:

pbm describe-backup '2025-05-27T11:41:26Z'

name: "2025-05-27T11:41:26Z"
opid: 6835a4e6c1828147f48ca349
type: logical
last_write_time: "2025-05-27T11:41:32Z"
last_transition_time: "2025-05-27T11:41:56Z"
mongodb_version: 5.0.14
fcv: "5.0"
pbm_version: 2.9.0
status: done
size_h: 331.7 KiB
replsets:
- name: shard0
 status: done
 node: 172.16.24.74:27019
 last_write_time: "2025-05-27T11:41:33Z"
 last_transition_time: "2025-05-27T11:41:36Z"
- name: configsvr
 status: done
 node: 172.16.24.232:27018
 last_write_time: "2025-05-27T11:41:34Z"
 last_transition_time: "2025-05-27T11:41:55Z"
 configsvr: true
- name: shard1
 status: done
 node: 172.16.24.232:27019
 last_write_time: "2025-05-27T11:41:32Z"
 last_transition_time: "2025-05-27T11:41:47Z"

Describe restore

pbm describe-restore 2025-05-27T11:50:20.576589643Z
name: "2025-05-27T11:50:20.576589643Z"
opid: 6835a6fc903ab58551e08549
backup: "2025-05-27T11:41:26Z"
type: logical
status: done
start: "2025-05-27T11:50:20Z"
finish: "2025-05-27T11:50:28Z"
last_transition_time: "2025-05-27T11:50:28Z"
replsets:
- name: configsvr
 status: done
 last_transition_time: "2025-05-27T11:50:27Z"
- name: shard0
 status: done
 last_transition_time: "2025-05-27T11:50:27Z"
- name: shard1
 status: done
 last_transition_time: "2025-05-27T11:50:27Z"

The logs of all the agents can be found at this link: Message | SecureTransfert

Hello,

Thank you for the provided logs; that was helpful.
It turns out that your agent's connection string is not correct: pbm-agent can connect to the MongoDB instance, but it is confused about which database it refers to.
The connection string should be as specified here:
https://docs.percona.com/percona-backup-mongodb/details/authentication.html?h=connection+string#__tabbed_1_1
E.g.

pbm-agent --mongodb-uri "mongodb://pbmuser:secretpwd@localhost:27017/?authSource=admin"

In your case, I expect that you have a trailing database specified, e.g.:

mongodb://pbmuser:secretpwd@localhost:27017/db-name

and you should remove it ("db-name" in the example above).
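As a quick sanity check, you can spot a trailing database name in a URI without connecting to anything. This is an illustrative sketch (not part of PBM) using Python's standard urllib.parse: in a MongoDB URI, anything in the path after host:port is a default database, while options such as authSource belong in the query string.

```python
from urllib.parse import urlsplit

def trailing_db(uri: str):
    """Return the trailing database name in a MongoDB URI, or None.

    A non-empty path after host:port (e.g. '/db-name') is the part that
    confuses pbm-agent; auth options should go in the query string.
    """
    path = urlsplit(uri).path.lstrip("/")
    return path or None

# Problematic form: a trailing database is specified.
print(trailing_db("mongodb://pbmuser:secretpwd@localhost:27017/db-name"))  # db-name
# Recommended form: authSource passed as a query parameter, no trailing db.
print(trailing_db("mongodb://pbmuser:secretpwd@localhost:27017/?authSource=admin"))  # None
```

If the function returns a name, strip that path segment from the URI used by pbm-agent.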

We'll improve this in upcoming versions; please follow this ticket for more information:

Hello,

The fix worked. Thanks for the support. I can now restore an existing backup. I will have a look at PITR now :slight_smile: