We have a MongoDB Community 6.0 server and PBM 2.4.1.
PITR is enabled with a 10-minute oplog span.
During backup we got this error:
PITR incremental backup:
========================
Status [ON]
! ERROR while running PITR backup: 2025-11-02T20:33:08.000+0300 E [rs0/rc-mongo-gridfs-2:27017] [pitr] streaming oplog: no starting point defined; 2025-11-02T20:33:06.000+0300 E [cfg/rc-mongo-gridfs-config-3:27019] [pitr] streaming oplog: no starting point defined
Currently running:
==================
(none)
Backups:
========
FS /u01/backups_new
Snapshots:
2025-10-31T21:00:02Z 15.40TB <logical> [restore_to_time: 2025-11-02T11:20:04Z]
There are no PITR directories created after 2025-10-31T21:00:02Z, when the last full backup started.
How can we fix this?
Hi @Magzhanova, could you please share the latest PITR execution logs so we can check whether any additional errors are present?
pbm logs -s D -e pitr
pbm logs -s D -e pitr
2025-11-04T20:37:40Z E [rs0/rc-mongo-gridfs-3:27017] [pitr] copy oplog from "2025-10-31T21:00:02Z" backup: file stat: file is empty
2025-11-04T20:37:40Z E [rs0/rc-mongo-gridfs-3:27017] [pitr] streaming oplog: no starting point defined
2025-11-04T20:37:41Z D [cfg/rc-mongo-gridfs-config-2:27019] [pitr] start_catchup
2025-11-04T20:37:41Z E [cfg/rc-mongo-gridfs-config-2:27019] [pitr] copy oplog from "2025-10-31T21:00:02Z" backup: file stat: file is empty
2025-11-04T20:37:41Z E [cfg/rc-mongo-gridfs-config-2:27019] [pitr] streaming oplog: no starting point defined
2025-11-04T20:37:52Z D [rs0/rc-mongo-gridfs-2:27017] [pitr] start_catchup
2025-11-04T20:37:52Z E [rs0/rc-mongo-gridfs-2:27017] [pitr] copy oplog from "2025-10-31T21:00:02Z" backup: file stat: file is empty
2025-11-04T20:37:52Z E [rs0/rc-mongo-gridfs-2:27017] [pitr] streaming oplog: no starting point defined
2025-11-04T20:37:55Z D [cfg/rc-mongo-gridfs-config-1:27019] [pitr] start_catchup
2025-11-04T20:37:55Z E [cfg/rc-mongo-gridfs-config-1:27019] [pitr] copy oplog from "2025-10-31T21:00:02Z" backup: file stat: file is empty
2025-11-04T20:37:55Z E [cfg/rc-mongo-gridfs-config-1:27019] [pitr] streaming oplog: no starting point defined
2025-11-04T20:37:58Z D [cfg/rc-mongo-gridfs-config-3:27019] [pitr] start_catchup
2025-11-04T20:37:58Z E [cfg/rc-mongo-gridfs-config-3:27019] [pitr] copy oplog from "2025-10-31T21:00:02Z" backup: file stat: file is empty
2025-11-04T20:37:58Z E [cfg/rc-mongo-gridfs-config-3:27019] [pitr] streaming oplog: no starting point defined
2025-11-04T20:38:17Z D [rs0/rc-mongo-gridfs-1:27017] [pitr] start_catchup
2025-11-04T20:38:17Z E [rs0/rc-mongo-gridfs-1:27017] [pitr] copy oplog from "2025-10-31T21:00:02Z" backup: file stat: file is empty
2025-11-04T20:38:17Z E [rs0/rc-mongo-gridfs-1:27017] [pitr] streaming oplog: no starting point defined
2025-11-04T20:38:25Z D [rs0/rc-mongo-gridfs-3:27017] [pitr] start_catchup
2025-11-04T20:38:25Z E [rs0/rc-mongo-gridfs-3:27017] [pitr] copy oplog from "2025-10-31T21:00:02Z" backup: file stat: file is empty
2025-11-04T20:38:25Z E [rs0/rc-mongo-gridfs-3:27017] [pitr] streaming oplog: no starting point defined
Based on the logs, the PITR procedure can't copy the oplog from the backup in order to proceed. This indicates that although the last backup is marked as successful, it does not actually contain a valid oplog. If you perform a storage resync now (which re-validates backup consistency), this backup will likely be marked as ERROR, since an empty oplog file makes it unusable for restore.
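A resync can be triggered from the PBM CLI. The sketch below assumes PBM 2.x flag names; adjust for your installed version:

```shell
# Re-read the backup storage and re-validate backup metadata.
# After the resync, backups with a missing or empty oplog file
# are expected to show up with an error state.
pbm config --force-resync

# Inspect the result; the 2025-10-31T21:00:02Z snapshot
# may now be marked as ERROR in these listings.
pbm status
pbm list
```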
There are several possible causes for this issue: a hardware failure may have led to the loss of the oplog file, or PBM may have encountered an error while saving the oplog during the backup process. To identify the root cause, please review the backup logs and confirm whether the oplog was saved successfully or whether any related errors were recorded.
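The backup-time logs can be pulled the same way as the PITR ones, using the `backup` event filter. Narrowing the filter to the snapshot name is our assumption based on PBM's `-e` event syntax:

```shell
# Debug-level log entries for backup events across all nodes:
pbm logs -s D -e backup

# Optionally narrow to the suspect backup by name
# (event filter "backup/<name>" assumed to match the snapshot):
pbm logs -s D -e backup/2025-10-31T21:00:02Z
```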
Regardless of the root cause, we recommend taking the following steps:
- Upgrade PBM to the latest version
Recent PBM releases include multiple fixes related to PITR, so it’s strongly advised to test with the newest version.
- Consider switching from Community Edition to PSMDB
PSMDB supports physical backups, which offer significantly better performance for large datasets, in both backup speed and restore time.
To resolve the current issue:
The oplog is most likely lost: the last backup was taken six days ago, and since the oplog is a capped collection, its entries have likely been overwritten. To recover a consistent backup environment:
- Disable PITR
- Upgrade PBM (strongly recommended before proceeding further)
- Create a new backup, either logical or physical
- Re-enable PITR once the new backup has completed successfully
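The steps above can be sketched with the PBM CLI (upgrade steps depend on your packaging and are omitted here):

```shell
# 1. Disable PITR so no slicer tries to start from the broken backup:
pbm config --set pitr.enabled=false

# 2. Upgrade PBM on all nodes, then take a fresh base backup.
#    Physical backups require PSMDB; on Community Edition
#    only logical backups are supported:
pbm backup

# 3. Once the backup has finished successfully, re-enable PITR:
pbm config --set pitr.enabled=true
```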