Error during Percona Backup for MongoDB (pbm) restore

Good evening,

I'm getting an error when performing a pbm restore. I've already checked the documentation but couldn't figure out the problem. Could you help me?

[root@trs-v-mngo-1p02 ~]# pbm status
Cluster:

========

trs_replicaset:

  - trs_replicaset/trs-v-mngo-1p01:27017: pbm-agent v1.4.1 OK

  - trs_replicaset/trs-v-mngo-1p02:27017: pbm-agent v1.4.1 OK

  - trs_replicaset/trs-v-mngo-2p01:27017: pbm-agent v1.4.1 OK

  - trs_replicaset/trs-v-mngo-2p02:27017: pbm-agent v1.4.1 OK

  - trs_replicaset/trs-v-mngo-7p02:27017: pbm-agent v1.4.1 OK

PITR incremental backup:

========================

Status [OFF]

Currently running:

==================

(none)

Backups:

========

FS  /mongodb/dfs

  Snapshots:

    2021-03-30T03:10:59Z 16.00GB [complete: 2021-03-30T03:21:19]

  PITR chunks:

    2021-03-29T12:32:36 - 2021-03-29T19:16:09 1.10MB

    2021-03-28T23:11:55 - 2021-03-29T12:20:23 2.14MB
[root@trs-v-mngo-1p02 ~]# pbm list

Backup snapshots:

  2021-03-30T03:10:59Z [complete: 2021-03-30T03:21:19]

PITR <off>:

  2021-03-28T23:11:55 - 2021-03-29T12:20:23

  2021-03-29T12:32:36 - 2021-03-29T19:16:09

Here is the error when I run pbm restore:

[root@trs-v-mngo-1p02 ~]# pbm restore --time="2021-03-29T19:16:09" --mongodb-uri="mongodb://pbmuser:password@trs-v-mngo-1p02:27017"

...............Error: no confirmation that restore has successfully started. Replsets status:

<no replset has started restore>
[root@trs-v-mngo-1p02 ~]# pbm logs

2021-03-31T20:41:19Z I [trs_replicaset/trs-v-mngo-2p01:27017] got command pitrestore [name: 2021-03-31T20:41:18.783807038Z, point-in-time ts: 1617074479] <ts: 1617223278>

2021-03-31T20:41:19Z I [trs_replicaset/trs-v-mngo-2p02:27017] [pitr/2021-03-30T03:21:19Z] Node in not suitable for restore

2021-03-31T20:41:19Z I [trs_replicaset/trs-v-mngo-2p01:27017] got epoch {1617074485 1}

2021-03-31T20:41:19Z I [trs_replicaset/trs-v-mngo-2p01:27017] [pitr/2021-03-30T03:21:19Z] Node in not suitable for restore

2021-03-31T20:41:20Z E [trs_replicaset/trs-v-mngo-1p02:27017] [pitr/2021-03-30T03:21:19Z] restore: define last oplog slice: get: mongo: no documents in result

2021-03-31T20:42:01Z I [trs_replicaset/trs-v-mngo-1p01:27017] got command pitrestore [name: 2021-03-31T20:42:01.079660884Z, point-in-time ts: 1617045369] <ts: 1617223321>

2021-03-31T20:42:01Z I [trs_replicaset/trs-v-mngo-1p01:27017] got epoch {1617074485 1}

2021-03-31T20:42:01Z I [trs_replicaset/trs-v-mngo-1p01:27017] [pitr/2021-03-29T19:16:09Z] Node in not suitable for restore

2021-03-31T20:42:01Z I [trs_replicaset/trs-v-mngo-1p02:27017] got command pitrestore [name: 2021-03-31T20:42:01.079660884Z, point-in-time ts: 1617045369] <ts: 1617223321>

2021-03-31T20:42:01Z I [trs_replicaset/trs-v-mngo-1p02:27017] got epoch {1617074485 1}

2021-03-31T20:42:01Z I [trs_replicaset/trs-v-mngo-1p02:27017] [pitr/2021-03-29T19:16:09Z] recovery started

2021-03-31T20:42:01Z I [trs_replicaset/trs-v-mngo-2p02:27017] got command pitrestore [name: 2021-03-31T20:42:01.079660884Z, point-in-time ts: 1617045369] <ts: 1617223321>

2021-03-31T20:42:01Z I [trs_replicaset/trs-v-mngo-2p02:27017] got epoch {1617074485 1}

2021-03-31T20:42:01Z I [trs_replicaset/trs-v-mngo-2p01:27017] got command pitrestore [name: 2021-03-31T20:42:01.079660884Z, point-in-time ts: 1617045369] <ts: 1617223321>

2021-03-31T20:42:01Z I [trs_replicaset/trs-v-mngo-7p02:27017] got command pitrestore [name: 2021-03-31T20:42:01.079660884Z, point-in-time ts: 1617045369] <ts: 1617223321>

2021-03-31T20:42:01Z I [trs_replicaset/trs-v-mngo-2p02:27017] [pitr/2021-03-29T19:16:09Z] Node in not suitable for restore

2021-03-31T20:42:01Z I [trs_replicaset/trs-v-mngo-7p02:27017] got epoch {1617074485 1}

2021-03-31T20:42:01Z I [trs_replicaset/trs-v-mngo-2p01:27017] got epoch {1617074485 1}

2021-03-31T20:42:01Z I [trs_replicaset/trs-v-mngo-7p02:27017] [pitr/2021-03-29T19:16:09Z] Node in not suitable for restore

2021-03-31T20:42:01Z I [trs_replicaset/trs-v-mngo-2p01:27017] [pitr/2021-03-29T19:16:09Z] Node in not suitable for restore

The URI I exported in .bash_profile is as follows:

export PBM_MONGODB_URI=mongodb://pbmuser:password@trs-v-mngo-1p01:27017,trs-v-mngo-1p02:27017,trs-v-mngo-2p01:27017,trs-v-mngo-2p02:27017,trs-v-mngo-7p02:27017/admin?replicaSet=trs_replicaset

[root@trs-v-mngo-1p02 ~]# cat /etc/sysconfig/pbm-agent

PBM_MONGODB_URI="mongodb://pbmuser:password@trs-v-mngo-1p02:27017"

[root@trs-v-mngo-1p02 ~]# cat /mongodb/pbm_config.yml

storage:

  type: filesystem

  filesystem:

    path: /mongodb/dfs

pitr:

  enabled: true

On the other hand, pbm backup works normally

[root@trs-v-mngo-1p02 ~]# pbm backup

Starting backup '2021-03-31T22:27:09Z'....................

Backup '2021-03-31T22:27:09Z' to remote store '/mongodb/dfs' has started
[root@trs-v-mngo-1p02 ~]# pbm status

Cluster:

========

trs_replicaset:

  - trs_replicaset/trs-v-mngo-1p01:27017: pbm-agent v1.4.1 OK

  - trs_replicaset/trs-v-mngo-1p02:27017: pbm-agent v1.4.1 OK

  - trs_replicaset/trs-v-mngo-2p01:27017: pbm-agent v1.4.1 OK

  - trs_replicaset/trs-v-mngo-2p02:27017: pbm-agent v1.4.1 OK

  - trs_replicaset/trs-v-mngo-7p02:27017: pbm-agent v1.4.1 OK

PITR incremental backup:

========================

Status [OFF]

Currently running:

==================

Snapshot backup "2021-03-31T22:27:09Z", started at 2021-03-31T22:27:26Z. Status: snapshot backup. [op id: 6064f73ddd56da03da297aaa]

Backups:

========

FS  /mongodb/dfs

  Snapshots:

    2021-03-31T22:27:09Z 0.00B [running: running / 2021-03-31T22:27:28]

    2021-03-30T03:10:59Z 16.00GB [complete: 2021-03-30T03:21:19]

  PITR chunks:

    2021-03-29T12:32:36 - 2021-03-29T19:16:09 1.10MB

    2021-03-28T23:11:55 - 2021-03-29T12:20:23 2.14MB
[root@trs-v-mngo-1p02 ~]# pbm logs

2021-03-31T22:22:27Z I [trs_replicaset/trs-v-mngo-7p02:27017] got epoch {1617074485 1}

2021-03-31T22:22:28Z I [trs_replicaset/trs-v-mngo-7p02:27017] [pitr/2021-03-30T03:21:19Z] Node in not suitable for restore

2021-03-31T22:22:28Z I [trs_replicaset/trs-v-mngo-2p01:27017] got command pitrestore [name: 2021-03-31T22:22:27.466686762Z, point-in-time ts: 1617074479] <ts: 1617229347>

2021-03-31T22:22:28Z I [trs_replicaset/trs-v-mngo-2p01:27017] got epoch {1617074485 1}

2021-03-31T22:22:28Z I [trs_replicaset/trs-v-mngo-2p01:27017] [pitr/2021-03-30T03:21:19Z] Node in not suitable for restore

2021-03-31T22:22:28Z I [trs_replicaset/trs-v-mngo-1p02:27017] got command pitrestore [name: 2021-03-31T22:22:27.466686762Z, point-in-time ts: 1617074479] <ts: 1617229347>

2021-03-31T22:22:28Z I [trs_replicaset/trs-v-mngo-1p02:27017] got epoch {1617074485 1}

2021-03-31T22:22:28Z I [trs_replicaset/trs-v-mngo-1p02:27017] [pitr/2021-03-30T03:21:19Z] recovery started

2021-03-31T22:22:29Z E [trs_replicaset/trs-v-mngo-1p02:27017] [pitr/2021-03-30T03:21:19Z] restore: define last oplog slice: get: mongo: no documents in result

2021-03-31T22:27:09Z I [trs_replicaset/trs-v-mngo-1p02:27017] got command backup [name: 2021-03-31T22:27:09Z, compression: s2] <ts: 1617229629>

2021-03-31T22:27:09Z I [trs_replicaset/trs-v-mngo-2p01:27017] got command backup [name: 2021-03-31T22:27:09Z, compression: s2] <ts: 1617229629>

2021-03-31T22:27:09Z I [trs_replicaset/trs-v-mngo-1p02:27017] got epoch {1617074485 1}

2021-03-31T22:27:09Z I [trs_replicaset/trs-v-mngo-2p01:27017] got epoch {1617074485 1}

2021-03-31T22:27:10Z I [trs_replicaset/trs-v-mngo-1p01:27017] got command backup [name: 2021-03-31T22:27:09Z, compression: s2] <ts: 1617229629>

2021-03-31T22:27:10Z I [trs_replicaset/trs-v-mngo-1p01:27017] got epoch {1617074485 1}

2021-03-31T22:27:10Z I [trs_replicaset/trs-v-mngo-2p02:27017] got command backup [name: 2021-03-31T22:27:09Z, compression: s2] <ts: 1617229629>

2021-03-31T22:27:10Z I [trs_replicaset/trs-v-mngo-7p02:27017] got command backup [name: 2021-03-31T22:27:09Z, compression: s2] <ts: 1617229629>

2021-03-31T22:27:10Z I [trs_replicaset/trs-v-mngo-2p02:27017] got epoch {1617074485 1}

2021-03-31T22:27:10Z I [trs_replicaset/trs-v-mngo-7p02:27017] got epoch {1617074485 1}

2021-03-31T22:27:26Z I [trs_replicaset/trs-v-mngo-2p01:27017] [backup/2021-03-31T22:27:09Z] backup started

Hi @Samuel_Molling
That’s strange…
Can you check the pbm list --restore output and the content of the admin.pbmRestores collection? There should be a meta document for the restore in there.
Also, some additional info could be in the stderr of trs_replicaset/trs-v-mngo-1p02:27017.

Hi,

[root@trs-v-mngo-1p02 ~]# pbm list --restore
Restores history:
  PITR: 2021-03-30T03:21:19Z    Failed with "define last oplog slice: get: mongo: no documents in result"
  PITR: 2021-03-30T03:21:19Z    Failed with "define last oplog slice: get: mongo: no documents in result"
        In progress [starting] (Launched at 2021-03-31T20:42:01Z)
        In progress [starting] (Launched at 2021-03-31T20:52:50Z)
  PITR: 2021-03-30T03:21:19Z    Failed with "define last oplog slice: get: mongo: no documents in result"
  PITR: 2021-03-30T03:10:59Z    Failed with "define last oplog slice: get: mongo: no documents in result"
  PITR: 2021-03-30T03:21:19Z    Failed with "define last oplog slice: get: mongo: no documents in result"
  PITR: 2021-03-30T03:21:19Z    Failed with "define last oplog slice: get: mongo: no documents in result"

trs_replicaset:PRIMARY> db.pbmRestores.find().pretty()
{
        "_id" : ObjectId("6064de6131f125430f17c1c4"),
        "opid" : "6064de60d52b3a427eea64ec",
        "name" : "2021-03-31T20:41:04.199283216Z",
        "backup" : "",
        "pitr" : NumberLong(1617074479),
        "replsets" : [ ],
        "hb" : Timestamp(1617223265, 14),
        "start_ts" : NumberLong(1617223265),
        "last_transition_ts" : NumberLong(1617223266),
        "status" : "error",
        "conditions" : [
                {
                        "timestamp" : NumberLong(1617223265),
                        "status" : "starting"
                },
                {
                        "timestamp" : NumberLong(1617223266),
                        "status" : "error",
                        "error" : "define last oplog slice: get: mongo: no documents in result"
                }
        ],
        "error" : "define last oplog slice: get: mongo: no documents in result"
}
{
        "_id" : ObjectId("6064de6f31f125430f17c1cb"),
        "opid" : "6064de6e5151c84baa1bf043",
        "name" : "2021-03-31T20:41:18.783807038Z",
        "backup" : "",
        "pitr" : NumberLong(1617074479),
        "replsets" : [ ],
        "hb" : Timestamp(1617223279, 18),
        "start_ts" : NumberLong(1617223279),
        "last_transition_ts" : NumberLong(1617223280),
        "status" : "error",
        "conditions" : [
                {
                        "timestamp" : NumberLong(1617223279),
                        "status" : "starting"
                },
                {
                        "timestamp" : NumberLong(1617223280),
                        "status" : "error",
                        "error" : "define last oplog slice: get: mongo: no documents in result"
                }
        ],
        "error" : "define last oplog slice: get: mongo: no documents in result"
}
{
        "_id" : ObjectId("6064de9931f125430f17c1d2"),
        "opid" : "6064de9957a4cbd22c726253",
        "name" : "2021-03-31T20:42:01.079660884Z",
        "backup" : "",
        "pitr" : NumberLong(1617045369),
        "replsets" : [ ],
        "hb" : Timestamp(1617223321, 14),
        "start_ts" : NumberLong(1617223321),
        "last_transition_ts" : NumberLong(1617223321),
        "status" : "starting",
        "conditions" : [
                {
                        "timestamp" : NumberLong(1617223321),
                        "status" : "starting"
                }
        ]
}
{
        "_id" : ObjectId("6064e122d99ef064153cbcb0"),
        "opid" : "6064e121b8fc7ccbebd318ab",
        "name" : "2021-03-31T20:52:49.318555001Z",
        "backup" : "",
        "pitr" : NumberLong(1617045369),
        "replsets" : [ ],
        "hb" : Timestamp(1617223970, 15),
        "start_ts" : NumberLong(1617223970),
        "last_transition_ts" : NumberLong(1617223970),
        "status" : "starting",
        "conditions" : [
                {
                        "timestamp" : NumberLong(1617223970),
                        "status" : "starting"
                }
        ]
}
{
        "_id" : ObjectId("6064f5d5a9f833bcafda8887"),
        "opid" : "6064f5d46210f58d18477044",
        "name" : "2021-03-31T22:21:08.774655521Z",
        "backup" : "",
        "pitr" : NumberLong(1617074479),
        "replsets" : [ ],
        "hb" : Timestamp(1617229269, 14),
        "start_ts" : NumberLong(1617229269),
        "last_transition_ts" : NumberLong(1617229270),
        "status" : "error",
        "conditions" : [
                {
                        "timestamp" : NumberLong(1617229269),
                        "status" : "starting"
                },
                {
                        "timestamp" : NumberLong(1617229270),
                        "status" : "error",
                        "error" : "define last oplog slice: get: mongo: no documents in result"
                }
        ],
        "error" : "define last oplog slice: get: mongo: no documents in result"
}
{
        "_id" : ObjectId("6064f603a9f833bcafda888e"),
        "opid" : "6064f6028ed743365106de1d",
        "name" : "2021-03-31T22:21:54.489919363Z",
        "backup" : "",
        "pitr" : NumberLong(1617073859),
        "replsets" : [ ],
        "hb" : Timestamp(1617229315, 15),
        "start_ts" : NumberLong(1617229315),
        "last_transition_ts" : NumberLong(1617229316),
        "status" : "error",
        "conditions" : [
                {
                        "timestamp" : NumberLong(1617229315),
                        "status" : "starting"
                },
                {
                        "timestamp" : NumberLong(1617229316),
                        "status" : "error",
                        "error" : "define last oplog slice: get: mongo: no documents in result"
                }
        ],
        "error" : "define last oplog slice: get: mongo: no documents in result"
}
{
        "_id" : ObjectId("6064f612a9f833bcafda8895"),
        "opid" : "6064f611dbd817796612f784",
        "name" : "2021-03-31T22:22:09.311420548Z",
        "backup" : "",
        "pitr" : NumberLong(1617074479),
        "replsets" : [ ],
        "hb" : Timestamp(1617229330, 13),
        "start_ts" : NumberLong(1617229330),
        "last_transition_ts" : NumberLong(1617229331),
        "status" : "error",
        "conditions" : [
                {
                        "timestamp" : NumberLong(1617229330),
                        "status" : "starting"
                },
                {
                        "timestamp" : NumberLong(1617229331),
                        "status" : "error",
                        "error" : "define last oplog slice: get: mongo: no documents in result"
                }
        ],
        "error" : "define last oplog slice: get: mongo: no documents in result"
}
{
        "_id" : ObjectId("6064f624a9f833bcafda889c"),
        "opid" : "6064f62346d6e4348e1c1578",
        "name" : "2021-03-31T22:22:27.466686762Z",
        "backup" : "",
        "pitr" : NumberLong(1617074479),
        "replsets" : [ ],
        "hb" : Timestamp(1617229348, 16),
        "start_ts" : NumberLong(1617229348),
        "last_transition_ts" : NumberLong(1617229349),
        "status" : "error",
        "conditions" : [
                {
                        "timestamp" : NumberLong(1617229348),
                        "status" : "starting"
                },
                {
                        "timestamp" : NumberLong(1617229349),
                        "status" : "error",
                        "error" : "define last oplog slice: get: mongo: no documents in result"
                }
        ],
        "error" : "define last oplog slice: get: mongo: no documents in result"
}

Where do I find the stderr?

Hi @Samuel_Molling

The restore error indicates that PBM cannot find a base backup snapshot to use as the starting point for the restore.
And as I can see from the pbm status output, there is indeed no snapshot preceding the time you want to restore to (2021-03-29T19:16:09).

Point-in-Time Recovery restores the data from a backup snapshot and then replays, from the oplog slices, all events that occurred to this data up to the specified moment. In your case, there is no backup snapshot to replay the oplog events on top of.

I'm not sure how it happened that you have oplog slices without a snapshot. Probably the snapshot was deleted from the storage manually and then pbm config --force-resync was run (changing the storage in the config would trigger a resync as well).
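If that is what happened, a rough sketch of how I would check and get back to a restorable state (assuming the filesystem storage from your config, and that the deleted snapshot cannot be brought back):

pbm config --force-resync   # re-sync backup metadata with what is actually on /mongodb/dfs
pbm list                    # check whether any snapshot's "complete" time precedes the PITR range you need
pbm backup                  # if not, take a fresh snapshot so future oplog slices have a base to replay onto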

Andrew,

I tried restoring to the time of a snapshot that I have, but the error persists.

[root@trs-v-mngo-2p01 dfs]# pbm list
Backup snapshots:
2021-03-30T03:10:59Z [complete: 2021-03-30T03:21:19]
2021-03-31T22:27:09Z [complete: 2021-03-31T22:36:58]
2021-04-04T23:00:01Z [complete: 2021-04-04T23:09:02]

PITR :
2021-03-28T23:11:55 - 2021-03-29T12:20:23
2021-03-29T12:32:36 - 2021-03-29T19:16:09

[root@trs-v-mngo-2p01 dfs]# pbm restore --time='2021-03-30T03:10:59'
...Error: define last oplog slice: get: mongo: no documents in result

@Samuel_Molling

You probably want to run pbm restore 2021-03-30T03:10:59Z.

Right now you're trying to restore to a point in time inside the snapshot. But there is no way to restore to a particular time within a snapshot; you can only restore the snapshot as a whole. In that case, your cluster is restored to the time shown in the "complete" section (the time of the last write in the snapshot).

The pbm restore --time=... form can be used only with valid points in time reflected in the PITR section.
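In other words, roughly (using the timestamps from your own pbm list output):

pbm restore 2021-03-30T03:10:59Z          # restores the whole snapshot; the cluster ends up at 2021-03-30T03:21:19
pbm restore --time="2021-03-29T19:16:09"  # --time only works for points inside the PITR ranges, and needs a preceding snapshot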

It was very confusing for me.

I performed a pbm backup and left the oplog slicing (PITR) running in 10-minute slices. If my database breaks a day later, I restore from the last oplog slice, correct?

[root@trs-v-mngo-1p02 ~]# pbm list
Backup snapshots:
2021-03-30T03:10:59Z [complete: 2021-03-30T03:21:19]
2021-03-31T22:27:09Z [complete: 2021-03-31T22:36:58]
2021-04-04T23:00:01Z [complete: 2021-04-04T23:09:02]
2021-04-08T17:42:04Z [complete: 2021-04-08T17:52:21]

PITR :
2021-03-28T23:11:55 - 2021-03-29T12:20:23
2021-03-29T12:32:36 - 2021-03-29T19:16:09
[root@trs-v-mngo-1p02 ~]# pbm restore --time=2021-03-29T19:16:09
...Error: no confirmation that restore has successfully started. Replsets status:

Yes, correct. But you have to keep a backup snapshot that precedes the oplog. The oplog is basically a redo log - a stream of actions (insert/update/delete) that were applied to the data. In order to replay it, you need something to apply these actions to. Point-in-Time Recovery means restoring the data from a backup snapshot and then replaying, from the oplog slices, all events that occurred to this data up to the specified moment.

Based on your pbm list output, you cannot recover to any time in the PITR section. Even though you have oplog slices, you don't have a backup snapshot this oplog could be applied to. But you can still recover to any available backup snapshot - the most recent is 2021-04-08T17:42:04Z (pbm restore 2021-04-08T17:42:04Z), and it will recover the database to 2021-04-08T17:52:21 (the "complete" field in the pbm list output).
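So, as a sketch of what should work right now and going forward (commands based on your current snapshots; adapt as needed):

pbm restore 2021-04-08T17:42:04Z        # full snapshot restore; the database ends up at 2021-04-08T17:52:21
pbm config --set pitr.enabled=true      # keep PITR enabled so new oplog slices are taken on top of recent snapshots
pbm backup                              # fresh base snapshot for those future slices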

In theory it should work with the time 2021-04-08T17:42:04, correct?

[root@trs-v-mngo-1p02 ~]# pbm restore --time=2021-04-08T17:42:04
...Error: define last oplog slice: get: mongo: no documents in result

In this case, I did a pbm backup and activated PITR for a few minutes; after that I deactivated it and ran the restore to the latest point. Logs follow.

[root@trs-v-mngo-1p02 ~]# pbm restore --time='2021-04-13T20:14:50'
...Restore to the point in time '2021-04-13T20:14:50' has started
[root@trs-v-mngo-1p02 ~]# pbm status

Cluster:

trs_replicaset:

  • trs_replicaset/trs-v-mngo-1p01:27017: pbm-agent v1.4.1 OK
  • trs_replicaset/trs-v-mngo-1p02:27017: pbm-agent v1.4.1 OK
  • trs_replicaset/trs-v-mngo-2p01:27017: pbm-agent v1.4.1 OK
  • trs_replicaset/trs-v-mngo-2p02:27017: pbm-agent v1.4.1 OK
  • trs_replicaset/trs-v-mngo-7p02:27017: pbm-agent v1.4.1 OK

PITR incremental backup:

Status [OFF]

Currently running:

PITR restore “2021-04-13T19:27:20Z”, started at 2021-04-13T20:26:14Z. Status: snapshot restore. [op id: 6075fe65e37333b3c3f6440e]

Backups:

FS /mongodb/dfs
Snapshots:
2021-04-13T19:27:20Z 15.99GB [complete: 2021-04-13T19:38:35]
2021-04-11T23:00:00Z 0.00B [ERROR: mongodump: write data: change permissions for file </mongodb/dfs/2021-04-11T23:00:00Z_trs_replicaset.dump.s2>: chmod /mongodb/dfs/2021-04-11T23:00:00Z_trs_replicaset.dump.s2: operation not permitted.] [2021-04-11T23:00:21]
2021-04-08T17:42:04Z 0.00B [ERROR: get file 2021-04-08T17:42:04Z_trs_replicaset.dump.s2: no such file] [2021-04-08T17:52:21]
2021-04-08T17:40:14Z 0.00B [ERROR: mongodump: write data: change permissions for file </mongodb/dfs/2021-04-08T17:40:14Z_trs_replicaset.dump.s2>: chmod /mongodb/dfs/2021-04-08T17:40:14Z_trs_replicaset.dump.s2: operation not permitted.] [2021-04-08T17:40:35]
2021-04-04T23:00:01Z 0.00B [ERROR: get file 2021-04-04T23:00:01Z_trs_replicaset.dump.s2: no such file] [2021-04-04T23:09:02]
2021-03-31T22:27:09Z 0.00B [ERROR: get file 2021-03-31T22:27:09Z_trs_replicaset.dump.s2: no such file] [2021-03-31T22:36:58]
2021-03-30T03:10:59Z 0.00B [ERROR: get file 2021-03-30T03:10:59Z_trs_replicaset.dump.s2: no such file] [2021-03-30T03:21:19]
PITR chunks:
2021-04-13T19:38:35 - 2021-04-13T20:25:05 122.04KB
2021-03-29T12:32:36 - 2021-03-29T19:16:09 1.10MB
2021-03-28T23:11:55 - 2021-03-29T12:20:23 2.14MB
[root@trs-v-mngo-1p02 ~]# pbm logs
2021-04-13T20:14:52Z I [trs_replicaset/trs-v-mngo-1p01:27017] [pitr] created chunk 2021-04-13T20:04:50 - 2021-04-13T20:14:50. Next chunk creation scheduled to begin at ~2021-04-13T20:24:50
2021-04-13T20:24:52Z I [trs_replicaset/trs-v-mngo-1p01:27017] [pitr] created chunk 2021-04-13T20:14:50 - 2021-04-13T20:24:50. Next chunk creation scheduled to begin at ~2021-04-13T20:34:50
2021-04-13T20:25:05Z I [trs_replicaset/trs-v-mngo-1p01:27017] [pitr] got done signal, stopping
2021-04-13T20:25:07Z I [trs_replicaset/trs-v-mngo-1p01:27017] [pitr] created chunk 2021-04-13T20:24:50 - 2021-04-13T20:25:05
2021-04-13T20:25:07Z I [trs_replicaset/trs-v-mngo-1p01:27017] [pitr] pausing/stopping with last_ts 2021-04-13 20:25:05 +0000 UTC
2021-04-13T20:26:13Z I [trs_replicaset/trs-v-mngo-2p02:27017] got command pitrestore [name: 2021-04-13T20:26:13.298094949Z, point-in-time ts: 1618344890] <ts: 1618345573>
2021-04-13T20:26:13Z I [trs_replicaset/trs-v-mngo-2p02:27017] got epoch {1618345494 1}
2021-04-13T20:26:13Z I [trs_replicaset/trs-v-mngo-1p01:27017] got command pitrestore [name: 2021-04-13T20:26:13.298094949Z, point-in-time ts: 1618344890] <ts: 1618345573>
2021-04-13T20:26:13Z I [trs_replicaset/trs-v-mngo-1p01:27017] got epoch {1618345494 1}
2021-04-13T20:26:13Z I [trs_replicaset/trs-v-mngo-2p02:27017] [pitr/2021-04-13T20:14:50Z] Node in not suitable for restore
2021-04-13T20:26:13Z I [trs_replicaset/trs-v-mngo-1p01:27017] [pitr/2021-04-13T20:14:50Z] Node in not suitable for restore
2021-04-13T20:26:13Z I [trs_replicaset/trs-v-mngo-1p02:27017] got command pitrestore [name: 2021-04-13T20:26:13.298094949Z, point-in-time ts: 1618344890] <ts: 1618345573>
2021-04-13T20:26:13Z I [trs_replicaset/trs-v-mngo-1p02:27017] got epoch {1618345494 1}
2021-04-13T20:26:13Z I [trs_replicaset/trs-v-mngo-1p02:27017] [pitr/2021-04-13T20:14:50Z] recovery started
2021-04-13T20:26:14Z I [trs_replicaset/trs-v-mngo-7p02:27017] got command pitrestore [name: 2021-04-13T20:26:13.298094949Z, point-in-time ts: 1618344890] <ts: 1618345573>
2021-04-13T20:26:14Z I [trs_replicaset/trs-v-mngo-2p01:27017] got command pitrestore [name: 2021-04-13T20:26:13.298094949Z, point-in-time ts: 1618344890] <ts: 1618345573>
2021-04-13T20:26:14Z I [trs_replicaset/trs-v-mngo-7p02:27017] got epoch {1618345494 1}
2021-04-13T20:26:14Z I [trs_replicaset/trs-v-mngo-7p02:27017] [pitr/2021-04-13T20:14:50Z] Node in not suitable for restore
2021-04-13T20:26:14Z I [trs_replicaset/trs-v-mngo-2p01:27017] got epoch {1618345494 1}
2021-04-13T20:26:14Z I [trs_replicaset/trs-v-mngo-2p01:27017] [pitr/2021-04-13T20:14:50Z] Node in not suitable for restore

[root@trs-v-mngo-1p02 ~]# systemctl status pbm-agent.service
● pbm-agent.service - pbm-agent
Loaded: loaded (/usr/lib/systemd/system/pbm-agent.service; enabled; vendor preset: disabled)
Active: active (running) since Mon 2021-04-12 14:07:09 UTC; 1 day 6h ago
Main PID: 68934 (pbm-agent)
CGroup: /system.slice/pbm-agent.service
└─68934 /usr/bin/pbm-agent

Apr 13 20:26:38 trs-v-mngo-1p02 pbm-agent[68934]: 2021-04-13T20:26:38.147+0000 using 10 insertion workers
Apr 13 20:26:38 trs-v-mngo-1p02 pbm-agent[68934]: 2021-04-13T20:26:38.599+0000 no indexes to restore
Apr 13 20:26:38 trs-v-mngo-1p02 pbm-agent[68934]: 2021-04-13T20:26:38.599+0000 finished restoring config.system.indexBuilds (0 documen…ailures)
Apr 13 20:26:38 trs-v-mngo-1p02 pbm-agent[68934]: 2021-04-13T20:26:38.600+0000 dropping collection test.portfolios before restoring
Apr 13 20:26:38 trs-v-mngo-1p02 pbm-agent[68934]: 2021-04-13T20:26:38.647+0000 reading metadata for test.portfolios from archive on stdin
Apr 13 20:26:38 trs-v-mngo-1p02 pbm-agent[68934]: 2021-04-13T20:26:38.647+0000 creating collection test.portfolios using options from metadata
Apr 13 20:26:38 trs-v-mngo-1p02 pbm-agent[68934]: 2021-04-13T20:26:38.660+0000 restoring test.portfolios from archive on stdin
Apr 13 20:26:38 trs-v-mngo-1p02 pbm-agent[68934]: 2021-04-13T20:26:38.660+0000 using 10 insertion workers
Apr 13 20:26:39 trs-v-mngo-1p02 pbm-agent[68934]: 2021-04-13T20:26:39.112+0000 restoring indexes for collection test.portfolios from metadata
Apr 13 20:26:39 trs-v-mngo-1p02 pbm-agent[68934]: 2021-04-13T20:26:39.125+0000 finished restoring test.portfolios (0 documents, 0 failures)

[root@trs-v-mngo-1p02 ~]# pbm list --restore
Restores history:
PITR: 2021-03-30T03:21:19Z Failed with “define last oplog slice: get: mongo: no documents in result”
PITR: 2021-03-30T03:21:19Z Failed with “define last oplog slice: get: mongo: no documents in result”
In progress [starting] (Launched at 2021-03-31T20:42:01Z)
In progress [starting] (Launched at 2021-03-31T20:52:50Z)
PITR: 2021-03-30T03:21:19Z Failed with “define last oplog slice: get: mongo: no documents in result”
PITR: 2021-03-30T03:10:59Z Failed with “define last oplog slice: get: mongo: no documents in result”
PITR: 2021-03-30T03:21:19Z Failed with “define last oplog slice: get: mongo: no documents in result”
PITR: 2021-03-30T03:21:19Z Failed with “define last oplog slice: get: mongo: no documents in result”
PITR: 2021-03-30T03:10:59Z Failed with “define last oplog slice: get: mongo: no documents in result”
PITR: 2021-03-30T03:10:59Z Failed with “define last oplog slice: get: mongo: no documents in result”
PITR: 2021-03-30T03:10:59Z Failed with “define last oplog slice: get: mongo: no documents in result”
PITR: 2021-03-30T03:10:59Z Failed with “define last oplog slice: get: mongo: no documents in result”
PITR: 2021-04-08T17:52:21Z Failed with “define last oplog slice: get: mongo: no documents in result”
In progress [starting] (Launched at 2021-04-10T00:54:20Z)
PITR: 2021-04-08T17:42:04Z Failed with “define last oplog slice: get: mongo: no documents in result”
PITR: 2021-04-08T17:42:04Z Failed with “define last oplog slice: get: mongo: no documents in result”
PITR: 2021-04-08T17:42:04Z Failed with “define last oplog slice: get: mongo: no documents in result”
PITR: 2021-04-13T20:14:50Z
[root@trs-v-mngo-1p02 ~]# pbm logs
2021-04-13T20:26:13Z I [trs_replicaset/trs-v-mngo-1p01.ad.stone.com.br:27017] got command pitrestore [name: 2021-04-13T20:26:13.298094949Z, point-in-time ts: 1618344890] <ts: 1618345573>
2021-04-13T20:26:13Z I [trs_replicaset/trs-v-mngo-1p01.ad.stone.com.br:27017] got epoch {1618345494 1}
2021-04-13T20:26:13Z I [trs_replicaset/trs-v-mngo-2p02.ad.stone.com.br:27017] [pitr/2021-04-13T20:14:50Z] Node in not suitable for restore
2021-04-13T20:26:13Z I [trs_replicaset/trs-v-mngo-1p01.ad.stone.com.br:27017] [pitr/2021-04-13T20:14:50Z] Node in not suitable for restore
2021-04-13T20:26:13Z I [trs_replicaset/trs-v-mngo-1p02.ad.stone.com.br:27017] got command pitrestore [name: 2021-04-13T20:26:13.298094949Z, point-in-time ts: 1618344890] <ts: 1618345573>
2021-04-13T20:26:13Z I [trs_replicaset/trs-v-mngo-1p02.ad.stone.com.br:27017] got epoch {1618345494 1}
2021-04-13T20:26:13Z I [trs_replicaset/trs-v-mngo-1p02.ad.stone.com.br:27017] [pitr/2021-04-13T20:14:50Z] recovery started
2021-04-13T20:26:14Z I [trs_replicaset/trs-v-mngo-7p02.ad.stone.com.br:27017] got command pitrestore [name: 2021-04-13T20:26:13.298094949Z, point-in-time ts: 1618344890] <ts: 1618345573>
2021-04-13T20:26:14Z I [trs_replicaset/trs-v-mngo-2p01.ad.stone.com.br:27017] got command pitrestore [name: 2021-04-13T20:26:13.298094949Z, point-in-time ts: 1618344890] <ts: 1618345573>
2021-04-13T20:26:14Z I [trs_replicaset/trs-v-mngo-7p02.ad.stone.com.br:27017] got epoch {1618345494 1}
2021-04-13T20:26:14Z I [trs_replicaset/trs-v-mngo-7p02.ad.stone.com.br:27017] [pitr/2021-04-13T20:14:50Z] Node in not suitable for restore
2021-04-13T20:26:14Z I [trs_replicaset/trs-v-mngo-2p01.ad.stone.com.br:27017] got epoch {1618345494 1}
2021-04-13T20:26:14Z I [trs_replicaset/trs-v-mngo-2p01.ad.stone.com.br:27017] [pitr/2021-04-13T20:14:50Z] Node in not suitable for restore
2021-04-13T22:14:26Z I [trs_replicaset/trs-v-mngo-1p02.ad.stone.com.br:27017] [pitr/2021-04-13T20:14:50Z] mongorestore finished
2021-04-13T22:14:28Z I [trs_replicaset/trs-v-mngo-1p02.ad.stone.com.br:27017] [pitr/2021-04-13T20:14:50Z] starting oplog replay
2021-04-13T22:14:29Z I [trs_replicaset/trs-v-mngo-1p02.ad.stone.com.br:27017] [pitr/2021-04-13T20:14:50Z] oplog replay finished on {1618342714 1}
2021-04-13T22:14:29Z I [trs_replicaset/trs-v-mngo-1p02.ad.stone.com.br:27017] [pitr/2021-04-13T20:14:50Z] restoring users and roles
2021-04-13T22:14:29Z I [trs_replicaset/trs-v-mngo-1p02.ad.stone.com.br:27017] [pitr/2021-04-13T20:14:50Z] replay chunks
2021-04-13T22:14:30Z I [trs_replicaset/trs-v-mngo-1p02.ad.stone.com.br:27017] [pitr/2021-04-13T20:14:50Z] oplog replay finished on {1618344889 1} <1618344890>
2021-04-13T22:14:32Z I [trs_replicaset/trs-v-mngo-1p02.ad.stone.com.br:27017] [pitr/2021-04-13T20:14:50Z] recovery successfully finished

It finished, but it took 2 hours to complete the restore, which seems really slow to me. Is that expected?

Hi @Samuel_Molling
Most of that time was spent on the mongorestore (snapshot restore) stage. You can try tweaking the restore numInsertionWorkers and batchSize options in the pbm config (Configuration file options — Percona Backup for MongoDB 1.8 Documentation). Increasing these parameters should improve recovery time, but it will also increase RAM consumption by the agents during the restore.
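For example, a rough sketch of what that could look like in /mongodb/pbm_config.yml (the values below are only illustrative, not a recommendation; tune them gradually for your hardware):

storage:
  type: filesystem
  filesystem:
    path: /mongodb/dfs
restore:
  batchSize: 500            # number of documents buffered per batch; this value is just an example
  numInsertionWorkers: 20   # your restore logs showed 10 workers; raise step by step and watch RAM usage

and then apply it with pbm config --file /mongodb/pbm_config.yml.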

For a VM that has 32 GB of memory, with 22 GB used for the WiredTiger cache, what are the recommended values to use in pbm?


PBM doesn't use much memory - compared to the mongod process's memory use (and the OS file buffer / page cache) it only takes a relatively small buffer as data moves through, up to 2 GB at any instant, IIRC. Thinking about where the OS will provide that from, it would be 2 GB of memory that otherwise would have been used for page cache.

But for numInsertionWorkers and batchSize, is there a value that you usually use or recommend, given the infrastructure I described?