PBM Restore failed

Team,
A PBM restore to a new cluster has failed multiple times with the same message:

"retryChunk got copy: context deadline exceeded (Client.Timeout or context cancellation while reading body)"

[mongod@ip-10-80-11-253 ~]$ pbm status
Cluster:

shard1ReplSet:
  - shard1ReplSet/10.80.11.253:27038 [P]: pbm-agent v2.1.0 OK
configReplSet:
  - configReplSet/10.80.11.253:27039 [P]: pbm-agent v2.1.0 OK
shard3ReplSet:
  - shard3ReplSet/10.80.10.159:27038 [P]: pbm-agent v2.1.0 OK
shard2ReplSet:
  - shard2ReplSet/10.80.10.186:27038 [P]: pbm-agent v2.1.0 OK

PITR incremental backup:

Status [OFF]

Currently running:

Snapshot restore "2023-05-09T14:56:30Z", started at 2023-05-13T05:10:22Z. Status: error. [op id: 645f1bbdfe3cefd0df864ba6]

Backups:

S3 us-east-1 s3://cm-mongo-db-shared-prod-va/pbm/backup/
Snapshots:
2023-05-09T14:56:30Z 2.68TB [restore_to_time: 2023-05-10T01:41:15Z]
2023-05-07T01:00:03Z 2.65TB [restore_to_time: 2023-05-07T11:20:39Z]
=====================================================
pbm logs:

2023-05-12T17:16:39Z I [configReplSet/10.80.11.253:27039] [restore/2023-05-12T17:16:39.016551478Z] restore started
2023-05-12T17:16:39Z D [configReplSet/10.80.11.253:27039] [restore/2023-05-12T17:16:39.016551478Z] waiting for 'starting' status
2023-05-12T17:16:39Z I [shard3ReplSet/10.80.10.159:27038] [restore/2023-05-12T17:16:39.016551478Z] moving to state running
2023-05-12T17:16:39Z I [shard2ReplSet/10.80.10.186:27038] [restore/2023-05-12T17:16:39.016551478Z] moving to state running
2023-05-12T17:16:40Z I [shard1ReplSet/10.80.11.253:27038] [restore/2023-05-12T17:16:39.016551478Z] moving to state running
2023-05-12T17:16:40Z I [configReplSet/10.80.11.253:27039] [restore/2023-05-12T17:16:39.016551478Z] moving to state running
2023-05-12T17:16:51Z I [configReplSet/10.80.11.253:27039] [restore/2023-05-12T17:16:39.016551478Z] restoring users and roles
2023-05-12T17:16:51Z I [configReplSet/10.80.11.253:27039] [restore/2023-05-12T17:16:39.016551478Z] moving to state dumpDone
2023-05-12T18:09:11Z D [shard2ReplSet/10.80.10.186:27038] [restore/2023-05-12T17:16:39.016551478Z] releasing lock
2023-05-12T18:09:11Z E [shard2ReplSet/10.80.10.186:27038] [restore/2023-05-12T17:16:39.016551478Z] mongorestore: restore mongo dump (successes: 52340979 / fails: 0): ctr.lineage.dom.com: error restoring from archive on stdin: received termination signal
2023-05-12T18:09:12Z D [configReplSet/10.80.11.253:27039] [restore/2023-05-12T17:16:39.016551478Z] releasing lock
2023-05-12T18:09:12Z E [configReplSet/10.80.11.253:27039] [restore/2023-05-12T17:16:39.016551478Z] check cluster for restore dumpDone: convergeCluster: restore on the shard shard2ReplSet failed with: mongorestore: restore mongo dump (successes: 52340979 / fails: 0): ctr.lineage.dom.com: error restoring from archive on stdin: received termination signal
2023-05-12T20:02:17Z W [shard1ReplSet/10.80.11.253:27038] [restore/2023-05-12T17:16:39.016551478Z] retryChunk got copy: context deadline exceeded (Client.Timeout or context cancellation while reading body), try to reconnect in 0s
2023-05-12T20:02:17Z I [shard1ReplSet/10.80.11.253:27038] [restore/2023-05-12T17:16:39.016551478Z] session recreated, resuming download
2023-05-12T20:13:29Z W [shard1ReplSet/10.80.11.253:27038] [restore/2023-05-12T17:16:39.016551478Z] retryChunk got copy: context deadline exceeded (Client.Timeout or context cancellation while reading body), try to reconnect in 0s
2023-05-12T20:13:29Z I [shard1ReplSet/10.80.11.253:27038] [restore/2023-05-12T17:16:39.016551478Z] session recreated, resuming download
2023-05-13T00:07:23Z W [shard1ReplSet/10.80.11.253:27038] [restore/2023-05-12T17:16:39.016551478Z] retryChunk got copy: context deadline exceeded (Client.Timeout or context cancellation while reading body), try to reconnect in 0s
2023-05-13T00:07:23Z I [shard1ReplSet/10.80.11.253:27038] [restore/2023-05-12T17:16:39.016551478Z] session recreated, resuming download
2023-05-13T00:11:12Z W [shard1ReplSet/10.80.11.253:27038] [restore/2023-05-12T17:16:39.016551478Z] retryChunk got copy: context deadline exceeded (Client.Timeout or context cancellation while reading body), try to reconnect in 0s
2023-05-13T00:11:12Z I [shard1ReplSet/10.80.11.253:27038] [restore/2023-05-12T17:16:39.016551478Z] session recreated, resuming download
2023-05-13T00:12:27Z W [shard1ReplSet/10.80.11.253:27038] [restore/2023-05-12T17:16:39.016551478Z] retryChunk got copy: context deadline exceeded (Client.Timeout or context cancellation while reading body), try to reconnect in 0s
2023-05-13T00:12:27Z I [shard1ReplSet/10.80.11.253:27038] [restore/2023-05-12T17:16:39.016551478Z] session recreated, resuming download
2023-05-13T00:17:16Z W [shard1ReplSet/10.80.11.253:27038] [restore/2023-05-12T17:16:39.016551478Z] retryChunk got copy: context deadline

Hi @aranjith0,

It seems shard1ReplSet had an issue downloading the backup. Could you please test the network performance from that node to the storage?
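If the network itself looks fine, the timeouts may come from long-running S3 reads being dropped mid-transfer. You could also try raising the S3 retryer limits in the PBM storage config. A sketch, assuming the `storage.s3.retryer` options available in recent PBM releases; the values below are illustrative, not tuned:

```yaml
storage:
  type: s3
  s3:
    region: us-east-1
    bucket: cm-mongo-db-shared-prod-va
    prefix: pbm/backup
    retryer:
      numMaxRetries: 10   # allow more retries before giving up on a chunk
      minRetryDelay: 30ms
      maxRetryDelay: 5m   # back off further between retries
```

Apply it with `pbm config --file <config.yaml>` and re-run the restore.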

Hi @Dmytro_Zghoba,

I created a random 10G file in the S3 bucket and downloaded it on all three nodes. I don't see any significant difference in performance.

node-1 (shard1ReplSet):

[mongod@ip-10-80-11-253 tmp]$ time aws s3 cp s3://cm-mongo-db-shared-prod-va/data.img1 /tmp/
download: s3://cm-mongo-db-shared-prod-va/data.img1 to ./data.img1

real 0m4.422s
user 0m4.927s
sys 0m6.647s


node-2 (shard2ReplSet):

[mongod@ip-10-80-10-186 ~]$ time aws s3 cp s3://cm-mongo-db-shared-prod-va/data.img1 /tmp/
download: s3://cm-mongo-db-shared-prod-va/data.img1 to ../../tmp/data.img1

real 0m4.820s
user 0m4.617s
sys 0m6.915s


node-3 (shard3ReplSet):

[mongod@ip-10-80-10-159 ~]$ time aws s3 cp s3://cm-mongo-db-shared-prod-va/data.img1 /tmp/
download: s3://cm-mongo-db-shared-prod-va/data.img1 to ../../tmp/data.img1

real 0m4.834s
user 0m4.596s
sys 0m6.879s
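For reference, the effective throughput implied by the `real` times above (assuming the "10G" test file is 10 GB, i.e. 1e10 bytes) is within about 10% across the nodes:

```python
# Effective S3 download throughput per node from the `time aws s3 cp` output above.
# Assumes the "10G" test file is 10 GB (1e10 bytes); adjust if it was 10 GiB.
FILE_BYTES = 10 * 10**9

real_seconds = {
    "node-1 (shard1ReplSet)": 4.422,
    "node-2 (shard2ReplSet)": 4.820,
    "node-3 (shard3ReplSet)": 4.834,
}

for node, secs in real_seconds.items():
    print(f"{node}: {FILE_BYTES / secs / 10**9:.2f} GB/s")
# node-1: 2.26 GB/s, node-2: 2.07 GB/s, node-3: 2.07 GB/s
```

Note that a one-shot burst download like this may not reproduce the conditions of a multi-hour sustained read during a 2.68 TB restore, where throttling or idle-connection drops can surface.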

Does the restore always fail with this error? Did you manage to restore the backup successfully at least once?

Yes. I tried once again yesterday; the full restore took 20 hours, and I saw the same messages in its logs.

I had previously completed a restore with version 1.8.1 without any issues (it took 13 hours).

[mongod@ip-10-80-11-253 ~]$ pbm logs -sD -t0 -i 646469f41ff7942aef24ab9b
2023-05-17T05:45:24Z I [shard3ReplSet/10.80.10.159:27038] [restore/2023-05-17T05:45:24.629271292Z] backup: 2023-05-16T15:30:02Z
2023-05-17T05:45:24Z I [shard3ReplSet/10.80.10.159:27038] [restore/2023-05-17T05:45:24.629271292Z] restore started
2023-05-17T05:45:24Z D [shard3ReplSet/10.80.10.159:27038] [restore/2023-05-17T05:45:24.629271292Z] waiting for 'starting' status
2023-05-17T05:45:24Z I [shard2ReplSet/10.80.10.186:27038] [restore/2023-05-17T05:45:24.629271292Z] backup: 2023-05-16T15:30:02Z
2023-05-17T05:45:24Z I [shard2ReplSet/10.80.10.186:27038] [restore/2023-05-17T05:45:24.629271292Z] restore started
2023-05-17T05:45:24Z D [shard2ReplSet/10.80.10.186:27038] [restore/2023-05-17T05:45:24.629271292Z] waiting for 'starting' status
2023-05-17T05:45:24Z I [configReplSet/10.80.11.253:27039] [restore/2023-05-17T05:45:24.629271292Z] backup: 2023-05-16T15:30:02Z
2023-05-17T05:45:24Z I [configReplSet/10.80.11.253:27039] [restore/2023-05-17T05:45:24.629271292Z] restore started
2023-05-17T05:45:24Z D [configReplSet/10.80.11.253:27039] [restore/2023-05-17T05:45:24.629271292Z] waiting for 'starting' status
2023-05-17T05:45:24Z I [shard1ReplSet/10.80.11.253:27038] [restore/2023-05-17T05:45:24.629271292Z] backup: 2023-05-16T15:30:02Z
2023-05-17T05:45:24Z I [shard1ReplSet/10.80.11.253:27038] [restore/2023-05-17T05:45:24.629271292Z] restore started
2023-05-17T05:45:24Z D [shard1ReplSet/10.80.11.253:27038] [restore/2023-05-17T05:45:24.629271292Z] waiting for 'starting' status
2023-05-17T05:45:25Z I [shard3ReplSet/10.80.10.159:27038] [restore/2023-05-17T05:45:24.629271292Z] moving to state running
2023-05-17T05:45:25Z I [shard2ReplSet/10.80.10.186:27038] [restore/2023-05-17T05:45:24.629271292Z] moving to state running
2023-05-17T05:45:25Z I [shard1ReplSet/10.80.11.253:27038] [restore/2023-05-17T05:45:24.629271292Z] moving to state running
2023-05-17T05:45:25Z I [configReplSet/10.80.11.253:27039] [restore/2023-05-17T05:45:24.629271292Z] moving to state running
2023-05-17T05:45:36Z I [configReplSet/10.80.11.253:27039] [restore/2023-05-17T05:45:24.629271292Z] restoring users and roles
2023-05-17T05:45:36Z I [configReplSet/10.80.11.253:27039] [restore/2023-05-17T05:45:24.629271292Z] moving to state dumpDone
2023-05-17T13:38:59Z W [shard1ReplSet/10.80.11.253:27038] [restore/2023-05-17T05:45:24.629271292Z] retryChunk got copy: context deadline exceeded (Client.Timeout or context cancellation while reading body), try to reconnect in 0s
2023-05-17T13:38:59Z I [shard1ReplSet/10.80.11.253:27038] [restore/2023-05-17T05:45:24.629271292Z] session recreated, resuming download
2023-05-17T13:41:40Z W [shard1ReplSet/10.80.11.253:27038] [restore/2023-05-17T05:45:24.629271292Z] retryChunk got copy: context deadline exceeded (Client.Timeout or context cancellation while reading body), try to reconnect in 0s
2023-05-17T13:41:40Z I [shard1ReplSet/10.80.11.253:27038] [restore/2023-05-17T05:45:24.629271292Z] session recreated, resuming download
2023-05-17T19:19:01Z I [shard2ReplSet/10.80.10.186:27038] [restore/2023-05-17T05:45:24.629271292Z] restoring users and roles
2023-05-17T19:19:01Z I [shard2ReplSet/10.80.10.186:27038] [restore/2023-05-17T05:45:24.629271292Z] moving to state dumpDone
2023-05-17T21:40:22Z I [shard3ReplSet/10.80.10.159:27038] [restore/2023-05-17T05:45:24.629271292Z] restoring users and roles
2023-05-17T21:40:22Z I [shard3ReplSet/10.80.10.159:27038] [restore/2023-05-17T05:45:24.629271292Z] moving to state dumpDone
2023-05-18T01:22:46Z I [shard3ReplSet/10.80.10.159:27038] [restore/2023-05-17T05:45:24.629271292Z] starting oplog replay
2023-05-18T01:22:46Z D [shard3ReplSet/10.80.10.159:27038] [restore/2023-05-17T05:45:24.629271292Z] starting sharded txn sync
2023-05-18T01:22:46Z D [shard3ReplSet/10.80.10.159:27038] [restore/2023-05-17T05:45:24.629271292Z] + applying {shard3ReplSet 2023-05-16T15:30:02Z/shard3ReplSet/local.oplog.rs.bson.s2 s2 {1684251002 20} {1684287522 4} 0}
2023-05-18T01:22:46Z I [shard1ReplSet/10.80.11.253:27038] [restore/2023-05-17T05:45:24.629271292Z] restoring users and roles
2023-05-18T01:22:46Z I [shard1ReplSet/10.80.11.253:27038] [restore/2023-05-17T05:45:24.629271292Z] moving to state dumpDone
2023-05-18T01:22:46Z I [shard2ReplSet/10.80.10.186:27038] [restore/2023-05-17T05:45:24.629271292Z] starting oplog replay
2023-05-18T01:22:46Z D [shard2ReplSet/10.80.10.186:27038] [restore/2023-05-17T05:45:24.629271292Z] starting sharded txn sync
2023-05-18T01:22:46Z D [shard2ReplSet/10.80.10.186:27038] [restore/2023-05-17T05:45:24.629271292Z] + applying {shard2ReplSet 2023-05-16T15:30:02Z/shard2ReplSet/local.oplog.rs.bson.s2 s2 {1684251002 20} {1684287522 4} 0}
2023-05-18T01:22:47Z I [shard1ReplSet/10.80.11.253:27038] [restore/2023-05-17T05:45:24.629271292Z] starting oplog replay
2023-05-18T01:22:47Z D [shard1ReplSet/10.80.11.253:27038] [restore/2023-05-17T05:45:24.629271292Z] starting sharded txn sync
2023-05-18T01:22:47Z D [shard1ReplSet/10.80.11.253:27038] [restore/2023-05-17T05:45:24.629271292Z] + applying {shard1ReplSet 2023-05-16T15:30:02Z/shard1ReplSet/local.oplog.rs.bson.s2 s2 {1684251002 20} {1684287522 4} 0}
2023-05-18T01:22:47Z I [configReplSet/10.80.11.253:27039] [restore/2023-05-17T05:45:24.629271292Z] starting oplog replay
2023-05-18T01:22:47Z D [configReplSet/10.80.11.253:27039] [restore/2023-05-17T05:45:24.629271292Z] starting sharded txn sync
2023-05-18T01:22:47Z D [configReplSet/10.80.11.253:27039] [restore/2023-05-17T05:45:24.629271292Z] + applying {configReplSet 2023-05-16T15:30:02Z/configReplSet/local.oplog.rs.bson.s2 s2 {1684251002 20} {1684287522 4} 0}
2023-05-18T01:22:49Z I [configReplSet/10.80.11.253:27039] [restore/2023-05-17T05:45:24.629271292Z] oplog replay finished on {1684287522 1}
2023-05-18T01:22:49Z D [configReplSet/10.80.11.253:27039] [restore/2023-05-17T05:45:24.629271292Z] exit waitingTxnChecker
2023-05-18T01:22:49Z D [configReplSet/10.80.11.253:27039] [restore/2023-05-17T05:45:24.629271292Z] exit distTxnChecker
2023-05-18T01:24:17Z D [shard3ReplSet/10.80.10.159:27038] [restore/2023-05-17T05:45:24.629271292Z] found dist txn: [prepare] {0 0}
2023-05-18T01:24:17Z D [shard3ReplSet/10.80.10.159:27038] [restore/2023-05-17T05:45:24.629271292Z] found dist txn: [commit] {1684260886 543}
2023-05-18T01:24:22Z D [shard2ReplSet/10.80.10.186:27038] [restore/2023-05-17T05:45:24.629271292Z] found dist txn: [prepare] {0 0}
2023-05-18T01:24:22Z D [shard2ReplSet/10.80.10.186:27038] [restore/2023-05-17T05:45:24.629271292Z] found dist txn: [commit] {1684260886 543}
2023-05-18T01:27:00Z D [shard2ReplSet/10.80.10.186:27038] [restore/2023-05-17T05:45:24.629271292Z] txn converged to [commit]
2023-05-18T01:27:01Z D [shard3ReplSet/10.80.10.159:27038] [restore/2023-05-17T05:45:24.629271292Z] txn converged to [commit]
2023-05-18T01:27:01Z D [shard1ReplSet/10.80.11.253:27038] [restore/2023-05-17T05:45:24.629271292Z] found dist txn: [prepare] {0 0}
2023-05-18T01:27:01Z D [shard1ReplSet/10.80.11.253:27038] [restore/2023-05-17T05:45:24.629271292Z] found dist txn: [commit] {1684260886 543}
2023-05-18T01:27:02Z D [shard1ReplSet/10.80.11.253:27038] [restore/2023-05-17T05:45:24.629271292Z] txn converged to [commit]
2023-05-18T01:27:02Z W [shard2ReplSet/10.80.10.186:27038] [restore/2023-05-17T05:45:24.629271292Z] retryChunk got copy: context deadline exceeded (Client.Timeout or context cancellation while reading body), try to reconnect in 0s
2023-05-18T01:27:02Z I [shard2ReplSet/10.80.10.186:27038] [restore/2023-05-17T05:45:24.629271292Z] session recreated, resuming download
2023-05-18T01:27:03Z W [shard3ReplSet/10.80.10.159:27038] [restore/2023-05-17T05:45:24.629271292Z] retryChunk got copy: context deadline exceeded (Client.Timeout or context cancellation while reading body), try to reconnect in 0s
2023-05-18T01:27:03Z I [shard3ReplSet/10.80.10.159:27038] [restore/2023-05-17T05:45:24.629271292Z] session recreated, resuming download
2023-05-18T01:27:35Z D [shard3ReplSet/10.80.10.159:27038] [restore/2023-05-17T05:45:24.629271292Z] found dist txn: <SAAAAAVpZAAQAAAABJTEweYh4EcaoGFjCBR/8r0FdWlkACAAAAAAVFw9FlgTnygmweWDa1rDk7sYjGO20oeqPTWV5kLHyqEA-2994> [prepare] {0 0}
2023-05-18T01:27:35Z D [shard3ReplSet/10.80.10.159:27038] [restore/2023-05-17T05:45:24.629271292Z] found dist txn: <SAAAAAVpZAAQAAAABJTEweYh4EcaoGFjCBR/8r0FdWlkACAAAAAAVFw9FlgTnygmweWDa1rDk7sYjGO20oeqPTWV5kLHyqEA-2994> [commit] {1684264714 115739}
2023-05-18T01:29:45Z D [shard1ReplSet/10.80.11.253:27038] [restore/2023-05-17T05:45:24.629271292Z] found dist txn: <SAAAAAVpZAAQAAAABJTEweYh4EcaoGFjCBR/8r0FdWlkACAAAAAAVFw9FlgTnygmweWDa1rDk7sYjGO20oeqPTWV5kLHyqEA-2994> [prepare] {0 0}
2023-05-18T01:29:45Z D [shard1ReplSet/10.80.11.253:27038] [restore/2023-05-17T05:45:24.629271292Z] found dist txn: <SAAAAAVpZAAQAAAABJTEweYh4EcaoGFjCBR/8r0FdWlkACAAAAAAVFw9FlgTnygmweWDa1rDk7sYjGO20oeqPTWV5kLHyqEA-2994> [commit] {1684264714 115739}
2023-05-18T01:29:55Z D [shard2ReplSet/10.80.10.186:27038] [restore/2023-05-17T05:45:24.629271292Z] found dist txn: <SAAAAAVpZAAQAAAABJTEweYh4EcaoGFjCBR/8r0FdWlkACAAAAAAVFw9FlgTnygmweWDa1rDk7sYjGO20oeqPTWV5kLHyqEA-2994> [prepare] {0 0}
2023-05-18T01:29:55Z D [shard3ReplSet/10.80.10.159:27038] [restore/2023-05-17T05:45:24.629271292Z] txn converged to [commit] <SAAAAAVpZAAQAAAABJTEweYh4EcaoGFjCBR/8r0FdWlkACAAAAAAVFw9FlgTnygmweWDa1rDk7sYjGO20oeqPTWV5kLHyqEA-2994>
2023-05-18T01:29:55Z D [shard2ReplSet/10.80.10.186:27038] [restore/2023-05-17T05:45:24.629271292Z] found dist txn: <SAAAAAVpZAAQAAAABJTEweYh4EcaoGFjCBR/8r0FdWlkACAAAAAAVFw9FlgTnygmweWDa1rDk7sYjGO20oeqPTWV5kLHyqEA-2994> [commit] {1684264714 115739}
2023-05-18T01:29:56Z D [shard2ReplSet/10.80.10.186:27038] [restore/2023-05-17T05:45:24.629271292Z] txn converged to [commit] <SAAAAAVpZAAQAAAABJTEweYh4EcaoGFjCBR/8r0FdWlkACAAAAAAVFw9FlgTnygmweWDa1rDk7sYjGO20oeqPTWV5kLHyqEA-2994>
2023-05-18T01:29:57Z D [shard1ReplSet/10.80.11.253:27038] [restore/2023-05-17T05:45:24.629271292Z] txn converged to [commit] <SAAAAAVpZAAQAAAABJTEweYh4EcaoGFjCBR/8r0FdWlkACAAAAAAVFw9FlgTnygmweWDa1rDk7sYjGO20oeqPTWV5kLHyqEA-2994>
2023-05-18T01:30:06Z D [shard3ReplSet/10.80.10.159:27038] [restore/2023-05-17T05:45:24.629271292Z] found dist txn: [prepare] {0 0}
2023-05-18T01:30:06Z D [shard3ReplSet/10.80.10.159:27038] [restore/2023-05-17T05:45:24.629271292Z] found dist txn: [commit] {1684264750 162}
2023-05-18T01:30:54Z D [shard1ReplSet/10.80.11.253:27038] [restore/2023-05-17T05:45:24.629271292Z] found dist txn: [prepare] {0 0}
2023-05-18T01:30:54Z D [shard1ReplSet/10.80.11.253:27038] [restore/2023-05-17T05:45:24.629271292Z] found dist txn: [commit] {1684264750 162}
2023-05-18T01:30:56Z D [shard2ReplSet/10.80.10.186:27038] [restore/2023-05-17T05:45:24.629271292Z] found dist txn: [prepare] {0 0}
2023-05-18T01:30:56Z D [shard2ReplSet/10.80.10.186:27038] [restore/2023-05-17T05:45:24.629271292Z] found dist txn: [commit] {1684264750 162}
2023-05-18T01:30:56Z D [shard3ReplSet/10.80.10.159:27038] [restore/2023-05-17T05:45:24.629271292Z] txn converged to [commit]
2023-05-18T01:30:57Z W [shard3ReplSet/10.80.10.159:27038] [restore/2023-05-17T05:45:24.629271292Z] retryChunk got copy: context deadline exceeded (Client.Timeout or context cancellation while reading body), try to reconnect in 0s
2023-05-18T01:30:57Z I [shard3ReplSet/10.80.10.159:27038] [restore/2023-05-17T05:45:24.629271292Z] session recreated, resuming download
2023-05-18T01:30:57Z D [shard2ReplSet/10.80.10.186:27038] [restore/2023-05-17T05:45:24.629271292Z] txn converged to [commit]
2023-05-18T01:30:58Z D [shard1ReplSet/10.80.11.253:27038] [restore/2023-05-17T05:45:24.629271292Z] txn converged to [commit]
2023-05-18T01:30:58Z W [shard2ReplSet/10.80.10.186:27038] [restore/2023-05-17T05:45:24.629271292Z] retryChunk got copy: context deadline exceeded (Client.Timeout or context cancellation while reading body), try to reconnect in 0s
2023-05-18T01:30:58Z I [shard2ReplSet/10.80.10.186:27038] [restore/2023-05-17T05:45:24.629271292Z] session recreated, resuming download
2023-05-18T01:30:59Z W [shard1ReplSet/10.80.11.253:27038] [restore/2023-05-17T05:45:24.629271292Z] retryChunk got copy: context deadline exceeded (Client.Timeout or context cancellation while reading body), try to reconnect in 0s
2023-05-18T01:30:59Z I [shard1ReplSet/10.80.11.253:27038] [restore/2023-05-17T05:45:24.629271292Z] session recreated, resuming download
2023-05-18T01:34:45Z I [shard3ReplSet/10.80.10.159:27038] [restore/2023-05-17T05:45:24.629271292Z] oplog replay finished on {1684287521 2}
2023-05-18T01:34:45Z D [shard3ReplSet/10.80.10.159:27038] [restore/2023-05-17T05:45:24.629271292Z] exit waitingTxnChecker
2023-05-18T01:34:45Z D [shard3ReplSet/10.80.10.159:27038] [restore/2023-05-17T05:45:24.629271292Z] exit distTxnChecker
2023-05-18T01:34:45Z I [shard3ReplSet/10.80.10.159:27038] [restore/2023-05-17T05:45:24.629271292Z] restore finished successfully
2023-05-18T01:34:45Z D [shard3ReplSet/10.80.10.159:27038] [restore/2023-05-17T05:45:24.629271292Z] releasing lock
2023-05-18T01:35:16Z I [shard2ReplSet/10.80.10.186:27038] [restore/2023-05-17T05:45:24.629271292Z] oplog replay finished on {1684287522 1}
2023-05-18T01:35:16Z D [shard2ReplSet/10.80.10.186:27038] [restore/2023-05-17T05:45:24.629271292Z] exit waitingTxnChecker
2023-05-18T01:35:16Z D [shard2ReplSet/10.80.10.186:27038] [restore/2023-05-17T05:45:24.629271292Z] exit distTxnChecker
2023-05-18T01:35:16Z I [shard2ReplSet/10.80.10.186:27038] [restore/2023-05-17T05:45:24.629271292Z] restore finished successfully
2023-05-18T01:35:16Z D [shard2ReplSet/10.80.10.186:27038] [restore/2023-05-17T05:45:24.629271292Z] releasing lock
2023-05-18T01:35:35Z I [shard1ReplSet/10.80.11.253:27038] [restore/2023-05-17T05:45:24.629271292Z] oplog replay finished on {1684287522 4}
2023-05-18T01:35:35Z D [shard1ReplSet/10.80.11.253:27038] [restore/2023-05-17T05:45:24.629271292Z] exit waitingTxnChecker
2023-05-18T01:35:35Z D [shard1ReplSet/10.80.11.253:27038] [restore/2023-05-17T05:45:24.629271292Z] exit distTxnChecker
2023-05-18T01:35:35Z I [shard1ReplSet/10.80.11.253:27038] [restore/2023-05-17T05:45:24.629271292Z] restore finished successfully
2023-05-18T01:35:35Z D [shard1ReplSet/10.80.11.253:27038] [restore/2023-05-17T05:45:24.629271292Z] releasing lock
2023-05-18T01:35:35Z I [configReplSet/10.80.11.253:27039] [restore/2023-05-17T05:45:24.629271292Z] restore finished successfully
2023-05-18T01:35:35Z D [configReplSet/10.80.11.253:27039] [restore/2023-05-17T05:45:24.629271292Z] epoch set to {1684373735 11}
2023-05-18T01:35:35Z D [configReplSet/10.80.11.253:27039] [restore/2023-05-17T05:45:24.629271292Z] releasing lock