Error when restoring with the MongoDB Operator

Description:

I am also getting an error during restore.
Copying the files from remote storage to the MongoDB pod succeeded, but when mongod started, it failed with the error shown below. I am using the same MongoDB cluster version for both the backup and the restore.
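
For context, I triggered the restore with a PerconaServerMongoDBRestore resource along these lines (a minimal sketch; the cluster and backup names are placeholders, not my actual values):

apiVersion: psmdb.percona.com/v1
kind: PerconaServerMongoDBRestore
metadata:
  name: restore1
spec:
  clusterName: my-cluster-name
  backupName: backup1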

Steps to Reproduce:

Version:

I’m using:

  • MongoDB Operator 1.14.0
  • percona-server-mongodb:6.0.4-3
  • PBM: 2.2.0

Logs:

2023-08-11T04:05:31.000+0000 I [restore/2023-08-11T04:04:42.434647131Z] copy <2023-08-11T02:21:21Z/rs0/index-69-3054101180844405747.wt.gz> to </data/db/index-69-3054101180844405747.wt>
2023-08-11T04:05:31.000+0000 I [restore/2023-08-11T04:04:42.434647131Z] copy <2023-08-11T02:21:21Z/rs0/index-7-3054101180844405747.wt.gz> to </data/db/index-7-3054101180844405747.wt>
2023-08-11T04:05:31.000+0000 I [restore/2023-08-11T04:04:42.434647131Z] copy <2023-08-11T02:21:21Z/rs0/index-74-3054101180844405747.wt.gz> to </data/db/index-74-3054101180844405747.wt>
2023-08-11T04:05:31.000+0000 I [restore/2023-08-11T04:04:42.434647131Z] copy <2023-08-11T02:21:21Z/rs0/index-76-3054101180844405747.wt.gz> to </data/db/index-76-3054101180844405747.wt>
2023-08-11T04:05:31.000+0000 I [restore/2023-08-11T04:04:42.434647131Z] copy <2023-08-11T02:21:21Z/rs0/index-77-3054101180844405747.wt.gz> to </data/db/index-77-3054101180844405747.wt>
2023-08-11T04:05:31.000+0000 I [restore/2023-08-11T04:04:42.434647131Z] copy <2023-08-11T02:21:21Z/rs0/index-79-3054101180844405747.wt.gz> to </data/db/index-79-3054101180844405747.wt>
2023-08-11T04:05:31.000+0000 I [restore/2023-08-11T04:04:42.434647131Z] copy <2023-08-11T02:21:21Z/rs0/index-9-3054101180844405747.wt.gz> to </data/db/index-9-3054101180844405747.wt>
2023-08-11T04:05:31.000+0000 I [restore/2023-08-11T04:04:42.434647131Z] copy <2023-08-11T02:21:21Z/rs0/sizeStorer.wt.gz> to </data/db/sizeStorer.wt>
2023-08-11T04:05:31.000+0000 I [restore/2023-08-11T04:04:42.434647131Z] copy <2023-08-11T02:21:21Z/rs0/WiredTiger.backup.gz> to </data/db/WiredTiger.backup>
2023-08-11T04:05:31.000+0000 I [restore/2023-08-11T04:04:42.434647131Z] copy <2023-08-11T02:21:21Z/rs0/WiredTiger.gz> to </data/db/WiredTiger>
2023-08-11T04:05:31.000+0000 I [restore/2023-08-11T04:04:42.434647131Z] copy <2023-08-11T02:21:21Z/rs0/storage.bson.gz.0-114> to </data/db/storage.bson>
2023-08-11T04:05:31.000+0000 I [restore/2023-08-11T04:04:42.434647131Z] copy <2023-08-11T02:21:21Z/rs0/journal/WiredTigerLog.0000000002.gz> to </data/db/journal/WiredTigerLog.0000000002>
2023-08-11T04:05:31.000+0000 I [restore/2023-08-11T04:04:42.434647131Z] copy <2023-08-11T02:21:21Z/rs0/key.db/WiredTigerLog.0000000001.gz> to </data/db/key.db/WiredTigerLog.0000000001>
2023-08-11T04:05:31.000+0000 I [restore/2023-08-11T04:04:42.434647131Z] copy <2023-08-11T02:21:21Z/rs0/key.db/WiredTigerLog.0000000002.gz> to </data/db/key.db/WiredTigerLog.0000000002>
2023-08-11T04:05:31.000+0000 D [restore/2023-08-11T04:04:42.434647131Z] download stat: buf 1610612736, arena 268435456, span 33554432, spanNum 8, cc 6, [{1 0} {1 0} {1 0} {1 0} {1 0} {1 0}]
2023-08-11T04:05:31.000+0000 I [restore/2023-08-11T04:04:42.434647131Z] preparing data
2023/08/11 04:05:32 mongod process: signal: aborted (core dumped), [1691726732:305191][261:0x7f21e08ebbc0], WT_CURSOR.next: [WT_VERB_EXTENSION][ERROR]: libcrypto: error:06065064:digital envelope routines:EVP_DecryptFinal_ex:bad decrypt:crypto/evp/evp_enc.c:643:

[1691726732:305217][261:0x7f21e08ebbc0], WT_CURSOR.next: [WT_VERB_EXTENSION][ERROR]: setting return code to WT_PANIC
[1691726732:305247][261:0x7f21e08ebbc0], txn-recover: [WT_VERB_DEFAULT][ERROR]: __wt_txn_recover, 1125: Recovery failed: WT_PANIC: WiredTiger library panic
[1691726732:305598][261:0x7f21e08ebbc0], connection: [WT_VERB_DEFAULT][ERROR]: __wt_cache_destroy, 364: cache server: exiting with 3 pages in memory and 0 pages evicted
[1691726732:305617][261:0x7f21e08ebbc0], connection: [WT_VERB_DEFAULT][ERROR]: __wt_cache_destroy, 371: cache server: exiting with 6566 bytes in memory
[1691726732:305621][261:0x7f21e08ebbc0], connection: [WT_VERB_DEFAULT][ERROR]: __wt_cache_destroy, 375: cache server: exiting with 6212 bytes dirty and 1 pages dirty
[1691726732:313862][261:0x7f21e08ebbc0], WT_CURSOR.next: [WT_VERB_EXTENSION][ERROR]: libcrypto: error:06065064:digital envelope routines:EVP_DecryptFinal_ex:bad decrypt:crypto/evp/evp_enc.c:643:

[1691726732:313881][261:0x7f21e08ebbc0], WT_CURSOR.next: [WT_VERB_EXTENSION][ERROR]: setting return code to WT_PANIC
[1691726732:313890][261:0x7f21e08ebbc0], txn-recover: [WT_VERB_DEFAULT][ERROR]: __wt_txn_recover, 1125: Recovery failed: WT_PANIC: WiredTiger library panic
[1691726732:314036][261:0x7f21e08ebbc0], connection: [WT_VERB_DEFAULT][ERROR]: __wt_cache_destroy, 364: cache server: exiting with 3 pages in memory and 0 pages evicted
[1691726732:314046][261:0x7f21e08ebbc0], connection: [WT_VERB_DEFAULT][ERROR]: __wt_cache_destroy, 371: cache server: exiting with 6566 bytes in memory
[1691726732:314049][261:0x7f21e08ebbc0], connection: [WT_VERB_DEFAULT][ERROR]: __wt_cache_destroy, 375: cache server: exiting with 6212 bytes dirty and 1 pages dirty
[1691726732:321388][261:0x7f21e08ebbc0], WT_CURSOR.next: [WT_VERB_EXTENSION][ERROR]: libcrypto: error:06065064:digital envelope routines:EVP_DecryptFinal_ex:bad decrypt:crypto/evp/evp_enc.c:643:

[1691726732:321401][261:0x7f21e08ebbc0], WT_CURSOR.next: [WT_VERB_EXTENSION][ERROR]: setting return code to WT_PANIC
[1691726732:321409][261:0x7f21e08ebbc0], txn-recover: [WT_VERB_DEFAULT][ERROR]: __wt_txn_recover, 1125: Recovery failed: WT_PANIC: WiredTiger library panic
[1691726732:321590][261:0x7f21e08ebbc0], connection: [WT_VERB_DEFAULT][ERROR]: __wt_cache_destroy, 364: cache server: exiting with 3 pages in memory and 0 pages evicted
[1691726732:321600][261:0x7f21e08ebbc0], connection: [WT_VERB_DEFAULT][ERROR]: __wt_cache_destroy, 371: cache server: exiting with 6566 bytes in memory
[1691726732:321604][261:0x7f21e08ebbc0], connection: [WT_VERB_DEFAULT][ERROR]: __wt_cache_destroy, 375: cache server: exiting with 6212 bytes dirty and 1 pages dirty
2023/08/11 04:05:32 mongod log:
{"t":{"$date":"2023-08-11T04:05:31.825+00:00"},"s":"I", "c":"CONTROL", "id":23285, "ctx":"-","msg":"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'"}

Expected Result:

The restore completes and mongod starts successfully.

Actual Result:

The restore fails: mongod aborts with a WiredTiger panic caused by a libcrypto "bad decrypt" error.

Additional Information:

@Sergey_Pronin this is the topic I mentioned

A few more updates: the above error occurs when the backup is taken without explicitly disabling data-at-rest encryption. (The official docs note that data-at-rest encryption is turned on by default.)

When I set "enableEncryption: false", I was able to restore successfully to the new cluster from the backup data. This doesn't seem to be mentioned in the backup documentation.
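
For anyone hitting the same thing, this is roughly where the setting lives, passed through the custom mongod configuration in cr.yaml (a sketch; security.enableEncryption is the underlying mongod option, and exact placement may differ between operator versions):

spec:
  replsets:
  - name: rs0
    configuration: |
      security:
        enableEncryption: false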

I have set "pitr.enabled=false" for both the cluster backup and the restore.
However, the new cluster comes up in a ghost state and I can't get the data from the cluster.
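
For reference, this is the flag under the backup section of cr.yaml (other backup fields omitted):

spec:
  backup:
    enabled: true
    pitr:
      enabled: false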

Got the same error during replication while trying to create a multi-cluster deployment as per this article. Interestingly, one out of the 3 nodes in the new replica appears to be progressing without issues. Is it possible that all 3 of them are using the encryption key for, say, rs0-2, rather than each taking the key that corresponds to its mirror node rs0-0, rs0-1, and rs0-2?
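
Since the bad decrypt error points at a key mismatch, one thing worth checking is whether all sites actually share the same data-at-rest encryption key. A rough way to compare, assuming the operator's default secret name <cluster-name>-mongodb-encryption-key (adjust the secret names and kube contexts to your setup):

kubectl --context main-site get secret my-cluster-name-mongodb-encryption-key -o yaml
kubectl --context replica-site get secret my-cluster-name-mongodb-encryption-key -o yaml

If the base64-encoded key values differ between the sites, the replica members would fail to decrypt the data files with exactly this kind of error.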