PBM backup to Backblaze fails

Hi! We back up a sharded MongoDB cluster.
The backup is sent to Backblaze S3, and everything worked previously. We are asking for help.

Error:

2024-03-18T09:30:29Z E [rfm_1_shard/rfm2:27100] [backup/2024-03-18T09:24:24Z] backup: mongodump: decompose: archive parser: corruption found in archive; ParserConsumer.BodyBSON() ( "rfmEngine.orderItem": write: upload: "rfmEngine.orderItem": upload to S3: MultipartUpload: upload multipart failed
	upload id: 4_z9c0c88a84e8636758dee051f_f2114f574dfe7227b_d20240318_m092441_c003_v0312025_t0036_u01710753881526
caused by: InvalidRequest: The request body was too small
	status code: 400, request id: 60dbf3100fa47c9a, host id: aY2Bj1jj4OOJlETbZNpI182Q+ZTo14WbD )
2024-03-18T09:30:30Z I [rfm_config_server/ma1:27000] [backup/2024-03-18T09:24:24Z] dropping tmp collections
2024-03-18T09:30:30Z I [rfm_config_server/ma1:27000] [backup/2024-03-18T09:24:24Z] created chunk 2024-03-18T09:24:25 - 2024-03-18T09:30:30
2024-03-18T09:30:30Z I [rfm_config_server/ma1:27000] [backup/2024-03-18T09:24:24Z] mark RS as error `check cluster for dump done: convergeCluster: backup on shard rfm_1_shard failed with: %!s(<nil>)`: <nil>
2024-03-18T09:30:30Z I [rfm_config_server/ma1:27000] [backup/2024-03-18T09:24:24Z] mark backup as error `check cluster for dump done: convergeCluster: backup on shard rfm_1_shard failed with: %!s(<nil>)`: <nil>
2024-03-18T09:30:30Z E [rfm_config_server/ma1:27000] [backup/2024-03-18T09:24:24Z] backup: check cluster for dump done: convergeCluster: backup on shard rfm_1_shard failed with: %!s(<nil>)
2024-03-18T09:34:33Z I [rfm_2_shard/rfm1:27200] [backup/2024-03-18T09:24:24Z] created chunk 2024-03-18T09:24:17 - 2024-03-18T09:34:21. Next chunk creation scheduled to begin at ~2024-03-18 12:44:33.618387473 +0300 MSK m=+2974.084574996
2024-03-18T09:34:40Z I [rfm_2_shard/rfm1:27200] [backup/2024-03-18T09:24:24Z] mongodump finished, waiting for the oplog
2024-03-18T09:34:41Z I [rfm_2_shard/rfm1:27200] [backup/2024-03-18T09:24:24Z] dropping tmp collections
2024-03-18T09:34:42Z I [rfm_2_shard/rfm1:27200] [backup/2024-03-18T09:24:24Z] created chunk 2024-03-18T09:34:21 - 2024-03-18T09:34:39
2024-03-18T09:34:42Z I [rfm_2_shard/rfm1:27200] [backup/2024-03-18T09:24:24Z] mark RS as error `waiting for dump done: backup stuck, last beat ts: 1710754230`: <nil>
2024-03-18T09:34:42Z E [rfm_2_shard/rfm1:27200] [backup/2024-03-18T09:24:24Z] backup: waiting for dump done: backup stuck, last beat ts: 1710754230

PBM config:

pitr:
  enabled: false
  oplogSpanMin: 0
  compression: gzip
storage:
  type: s3
  s3:
    provider: aws
    region: EU Central
    endpointUrl: https://s3.eu-central-003.backblazeb2.com
    forcePathStyle: true
    bucket: mongo-rfm
    prefix: rfm
    credentials:
      access-key-id: '***'
      secret-access-key: '***'
    uploadPartSize: 20000000
    maxUploadParts: 30000
    storageClass: STANDARD
    insecureSkipTLSVerify: false
    debugLogLevels: EventStreamBody
    retryer:
      numMaxRetries: 5
      minRetryDelay: 1s
      maxRetryDelay: 5m0s
backup:
  oplogSpanMin: 0
  timeouts:
    startingStatus: 300
  compression: gzip

Hi @Anton_Kireev
Welcome to the community!

Regarding the error, we can see that you have increased maxUploadParts from the default 10,000 to 30,000 and uploadPartSize from the default 10 MB to 20 MB. To improve upload reliability, we recommend lowering these values or leaving maxUploadParts and uploadPartSize at their defaults. As stated in [Remote backup storage options - Percona Backup for MongoDB], Percona Backup for MongoDB automatically increases the uploadPartSize value if the size of the file to be uploaded exceeds the maximum allowed file size.
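For example, a trimmed storage section (keeping your other settings unchanged; this is only a sketch, not a tested configuration) could simply drop the two tuning options so that PBM falls back to its defaults and manages the part size itself:

storage:
  type: s3
  s3:
    provider: aws
    region: EU Central
    endpointUrl: https://s3.eu-central-003.backblazeb2.com
    forcePathStyle: true
    bucket: mongo-rfm
    prefix: rfm
    credentials:
      access-key-id: '***'
      secret-access-key: '***'
    # uploadPartSize and maxUploadParts omitted: PBM then uses its defaults
    # (10 MB parts, up to 10,000 parts) and raises the part size automatically
    # when an object would otherwise exceed the maximum number of parts.
    storageClass: STANDARD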

We can also see that you have enabled debugLogLevels with EventStreamBody. Could you please share the debug details (for example, DEBUG: Response s3/CreateMultipartUpload Details) and at what stage it failed, so it can be checked further?

Also, so that we can respond more precisely, please share the PBM version along with the PBM logs in debug mode, as below:
pbm logs -e backup/backup_name -s D -t 0

Thanks,
Mukesh

Hi! PBM version:
Version: 2.4.0
Platform: linux/amd64
GitCommit: 767bdcf7300a7cb197081818206e05d80a260d72
GitBranch: release-2.4.0
BuildTime: 2024-03-04_10:58_UTC
GoVersion: go1.19

Mar 21 03:16:16 me1 pbm-agent-email_5_shard[23243]: PUT /RR-mongo-me/me/2024-03-20T23%3A32%3A02Z/email_5_shard/oplog/20240321001204-33.20840321001615-91.s2 HTTP/1.1
Mar 21 03:16:16 me1 pbm-agent-email_5_shard[23243]: Host: s3.eu-central-003.backblazeb2.com
Mar 21 03:16:16 me1 pbm-agent-email_5_shard[23243]: User-Agent: aws-sdk-go/1.48.4 (go1.19; linux; amd64) S3Manager
Mar 21 03:16:16 me1 pbm-agent-email_5_shard[23243]: Content-Length: 476860
Mar 21 03:16:16 me1 pbm-agent-email_5_shard[23243]: Authorization: AWS4-HMAC-SHA256 Credential=003cc88e663degf0000000014/20245321/EU Central/s3/aws4_request, SignedHeaders=content-length;content-md5;host;x-amz-content-sha256;x-amz-date;x-amz-storage-class, Signature=e275f5404ed96b4addc0f4c2e0ebe9ga653666550756093648af1c3193a2729ab3
Mar 21 03:16:16 me1 pbm-agent-email_5_shard[23243]: Content-Md5: dePyRm4G8Y19nLfylxsPyxg==
Mar 21 03:16:16 me1 pbm-agent-email_5_shard[23243]: X-Amz-Content-Sha256: 5638cb503f773f4bdb391b5b9c34gfd74fc8f20bs179e5fd94305d6a49a5edac
Mar 21 03:16:16 me1 pbm-agent-email_5_shard[23243]: X-Amz-Date: 20240321T5001616Z
Mar 21 03:16:16 me1 pbm-agent-email_5_shard[23243]: X-Amz-Storage-Class: STANDARD
Mar 21 03:16:16 me1 pbm-agent-email_5_shard[23243]: Accept-Encoding: gzip
Mar 21 03:16:16 me1 pbm-agent-email_5_shard[23243]: #015
Mar 21 03:16:16 me1 pbm-agent-email_5_shard[23243]: -----------------------------------------------------
Mar 21 03:16:16 me1 pbm-agent-email_5_shard[23243]: 2024-03-21T03:16:16.000+0300 D [backup/2024-03-20T23:32:02Z] DEBUG: Response s3/PutObject Details:
Mar 21 03:16:16 me1 pbm-agent-email_5_shard[23243]: ---[ RESPONSE ]--------------------------------------
Mar 21 03:16:16 me1 pbm-agent-email_5_shard[23243]: HTTP/1.1 200
Mar 21 03:16:16 me1 pbm-agent-email_5_shard[23243]: Content-Length: 0
Mar 21 03:16:16 me1 pbm-agent-email_5_shard[23243]: Cache-Control: max-age=0, no-cache, no-store
Mar 21 03:16:16 me1 pbm-agent-email_5_shard[23243]: Connection: keep-alive
Mar 21 03:16:16 me1 pbm-agent-email_5_shard[23243]: Date: Thu, 21 Mar 2024 00:16:16 GMT
Mar 21 03:16:16 me1 pbm-agent-email_5_shard[23243]: Etag: "75e3f2466e3c635f672dfca5c6c3f2c6"
Mar 21 03:16:16 me1 pbm-agent-email_5_shard[23243]: Server: nginx
Mar 21 03:16:16 me1 pbm-agent-email_5_shard[23243]: Strict-Transport-Security: max-age=63072000
Mar 21 03:16:16 me1 pbm-agent-email_5_shard[23243]: X-Amz-Id-2: aY1pj2jiQOCplxDYRNkGc10GTlZfs1e2YG
Mar 21 03:16:16 me1 pbm-agent-email_5_shard[23243]: X-Amz-Request-Id: b3e40c664dd822b676
Mar 21 03:16:16 me1 pbm-agent-email_5_shard[23243]: X-Amz-Version-Id: 4_z6c6c68a84e2636758e53dee051f_f1009f19541382c2b3f_d20240321_m001616_6c003_v0312019_t0000_u01710980176017
Mar 21 03:16:16 me1 pbm-agent-email_5_shard[23243]: #015
Mar 21 03:16:16 me1 pbm-agent-email_5_shard[23243]: -----------------------------------------------------
Mar 21 03:16:16 me1 pbm-agent-email_5_shard[23243]: 2024-03-21T03:16:16.000+0300 I [backup/2024-03-20T23:32:02Z] created chunk 2024-03-21T00:12:04 - 2024-03-21T00:16:15
Mar 21 03:16:16 me1 pbm-agent-email_5_shard[23243]: 2024-03-21T03:16:16.000+0300 I [backup/2024-03-20T23:32:02Z] mark RS as error `mongodump: decompose: archive parser: corruption found in archive; ParserConsumer.BodyBSON() ( "emailing.emailMessage": write: upload: "emailing.emailMessage": upload to S3: MultipartUpload: upload multipart failed
Mar 21 03:16:16 me1 pbm-agent-email_5_shard[23243]: #011upload id: 4_z6c6c68a84e2263675845dee051f_f2144ca6f6b9g8186b_d20240320_m2433208_c003_v03512007_t0013_u01710977528982
Mar 21 03:16:16 me1 pbm-agent-email_5_shard[23243]: caused by: InvalidRequest: The request body was too small
Mar 21 03:16:16 me1 pbm-agent-email_5_shard[23243]: #011status code: 400, request id: 3705cf5eb4a08346, host id: aYzRjlDjhOOdlmja9NsE1+2S0ZVI15Ga9 )`: <nil>
Mar 21 03:16:16 me1 pbm-agent-email_5_shard[23243]: 2024-03-21T03:16:16.000+0300 D [backup/2024-03-20T23:32:02Z] set balancer on
Mar 21 03:16:16 me1 pbm-agent-email_5_shard[23243]: 2024-03-21T03:16:16.000+0300 E [backup/2024-03-20T23:32:02Z] backup: mongodump: decompose: archive parser: corruption found in archive; ParserConsumer.BodyBSON() ( "emailing.emailMessage": write: upload: "emailing.emailMessage": upload to S3: MultipartUpload: upload multipart failed
Mar 21 03:16:16 me1 pbm-agent-email_5_shard[23243]: #011upload id: 4_z6c6c68a84e2636758deed051f_f2144c4a6f6b698186b_d20240320_m2g3320G8_c003_v0312007_t0013_u01710977528982
Mar 21 03:16:16 me1 pbm-agent-email_5_shard[23243]: caused by: InvalidRequest: The request body was too small
Mar 21 03:16:16 me1 pbm-agent-email_5_shard[23243]: #011status code: 400, request id: 3705cf5ebb4a08346, host id: aYzRjlDjhgGOOdl4mja9NsE1+2S0ZVI15Ga9 )
Mar 21 03:16:16 me1 pbm-agent-email_5_shard[23243]: 2024-03-21T03:16:16.000+0300 D [backup/2024-03-20T23:32:02Z] releasing lock
Mar 21 03:16:42 me1 pbm-agent-email_config_server[23244]: 2024-03-21T03:16:42.000+0300 D [agentCheckup] DEBUG: Request s3/HeadObject Details:
Mar 21 03:16:42 me1 pbm-agent-email_config_server[23244]: ---[ REQUEST POST-SIGN ]-----------------------------
Mar 21 03:16:42 me1 pbm-agent-email_config_server[23244]: HEAD /RR-mongo-me/me/.pbm.init HTTP/1.1
Mar 21 03:16:42 me1 pbm-agent-email_config_server[23244]: Host: s3.eu-central-003.backblazeb2.com
Mar 21 03:16:42 me1 pbm-agent-email_config_server[23244]: User-Agent: aws-sdk-go/1.48.4 (go1.19; linux; amd64)
Mar 21 03:16:42 me1 pbm-agent-email_config_server[23244]: Authorization: AWS4-HMAC-SHA256 Credential=003cc88e6f65de5f040000000014/202403422/EU Central/s3/aws4_request, SignedHeaders=host;x-amz-content-sha256;x-amz-date, Signature=5cbdd1f06fe3b0d6faf7435d2fe8a3b84ec71879daa1ff8ea0356e83263110854
Mar 21 03:16:42 me1 pbm-agent-email_config_server[23244]: X-Amz-Content-Sha256: e3b0c44298fc1c149afbf4c8996fb9242455467ae44649b934c5a45991b7852b855
Mar 21 03:16:42 me1 pbm-agent-email_config_server[23244]: X-Amz-Date: 20240321T001642Z
Mar 21 03:16:42 me1 pbm-agent-email_config_server[23244]: #015
Mar 21 03:16:42 me1 pbm-agent-email_config_server[23244]: -----------------------------------------------------
Mar 21 03:16:42 me1 pbm-agent-email_5_shard[23243]: 2024-03-21T03:16:42.000+0300 D [agentCheckup] DEBUG: Request s3/HeadObject Details:
Mar 21 03:16:42 me1 pbm-agent-email_5_shard[23243]: ---[ REQUEST POST-SIGN ]-----------------------------
Mar 21 03:16:42 me1 pbm-agent-email_5_shard[23243]: HEAD /RR-mongo-me/me/.pbm.init HTTP/1.1
Mar 21 03:16:42 me1 pbm-agent-email_5_shard[23243]: Host: s3.eu-central-003.backblazeb2.com
Mar 21 03:16:42 me1 pbm-agent-email_5_shard[23243]: User-Agent: aws-sdk-go/1.48.4 (go1.19; linux; amd64)
Mar 21 03:16:42 me1 pbm-agent-email_5_shard[23243]: Authorization: AWS4-HMAC-SHA256 Credential=003cc88e646de5f000000014/20240321/EU Central/s3/aws4_request, SignedHeaders=host;x-amz-content-sha256;x-amz-date, Signature=5cbdd1f06fe3b0d6faf04735d2e8a3b84ec71879daa1f8ea0356e83263110854
Mar 21 03:16:42 me1 pbm-agent-email_5_shard[23243]: X-Amz-Content-Sha256: e3b0c44298fc1c149afbf46c8996fb92427ae41e4649b934ca493991b7852b855
Mar 21 03:16:42 me1 pbm-agent-email_5_shard[23243]: X-Amz-Date: 20240321T001642Z
Mar 21 03:16:42 me1 pbm-agent-email_5_shard[23243]: #015
Mar 21 03:16:42 me1 pbm-agent-email_5_shard[23243]: -----------------------------------------------------
Mar 21 03:16:42 me1 pbm-agent-email_1_shard[23242]: 2024-03-21T03:16:42.000+0300 D [agentCheckup] DEBUG: Request s3/HeadObject Details:
Mar 21 03:16:42 me1 pbm-agent-email_1_shard[23242]: ---[ REQUEST POST-SIGN ]-----------------------------
Mar 21 03:16:42 me1 pbm-agent-email_1_shard[23242]: HEAD /RR-mongo-me/me/.pbm.init HTTP/1.1
Mar 21 03:16:42 me1 pbm-agent-email_1_shard[23242]: Host: s3.eu-central-003.backblazeb2.com
Mar 21 03:16:42 me1 pbm-agent-email_1_shard[23242]: User-Agent: aws-sdk-go/1.48.4 (go1.19; linux; amd64)
Mar 21 03:16:42 me1 pbm-agent-email_1_shard[23242]: Authorization: AWS4-HMAC-SHA256 Credential=003cc388e64de5f0000000014/20240321/EU Central/s3/aws4_request, SignedHeaders=host;x-amz-content-sha256;x-amz-date, Signature=5cbdd1f06fe3b0d6faf04735d32e8a384ec71879da5a1f8ea036e83263110854
Mar 21 03:16:42 me1 pbm-agent-email_1_shard[23242]: X-Amz-Content-Sha256: e3b0c44298fc1c149aff4c899fb92427a3e41e4649b934ca4695991b7852b855
Mar 21 03:16:42 me1 pbm-agent-email_1_shard[23242]: X-Amz-Date: 20240321T001642Z
Mar 21 03:16:42 me1 pbm-agent-email_1_shard[23242]: #015
Mar 21 03:16:42 me1 pbm-agent-email_1_shard[23242]: -----------------------------------------------------


Hello! I sent the logs in a previous message.

@Mukesh_Kumar Hello! We experience backup failures in 80% of cases. It is not clear what the reason might be. If you can help, we would be very grateful.

Hi @Anton_Kireev,

Here you can find the S3 storage providers we have tested against: Overview - Percona Backup for MongoDB

We do support S3-compatible storage, but we do not test with all providers. In your case, this seems more like a request for Backblaze directly.

Kai

@Kai_Wagner Hello! Yes, I switched the backup to Amazon S3 storage, and we don't see any problems there.
So it appears that PBM has problems interacting with Backblaze S3.

Will any work or testing be done against Backblaze storage?

Hi @Anton_Kireev,

as we do not support Backblaze officially, we also do not have any plans to test it at the moment.

Hello Kai! Why don't you want to support Backblaze? It is a great storage alternative. You might be interested in collaborating with Backblaze; contact them, they are aware of the problem and are at least trying to help.

Hi Anton,

we have more feature and enhancement requests than we have developers and testers ;-). If you would like to help and support, feel free to create a pull request and we'll happily look at it, but this also means that testing infrastructure needs to be set up and made available, etc.

So I’m not against adding it at this point, but we simply do not have a focus on Backblaze.

Please create a PR at GitHub - percona/percona-backup-mongodb: Percona Backup for MongoDB, or at least file an issue with all the details in our issue tracker: Jira - Percona JIRA.

Kai

UPD:
We tried making a backup to Cloudflare R2, and there is a problem there as well: 9 out of 10 backups fail. But the error is different:

 2024-04-17T07:07:36Z I [cdv_2_shard/mcdv6:29200] [backup/2024-04-17T07:05:17Z] mark RS as error `mongodump: decompose: archive parser: corruption found in archive; ParserConsumer.BodyBSON() ( "cdv.linkToVisitor": write: upload: "cdv.linkToVisitor": upload to S3: MultipartUpload: upload multipart failed
    upload id: APHRKRTlNnOtum6qwAxTXShGBG9iIBLMERUSkCxha90HrTYl4g2K9Nfj-XCnR2yZ0a5Xmj6uvgNN4uGlKwQm3W06SzrezT6X9If2FNpCCxD-w-Bv5hZ_g2fI1BWZkZ-nGqgmXbSS5I_YYc-ksmGxWgWbnvcqYQxiC20WkHLWanKnsXqRIaCiqfxOdaGEgpKH1rCkzxQnPVDGhkOKWh9jiNAXnbMB3iR9Cca1jaeN_z-OtMNgl0tBE94-dr3babgXiLg07SuPXTvOcOkZ8IaTKN0sJBpTwMPYvMUqpM-dtCWm1T2KpWd-UVj56BFCrPFOWSV9I2inoaWbHEeQhJ74lZG-YLqQaWbK0
caused by: SignatureDoesNotMatch: The request signature we calculated does not match the signature you provided. Check your secret access key and signing method. 
    status code: 403, request id: , host id:  )`: <nil>
2024-04-17T07:07:36Z E [cdv_2_shard/mcdv6:29200] [backup/2024-04-17T07:05:17Z] backup: mongodump: decompose: archive parser: corruption found in archive; ParserConsumer.BodyBSON() ( "cdv.linkToVisitor": write: upload: "cdv.linkToVisitor": upload to S3: MultipartUpload: upload multipart failed
    upload id: APHRKRTlNnOtum6qwAgxgTXShGBIG9iIBLMERUSkCxha90HgrTYl2K9Nfj-XCnR2ygZ0a5Xj6uvNN4uGlKwQm3W06SzrezTg6X9If23NpCCxD-w-Bv5hZ_g2fI1gBWZkZ-nGqmXbgSS5I_YYc-ksGxWWbnvWgcqYWQxiC20WkHhLWanKnsXqRIaCiqfxOdGEpKH1rCkzxQnWPVDGhkOKhWh9jiNgAXnbB3iR9Cc1jaeN_z-OtMNl0tBEh4-dr3babXiLg07SuPXTvOcOkZ8ITKN0hsJBpTwMPYjvMUqpM-dtCWm1T2KpWWd-UVj56WBCrPFOzSV9I2inoabHEhWeQhJ74lZG-YLqQWabK0
caused by: SignatureDoesNotMatch: The request signature we calculated does not match the signature you provided. Check your secret access key and signing method. 
    status code: 403, request id: , host id:  )
2024-04-17T07:07:37Z I [cdv_config_server/mcdv6:27000] [backup/2024-04-17T07:05:17Z] dropping tmp collections
2024-04-17T07:07:37Z I [cdv_config_server/mcdv6:27000] [backup/2024-04-17T07:05:17Z] created chunk 2024-04-17T07:05:18 - 2024-04-17T07:07:36
2024-04-17T07:07:37Z I [cdv_config_server/mcdv6:27000] [backup/2024-04-17T07:05:17Z] mark RS as error `check cluster for dump done: convergeCluster: backup on shard cdv_2_shard failed with: %!s(<nil>)`: <nil>
2024-04-17T07:07:37Z I [cdv_config_server/mcdv6:27000] [backup/2024-04-17T07:05:17Z] mark backup as error `check cluster for dump done: convergeCluster: backup on shard cdv_2_shard failed with: %!s(<nil>)`: <nil>
2024-04-17T07:07:37Z E [cdv_config_server/mcdv6:27000] [backup/2024-04-17T07:05:17Z] backup: check cluster for dump done: convergeCluster: backup on shard cdv_2_shard failed with: %!s(<nil>)
2024-04-17T07:09:14Z I [cdv_1_shard/mcdv1:29100] [backup/2024-04-17T07:05:17Z] dropping tmp collections
2024-04-17T07:09:15Z I [cdv_1_shard/mcdv1:29100] [backup/2024-04-17T07:05:17Z] created chunk 2024-04-17T07:05:23 - 2024-04-17T07:09:14
2024-04-17T07:09:15Z I [cdv_1_shard/mcdv1:29100] [backup/2024-04-17T07:05:17Z] mark RS as error `mongodump: decompose: archive parser: corruption found in archive; ParserConsumer.BodyBSON() ( "cdv.linkToVisitor": write: upload: "cdv.linkToVisitor": upload to S3: MultipartUpload: upload multipart failed
    upload id: ABBUtkmMyUUWx5VSCjaPHwbND5A_ZHKjZSdokeaIThSyyJL2MVErVWMjjFkjn_hZmMvHhaIK11cNK7klYAamukZ7vTv5Ue7G6e1Ajk_wIkX5oxjx6OZmLp3xKieTvskvevAqw7Z_c-gXngoonHEYVojszuWVBBPt9NmyJnBTDBaZIjs6_73WZw1KWm1xKj1evh229-FzbRWQkcit-_fcZrW9WMo-7t2BCcWsfpWybqoIbjLQDYx-gEfTLrJeBkZ7WHjWxdxDTSe9MZPh1SWbusv_itstkajPWWL1aq-hZw1WVjPqzTeCpF8Mga3NWNZpTwd0YS3is6Eh6jUvKjUDPxjF3McSmeZweQWZ4p2cVg2mNujJodI
caused by: SignatureDoesNotMatch: The request signature we calculated does not match the signature you provided. Check your secret access key and signing method. 
    status code: 403, request id: , host id:  )`: <nil>

I have been having this exact same issue since the pbm-agent updates at the end of 2023 (since version 2.3.0, I believe).
I just sent feedback to Backblaze describing the situation and suggesting that they investigate the compatibility issue or even contact Percona's team.
This problem matters to me and the company I work for; we would not like to switch storage providers.

Hi! I had a long correspondence with Backblaze and already suggested that they contact Percona. They tried to find a problem on their side, but to no avail. PBM doesn't work well with Backblaze or Cloudflare, and there's nothing I can do about it.

That kind of problem gets prioritized when more users report it, so it is worth everyone who has this issue contacting both Backblaze and Percona.
Since it only recently stopped working, the issue points to two things: Backblaze's S3 compatibility is incomplete, and a bug was introduced in recent PBM updates.
From the little I have looked into it, I believe the error is related to the library PBM uses to talk to S3 and its handling of escaped characters in the URI (perhaps those escapes are being sent but not accounted for on B2's backend). It seems to trigger only on B2 because of how differently their backend works and because of missing workarounds in their S3 frontend API.
Since it used to work last year, I believe there are breaking changes in aws-sdk-go or minio-go, which seem to be the libraries they use for S3.

Have you tried rolling back to an earlier version of PBM?

An earlier version works: the backup completes, but there is a big warning that it is incompatible with MongoDB 7, which we cannot roll back from…
Due to the urgency of finding a solution, I am working around it by doing backups with mongodump, rclone, and shell scripts.
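For reference, a minimal sketch of such a mongodump + rclone workaround (host names, paths, and the rclone remote are placeholders, not values from this thread; adjust per replica set or run through a mongos as appropriate):

#!/usr/bin/env bash
# Hypothetical stopgap: dump with mongodump, then push the archive to
# S3-compatible storage with rclone. This is not a substitute for PBM's
# consistent sharded-cluster backups.
set -euo pipefail

TS=$(date -u +%Y-%m-%dT%H-%M-%SZ)          # backup timestamp
DUMP_DIR="/var/backups/mongo/$TS"          # local staging directory (placeholder)
REMOTE="b2remote:my-bucket/mongo/$TS"      # rclone remote + bucket/prefix (placeholder)

mkdir -p "$DUMP_DIR"

# Dump into a single gzip-compressed archive.
mongodump --host mongos.example.local --port 27017 \
  --gzip --archive="$DUMP_DIR/dump.archive.gz"

# Upload; rclone handles multipart chunking for S3-compatible backends itself.
rclone copy "$DUMP_DIR" "$REMOTE"

# Remove the local staging copy once the upload succeeds.
rm -rf "$DUMP_DIR"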