PBM: GCS storage not working as intended in 2.5.0

Hello,

We are using PBM 2.5.0 (upgraded from 1.6.1) on Debian and are trying to move the storage from local to GCS (Google Cloud Storage), but we have encountered a few issues.
First, the documentation is not clear at all for GCS: the "Configure remote backup storage" page of the Percona Backup for MongoDB docs doesn't have any reference to GCS.

When we tried to set up the remote storage to GCS, we created a service account with the Storage Admin role (the highest permission level possible for GCS) and got the following errors.

From pbm status:

[S]: pbm-agent v2.5.0 FAILED status:
      > ERROR with storage: storage check failed with: get S3 object header: Forbidden: Forbidden
	status code: 403, request id: , host id: 

From pbm-logs:

[agentCheckup] check storage connection: storage check failed with: get S3 object header: Forbidden: Forbidden
	status code: 403, request id: , host id: 

This happens on all nodes of our clusters, on both pbm-agent (port 27019) and pbm-agent-data (port 27018).

We also tried the gcloud CLI with the same account and with another one, impersonating the service account, and uploading a file produced no error.

We also tried switching to another bucket, and it made no difference; the service account has full Storage Admin rights on that bucket as well. We also tried changing our bucket from multi-region to a single region: still the same errors.
The bucket is empty, and we know PBM needs to create the .pbm.init file; it should be able to do so with the service account's permissions.
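For reference, this is the kind of manual check we ran with the gcloud CLI (bucket name, file, and service account email are placeholders):

```shell
# Upload a test file while impersonating the PBM service account
gcloud storage cp ./test-file gs://bucket_name/test-file \
  --impersonate-service-account=pbm-backup@my-project.iam.gserviceaccount.com

# Confirm the object is there, then clean it up
gcloud storage ls gs://bucket_name/test-file
gcloud storage rm gs://bucket_name/test-file
```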
Here is our PBM config file:

storage:
  type: s3
  s3:
    provider: gcs
    region: eu
    bucket: bucket_name
    endpointUrl: https://storage.googleapis.com
    credentials:
      access-key-id: "*******"
      secret-access-key: "-----BEGIN PRIVATE KEY-----*******"

I also have a question: why does the URL shown by pbm status for GCS begin with s3 and look like this: s3://https://storage.googleapis.com/ ?

Note that we didn’t change the config file between 1.6.1 and 2.5.0.
The local storage and logical cluster backup are working fine.

Can someone help us?
Thank you

Hi @craynaud ,

Look at the example in "Configure remote backup storage - Percona Backup for MongoDB":

storage:
  type: s3
  s3:
    region: us-east1
    bucket: pbm-testing
    prefix: pbm/test
    endpointUrl: https://storage.googleapis.com
    credentials:
      access-key-id: <your-access-key-id-here>
      secret-access-key: <your-secret-key-here>

The provider is automatically detected from the endpointUrl. Please remove "provider: gcs" from your configuration; I believe it may prevent the provider from being set correctly.

Hello @radoslaw.szulgo ,

Thank you for your answer :slight_smile: I have tested it, but I still get the same error, and if I check my config file it is correctly set to:

  type: s3
  s3:
    region: eu
    bucket: bucket_name
    endpointUrl: https://storage.googleapis.com
    credentials:
      access-key-id: "******"
      secret-access-key: "-----BEGIN PRIVATE KEY-----***"

But in the running config the provider still appears:

# pbm config
pitr:
  enabled: false
  oplogSpanMin: 0
  compression: gzip
  compressionLevel: -1
storage:
  type: s3
  s3:
    provider: gcs
    region: eu
    endpointUrl: https://storage.googleapis.com
    forcePathStyle: true
    bucket: bucket_name
    credentials:
      access-key-id: '***'
      secret-access-key: '***'
    maxUploadParts: 10000
    storageClass: STANDARD
    insecureSkipTLSVerify: false
backup:
  oplogSpanMin: 0
  priority:
[....]

Is that OK?

Could the problem be caused by region: eu? Since it's a multi-region bucket, I set the region simply to eu.

Also, I had put the provider: gcs parameter in the config file because the documentation mentions it here: Remote backup storage options - Percona Backup for MongoDB

Regards,

I think you should use a specific region like “EUROPE-WEST3”.

I've tried two different approaches, with the same result:
1 - a specific region in the config file with the multi-region bucket: same errors
2 - a specific region in the config file with a regional bucket in that same region: same errors

And my service account is still Storage Admin on both buckets.

Regards,

Hi @craynaud,

I’ve noticed that your secret-access-key starts with "-----BEGIN PRIVATE KEY-----", which is not the expected format. Can you please check the docs? They contain a sample of what the access ID and secret should look like, and how to create them.

@craynaud any news from you? Have you resolved the issue?

Hello,

I'm checking it just now; I was on vacation :slight_smile:

I have a question though: I had set up a key on the service account from the IAM interface, which is the one I'm currently using and the one that begins with "-----BEGIN PRIVATE KEY-----". Why can't we use that one, and why do we have to create a specific HMAC key on GCS for the service account?
I mean, is there any technical reason behind that?

Also, the documentation related to GCS seems not quite on point: this requirement is not specified in it for the moment, nor is there any explanation of why.

Best regards,

Hello,

I've tested your solution but I still get a 403 from GCS, and without proper documentation I don't even know what to put in the access-key-id line; is it the service account email now?
For the secret-access-key I've put the HMAC key as you told me.

And another question: do you fully support multi-region buckets, or only single-region ones?

Regards,

Hi, HMAC keys are typically used to authenticate against a service. The HMAC key has two parts:

  1. Access ID
  2. Secret

Here is an example of what you have to put in the credentials section:

    credentials:
      access-key-id: 'GOOGTS7C7FUP3AIRVJTE2BCDKINBTES3HC2GY5CBFJDCQ2SYHV6A6XXVTJFSA'
      secret-access-key: 'bGoa+V7g/yqDXvKRqq+JTFn4uQZbPiQJo4pf9RzJ'

This is explained in more detail in the documentation linked above.
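In case it helps, an HMAC key for a service account can also be created from the CLI; a sketch (the service account email is a placeholder):

```shell
# Create an HMAC key tied to the service account;
# the command prints the access ID and the secret (the secret is shown only once)
gcloud storage hmac create pbm-backup@my-project.iam.gserviceaccount.com
```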

Hope that helps

Hi,

Yes, sorry about that, I saw it afterwards. Since I used Terraform to create the hmac_key, I'm not sure I can retrieve the secret; I'll check that!

And what about multiregion bucket ?

You should be able to retrieve the secret using terraform output -json.
Re: the multi-region bucket, try putting EU in your case. Let me know how that goes.
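If it helps, here is a minimal Terraform sketch (resource and output names are assumptions); note that terraform output -json prints sensitive outputs in plain text:

```hcl
resource "google_storage_hmac_key" "pbm" {
  service_account_email = "pbm-backup@my-project.iam.gserviceaccount.com"
}

output "pbm_access_id" {
  value = google_storage_hmac_key.pbm.access_id
}

output "pbm_secret" {
  value     = google_storage_hmac_key.pbm.secret
  sensitive = true # hidden by plain `terraform output`, visible with -json
}
```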


Hello,

It works :slight_smile: Setting EU as the region value also works for the multi-region bucket!
It would be great if the documentation stated that for GCS we need an HMAC key and not an IAM service account key.
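For anyone who hits the same issue, here is a sketch of the config that ended up working for us (bucket name and credentials are placeholders; the HMAC access ID and secret come from the service account's GCS interoperability keys, not the IAM private key):

```yaml
storage:
  type: s3
  s3:
    region: EU   # multi-region bucket; for a regional bucket use e.g. EUROPE-WEST3
    bucket: bucket_name
    endpointUrl: https://storage.googleapis.com
    credentials:
      access-key-id: "<HMAC access ID>"
      secret-access-key: "<HMAC secret>"
```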

Thank you

Glad to hear it works. We are open to contributions to improve our docs; feel free to submit a PR on GitHub.
