Disaster recovery for MySQL database

Hello Percona Support,
I’m running the Percona Operator with a Percona XtraDB Cluster (image: percona/percona-xtradb-cluster:8.0.31-23.2) on Kubernetes. I’ve configured:

storages:
  ostore:
    type: s3
    verifyTLS: false
    s3:
      bucket: devops/dev/mysql/fullbackup
      endpointUrl: https://example.r2.cloudflarestorage.com
      credentialsSecret: s3-backup-secret
  minio:
    type: s3
    verifyTLS: false
    s3:
      bucket: mysql-backup-pitr
      endpointUrl: https://minio.example.ostore.com
      credentialsSecret: s3-backup-pitr-secret

Daily full backups to Cloudflare R2 and continuous PITR logs to MinIO are running without issue. I can successfully restore a full backup (for example, the snapshot taken at 00:01 AM), but I’m unclear on how to apply the PITR logs afterward to bring the cluster back to a specific point in time—say, 11:30 AM today—without losing any data. Could you outline the steps or point me to the documentation for performing a point-in-time recovery in this setup?
And my restoration file for fullbackup is:

apiVersion: pxc.percona.com/v1
kind: PerconaXtraDBClusterRestore
metadata:
  name: restore123
spec:
  pxcCluster: pxccluster
  backupSource:
    destination: s3://mysql-backup-prod/pxccluster-2025-02-22-08:41:15-full/
    s3:
      bucket: mysql-backup-prod
      credentialsSecret: s3-backup-secret
      region: us-east-1
      endpointUrl: https://example.r2.cloudflarestorage.com

You can follow the steps mentioned in the documentation below. Mainly, the pitr section of restore.yaml needs to be properly configured:

  • Put additional restoration parameters into the pitr section:
    • The type key can be set to one of the following options:
      • date - roll back to a specific date,
      • transaction - roll back to a specific transaction (available since Operator 1.8.0),
      • latest - recover to the latest possible transaction,
      • skip - skip a specific transaction (available since Operator 1.7.0).
    • The date key is used with the type=date option and contains a value in datetime format.
    • The gtid key (available since Operator 1.8.0) is used with the type=transaction option and contains the exact GTID of the transaction which follows the last transaction included in the recovery.
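For example, to recover to 11:30 AM as you described, a restore manifest with type=date could look like the sketch below. The cluster name, buckets, secrets, and storage name are taken from your snippets above; the backup folder and the timestamp are illustrative placeholders, so substitute your actual values:

```yaml
apiVersion: pxc.percona.com/v1
kind: PerconaXtraDBClusterRestore
metadata:
  name: restore-pitr-example
spec:
  pxcCluster: pxccluster
  backupSource:
    # Full backup taken before the target point in time (folder name is illustrative)
    destination: s3://devops/dev/mysql/fullbackup/pxccluster-2025-02-22-08:41:15-full/
    s3:
      bucket: devops/dev/mysql/fullbackup
      credentialsSecret: s3-backup-secret
      endpointUrl: https://example.r2.cloudflarestorage.com
  pitr:
    type: date
    # Target point in time, "YYYY-MM-DD HH:MM:SS" (illustrative timestamp)
    date: "2025-02-22 11:30:00"
    backupSource:
      # The PITR storage defined in the cluster CR (your MinIO storage)
      storageName: "minio"
```

The operator restores the full backup first and then replays the binlogs from the PITR storage up to the given date.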

Hi Abhinav, my full backup and PITR are working fine, but while restoring I am getting an error in the restoration job.

Can't create/write to file './performance_schema/objects_summary__107.sdi' (OS errno 24 - Too many open files)
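For reference, OS errno 24 means the process exhausted its limit on open file descriptors. The limits in effect can be inspected with ulimit; the kubectl line in the comment is a sketch, so substitute the actual restore pod name:

```shell
# Soft limit: the value that errno 24 ("Too many open files") trips on.
ulimit -Sn
# Hard limit: the ceiling the soft limit could be raised to.
ulimit -Hn
# The same check inside the restore container would be (pod name is a placeholder):
#   kubectl exec <restore-pod-name> -- sh -c 'ulimit -Sn; ulimit -Hn'
```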

And my Yaml file which I applied is this:

apiVersion: pxc.percona.com/v1
kind: PerconaXtraDBClusterRestore
metadata:
  name: restore1356
spec:
  pxcCluster: pxccluster
  backupSource:
    destination: s3://devops/dev/mysql/fullbackup/pxccluster-2025-06-26-02:00:30-full/
    s3:
      bucket: devops/dev/mysql/fullbackup
      credentialsSecret: s3-backup-secret
      region: us-east-1
      endpointUrl: https://example.r2.cloudflarestorage.com
  pitr:
    type: latest
    # date: "2025-06-11 16:31:28"
    # gtid: "binlog_1749638158_d323799c3751af5b9d313553baf60c9b-gtid-set"
    backupSource:
      storageName: "minio"
      s3:
        bucket: mysql-backup-dev-pitr
        credentialsSecret: s3-backup-pitr-secret
        endpointUrl: https://ostore.example.tech
I want to know what this issue is, and whether I need to make any more changes in the YAML so that this error can be remediated.