XtraDB PITR deployment requests & limits

Hello,

When Point-in-Time Recovery (PITR) is enabled, the Operator creates a new Kubernetes Deployment. The Pod then copies binlogs from the server to the storage defined in cr.yaml.

Is it possible to set requests & limits for the PITR deployment? I have set ResourceQuotas on namespaces, and the PITR Pod won’t start because:
status:
  conditions:
  - lastTransitionTime: "2022-01-07T10:54:47Z"
    lastUpdateTime: "2022-01-07T10:54:47Z"
    message: Deployment does not have minimum availability.
    reason: MinimumReplicasUnavailable
    status: "False"
    type: Available
  - lastTransitionTime: "2022-01-07T10:54:47Z"
    lastUpdateTime: "2022-01-07T10:54:47Z"
    message: 'pods "enhancement-pitr-94bb45469-wv2dp" is forbidden: failed quota:
      quota: must specify limits.cpu,limits.memory,requests.cpu,requests.memory'
    reason: FailedCreate
    status: "True"
    type: ReplicaFailure

I think there are usually two possibilities:

  1. change the deployment and add requests & limits
  2. create Limit Ranges (see Limit Ranges | Kubernetes)
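
For the second option, a sketch of what such a LimitRange could look like (the object name and the values are illustrative; the namespace is taken from the quota error above). It injects default requests and limits into containers that do not declare any, so the ResourceQuota admission check would no longer reject the PITR Pod:

```yaml
# Hypothetical LimitRange for the namespace where the PITR Pod runs.
# Containers that omit resources receive these defaults at admission time.
apiVersion: v1
kind: LimitRange
metadata:
  name: default-container-limits   # illustrative name
  namespace: dbaas-enhancement
spec:
  limits:
  - type: Container
    defaultRequest:                # applied when requests are missing
      cpu: 100m
      memory: 128Mi
    default:                       # applied when limits are missing
      cpu: 500m
      memory: 512Mi
```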

I tried the first option, but it seems the Operator is overwriting my configuration.

So how can I set requests & limits for the PITR deployment?

Thanks.


Hi @katajistok

Of course you can set resources for the PITR deployment:

     pitr:
       enabled: true
       storageName: minio-binlogs
       timeBetweenUploads: 55
       resources:
         requests:
           memory: 0.1G
           cpu: 100m
         limits:
           memory: 1G
           cpu: 800m

I will add this example to deploy/cr.yaml.


Thank you. I tried to make the change in cr.yaml, but the PITR pod didn’t show up. So I ran “kubectl delete -f cr.yaml” and then “kubectl apply -f cr.yaml”, and then I encountered the next situation.

Cluster is not starting:
NAME          ENDPOINT                                STATUS         PXC   PROXYSQL   HAPROXY   AGE
enhancement   enhancement-haproxy.dbaas-enhancement   initializing                              22m

When I look at the Operator Pod logs, I can see something related to SSL/TLS.

{"level":"error","ts":1641560426.7100654,"logger":"controller-runtime.manager.controller.perconaxtradbcluster-controller","msg":"Reconciler error","name":"enhancement","namespace":"dbaas-enhancement","error":"failed to reconcile SSL.Please create your TLS secret and manually or setup cert-manager correctly: create ssl internally: create TLS secret: Secret \"\" is invalid: metadata.name: Required value: name or generateName is required","errorVerbose":"create ssl internally: create TLS secret: Secret \"\" is invalid: metadata.name: Required value: name or generateName is required\nfailed to reconcile SSL.Please create your TLS secret and manually or setup cert-manager correctly\ngithub.com/percona/percona-xtradb-cluster-operator/pkg/controller/pxc.(*ReconcilePerconaXtraDBCluster).deploy\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/pkg/controller/pxc/controller.go:578\ngithub.com/percona/percona-xtradb-cluster-operator/pkg/controller/pxc.(*ReconcilePerconaXtraDBCluster).Reconcile\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/pkg/controller/pxc/controller.go:309\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:263\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:235\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1.1\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:198\nk8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext.func1\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:185\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:185\nk8s.io/apimachinery/pkg/util/wait.UntilWithContext\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:99\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1581","stacktrace":"sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:235\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1.1\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:198\nk8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext.func1\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:185\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:185\nk8s.io/apimachinery/pkg/util/wait.UntilWithContext\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:99"}
{"level":"info","ts":1641560428.4964073,"caller":"pxc/backup.go:87","msg":"Creating or updating backup job","name":"a79cb-weekly-backup","schedule":"0 21 * * WED"}
{"level":"info","ts":1641560428.496675,"caller":"pxc/version.go:65","msg":"add new job","schedule":"0 6 * * *"}
{"level":"info","ts":1641560428.496824,"caller":"pxc/version.go:107","msg":"add new job","name":"ensure-version/dbaas-enhancement/enhancement","schedule":"0 6 * * *"}


Could you please send me your CR?


Hello,
cr.yaml is over here: cr - Pastebin.com
-Kimmo


Hi @katajistok,
Please check your my-cluster-ssl secret (if it exists). As you can see from the log,
create TLS secret: Secret "" is invalid: metadata.name: Required value: name or generateName is required\nfailed to reconcile SSL
something is wrong with this secret.


The secret is there and it has the certificates in it. I have encountered this error message before as well, but back then I just deleted all the resources along with the namespace and deployed again.

[production@dbaasjump002 enhancement]$ kubectl get secrets
NAME                                          TYPE                                  DATA   AGE
default-token-ktp2n                           kubernetes.io/service-account-token   3      24d
enhancement-ca-cert                           kubernetes.io/tls                     3      20d
internal-enhancement                          Opaque                                8      20d
my-cluster-name-backup-s3                     Opaque                                2      20d
my-cluster-secrets                            Opaque                                8      20d
my-cluster-ssl                                kubernetes.io/tls                     3      20d
my-cluster-ssl-internal                       kubernetes.io/tls                     3      20d
percona-xtradb-cluster-operator-token-wl78r   kubernetes.io/service-account-token   3      20d

[production@dbaasjump002 enhancement]$ kubectl get secrets my-cluster-ssl -o yaml
apiVersion: v1
data:
  ca.crt: xyz
  tls.crt: xyz
  tls.key: xyz
kind: Secret
metadata:
  annotations:
    cert-manager.io/alt-names: enhancement-pxc,*.enhancement-pxc,*.enhancement-proxysql
    cert-manager.io/certificate-name: enhancement-ssl
    cert-manager.io/common-name: enhancement-proxysql
    cert-manager.io/ip-sans: ""
    cert-manager.io/issuer-group: ""
    cert-manager.io/issuer-kind: Issuer
    cert-manager.io/issuer-name: enhancement-pxc-issuer
    cert-manager.io/uri-sans: ""
  creationTimestamp: "2021-12-20T16:45:49Z"
  name: my-cluster-ssl
  namespace: dbaas-enhancement
  resourceVersion: "295957531"
  uid: 086bfb51-660f-492d-ac86-1f217260e11e
type: kubernetes.io/tls


@katajistok ,

The root of the issue is that the ‘delete-pxc-pods-in-order’ finaliser is commented out in your CR. That is very dangerous, because when you delete your CR, all pods are deleted simultaneously, which causes a problem for the PXC cluster. When you then start the cluster, the PXC pods do not know which pod has the latest data, so the automatic recovery procedure is triggered. It can take some time, and that is why your cluster appears not ready for a period of time. To check what is going on with your cluster, look at the logs of all PXC pods, e.g.
kubectl logs cluster1-pxc-0 -c pxc (get log from pxc container)
kubectl logs cluster1-pxc-0 -c logs (get log from logcollector container)

P.S. The error message about TLS certificates was also present in my tests, but it didn’t cause any issues, and it has been fixed in the main branch: I could not see it when I tested the operator from there.


Ok. I’ll try to use images from the main branch. When will version 1.11 be published?


Please do not use the main branch (you can use it only for tests, not for production). Everything is OK with 1.10.0; you just need to uncomment the ‘delete-pxc-pods-in-order’ finaliser to avoid crashing the database when you delete the CR.
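
For reference, a sketch of what the top of deploy/cr.yaml looks like with the finaliser uncommented (the apiVersion and cluster name below are illustrative and may differ in your file):

```yaml
# Top of deploy/cr.yaml with the finaliser active. With it in place,
# deleting the CR stops the PXC pods one by one instead of all at once,
# so the cluster shuts down cleanly and no crash recovery is needed.
apiVersion: pxc.percona.com/v1-10-0   # may differ per operator version
kind: PerconaXtraDBCluster
metadata:
  name: cluster1                      # your cluster name
  finalizers:
    - delete-pxc-pods-in-order
```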

P.S. If you want to test the main branch, the CR is at percona-xtradb-cluster-operator/cr.yaml at main · percona/percona-xtradb-cluster-operator · GitHub. Version 1.11.0 will be available in approximately two months.
