Is there any regression in 1.2.0 regarding serviceType being set to “LoadBalancer”? After setting it up, the operator keeps complaining about unmarshaling the load balancer source ranges parameter.
Hi @Daniel_Bichuetti ,
Could you share the error message and your cr.yaml, if possible?
Hi @Ege_Gunes ,
I’ve deployed using the Helm chart. I’ll paste the Pulumi IaC code that wraps the Helm release; if it isn’t clear, let me know and I can provide the equivalent helm command line. It’s pretty simple: the values parameter takes, as a dict, the values we are sending to Helm.
from pulumi import ResourceOptions
from pulumi_kubernetes.helm.v3 import Release, ReleaseArgs, RepositoryOptsArgs

# "config" and "pg_operator" are local project modules (k8s provider and operator stack).
postgres_cluster = Release(
    "postgres-cluster",
    ReleaseArgs(
        repository_opts=RepositoryOptsArgs(
            repo="https://percona.github.io/percona-helm-charts/"
        ),
        chart="pg-db",
        version="1.2.0",
        namespace=pg_operator.namespace_postgres_cluster.id,
        values={
            "pause": False,
            "standby": False,
            "keepData": True,
            "keepBackups": True,
            "defaultUser": "pguser",
            "defaultDatabase": "pgdb",
            "pgPrimary": {
                "volumeSpec": {
                    "storagetype": "dynamic",
                    "storageclass": "default",
                    "size": "10Gi",
                    "accessmode": "ReadWriteOnce",
                },
                "expose": {
                    "serviceType": "ClusterIP",
                    # "loadBalancerSourceRanges": [
                    #     "177.206.141.170/32",
                    #     "177.204.0.0/14",
                    # ],
                    # "annotations": {
                    #     "external-dns.alpha.kubernetes.io/hostname": "teste-pg.intelijus.ai"
                    # },
                },
            },
            "backup": {
                "volumeSpec": {
                    "storagetype": "dynamic",
                    "storageclass": "default",
                    "size": "10Gi",
                    "accessmode": "ReadWriteOnce",
                }
            },
            "pmm": {"enabled": True},
            "replicas": {
                "size": 0,
                "volumeSpec": {
                    "storagetype": "dynamic",
                    "storageclass": "default",
                    "size": "10Gi",
                    "accessmode": "ReadWriteOnce",
                },
            },
            "pgBouncer": {
                "expose": {
                    "serviceType": "LoadBalancer",
                    # "annotations": {
                    #     "external-dns.alpha.kubernetes.io/hostname": "teste-pgbouncer.intelijus.ai"
                    # },
                },
            },
        },
    ),
    opts=ResourceOptions(
        provider=config.k8s_provider,
        depends_on=[pg_operator.postgres_operator],
    ),
)
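If it helps, a roughly equivalent install with the Helm CLI would look like this (a sketch: the release name and namespace are placeholders, and only the service-type values are shown):
helm repo add percona https://percona.github.io/percona-helm-charts/
helm install postgres-cluster percona/pg-db --version 1.2.0 \
  --namespace <namespace> \
  --set pgPrimary.expose.serviceType=ClusterIP \
  --set pgBouncer.expose.serviceType=LoadBalancer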
Setting pgBouncer.expose.serviceType to LoadBalancer makes the operator unable to reconcile, saying it can’t unmarshal loadBalancerSourceRanges into []string. I even tried setting the value explicitly, but the operator is still unable to reconcile:
time="2022-07-14T15:16:00Z" level=error msg="reconcile perocnapgclusters: list perconapgclusters: json: cannot unmarshal object into Go struct field Expose.items.spec.pgBouncer.expose.loadBalancerSourceRanges of type []string" func="github.com/percona/percona-postgresql-operator/percona/controllers/pgc.(*Controller).reconcilePerconaPG()" file="/go/src/github.com/percona/percona-postgresql-operator/percona/controllers/pgc/pgc.go:300" version=1.2.0
It might be better to see the actual custom resource manifests rather than the Pulumi or Helm values. Could you provide the following outputs? I suspect something there is not what we expect:
kubectl get perconapgclusters postgres-cluster -o yaml
kubectl get pgclusters postgres-cluster -o yaml
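Rendering the chart locally might also help to see how the pgBouncer expose block ends up in the manifest; something along these lines (a sketch, assuming chart version 1.2.0 from the percona repo):
helm template postgres-cluster percona/pg-db --version 1.2.0 \
  --set pgBouncer.expose.serviceType=LoadBalancer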
kubectl get perconapgclusters postgres-cluster-b277 -o yaml
returns:
apiVersion: pg.percona.com/v1
kind: PerconaPGCluster
metadata:
  annotations:
    current-primary: postgres-cluster-b277
    meta.helm.sh/release-name: postgres-cluster-b277bee0
    meta.helm.sh/release-namespace: postgres-system-2d2ffb45
  creationTimestamp: "2022-07-15T09:56:31Z"
  generation: 1
  labels:
    app.kubernetes.io/instance: postgres-cluster-b277bee0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: pg-db
    app.kubernetes.io/version: 1.2.0
    crunchy-pgha-scope: postgres-cluster-b277
    deployment-name: postgres-cluster-b277
    helm.sh/chart: pg-db-1.2.0
    name: postgres-cluster-b277
    pg-cluster: postgres-cluster-b277
    pgo-version: 1.2.0
    pgouser: admin
  name: postgres-cluster-b277
  namespace: postgres-system-2d2ffb45
  resourceVersion: "190773"
  uid: b0821cf4-e428-42e8-b149-60d16b200125
spec:
  backup:
    backrestRepoImage: percona/percona-postgresql-operator:1.2.0-ppg14-pgbackrest-repo
    image: percona/percona-postgresql-operator:1.2.0-ppg14-pgbackrest
    resources:
      requests:
        memory: 48Mi
    storageTypes:
    - local
    volumeSpec:
      accessmode: ReadWriteOnce
      size: 10Gi
      storageclass: default
      storagetype: dynamic
  database: pgdb
  disableAutofail: false
  keepBackups: true
  keepData: true
  pause: false
  pgBadger:
    enabled: false
    image: percona/percona-postgresql-operator:1.2.0-ppg14-pgbadger
    port: 10000
  pgBouncer:
    expose:
      loadBalancerSourceRanges:
        annotations:
          pg-cluster-annot: postgres-cluster-b277
        labels:
          pg-cluster-label: postgres-cluster-b277
      serviceType: LoadBalancer
    image: percona/percona-postgresql-operator:1.2.0-ppg14-pgbouncer
    resources:
      limits:
        cpu: 2
        memory: 512Mi
      requests:
        cpu: 1
        memory: 128Mi
    size: 3
  pgPrimary:
    expose:
      serviceType: ClusterIP
    image: percona/percona-postgresql-operator:1.2.0-ppg14-postgres-ha
    resources:
      requests:
        memory: 128Mi
    tolerations: []
    volumeSpec:
      accessmode: ReadWriteOnce
      size: 10Gi
      storageclass: default
      storagetype: dynamic
  pgReplicas:
    hotStandby:
      enableSyncStandby: false
      expose:
        serviceType: ClusterIP
      resources:
        limits:
          cpu: 1
          memory: 128Mi
        requests:
          cpu: 1
          memory: 128Mi
      size: 0
      volumeSpec:
        accessmode: ReadWriteOnce
        size: 10Gi
        storageclass: default
        storagetype: dynamic
  pmm:
    enabled: true
    image: percona/pmm-client:2.26.1
    pmmSecret: postgres-cluster-b277-pmm-secret
    resources:
      requests:
        cpu: 500m
        memory: 200M
    serverHost: monitoring-service
    serverUser: admin
  port: "5432"
  secretsName: null
  standby: false
  tlsOnly: false
  upgradeOptions:
    apply: 14-recommended
    schedule: 0 4 * * *
    versionServiceEndpoint: https://check.percona.com
  user: pguser
  userLabels:
    pgo-version: 1.2.0
On the other hand,
kubectl get pgclusters postgres-cluster-b277 -o yaml
says no resource was found.
In the yaml I see the following:
pgBouncer:
  expose:
    loadBalancerSourceRanges:
      annotations:
        pg-cluster-annot: postgres-cluster-b277
      labels:
        pg-cluster-label: postgres-cluster-b277
The problem is there for sure, but I don’t know what is responsible for it: Pulumi or the Helm chart.
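For comparison, the operator expects loadBalancerSourceRanges to be a plain list of CIDR strings, with annotations and labels as siblings under expose, so a correctly rendered block should look roughly like this (a sketch reusing the ranges commented out in the Pulumi values above):
pgBouncer:
  expose:
    serviceType: LoadBalancer
    loadBalancerSourceRanges:
    - 177.206.141.170/32
    - 177.204.0.0/14
    annotations:
      pg-cluster-annot: postgres-cluster-b277
    labels:
      pg-cluster-label: postgres-cluster-b277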
Sorry for the delay. I tested using Helm directly from the official repo and the same error occurs, so it’s not a Pulumi issue. It’s in the Helm chart.
Hi @Daniel_Bichuetti , thanks for the update. We will recheck our Helm chart.