Kubernetes ingress regression in PMM helm chart 1.2.0

PMM helm chart 1.2.0 introduces a breaking change in the ingress configuration. With the default value of nginxInc (false), it assumes that a “community managed ingress” is used, which requires two separate ingress configurations for the pmm and pmm-grpc services. This implementation doesn’t work with e.g. the Traefik ingress controller: the pmm paths work, but pmm-grpc returns 500 Internal Server Error.

Reverting to helm chart 1.0.1 solves the problem (a single ingress is used). I assume that setting nginxInc to true might have worked as well, but I haven’t tested it, because that would assume we are using the NGINX Inc. ingress controller, which we don’t.
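For anyone who wants to try the nginxInc route anyway, the values change would presumably look like this. This is only a sketch, and untested; I’m assuming nginxInc sits under the chart’s ingress section, next to the community block shown later in this thread:

ingress:
  enabled: true
  # Presumably selects the NGINX Inc. ingress templates instead of the
  # two community-managed ingresses. Untested here, and only applicable
  # if you actually run the NGINX Inc. controller.
  nginxInc: true

Pinning the chart back (helm upgrade with --version 1.0.1) is what actually worked for us, since that version renders the single ingress.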


Hi @Artem_Baguinski , thank you for posting to the Percona forums!

While I don’t have an answer for you, I have escalated your request in our Slack channel and you should get a reply here shortly 👍

@Artem_Baguinski, thanks for letting us know. I had my concerns about that during my review.

How does it affect you? It looks like just an additional endpoint that doesn’t work.

Hi @Denys_Kondratenko

Several endpoints don’t work, among them the one that ends with /agent.. As long as that is the case, pmm-agent fails to register the services. The logs at that moment look like:

Mar 06 11:16:58 db3 pmm-agent[73062]: DEBU[2023-03-06T11:16:58.394+01:00] Exiting receiver goroutine.                   component=channel
Mar 06 11:16:58 db3 pmm-agent[73062]: ERRO[2023-03-06T11:16:58.394+01:00] Failed to establish two-way communication channel: unexpected HTTP status code received from server: 500 (Internal Server Error); transport: received unexpected content-type "text/plain; charset=utf-8".  component=client
Mar 06 11:16:58 db3 pmm-agent[73062]: DEBU[2023-03-06T11:16:58.394+01:00] Connection closed.                            component=client
Mar 06 11:17:16 db3 pmm-agent[73062]: INFO[2023-03-06T11:17:16.246+01:00] Connecting to https://api_key:***@pmm.performation.cloud:443/ ...  component=client
Mar 06 11:17:16 db3 pmm-agent[73062]: INFO[2023-03-06T11:17:16.260+01:00] Connected to pmm.performation.cloud:443.      component=client
Mar 06 11:17:16 db3 pmm-agent[73062]: INFO[2023-03-06T11:17:16.260+01:00] Establishing two-way communication channel ...  component=client
Mar 06 11:17:16 db3 pmm-agent[73062]: DEBU[2023-03-06T11:17:16.260+01:00] Sending message (4 bytes): id:1  ping:{}.     component=channel
Mar 06 11:17:16 db3 pmm-agent[73062]: DEBU[2023-03-06T11:17:16.265+01:00] Closing with error: rpc error: code = Unknown desc = unexpected HTTP status code received from server: 500 (Internal Server Error); transport: received unexpected content-type "text/plain; charset=utf-8"
Mar 06 11:17:16 db3 pmm-agent[73062]: failed to receive message
Mar 06 11:17:16 db3 pmm-agent[73062]: github.com/percona/pmm/agent/client/channel.(*Channel).runReceiver
Mar 06 11:17:16 db3 pmm-agent[73062]: /tmp/go/src/github.com/percona/pmm/agent/client/channel/channel.go:220
Mar 06 11:17:16 db3 pmm-agent[73062]: runtime.goexit

https://jira.percona.com/browse/PMM-11872

Working on a fix.


A workaround I provided in PMM Server deployed on k8s - failed to establish two-way communication channel - #5 by jeff.plewes:

Add this to your helm values for ingress:

ingress:
  community:
    annotations:
      nginx.ingress.kubernetes.io/use-regex: "true"

The nginx.ingress.kubernetes.io/use-regex: "true" annotation was needed to make the ingress controller use regex location blocks for /agent.*, /inventory.*, etc. Without this config, calls to these URLs would not match the location blocks in the ingress controller and would instead hit the default location / and its non-gRPC upstream.
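To illustrate, here is a sketch of what the rendered community Ingress looks like with the annotation in place. This is not the chart’s actual template; the name, host, and backend service are hypothetical:

# Hypothetical excerpt of the rendered community Ingress. The use-regex
# annotation tells ingress-nginx to treat these paths as regular
# expressions instead of literal strings.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: pmm-grpc                 # hypothetical name
  annotations:
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  ingressClassName: nginx        # assumes the community ingress-nginx controller
  rules:
    - host: pmm.example.com      # hypothetical host
      http:
        paths:
          # gRPC endpoints pmm-agent calls, e.g. /agent.Agent/Connect
          - path: /agent.*
            pathType: ImplementationSpecific
            backend:
              service:
                name: monitoring-service   # hypothetical backend service
                port:
                  number: 443

Without the annotation, ingress-nginx treats /agent.* as a literal string, so a request to e.g. /agent.Agent/Connect matches nothing but the catch-all / path and lands on the non-gRPC upstream, which is consistent with the 500s in the logs above.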


Hi Jeff

Thanks for the suggestion. However, it wouldn’t work for us, as we don’t use an nginx-based ingress controller.

Cheers,
Artem.