PMM integration issue with k8s mongo operator

Hi Team,

I am trying to enable PMM monitoring for a MongoDB deployment created with the Kubernetes operator.

The PMM server is running on an EC2 instance at https://3.142../

cr.yaml:

apiVersion: psmdb.percona.com/v1-9-0
kind: PerconaServerMongoDB
metadata:
  name: cdb
#  finalizers:
#    - delete-psmdb-pvc
spec:
#  platform: openshift
#  clusterServiceDNSSuffix: svc.cluster.local
#  pause: true
  crVersion: 1.9.0
  image: percona/percona-server-mongodb:4.4.6-8
  imagePullPolicy: Always
#  imagePullSecrets:
#    - name: private-registry-credentials
#  runUid: 1001
  allowUnsafeConfigurations: false
  updateStrategy: SmartUpdate
  upgradeOptions:
    versionServiceEndpoint: https://check.percona.com
    apply: 4.4-recommended
    schedule: "0 2 * * *"
    setFCV: false
  secrets:
    users: cdb-secrets
  pmm:
    enabled: true
    image: percona/pmm-client:2.18.0
    serverHost: 3.142.***.***
#    mongodParams: --environment=ENVIRONMENT
#    mongosParams: --environment=ENVIRONMENT

The pmm-client agent is failing, and the replica set is not getting created.

secrets.yml:

apiVersion: v1
kind: Secret
metadata:
  name: cdb-secrets
type: Opaque
stringData:
  MONGODB_BACKUP_USER: backup
  PMM_SERVER_USER: admin
  PMM_SERVER_PASSWORD: K8spoc13
  MONGODB_BACKUP_PASSWORD: backup123456
  MONGODB_CLUSTER_ADMIN_USER: clusterAdmin
  MONGODB_CLUSTER_ADMIN_PASSWORD: clusterAdmin123456
  MONGODB_CLUSTER_MONITOR_USER: clusterMonitor
  MONGODB_CLUSTER_MONITOR_PASSWORD: clusterMonitor123456
  MONGODB_USER_ADMIN_USER: userAdmin
  MONGODB_USER_ADMIN_PASSWORD: userAdmin123456

Secrets output:

# kubectl get secrets
NAME                                          TYPE                                  DATA   AGE
cdb-mongodb-encryption-key                    Opaque                                1      42s
cdb-mongodb-keyfile                           Opaque                                1      42s
cdb-secrets                                   Opaque                                8      47s
cdb-ssl                                       kubernetes.io/tls                     3      43s
cdb-ssl-internal                              kubernetes.io/tls                     3      42s
default-token-zdkfg                           kubernetes.io/service-account-token   3      49m
internal-cdb-users                            Opaque                                8      47s
percona-server-mongodb-operator-token-swpqw   kubernetes.io/service-account-token   3      52s

After applying cr.yaml, I don't see the PMM user/password in the secrets.

[root@ip-20-5-6-44 deploy]# kubectl get secrets cdb-secrets -o yaml
apiVersion: v1
data:
  MONGODB_BACKUP_PASSWORD: aFdQcHYxWE93UnU0S0h6RE1BUQ==
  MONGODB_BACKUP_USER: YmFja3Vw
  MONGODB_CLUSTER_ADMIN_PASSWORD: VG9qNnZYa1lxTlFtQ2d4cQ==
  MONGODB_CLUSTER_ADMIN_USER: Y2x1c3RlckFkbWlu
  MONGODB_CLUSTER_MONITOR_PASSWORD: N1VwQU1nOEpzTlQ4ckxQelNF
  MONGODB_CLUSTER_MONITOR_USER: Y2x1c3Rlck1vbml0b3I=
  MONGODB_USER_ADMIN_PASSWORD: c05Sb0xiWVJMWWVWdm9GM0c=
  MONGODB_USER_ADMIN_USER: dXNlckFkbWlu

If I first apply secrets.yml and then cr.yaml, the PMM user/password does get created in the secrets, but pmm-client still fails with a connectivity issue to the PMM serverHost.
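
For reference, a quick way to verify that the PMM credentials actually landed in the Secret (a minimal sketch; the key names match the secrets.yml above):

kubectl get secret cdb-secrets -o jsonpath='{.data.PMM_SERVER_USER}' | base64 -d
kubectl get secret cdb-secrets -o jsonpath='{.data.PMM_SERVER_PASSWORD}' | base64 -d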

Error:

INFO[2021-09-03T13:05:26.964+00:00] 2021-09-03T13:05:26.964Z	error	VictoriaMetrics/lib/promscrape/scrapework.go:231	error when scraping "http://127.0.0.1:30101/metrics" from job "mongodb_exporter_agent_id_a214448a-7d13-47ea-84d8-208c2a1900f1_hr-5s" with labels {agent_id="/agent_id/a214448a-7d13-47ea-84d8-208c2a1900f1",agent_type="mongodb_exporter",cluster="cdb",instance="/agent_id/a214448a-7d13-47ea-84d8-208c2a1900f1",job="mongodb_exporter_agent_id_a214448a-7d13-47ea-84d8-208c2a1900f1_hr-5s",node_id="/node_id/1c9d4954-ee12-4053-96df-7d6126da0ab7",node_name="mg-cdb-rs0-0",node_type="container",service_id="/service_id/3d1ce676-c591-432d-8e05-2b11b6ed054c",service_name="mg-cdb-rs0-0",service_type="mongodb"}: error when scraping "http://127.0.0.1:30101/metrics": dial tcp4 127.0.0.1:30101: connect: connection refused; try -enableTCP6 command-line flag if you scrape ipv6 addresses  agentID=/agent_id/ddfb03f6-7494-4897-9d9a-7597eafb11fb component=agent-process type=vm_agent

It is not connecting to the EC2 instance IP.

Can you please let me know how we can enable PMM server/client connectivity?
What should the value of serverHost be? Can we give the public IP of the EC2 instance where the PMM server is running?

Note: the PMM server was created through an AWS Marketplace subscription on an EC2 instance.
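
For reference, reachability of the PMM server from inside the cluster can be checked with something like this (a sketch; the masked address stands in for the real public IP, and curlimages/curl is just a convenient throwaway image):

kubectl run pmm-conn-test --rm -it --restart=Never --image=curlimages/curl -- \
  curl -sk https://3.142.***.***/ -o /dev/null -w '%{http_code}\n'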

~ Adithya

Hello @Adithya.

The Operator generates the system users Secret automatically with random passwords when you apply cr.yaml, unless the Secret already exists.

So in your case, please first create the secrets (kubectl apply -f secrets.yaml) and then apply cr.yaml.
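
A minimal sketch of the sequence, with a watch at the end to confirm the pods come up:

kubectl apply -f secrets.yaml   # create the users Secret first, with your PMM credentials
kubectl apply -f cr.yaml        # the operator then reuses that Secret instead of generating one
kubectl get pods -w             # watch the replica set and pmm-client containers start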

As for the error: do you see it constantly? There might be a race condition where PMM tries to scrape the data before there is any, or before the exporter is up.


Thanks @Sergey_Pronin, I appreciate your quick help.

Sure, I will follow that. The PMM issue is fixed; it works fine now.

One general query: I am trying to enable external access using a LoadBalancer (expose.enabled) in the replica set configuration:

    expose:
      enabled: true
      exposeType: LoadBalancer
      serviceAnnotations:
        service.beta.kubernetes.io/aws-load-balancer-name: "cdbdns"
        service.beta.kubernetes.io/aws-load-balancer-type: nlb-ip

But the annotations are not working as expected.

A load balancer with a random name is getting created in the AWS console, and it comes up as a Classic Load Balancer rather than an NLB. Can anything be done to fix this issue?
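
A quick way to confirm whether the annotations reached the generated Service (a sketch; the Service name follows the <cluster>-<replset>-<ordinal> pattern, so cdb-rs0-0 is assumed here):

kubectl get svc cdb-rs0-0 -o jsonpath='{.metadata.annotations}'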


@Adithya we had a bug: [K8SPSMDB-470] ServiceAnnotation and LoadBalancerSourceRanges fields don't propagate to k8s service - Percona JIRA.

The fix is already merged into the main branch. The 1.10.0 release will be out closer to the end of September.
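
As a hedged stopgap until then, the annotation can be set on the generated Service by hand, keeping in mind the operator may revert it on the next reconcile (the Service name cdb-rs0-0 is assumed, as above):

kubectl annotate svc cdb-rs0-0 \
  service.beta.kubernetes.io/aws-load-balancer-type=nlb-ip --overwrite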


Thanks for the update, @Sergey_Pronin.
