Percona Everest 1.3.0 can't see any namespaces or create them from WEB UI

Hello folks.

I am new to Everest. I installed it following the docs, but I've hit an issue: in the web UI I can't do anything — no buttons, no namespaces, no ability to change settings. It is as if I were logged in as a read-only user instead of admin.

My actions:

  1. Install the Everest CLI on macOS, following the "Install Percona Everest CLI" docs page.
  2. Install Everest by the command:

everestctl install --namespaces everest-qa --operator.mongodb=true --operator.postgresql=false --operator.xtradb-cluster=true --skip-wizard

  3. Execute port forwarding (just to keep it quick):

kubectl port-forward svc/everest 8080:8080 -n everest-system

  4. Change the admin password (the result was the same without changing it):

everestctl accounts set-password --username admin

  5. Open the web UI at http://127.0.0.1:8080/databases
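An optional sanity check before opening the browser (assumes curl is available and the port-forward command above is still running) is to confirm that something answers on the forwarded port:

```shell
# Print the HTTP status line from the forwarded Everest service.
# If the port-forward is not running, this just prints nothing.
curl -sI --max-time 2 http://127.0.0.1:8080/ | head -n 1 || true
```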


Could you please help me understand what I am missing and how to fix it?

Quick debug:

  1. everestctl install output:
➜  ~ everestctl install --namespaces everest-qa --operator.mongodb=true --operator.postgresql=false --operator.xtradb-cluster=true --skip-wizard
ℹ️ Installing Everest version 1.3.0

✓ Install Operator Lifecycle Manager
✓ Install Percona OLM Catalog
✓ Create namespace 'everest-monitoring'
✓ Install VictoriaMetrics Operator
✓ Provision monitoring stack
✓ Create namespace 'everest-qa'
✓ Install operators [pxc, psmdb] in namespace 'everest-qa'
✓ Configure RBAC in namespace 'everest-qa'
✓ Install Everest Operator
✓ Install Everest API server

🚀 Everest has been successfully installed!

To view the password for the 'admin' user, run the following command:

everestctl accounts initial-admin-password


IMPORTANT: This password is NOT stored in a hashed format. To secure it, update the password using the following command:

everestctl accounts set-password --username admin
  2. Check CRDs, pods, and services in everest-system and everest-qa:
➜  ~ kubectl get all -n everest-system
NAME                                                      READY   STATUS    RESTARTS   AGE
pod/everest-operator-controller-manager-f5f94dfc7-fb6b9   2/2     Running   0          20m
pod/percona-everest-5bd9bdb95-b9ftd                       1/1     Running   0          19m

NAME                                                          TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
service/everest                                               ClusterIP   10.100.14.36    <none>        8080/TCP   78m
service/everest-operator-controller-manager-metrics-service   ClusterIP   10.100.225.28   <none>        8443/TCP   78m

NAME                                                  READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/everest-operator-controller-manager   1/1     1            1           78m
deployment.apps/percona-everest                       1/1     1            1           78m

NAME                                                             DESIRED   CURRENT   READY   AGE
replicaset.apps/everest-operator-controller-manager-7cdc8c989c   1         1         1       78m
replicaset.apps/percona-everest-5bd9bdb95                        1         1         1       78m
replicaset.apps/percona-everest-6f47b48486                       0         0         0       78m



➜  ~ kubectl get all -n everest-qa
NAME                                                   READY   STATUS    RESTARTS   AGE
pod/percona-server-mongodb-operator-5c569776f7-xssgm   1/1     Running   0          3h2m
pod/percona-xtradb-cluster-operator-64dbd66989-ddlg7   1/1     Running   0          3h3m

NAME                                      TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
service/percona-xtradb-cluster-operator   ClusterIP   10.100.252.196   <none>        443/TCP   3h2m

NAME                                              READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/percona-server-mongodb-operator   1/1     1            1           3h2m
deployment.apps/percona-xtradb-cluster-operator   1/1     1            1           3h3m

NAME                                                         DESIRED   CURRENT   READY   AGE
replicaset.apps/percona-server-mongodb-operator-5c569776f7   1         1         1       3h2m
replicaset.apps/percona-xtradb-cluster-operator-64dbd66989   1         1         1       3h3m


➜  ~ kubectl get crds | grep percona
backupstorages.everest.percona.com                               2024-11-18T16:06:52Z
databaseclusterbackups.everest.percona.com                       2024-11-18T16:06:52Z
databaseclusterrestores.everest.percona.com                      2024-11-18T16:06:52Z
databaseclusters.everest.percona.com                             2024-11-18T16:06:53Z
databaseengines.everest.percona.com                              2024-11-18T16:06:52Z
monitoringconfigs.everest.percona.com                            2024-11-18T16:06:52Z
perconaservermongodbbackups.psmdb.percona.com                    2024-11-18T16:06:29Z
perconaservermongodbrestores.psmdb.percona.com                   2024-11-18T16:06:29Z
perconaservermongodbs.psmdb.percona.com                          2024-11-18T16:06:29Z
perconaxtradbclusterbackups.pxc.percona.com                      2024-11-18T16:05:06Z
perconaxtradbclusterrestores.pxc.percona.com                     2024-11-18T16:05:06Z
perconaxtradbclusters.pxc.percona.com                            2024-11-18T16:05:06Z
  3. Checked the Kubernetes logs of percona-everest and everest-operator-controller-manager — no errors or warnings.

I’ve encountered this too. Did you have the everest-qa namespace created before installing?

Try re-installing to a new namespace so that the install script creates it.
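One way to tell the two cases apart (the label key below is an assumption about how Everest 1.3.x marks its namespaces — verify against your version): namespaces provisioned by everestctl carry a management label that the UI uses to discover them, while a pre-existing namespace does not.

```shell
# List namespaces carrying Everest's management label. The label key
# app.kubernetes.io/managed-by=everest is an assumption; check what
# your install actually sets with: kubectl get ns --show-labels
kubectl get namespaces -l app.kubernetes.io/managed-by=everest
```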


I can’t describe the exact scenario right now, but the developers are definitely aware of the problem and will fix it.

Thank you @daniil.bazhenov, you were right about it. I just used another namespace that Everest created by itself, and everything looks good now.

It may be a good idea to mention this limitation in the documentation or the CLI help.

Another question concerns dropping the old namespace. I have removed all the resources (operators) from it, but the namespace still can’t be deleted. I suspect the CRDs described in the docs, but I can’t remove those while I have active deployments in another namespace, right?

Calling @Diogo_Recharte for help.

Hi @Stateros, we are also working on a way to clean up a single namespace so that you don’t need to do the manual operation you described. Nonetheless, let’s try to get you out of this state.

I suspect that your namespace deletion is stuck because you tried to delete the namespace directly. This operation removes the operator before removing some other resources like a database. When you try to delete a DB CR without the operator running, the DB resource will be stuck waiting for the operator to clean up the finalizers.
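You can see this state directly on a stuck resource: its deletionTimestamp is set, but the finalizer list is non-empty because no operator is left to act on it. For example (the resource name "my-db" and the placeholder namespace are hypothetical — substitute your own):

```shell
# Show the deletion timestamp and remaining finalizers of a stuck
# PXC resource; "my-db" is a placeholder name.
kubectl -n <YourStuckNamespace> get pxc my-db \
  -o jsonpath='{.metadata.deletionTimestamp}{"\n"}{.metadata.finalizers}{"\n"}'
```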

You can check which resources are stuck waiting for their finalizers to be removed by running:

kubectl describe namespace <YourStuckNamespace>

Depending on the DB types you provisioned you’ll see references to perconaxtradbclusters.pxc.percona.com, perconaservermongodbs.psmdb.percona.com or perconapgclusters.pgv2.percona.com and postgresclusters.postgres-operator.crunchydata.com.
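To see at a glance which of these resource types still has objects in the namespace, you can loop over the same short names used in the commands below; types that don't exist in your cluster are silently skipped:

```shell
# List leftover database custom resources that may still hold finalizers.
NAMESPACE=<YourStuckNamespace>
for kind in pxc psmdb pg postgrescluster db; do
  kubectl -n "$NAMESPACE" get "$kind" -o name 2>/dev/null
done
```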

You can force the deletion of these resources by running the following commands:

export NAMESPACE=<YourStuckNamespace>
kubectl -n $NAMESPACE get pxc -o name | awk -F '/' '{print $2}' | xargs --no-run-if-empty kubectl patch pxc -n $NAMESPACE -p '{"metadata":{"finalizers":null}}' --type merge

kubectl -n $NAMESPACE get psmdb -o name | awk -F '/' '{print $2}' | xargs --no-run-if-empty kubectl patch psmdb -n $NAMESPACE -p '{"metadata":{"finalizers":null}}' --type merge

kubectl -n $NAMESPACE get pg -o name | awk -F '/' '{print $2}' | xargs --no-run-if-empty kubectl patch pg -n $NAMESPACE -p '{"metadata":{"finalizers":null}}' --type merge

kubectl -n $NAMESPACE get postgrescluster -o name | awk -F '/' '{print $2}' | xargs --no-run-if-empty kubectl patch postgrescluster -n $NAMESPACE -p '{"metadata":{"finalizers":null}}' --type merge

kubectl -n $NAMESPACE get db -o name | awk -F '/' '{print $2}' | xargs --no-run-if-empty kubectl patch db -n $NAMESPACE -p '{"metadata":{"finalizers":null}}' --type merge

This should hopefully unlock the deletion of your namespace.
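A side note since the original poster is on macOS: --no-run-if-empty is a GNU xargs flag that BSD xargs (the macOS default) does not support, although BSD xargs already skips empty input. The five patch commands can also be condensed into one portable loop; this is a sketch with the same intent, relying on kubectl accepting the TYPE/NAME form that `-o name` emits:

```shell
# Strip finalizers from every leftover DB custom resource so the
# namespace deletion can proceed. Missing resource types are skipped.
NAMESPACE=<YourStuckNamespace>
for kind in pxc psmdb pg postgrescluster db; do
  kubectl -n "$NAMESPACE" get "$kind" -o name 2>/dev/null | while read -r res; do
    kubectl -n "$NAMESPACE" patch "$res" --type merge -p '{"metadata":{"finalizers":null}}'
  done
done
```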