I’m trying to follow the documentation on Migrate PMM 2 to PMM 3 - Percona Monitoring and Management to migrate from PMM 2 to PMM 3. However, the get-pmm.sh script is not working. It attempts to create a Docker volume to back up the PMM 2 data, but it isn’t substituting the pmm-data prefix, only appending the date/timestamp. Is anyone else having this issue?
[root@pmm ~]# ./get-pmm.sh -n pmm-server -b
Gathering/downloading required components, this may take a moment
Checking docker installation - installed.
Found Watchtower Token:
Generated Watchtower Token: random-2026-01-28-153206
Pulling percona/pmm-server:3
Existing PMM Server found, renaming to pmm-server-2026-01-28-153207
pmm-server
Backing up existing PMM Data Volume to -2026-01-28-153207
unknown shorthand flag: '2' in -2026-01-28-153207
Usage: docker volume create [OPTIONS] [VOLUME]
Run 'docker volume create --help' for more information
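If I’m reading the error right, the script effectively ran the following (reconstructed from the output above; the intended name is my assumption based on the pmm-data prefix mentioned in the docs):
# what the script actually ran (the volume name prefix is missing):
docker volume create -2026-01-28-153207
# what it presumably intended:
docker volume create pmm-data-2026-01-28-153207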
Hello Jimmy,
I hope you are well.
Can you share with us the output of the following command:
docker inspect pmm-server
The get-pmm.sh script gets the volume name to back up from the command docker inspect -f '{{ range .Mounts }}{{ if and (eq .Type "volume") (eq .Destination "/srv") }}{{ .Name }}{{ "\n" }}{{ end }}{{ end }}' pmm-server, so I would like to have the full output to see what’s going on.
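For comparison, on a default volume-based deployment that command prints the name of the /srv volume, e.g. (assuming the stock pmm-data volume):
docker inspect -f '{{ range .Mounts }}{{ if and (eq .Type "volume") (eq .Destination "/srv") }}{{ .Name }}{{ "\n" }}{{ end }}{{ end }}' pmm-server
# expected output on a default install:
pmm-data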
@hieu.nguyen I think I figured out at least what’s causing the script to malfunction. Our PMM 2 server was migrated to a Docker install from an OVA install some time ago. However, the way PMM 2 is deployed now seems to be different: the data is stored in a volume called pmm-data, whereas our deployment bind-mounts /srv out to /var/lib/pmm on the host. I’ve tried creating the volume and copying the content from /var/lib/pmm into pmm-data. The upgrade script is now able to run, but the upgrade still doesn’t seem to be working quite right. After the upgrade, the server comes up but I cannot log in as admin.
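For reference, this is roughly how I created the volume and copied the data over (a sketch of what I described above; the helper image choice is arbitrary):
# create the volume the upgrade script expects
docker volume create pmm-data
# copy the bind-mounted data into the new volume via a throwaway container
docker run --rm -v /var/lib/pmm:/from -v pmm-data:/to alpine cp -a /from/. /to/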
Hi Jimmy,
If I understand you correctly, you were able to get the pmm-server container to the RUNNING state but were not able to log in.
Can you share the PMM service logs with us to see if the upgrade has completed successfully? They are stored in the /srv/logs/ directory inside the pmm-server container.
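For example, you can copy the whole directory out of the container for inspection:
# copy the PMM service logs from the container to the host
docker cp pmm-server:/srv/logs ./pmm-server-logs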
Also, you can reset the Grafana admin password by running the following commands:
# open a bash session in pmm-server container as the pmm user
docker exec -ti pmm-server bash
# stop the grafana service
supervisorctl stop grafana
# reset grafana admin password
/usr/sbin/grafana cli --homepath=/usr/share/grafana --config=/etc/grafana/grafana.ini admin reset-admin-password <your new password>
# start the grafana service
supervisorctl start grafana
Yes, that is correct. However, I did some more digging: I was eventually able to disable the Grafana auto-lockout and then reset the admin password, but the instance did not contain any of my historical metrics data. To explain what I did:
- My current instance was migrated from OVA to Docker; however, /srv is bind-mounted out to the host at /var/lib/pmm
- I attempted to create a volume called pmm-data, copy in all the content of /var/lib/pmm, and fix the permissions on all directories and files
- Then I created a new pmm-server container running the percona/pmm-server:2.44.1 image, with /srv mounted to the new pmm-data volume I created
- The instance comes up and I can get to the UI, but the admin user password does not work, which indicates something about Grafana did not port over
- I checked /srv/logs/grafana.log and saw entries stating incorrect password and login failures, which was locking the user out
- I added disable_brute_force_login_protection = true to the security block in /etc/grafana.ini and restarted the container to disable the auto-lockout (see the snippet after this list)
- I tried to reset the password via the guide https://www.percona.com/blog/changing-the-default-admin-password-in-docker-based-deployment-of-pmm2/ but it did not seem to actually change the password, though now I’m able to get a prompt to set a new password
- I logged in with the new password but cannot find any of my historical data
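For reference, the lockout change from the list above is just this addition to the Grafana config (file path and container name as on my instance), followed by a container restart:
# /etc/grafana.ini: disable account lockout after repeated failed logins
[security]
disable_brute_force_login_protection = true
# then restart the container
docker restart pmm-server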
I’m not sure what the proper way is to migrate my PMM 2 instance to a conforming state where I can retain my historical data AND upgrade to PMM 3.
Hi Jimmy,
Can you try migrating from your original container using the following steps:
- Create a backup of your current deployment’s data by creating a compressed archive of the /srv directory in the container (see the example after these steps)
- Create the new percona/pmm-server:2.44.1 container
- Stop all running services in both containers:
docker exec -ti <original container> supervisorctl stop all
docker exec -ti <new container> supervisorctl stop all
- Copy data between the containers using docker cp, fix ownership, and restart the services:
docker cp <original container>:/srv .
docker cp srv <new container>:/
docker exec -ti <new container> bash
chown -R root:pmm /srv/clickhouse /srv/ia /srv/nginx /srv/pmm-distribution /srv/update
chown -R pmm:pmm /srv/logs /srv/victoriametrics /srv/alertmanager /srv/prometheus
chown -R grafana:grafana /srv/grafana
chown -R postgres:postgres /srv/postgres /srv/logs/postgresql.log
docker exec -ti <new container> supervisorctl start all
# validate services are running
docker exec -ti <new container> supervisorctl status all
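For the backup in the first step, something like this should work (archive name and location are just examples):
# create a compressed archive of /srv inside the original container
docker exec -t <original container> tar czf /tmp/srv-backup.tar.gz -C / srv
# copy the archive out to the host for safekeeping
docker cp <original container>:/tmp/srv-backup.tar.gz .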
If you can see your historical data in the new container, then upgrading to PMM 3 should just be a matter of running the get-pmm.sh script as you did at the beginning.
Ok, I’ll give your instructions a try.
Ok, that seems to have worked! A few minor things, though: a few directories did not exist per your commands to set the permissions.
chown: cannot access '/srv/ia': No such file or directory
chown: cannot access '/srv/update': No such file or directory
chown: cannot access '/srv/postgres': No such file or directory
chown: cannot access '/srv/logs/postgresql.log': No such file or directory
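To work around that, I ran roughly the following inside the new container (names as found on my system: postgres here is actually postgres14, and its log likewise):
# the ia and update directories were missing; create them and set ownership
mkdir -p /srv/ia /srv/update
chown -R root:pmm /srv/ia /srv/update
# postgres is called postgres14 on this instance, along with its log
chown -R postgres:postgres /srv/postgres14 /srv/logs/postgresql14.log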
With that, the historical data seems to have ported over just fine and I’m able to see the existing services. Though I did notice this after running get-pmm.sh to upgrade to PMM 3. Not quite sure how to fix it.

Also, get-pmm.sh installed a watchtower container alongside, but it seems to be crash-looping.
time="2026-01-30T21:50:14Z" level=info msg="Waiting for the notification goroutine to finish" notify=no
time="2026-01-30T21:51:14Z" level=warning msg="No allowed image repositories specified. Setting default value to percona/pmm-server and perconalab/pmm-server"
time="2026-01-30T21:51:15Z" level=error msg="Error response from daemon: client version 1.25 is too old. Minimum supported API version is 1.44, please upgrade your client to a newer version"
Hey Jimmy,
That’s a known issue with watchtower; you can fix it by adding the environment variable DOCKER_API_VERSION=1.45 to the container.
You can find more details about this in the Troubleshoot section of PMM documentation: Troubleshoot upgrade issues - Percona Monitoring and Management
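For example, a minimal sketch of recreating the watchtower container with that variable set (container and image names assumed from the get-pmm.sh defaults; keep whatever other flags and environment variables the script originally used, such as the Watchtower token):
docker rm -f watchtower
docker run -d --name watchtower \
  -e DOCKER_API_VERSION=1.45 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  percona/watchtower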
Got it. I resolved the watchtower container issue as instructed. However, the QAN postgresql pgstatements agent status in the Services section is still showing red. Is there a way to reset it?
Hi Jimmy,
Can you share PMM’s log files (by accessing the URL https://<PMM_SERVER>/logs.zip)?
The compressed archive should contain the log for the QAN agent (named in the format AGENT_TYPE_QAN_POSTGRESQL_PGSTATEMENTS_AGENT <agentID>.log), hopefully it will tell us what’s wrong with your agent.
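For example (the endpoint requires your PMM admin credentials; -k is only needed with a self-signed certificate):
curl -k -u admin:<password> -o logs.zip https://<PMM_SERVER>/logs.zip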
Hi Jimmy,
That error is due to PMM 3 disabling the QAN agent for the PMM server node by default (while it is still enabled in PMM2), so the server cannot find any agents with that ID.
You can fix it by enabling QAN for PMM Server in PMM advanced settings: https://<PMM Server IP>:<PMM Server Port>/graph/settings/advanced-settings
- Enable QAN for PMM Server and click Apply changes
- Check on the PMM server’s agents and you will see them all running, with a new ID for the QAN agent (see the command after this list)
- After that, you can disable QAN for PMM Server again to reduce the load on the server as well as the amount of metrics stored in ClickHouse, if necessary. Doing so won’t affect the monitoring status (the QAN agent’s status will change to Done)
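If you want to double-check from the command line, listing the server node’s agents should show the new QAN agent ID (a sketch; pmm-admin is available inside the pmm-server container):
docker exec -t pmm-server pmm-admin list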
Hope that helps.
Perfect, that did the trick! Thank you!