PMM 3 - Correct SELinux configuration for use with a separate disk?

Hello there,

I believe the following threads and issues:

All refer to the same issue:
In the installation procedure, it is not specified whether to keep SELinux enabled or disabled.

In my case, the container would be unhealthy:

# podman ps -a
CONTAINER ID  IMAGE                           COMMAND               CREATED             STATUS                         PORTS                  NAMES
3fb893d94976  docker.io/percona/pmm-server:3  /opt/entrypoint.s...  About a minute ago  Up About a minute (unhealthy)  0.0.0.0:443->8443/tcp  pmm-server

And the init container would constantly repeat this step, relaunching the init playbook every time and thus continuously disabling the web server by enabling maintenance mode:

# podman exec -it pmm-server supervisorctl status
clickhouse                       RUNNING   pid 18, uptime 1:48:12
grafana                          RUNNING   pid 19, uptime 1:48:12
nginx                            RUNNING   pid 20, uptime 1:48:12
nomad-server                     STOPPED   Not started
pmm-agent                        RUNNING   pid 428, uptime 1:48:09
pmm-init                         STARTING
pmm-managed                      RUNNING   pid 28, uptime 1:48:12
postgresql                       RUNNING   pid 17, uptime 1:48:12
qan-api2                         RUNNING   pid 436, uptime 1:48:09
victoriametrics                  RUNNING   pid 21, uptime 1:48:12
vmalert                          RUNNING   pid 22, uptime 1:48:12
vmproxy                          RUNNING   pid 23, uptime 1:48:12
# podman exec -it pmm-server tail -f /srv/logs/pmm-init.log
TASK [dashboards : Set permissions for the plugin directory] *******************
changed: [127.0.0.1]

TASK [dashboards : Synchronize Percona Dashboards version file after upgrade] ***
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: PermissionError: [Errno 13] Permission denied: b'/srv/grafana/.ansible_tmpma91at74PERCONA_DASHBOARDS_VERSION' -> b'/srv/grafana/PERCONA_DASHBOARDS_VERSION'
fatal: [127.0.0.1]: FAILED! => {"changed": false, "msg": "Unable to make b'/srv/grafana/tmpuwgjcg5e' into to /srv/grafana/PERCONA_DASHBOARDS_VERSION, failed final rename from b'/srv/grafana/.ansible_tmpma91at74PERCONA_DASHBOARDS_VERSION': [Errno 13] Permission denied: b'/srv/grafana/.ansible_tmpma91at74PERCONA_DASHBOARDS_VERSION' -> b'/srv/grafana/PERCONA_DASHBOARDS_VERSION'"}

PLAY RECAP *********************************************************************
127.0.0.1                  : ok=18   changed=3    unreachable=0    failed=1    skipped=1    rescued=0    ignored=0

Summary wouldn’t work either:

# podman exec -it pmm-server pmm-admin summary
open summary_c5420dafd6f7_2025_07_10_15_27_39.zip: permission denied

This was all due to the fact that I had set:

Environment=PMM_VOLUME_NAME=/data/pmm-server/

Inside /usr/lib/systemd/system/pmm-server.service.

This was to keep the data separate from the container executables and running containers, so that if the data disk fills up you can still access the machine.
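As an aside, rather than editing the packaged unit file under /usr/lib directly, the same change can be made with a systemd drop-in so it survives package upgrades. A minimal sketch (the PMM_VOLUME_NAME variable is the one used by the shipped unit; the /data/pmm-server path is my own choice):

```
# Create a drop-in override instead of editing the packaged unit:
systemctl edit pmm-server
# In the editor, add:
#   [Service]
#   Environment=PMM_VOLUME_NAME=/data/pmm-server/
# Then reload and restart:
systemctl daemon-reload
systemctl restart pmm-server
```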

I checked the documentation for PMM 3 but so far I couldn’t find anything relevant to SELinux.

So I'm asking here: is there a best practice or any guidance on how to handle SELinux with PMM 3 in the docker/podman setup, while mounting an external disk for larger, separate PMM data retention?

I have considered a few solutions so far:

  1. Setting up /data/pmm-server with the appropriate SELinux context (but this would be a customization, and I’d like to make as few deviations from the documented setup as possible)

  2. Leaving it as a podman volume, but this would eventually exhaust disk space on the OS disk

  3. Mounting the podman volume directory on an external disk and running restorecon (but I don’t think this is a podman or docker best practice?)

  4. Disabling SELinux
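For option 1, a minimal sketch of what a persistent labelling could look like (assuming semanage is available from the policycoreutils-python-utils package, and that /data/pmm-server is the bind-mounted directory; container_file_t is the type container runtimes expect for writable content):

```
# Record a persistent file-context rule for the data directory
# (survives system relabels, unlike a one-off chcon):
semanage fcontext -a -t container_file_t "/data/pmm-server(/.*)?"
# Apply the rule to the existing files:
restorecon -Rv /data/pmm-server
```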

Would any of these solutions suffice? Is there already documentation for this use case?

Thanks and best regards

Hi @SuperHammer,
I manage https://pmmdemo.percona.com, and I run with SELinux enabled. Here is how I start the PMM v3 container:

docker run -d -p 443:8443 -v pmm-data:/srv:Z --name pmm-server --network pmm-net --restart always docker.io/percona/pmm-server:3

[root@pmm-server ~]# getenforce
Enforcing

Be sure to use the :Z modifier when mounting volumes so that docker can manage the SELinux contexts.
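For context: :z asks the runtime to relabel the mounted content with a shared label usable by multiple containers, while :Z applies a private label (a unique MCS category pair) that only that one container can access. A quick way to check what label was actually applied (the volume path below assumes the default docker volume driver; adjust for your setup):

```
# Start with a private relabel of the volume:
docker run -d -p 443:8443 -v pmm-data:/srv:Z --name pmm-server \
    --network pmm-net --restart always docker.io/percona/pmm-server:3
# Inspect the applied label: look for container_file_t, and with :Z
# a unique category pair such as s0:c123,c456
ls -dZ /var/lib/docker/volumes/pmm-data/_data
```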

Hi @matthewb ,
Just tested this:

Rolled back to enforcing SELinux locally:

[root@ded-pmm ~]# systemctl stop pmm-server
# podman ps -a
CONTAINER ID  IMAGE       COMMAND     CREATED     STATUS      PORTS       NAMES
# getenforce
Permissive
# setenforce 1
# getenforce
Enforcing

And I modified /usr/lib/systemd/system/pmm-server.service to try both :z (shared relabel) and :Z (private relabel):

Environment=PMM_VOLUME_NAME=/bind/mount/to/pmm-server/
...
    --volume=${PMM_VOLUME_NAME}:/srv:z \ (also tried ":Z" here)

Ran systemctl daemon-reload.

I forgot to mention that originally I was using :z but still encountered the problem.

After using :Z it is now working correctly, but then I reverted the modification back to :z and it now seems to keep working correctly. I did not apply any other modifications. Maybe running with setenforce 0 and then using :Z relabeled something, which unblocked the init process and let it go forward?
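If the problem reappears, the audit log should show whether SELinux is actually the culprit; a quick check (assuming auditd is running; ausearch comes with the audit package):

```
# List recent SELinux denials; a labelling problem on the bind
# mount shows up as AVC denials against the container processes:
ausearch -m AVC -ts recent
# Without auditd, denials land in the kernel log instead:
dmesg | grep -i 'avc:.*denied'
```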

Anyway, switching to :Z did solve the problem, so I’m marking your answer as the solution. I’ll keep an eye on this issue in case it reappears. Thanks for the help!