Hello everyone,
I have created my own certificates; they are in /etc/pmm-certs on the host, which I mount into the Docker container at /srv/nginx:
$ ls -l /etc/pmm-certs
total 16
-rwxrwxrwx 1 root root 2017 Jun 5 09:11 ca.pem
-rwxrwxrwx 1 root root 769 Jun 5 09:11 dhparam.pem
-rwxrwxrwx 1 root root 1679 Jun 5 09:11 server.crt
-rwxrwxrwx 1 root root 1679 Jun 5 09:11 server.key
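In case it is relevant, this is how the key/certificate pair can be sanity-checked with openssl before mounting (this assumes an RSA key pair; the paths are the ones listed above):
$ openssl x509 -noout -modulus -in /etc/pmm-certs/server.crt | openssl md5
$ openssl rsa -noout -modulus -in /etc/pmm-certs/server.key | openssl md5
# the two digests should be identical if the key belongs to the certificate
$ openssl verify -CAfile /etc/pmm-certs/ca.pem /etc/pmm-certs/server.crt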
I used the following command to mount the certificates when running the Docker container:
docker run -d -p 443:443 --volumes-from pmm-data \
--name pmm-server -v /etc/pmm-certs:/srv/nginx \
--restart always percona/pmm-server:2
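To confirm that the bind mount itself works, the files can be listed from inside the container (the same filenames as on the host are expected here):
$ docker exec -it pmm-server ls -la /srv/nginx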
Up to this point everything works, but when I inspect pmm-server the health status is unhealthy, and as a result the UI is not accessible:
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 1787780,
"ExitCode": 0,
"Error": "",
"StartedAt": "2022-06-05T11:05:52.274904768Z",
"FinishedAt": "0001-01-01T00:00:00Z",
"Health": {
"Status": "unhealthy",
"FailingStreak": 7,
"Log": [
{
"Start": "2022-06-05T16:36:10.867167648+05:30",
"End": "2022-06-05T16:36:10.950526279+05:30",
"ExitCode": 1,
"Output": " % Total % Received % Xferd Average Speed Time Time Time Current\n Dload Upload Total Spent Left Speed\n\r 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0curl: (7) Failed connect to 127.0.0.1:80; Connection refused\n"
},
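The failing health check appears to be just a curl against 127.0.0.1:80 inside the container, so it can be reproduced by hand (the exact path the built-in check requests is not shown in the output above, so plain / is used here):
$ docker exec -it pmm-server curl -v http://127.0.0.1:80/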
This is what the docker logs show:
$ docker logs pmm-server
2022-06-05 11:05:52,441 INFO Included extra file "/etc/supervisord.d/alertmanager.ini" during parsing
2022-06-05 11:05:52,441 INFO Included extra file "/etc/supervisord.d/dbaas-controller.ini" during parsing
2022-06-05 11:05:52,441 INFO Included extra file "/etc/supervisord.d/grafana.ini" during parsing
2022-06-05 11:05:52,441 INFO Included extra file "/etc/supervisord.d/pmm.ini" during parsing
2022-06-05 11:05:52,441 INFO Included extra file "/etc/supervisord.d/prometheus.ini" during parsing
2022-06-05 11:05:52,441 INFO Included extra file "/etc/supervisord.d/qan-api2.ini" during parsing
2022-06-05 11:05:52,441 INFO Included extra file "/etc/supervisord.d/victoriametrics.ini" during parsing
2022-06-05 11:05:52,441 INFO Included extra file "/etc/supervisord.d/vmalert.ini" during parsing
2022-06-05 11:05:52,441 INFO Set uid to user 0 succeeded
2022-06-05 11:05:52,458 INFO RPC interface 'supervisor' initialized
2022-06-05 11:05:52,458 INFO supervisord started with pid 1
2022-06-05 11:05:53,463 INFO spawned: 'pmm-update-perform-init' with pid 15
2022-06-05 11:05:53,466 INFO spawned: 'postgresql' with pid 16
2022-06-05 11:05:53,469 INFO spawned: 'clickhouse' with pid 17
2022-06-05 11:05:53,477 INFO spawned: 'grafana' with pid 18
2022-06-05 11:05:53,481 INFO spawned: 'nginx' with pid 25
2022-06-05 11:05:53,488 INFO spawned: 'victoriametrics' with pid 27
2022-06-05 11:05:53,495 INFO spawned: 'vmalert' with pid 28
2022-06-05 11:05:53,515 INFO spawned: 'alertmanager' with pid 29
2022-06-05 11:05:53,524 INFO spawned: 'qan-api2' with pid 30
2022-06-05 11:05:53,532 INFO spawned: 'pmm-managed' with pid 31
2022-06-05 11:05:53,565 INFO spawned: 'pmm-agent' with pid 42
2022-06-05 11:05:53,566 INFO exited: nginx (exit status 1; not expected)
2022-06-05 11:05:53,775 INFO exited: qan-api2 (exit status 1; not expected)
2022-06-05 11:05:54,599 INFO success: pmm-update-perform-init entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2022-06-05 11:05:54,599 INFO success: postgresql entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2022-06-05 11:05:54,599 INFO success: clickhouse entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2022-06-05 11:05:54,599 INFO success: grafana entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2022-06-05 11:05:54,602 INFO spawned: 'nginx' with pid 86
2022-06-05 11:05:54,603 INFO success: victoriametrics entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2022-06-05 11:05:54,603 INFO success: vmalert entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2022-06-05 11:05:54,603 INFO success: alertmanager entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2022-06-05 11:05:54,603 INFO success: pmm-managed entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2022-06-05 11:05:54,603 INFO success: pmm-agent entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2022-06-05 11:05:54,660 INFO exited: nginx (exit status 1; not expected)
2022-06-05 11:05:54,780 INFO spawned: 'qan-api2' with pid 122
2022-06-05 11:05:55,843 INFO success: qan-api2 entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2022-06-05 11:05:57,226 INFO spawned: 'nginx' with pid 177
2022-06-05 11:05:57,251 INFO exited: nginx (exit status 1; not expected)
2022-06-05 11:06:00,423 INFO spawned: 'nginx' with pid 253
2022-06-05 11:06:00,437 INFO exited: nginx (exit status 1; not expected)
2022-06-05 11:06:02,186 INFO exited: pmm-update-perform-init (exit status 0; expected)
2022-06-05 11:06:04,599 INFO spawned: 'nginx' with pid 345
2022-06-05 11:06:04,618 INFO exited: nginx (exit status 1; not expected)
2022-06-05 11:06:10,601 INFO spawned: 'nginx' with pid 360
2022-06-05 11:06:10,617 INFO exited: nginx (exit status 1; not expected)
2022-06-05 11:06:17,227 INFO spawned: 'nginx' with pid 383
2022-06-05 11:06:17,244 INFO exited: nginx (exit status 1; not expected)
2022-06-05 11:06:24,600 INFO spawned: 'nginx' with pid 398
2022-06-05 11:06:24,620 INFO exited: nginx (exit status 1; not expected)
2022-06-05 11:06:33,342 INFO spawned: 'nginx' with pid 453
2022-06-05 11:06:33,358 INFO exited: nginx (exit status 1; not expected)
2022-06-05 11:06:42,513 INFO spawned: 'nginx' with pid 476
2022-06-05 11:06:42,529 INFO exited: nginx (exit status 1; not expected)
2022-06-05 11:06:52,650 INFO spawned: 'nginx' with pid 498
2022-06-05 11:06:52,666 INFO exited: nginx (exit status 1; not expected)
2022-06-05 11:06:53,668 INFO gave up: nginx entered FATAL state, too many start retries too quickly
$ docker exec -it pmm-server supervisorctl status
alertmanager RUNNING pid 29, uptime 0:04:14
clickhouse RUNNING pid 17, uptime 0:04:14
dbaas-controller STOPPED Not started
grafana RUNNING pid 18, uptime 0:04:14
nginx FATAL Exited too quickly (process log may have details)
pmm-agent RUNNING pid 42, uptime 0:04:14
pmm-managed RUNNING pid 31, uptime 0:04:14
pmm-update-perform STOPPED Not started
pmm-update-perform-init EXITED Jun 05 11:06 AM
postgresql RUNNING pid 16, uptime 0:04:14
prometheus STOPPED Not started
qan-api2 RUNNING pid 122, uptime 0:04:13
victoriametrics RUNNING pid 27, uptime 0:04:14
vmalert RUNNING pid 28, uptime 0:04:14
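To dig into why nginx keeps exiting, its configuration can be tested and its supervisor-captured output tailed from inside the container (supervisorctl tail is a standard supervisord command; the nginx binary is assumed to be on PATH inside the PMM image):
$ docker exec -it pmm-server nginx -t
$ docker exec -it pmm-server supervisorctl tail nginx stderr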
Why is this happening, and is there a workaround?
Thanks