Hi,
I have a question about stopping Prometheus in Docker.
When I restart the PMM Docker container, Prometheus seems to crash and then goes through crash recovery, logging output like this:
.
.
.
time="2018-01-10T06:53:00Z" level=info msg="5170000 archived metrics checked." source="crashrecovery.go:418"
time="2018-01-10T06:53:01Z" level=info msg="5180000 archived metrics checked." source="crashrecovery.go:418"
time="2018-01-10T06:53:01Z" level=info msg="5190000 archived metrics checked." source="crashrecovery.go:418"
time="2018-01-10T06:53:01Z" level=info msg="5200000 archived metrics checked." source="crashrecovery.go:418"
time="2018-01-10T06:53:01Z" level=info msg="5210000 archived metrics checked." source="crashrecovery.go:418"
time="2018-01-10T06:53:01Z" level=info msg="5220000 archived metrics checked." source="crashrecovery.go:418"
time="2018-01-10T06:53:01Z" level=info msg="5230000 archived metrics checked." source="crashrecovery.go:418"
time="2018-01-10T06:53:01Z" level=info msg="5240000 archived metrics checked." source="crashrecovery.go:418"
time="2018-01-10T06:53:02Z" level=info msg="5250000 archived metrics checked." source="crashrecovery.go:418"
time="2018-01-10T06:53:02Z" level=info msg="5260000 archived metrics checked." source="crashrecovery.go:418"
time="2018-01-10T06:53:02Z" level=info msg="5270000 archived metrics checked." source="crashrecovery.go:418"
time="2018-01-10T06:53:02Z" level=info msg="5280000 archived metrics checked." source="crashrecovery.go:418"
time="2018-01-10T06:53:02Z" level=info msg="5290000 archived metrics checked." source="crashrecovery.go:418"
time="2018-01-10T06:53:02Z" level=info msg="5300000 archived metrics checked." source="crashrecovery.go:418"
time="2018-01-10T06:53:02Z" level=info msg="5310000 archived metrics checked." source="crashrecovery.go:418"
time="2018-01-10T06:53:02Z" level=info msg="5320000 archived metrics checked." source="crashrecovery.go:418"
.
.
.
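For reference, I restart the container with a plain Docker restart, roughly like this (assuming the usual pmm-server container name; adjust for your setup):

docker restart pmm-server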
I think this log comes from Prometheus's crash recovery process, but it takes far too long.
So I want to know how we can stop Prometheus gracefully, so that it does not crash and need recovery on the next start.
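To make the question concrete: would something like the command below be enough, where the longer stop timeout is only my guess at giving Prometheus time to checkpoint before Docker sends SIGKILL (600 seconds is just an example value)?

docker stop --time 600 pmm-server

Or is there a recommended way to shut down PMM so that Prometheus exits cleanly?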
Thanks.