I upgraded PMM 1.10 to 1.15

Hello,

I upgraded PMM from 1.10 to 1.15, but the old graphs (data scraped on 1.10) are no longer displaying.

How can I view the old scraped data?

I upgraded the client to 1.15 before upgrading the server, but the result is the same.

Hi next1009

I’m glad to see you on 1.15 now. You should not have lost any data; rather, we upgraded Prometheus in 1.13, so there are in fact two Prometheus daemons running in PMM Server. Prometheus has a feature called remote_read, so any queries not satisfied by the first instance will automatically be fetched from the second instance.

You can review the logs; please share anything of interest. They are at /var/log/prometheus.log (Prom2) and /var/log/prometheus1.log (Prom1).
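As a sketch of how you might scan those files for trouble (run against sample data here so the commands work anywhere; on the real PMM Server you would point LOG at /var/log/prometheus.log or /var/log/prometheus1.log inside the container):

```shell
# Sample log standing in for the real file -- hypothetical contents
# illustrating the kind of lines to look for.
LOG=./prometheus.log
cat > "$LOG" <<'EOF'
level=warn ts=2018-10-30T07:21:18Z msg="append failed" err="out of order sample"
level=info ts=2018-10-30T07:21:18Z msg="head GC completed"
level=warn ts=2018-10-30T07:21:19Z msg="append failed" err="out of order sample"
EOF

# Count "append failed" warnings; a steadily growing count suggests
# samples are being rejected rather than stored.
grep -c 'append failed' "$LOG"
```

On the real server, also grep for `level=error` and share anything unusual.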

Hi Michael Coburn

I upgraded from 1.13 to 1.15 once more, but I still can’t see the old graphs.

I see “append failed” in /var/log/prometheus.log:

What can I do?

level=warn ts=2018-10-30T07:21:18.100072499Z caller=scrape.go:713 component="scrape manager" scrape_pool=mysql-mr target=https://host:42002/metrics-mr msg="append failed" err="out of order sample"
level=info ts=2018-10-30T07:21:18.122511242Z caller=head.go:348 component=tsdb msg="head GC completed" duration=76.634098ms
level=info ts=2018-10-30T07:21:18.274702031Z caller=head.go:357 component=tsdb msg="WAL truncation completed" duration=152.141564ms
level=info ts=2018-10-30T07:21:19.043692061Z caller=compact.go:398 component=tsdb msg="write block" mint=1540684800000 maxt=1540692000000 ulid=01CV1XJ7SQV4B8MN9A8DA7YWBV
level=info ts=2018-10-30T07:21:19.254681803Z caller=head.go:348 component=tsdb msg="head GC completed" duration=59.961139ms
level=info ts=2018-10-30T07:21:19.404501207Z caller=head.go:357 component=tsdb msg="WAL truncation completed" duration=149.750971ms
level=warn ts=2018-10-30T07:21:19.738257458Z caller=scrape.go:942 component="scrape manager" scrape_pool=mysql-mr target=https://host:42002/metrics-mr msg="Error on ingesting out-of-order samples" num_dropped=2
level=warn ts=2018-10-30T07:21:19.738312684Z caller=scrape.go:713 component="scrape manager" scrape_pool=mysql-mr target=https://host:42002/metrics-mr msg="append failed" err="out of order sample"
level=info ts=2018-10-30T07:21:19.973328165Z caller=compact.go:398 component=tsdb msg="write block" mint=1540692000000 maxt=1540699200000 ulid=01CV1XJ8WJ65ZATYPHHG36QKN9
level=info ts=2018-10-30T07:21:20.228745853Z caller=head.go:348 component=tsdb msg="head GC completed" duration=89.68515ms
level=info ts=2018-10-30T07:21:20.41413842Z caller=head.go:357 component=tsdb msg="WAL truncation completed" duration=185.29103ms
level=warn ts=2018-10-30T07:21:21.674887216Z caller=scrape.go:942 component="scrape manager" scrape_pool=mysql-mr target=https://host:42002/metrics-mr msg="Error on ingesting out-of-order samples" num_dropped=1
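For reference, the mint/maxt values in the "write block" lines above are Unix timestamps in milliseconds; dropping the last three digits gives seconds, which GNU date (assumed available) can convert to show which two-hour window each TSDB block covers:

```shell
# mint=1540684800000 and maxt=1540692000000 from the first "write block" line,
# converted from milliseconds to seconds and printed as UTC:
date -u -d @1540684800 +%Y-%m-%dT%H:%M:%SZ   # block start (mint)
date -u -d @1540692000 +%Y-%m-%dT%H:%M:%SZ   # block end (maxt)
```

This confirms the blocks being compacted cover 2018-10-28, i.e. data from before the latest restart.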

Hi next1009

Can you also check the log file /var/log/prometheus1.log for any entries, particularly whether the instance is restarting frequently?
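One quick way to spot a crash loop (a sketch against sample data; on the real server you would grep /var/log/prometheus1.log itself — Prometheus writes a "Starting prometheus" line on each startup):

```shell
# Hypothetical sample log standing in for /var/log/prometheus1.log.
LOG=./prometheus1.log
cat > "$LOG" <<'EOF'
level=info ts=2018-10-30T07:00:01Z caller=main.go msg="Starting prometheus"
level=info ts=2018-10-30T07:05:12Z caller=main.go msg="Starting prometheus"
level=info ts=2018-10-30T07:09:44Z caller=main.go msg="Starting prometheus"
EOF

# Several startup lines only minutes apart indicate the daemon
# is restarting repeatedly rather than staying up.
grep -c 'Starting prometheus' "$LOG"
```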

Hi Michael Coburn

I checked /var/log/prometheus1.log after the upgrade.

After a Docker restart, there was no /var/log/prometheus1.log file.

Thank you for your help, and I’m sorry for my bad English.

Hi next1009, please review all of the Server logs for some ideas. Their location is explained in https://www.percona.com/doc/percona-monitoring-and-management/release-notes/1.15.0.html#server-and-client-logs