Scraping data from client at the wrong interval

I installed pmm-server 1.10.0 via Docker, and I am using pmm-client version 1.10.0 as well.

OS: Red Hat 7.4

Nothing abnormal happened during installation with the default settings.
After installation, I used [url]http://ip/prometheus[/url] to look at some machine metrics.
I found that there are only 4 records in a 20-minute window, as shown in pic 1.
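For reference, this is roughly how I pulled the data (the IP, metric name, and timestamps here are placeholders, not my exact query):

curl -s 'http://ip/prometheus/api/v1/query_range?query=node_load1&start=1525000000&end=1525001200&step=60'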

So I tried to check and tune the scrape settings in /etc/prometheus.yml.
But no matter what I changed, the behavior stayed the same.
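For example, one attempt was raising the linux job's scrape_interval and then restarting Prometheus inside the container. As far as I understand, the PMM server image runs its services under supervisord, so I restarted it with:

docker exec -it pmm-server supervisorctl restart prometheus

Even after that, the data still shows only one point roughly every 5 minutes.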

Could anyone help me figure this out?
Below is the initial prometheus.yml content.

global:
  scrape_interval: 1m
  scrape_timeout: 10s
  evaluation_interval: 1m
scrape_configs:
- job_name: prometheus
  scrape_interval: 1s
  scrape_timeout: 1s
  metrics_path: /prometheus/metrics
  scheme: http
  static_configs:
  - targets:
    - localhost:9090
    labels:
      instance: pmm-server
- job_name: grafana
  scrape_interval: 5s
  scrape_timeout: 4s
  metrics_path: /metrics
  scheme: http
  static_configs:
  - targets:
    - localhost:3000
    labels:
      instance: pmm-server
- job_name: linux
  scrape_interval: 1s
  scrape_timeout: 1s
  metrics_path: /metrics
  scheme: http
  static_configs:
  - targets:
    - localhost:9100
    labels:
      instance: pmm-server
  consul_sd_configs:
  - server: localhost:8500
    datacenter: dc1
    tag_separator: ','
    scheme: http
    services:
    - linux:metrics
  basic_auth:
    username: pmm
    password:
  tls_config:
    insecure_skip_verify: true
  relabel_configs:
  - source_labels: [__meta_consul_tags]
    separator: ;
    regex: .*,alias_([-\w:.]+),.*
    target_label: instance
    replacement: $1
    action: replace
  - source_labels: [__meta_consul_tags]
    separator: ;
    regex: .*,scheme_https,.*
    target_label: scheme
    replacement: https
    action: replace
- job_name: proxysql
  scrape_interval: 1s
  scrape_timeout: 1s
  metrics_path: /metrics
  scheme: http
  consul_sd_configs:
  - server: localhost:8500
    datacenter: dc1
    tag_separator: ','
    scheme: http
    services:
    - proxysql:metrics
  basic_auth:
    username: pmm
    password:
  tls_config:
    insecure_skip_verify: true
  relabel_configs:
  - source_labels: [__meta_consul_tags]
    separator: ;
    regex: .*,alias_([-\w:.]+),.*
    target_label: instance
    replacement: $1
    action: replace
  - source_labels: [__meta_consul_tags]
    separator: ;
    regex: .*,scheme_https,.*
    target_label: scheme
    replacement: https
    action: replace
- job_name: mongodb
  scrape_interval: 1s
  scrape_timeout: 1s
  metrics_path: /metrics
  scheme: http
  consul_sd_configs:
  - server: localhost:8500
    datacenter: dc1
    tag_separator: ','
    scheme: http
    services:
    - mongodb:metrics
  basic_auth:
    username: pmm
    password:
  tls_config:
    insecure_skip_verify: true
  relabel_configs:
  - source_labels: [__meta_consul_tags]
    separator: ;
    regex: .*,alias_([-\w:.]+),.*
    target_label: instance
    replacement: $1
    action: replace
  - source_labels: [__meta_consul_tags]
    separator: ;
    regex: .*,scheme_https,.*
    target_label: scheme
    replacement: https
    action: replace
  - source_labels: [__meta_consul_tags]
    separator: ;
    regex: .*,cluster_([-\w:.]+),.*
    target_label: cluster
    replacement: $1
    action: replace
- job_name: mysql-hr
  scrape_interval: 1s
  scrape_timeout: 1s
  metrics_path: /metrics-hr
  scheme: http
  consul_sd_configs:
  - server: localhost:8500
    datacenter: dc1
    tag_separator: ','
    scheme: http
    services:
    - mysql:metrics
  basic_auth:
    username: pmm
    password:
  tls_config:
    insecure_skip_verify: true
  relabel_configs:
  - separator: ;
    regex: (.*)
    target_label: job
    replacement: mysql
    action: replace
  - source_labels: [__meta_consul_tags]
    separator: ;
    regex: .*,alias_([-\w:.]+),.*
    target_label: instance
    replacement: $1
    action: replace
  - source_labels: [__meta_consul_tags]
    separator: ;
    regex: .*,scheme_https,.*
    target_label: scheme
    replacement: https
    action: replace
- job_name: mysql-mr
  scrape_interval: 5s
  scrape_timeout: 1s
  metrics_path: /metrics-mr
  scheme: http
  consul_sd_configs:
  - server: localhost:8500
    datacenter: dc1
    tag_separator: ','
    scheme: http
    services:
    - mysql:metrics
  basic_auth:
    username: pmm
    password:
  tls_config:
    insecure_skip_verify: true
  relabel_configs:
  - separator: ;
    regex: (.*)
    target_label: job
    replacement: mysql
    action: replace
  - source_labels: [__meta_consul_tags]
    separator: ;
    regex: .*,alias_([-\w:.]+),.*
    target_label: instance
    replacement: $1
    action: replace
  - source_labels: [__meta_consul_tags]
    separator: ;
    regex: .*,scheme_https,.*
    target_label: scheme
    replacement: https
    action: replace
- job_name: mysql-lr
  scrape_interval: 1m
  scrape_timeout: 5s
  metrics_path: /metrics-lr
  scheme: http
  consul_sd_configs:
  - server: localhost:8500
    datacenter: dc1
    tag_separator: ','
    scheme: http
    services:
    - mysql:metrics
  basic_auth:
    username: pmm
    password:
  tls_config:
    insecure_skip_verify: true
  relabel_configs:
  - separator: ;
    regex: (.*)
    target_label: job
    replacement: mysql
    action: replace
  - source_labels: [__meta_consul_tags]
    separator: ;
    regex: .*,alias_([-\w:.]+),.*
    target_label: instance
    replacement: $1
    action: replace
  - source_labels: [__meta_consul_tags]
    separator: ;
    regex: .*,scheme_https,.*
    target_label: scheme
    replacement: https
    action: replace
- job_name: rds-mysql-hr
  honor_labels: true
  scrape_interval: 1s
  scrape_timeout: 1s
  metrics_path: /metrics-hr
  scheme: http
  relabel_configs:
  - separator: ;
    regex: (.*)
    target_label: job
    replacement: mysql
    action: replace
- job_name: rds-mysql-mr
  honor_labels: true
  scrape_interval: 5s
  scrape_timeout: 1s
  metrics_path: /metrics-mr
  scheme: http
  relabel_configs:
  - separator: ;
    regex: (.*)
    target_label: job
    replacement: mysql
    action: replace
- job_name: rds-mysql-lr
  honor_labels: true
  scrape_interval: 1m
  scrape_timeout: 5s
  metrics_path: /metrics-lr
  scheme: http
  relabel_configs:
  - separator: ;
    regex: (.*)
    target_label: job
    replacement: mysql
    action: replace
- job_name: rds-basic
  honor_labels: true
  scrape_interval: 1m
  scrape_timeout: 55s
  metrics_path: /basic
  scheme: http
- job_name: rds-enhanced
  honor_labels: true
  scrape_interval: 10s
  scrape_timeout: 9s
  metrics_path: /enhanced
  scheme: http

Hi aaron

First, is there a particular reason you selected 1.10 instead of our latest 1.17.1 (released today in fact)? 1.10 is fine but just old…

Also, it is hard to read your paste due to the formatting, but it looks like you modified the global settings to scrape_interval=1m and scrape_timeout=10s. I'm not sure those values will behave the way you expect; for example, we generally maintain scrape_interval=1s on the high-resolution jobs in order to gather very high-resolution data.
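One quick sanity check: count the raw samples Prometheus actually stored over your 20-minute window (node_load1 here is just an example metric from the linux job):

count_over_time(node_load1{job="linux"}[20m])

With the intended 1s interval that should be on the order of 1,200 samples. If it comes back as roughly 4, the server really is scraping about every 5 minutes, and the problem is in how the server's config was generated rather than on the client.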

So, can you share with us the commands you used to create and start your Docker containers? That way we can see how the config file was generated from the environment variables passed to entrypoint.sh.
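For reference, the documented PMM 1.x setup looks something like this; the METRICS_RESOLUTION line is optional and is shown only as an example of the kind of variable that changes scrape intervals (5s here is illustrative):

docker create -v /opt/prometheus/data -v /opt/consul-data -v /var/lib/mysql -v /var/lib/grafana --name pmm-data percona/pmm-server:1.10.0 /bin/true

docker run -d -p 80:80 --volumes-from pmm-data --name pmm-server -e METRICS_RESOLUTION=5s --restart always percona/pmm-server:1.10.0

In particular, if something like METRICS_RESOLUTION=5m was passed at docker run time, entrypoint.sh adjusts the intervals in prometheus.yml accordingly, which would explain the one-sample-per-5-minutes behavior you are seeing.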

Thanks!