I have a use case to back up and store PMM2 data (both VictoriaMetrics and ClickHouse) for future reference.
Referring to the doc below, I am getting an error while exporting the data. How do I fix this?
Error:
remoteAddr: "127.0.0.1:49014", X-Forwarded-For: "127.0.0.1"; requestURI: /prometheus/api/v1/export/native?match={__name__!=""}; error during sending native data to remote client: search error after reading 0 data blocks: error when searching for tagFilters=[{__name__=~".+"}] on the time range [1970-01-01T00:00:00Z..2024-10-24T16:05:56Z]: error when searching for metricIDs in the current indexdb: the number of matching timeseries exceeds 10000000; either narrow down the search or increase -search.max* command-line flag values at vmselect; see https://docs.victoriametrics.com/#resource-usage-limits
I tried this on a staging cluster with a smaller data set (< 100 GB) and it worked fine, but on the prod server, where my PMM data is > 600 GB, it hits the error above. How do I fix this?
I even tried increasing the search.max* arguments in the API call, as the error message suggests, with no luck!
I also tried limiting the export to a shorter duration using the start and end arguments, but I still get the same error!
These are parameters you need to pass to VictoriaMetrics on startup.
They are not URL parameters, as you showed above. You will need to exec into the Docker container and adjust the supervisord config to launch VictoriaMetrics with increased limits, or you need to reduce your data set and export it in much smaller batches.
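For reference, a rough sketch of that supervisord adjustment. The config path and flag values here are assumptions that may vary by PMM version, so verify them in your container; note that the "10000000" in the error matches the default of VictoriaMetrics' `-search.maxExportSeries` flag, which caps the `/api/v1/export` endpoints, so that flag (alongside `-search.maxUniqueTimeseries`) is the likely one to raise.

```shell
# Sketch only -- the file path and values are assumptions; verify for your PMM version.
docker exec -it pmm-server bash

# Append the limits to the victoriametrics command line in the supervisord config,
# e.g. add:  -search.maxExportSeries=50000000 -search.maxUniqueTimeseries=50000000
vi /etc/supervisord.d/victoriametrics.ini

# Have supervisord pick up the changed config and restart VictoriaMetrics:
supervisorctl reread
supervisorctl update
```

Be aware that pmm-managed may regenerate this file on restart, so the change may not survive a container restart.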
I would recommend splitting your data into multiple chunks and copying them one by one.
Another backup option is physically copying the /srv directory, but this must be done with the PMM Server stopped.
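The chunked-export idea above can be sketched as follows. This is a minimal sketch, not a definitive tool: it assumes the VictoriaMetrics export endpoint PMM exposes at `/prometheus/api/v1/export`, and the base URL, credentials, chunk size, and output naming are placeholders you would adapt to your deployment.

```python
import urllib.parse
import urllib.request


def chunk_ranges(start, end, step):
    """Split the half-open interval [start, end) of Unix timestamps
    into consecutive sub-ranges of at most `step` seconds."""
    ranges = []
    cur = start
    while cur < end:
        nxt = min(cur + step, end)
        ranges.append((cur, nxt))
        cur = nxt
    return ranges


def export_chunks(base_url, start_ts, end_ts, step=3600, out_prefix="vm-export"):
    """Fetch each time-range chunk from the export API and write it to its own file.

    `base_url` is a placeholder such as "http://user:pass@127.0.0.1" -- a real
    deployment likely needs proper authentication and TLS handling.
    """
    for i, (s, e) in enumerate(chunk_ranges(start_ts, end_ts, step)):
        query = urllib.parse.urlencode(
            {"match[]": '{__name__!=""}', "start": s, "end": e}
        )
        url = f"{base_url}/prometheus/api/v1/export?{query}"
        # Each chunk is written to its own JSON-lines file.
        with urllib.request.urlopen(url) as resp, \
                open(f"{out_prefix}-{i:04d}.jsonl", "wb") as out:
            out.write(resp.read())
```

Smaller `step` values keep each query under the per-query series limit at the cost of more requests; if a single chunk still trips the limit, narrowing the `match[]` selector (e.g. per job or per instance) is the other lever.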
@matthewb To take a Docker export of the data volume, do we need to stop pmm-server, or can I take the dump while it is receiving new data?
@nurlan
Can you please share more details on how to split the data into multiple chunks and copy them?
Also, I had already tried physically copying the /srv directory after stopping the PMM server, but it was extremely slow, and we can't keep the prod PMM server stopped for that long.
@nurlan
The PMM dump is available via the UI in PMM 2.41, but in older versions (like mine, 2.39) it must be run from inside the PMM image.
I executed the export command in the container, but I'm still encountering the same error. Oddly, even when exporting just one day of data, it fails with "timeseries exceeds 10000000."
It seems pmm-dump might be using the VM export API internally. Are there any fixes for this, or am I approaching it incorrectly?
2024-10-27T12:28:13Z INF Credential user was obtained from pmm-url
2024-10-27T12:28:13Z INF Credential password was obtained from pmm-url
2024-10-27T12:28:13Z INF Exporting metrics...
2024-10-27T12:28:13Z INF Processing 1/289 chunk...
2024-10-27T12:28:13Z INF Processing 2/289 chunk...
------------
2024-10-27T12:28:13Z INF Processing 31/289 chunk...
2024-10-27T12:28:13Z INF Processing 32/289 chunk...
2024-10-27T12:28:20Z FTL Failed to export: failed to read chunks from source: failed to read chunk: non-OK response from victoria metrics: 400: remoteAddr: "127.0.0.1:53542", X-Forwarded-For: "127.0.0.1"; requestURI: /prometheus/api/v1/export?match%5B%5D=%7B__name__%3D~%22.%2A%22%7D&start=1726790459&end=1726790759; error when exporting data on the time range (start=1726790459000, end=1726790759000): cannot fetch data for "filters=[{__name__=~\".*\"}], timeRange=[2024-09-20T00:00:59Z..2024-09-20T00:05:59Z]": search error after reading 0 data blocks: error when searching for tagFilters=[{}] on the time range [2024-09-20T00:00:59Z..2024-09-20T00:05:59Z]: error when searching for metricIDs in the current indexdb: the number of matching timeseries exceeds 10000000; either narrow down the search or increase -search.max* command-line flag values at vmselect; see https://docs.victoriametrics.com/#resource-usage-limits