Alert Rules Lost Migrating to 2.40 from 2.39

Description:

After migration, all alert rules are lost and the following errors are shown:

Errors loading rules
Failed to load Grafana rules state: failure getting rules: invalid character 'z' after object key:value pair; failure getting rules: invalid character 'z' after object key:value pair
Failed to load Grafana rules config: failed to get alert rules: invalid character 'z' after object key:value pair

Steps to Reproduce:

Docker upgrade via the GUI

Version:

2.39 to 2.40

Logs:

logs.txt (66.5 KB)

Expected Result:

Rules should remain

Actual Result:

Alert Rules were lost

Hello @meyerder

In this case, you need to recreate/convert your alert templates, as the new PMM version changed many things for alerting.
Here is the reference from the PMM documentation: Percona Alerting - Percona Monitoring and Management

If you are upgrading from PMM 2.25 and earlier, alert templates will not be automatically migrated. This is because PMM 2.26.0 introduced significant changes to the core structure of rule templates.

In this scenario, you will need to manually recreate any custom rule templates that you want to transfer to PMM 2.26.0 or later.

Alert rules created with Integrated Alerting in PMM 2.30 and earlier are not automatically migrated to Percona Alerting.

@lalit.choudhary ALL alert rules are gone… All alert templates migrated properly. All alert rules/templates were built on 2.39, as I have not had this instance up for more than a month or maybe 45 days.

If you could give me a clue as to where to look to troubleshoot the issue, that would help.

I found this… but that is SQLite and not PostgreSQL. It makes me think that the issue might be with one of the Grafana alerts that may have existed (I can't remember, but I may have had one or two Grafana-managed alerts vs. all PMM templates).

@meyerder could you run the following commands within the Docker container and provide the result:
psql -U grafana
select id, data from alert_rule;
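
If it is easier, the same query can be run from the host without opening an interactive shell. A rough sketch, assuming the container is named pmm-server and the default grafana user/database apply (adjust to your setup):

# query Grafana's alert rules from the host via the PMM server container
docker exec -it pmm-server psql -U grafana -c "select id, data from alert_rule;"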

@nurlan Looks like it was a single alert rule… the data was not in proper JSON format, it looks like… I deleted this rule and all works now…

[{"refId":"A","queryType":"range","relativeTimeRange":{"from":3600,"to":0},"datasourceUid":"NuQ5hK64k","model":{"datasource":{"type":"loki","uid":"NuQ5hK64k"},"editorMode":"builder","expr":"sum by(applid) (count_over_time({job=\"system_logs\", filename=\"/var/log/messages\", applid=~\"xxx\", hostname=\"xxxxx\"} |~ "xxx" [1h]))","hide":false,"intervalMs":1000,"legendFormat":"{{appid}}","maxDataPoints":43200,"queryType":"range","refId":"A"}},{"refId":"B","queryType":"","relativeTimeRange":{"from":3600,"to":0},"datasourceUid":"__expr__","model":{"conditions":[{"evaluator":{"params":[14],"type":"lt"},"operator":{"type":"and"},"query":{"params":["A"]},"reducer":{"params":[],"type":"avg"},"type":"query"}],"datasource":{"type":"__expr__","uid":"__expr__"},"expression":"A","hide":false,"intervalMs":1000,"maxDataPoints":43200,"refId":"B","type":"classic_conditions"}}]

Great, thanks for the update.