Alerting and "Unexpected error" in graph rendered by GF Image Renderer service

Description:

I’m running two Docker containers, one for PMM and one for the Grafana Image Renderer service. I’ve used this link as a resource for this configuration:

Regarding alert notifications: I’ve been able to configure the environment so that an image of the graph corresponding to the alert is included in the alert email, but the graph is incomplete. At the center of the graph panel it shows “No Data” and in the upper right-hand corner it shows “Unexpected Error”.

Steps to Reproduce:

[Step-by-step instructions on how to reproduce the issue, including any specific settings or configurations]

Version:

PMM: 2.44.0
GF Image Renderer: 3.12.1

Logs:

[If applicable, include any relevant log files or error messages]

Expected Result:

I’m expecting to see an image of a graph containing data relevant to the alert.

Actual Result:

An image of the graph is rendered but contains no data and an error message.

Additional Information:

I don’t know if the log messages are related but here are the docker logs for the “renderer” container when an alert is triggered:
docker logs [render_container_here]
{"level":"error","message":"Browser console error","msg":"Failed to load resource: the server responded with a status of 401 ()","url":"https://[server_name]/v1/inventory/Services/ListTypes"}
{"level":"error","message":"Browser console error","msg":"Failed to load resource: the server responded with a status of 401 ()","url":"https://[server_name]/v1/Settings/Get"}
{"level":"error","message":"Browser console error","msg":"Failed to load resource: the server responded with a status of 401 ()","url":"https://[server_name]/v1/Platform/UserStatus"}
{"level":"error","message":"Browser console error","msg":"Failed to load resource: the server responded with a status of 401 ()","url":"https://[server_name]/v1/management/Advisors/List"}
{"level":"error","message":"Browser console error","msg":"Failed to load resource: the server responded with a status of 401 ()","url":"https://[server_name]/v1/user"}
{"level":"error","message":"Browser console error","msg":"Failed to load resource: the server responded with a status of 401 ()","url":"https://[server_name]/v1/Platform/ServerInfo"}
{"level":"error","message":"Browser console error","msg":"Failed to load resource: the server responded with a status of 401 ()","url":"https://[server_name]/graph/api/ds/query"}

These 7 entries in the log file continuously repeat.

Thus far, I’ve not been able to find any information to help me understand the cause, or what’s missing in my configuration. Any thoughts, advice or info anyone has to share would be appreciated.

401 means unauthorized. Do you have roles/permissions enabled?

Verify the graph is working. Navigate to that graph in PMM, then “Share” the graph and you should get an image link. Verify that image link shows the image.
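
If it helps, the same render endpoint that “Direct link rendered image” points to can also be fetched from the command line. This is just a sketch: the host, credentials, dashboard UID/slug and panel id below are placeholders to replace with your own values (with LDAP authentication you may need an API key instead of basic auth), and -k is only there to tolerate a self-signed certificate.

      # Fetch a rendered panel PNG directly from PMM's Grafana (all values are placeholders)
      curl -k -u admin:admin \
        "https://pmm.example.com/graph/render/d-solo/<dashboard-uid>/<dashboard-slug>?orgId=1&panelId=<panel-id>&width=1000&height=500" \
        -o panel.png

If panel.png shows the graph with data, the renderer itself is working and the problem is specific to the alerting screenshot path.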

Yes, when performing this action the image link does show the image I’d expect to see in the alert. It contains the graph and the data in the graph.

Hi Matthew, perhaps this is the piece I’m missing? I’ve not configured anything specific to roles/permissions. Can you please point me in the right direction to find more details on this requirement? Thanks

As far as I know you should not need any special permissions. Please show the docker commands you used to launch both containers.

services:
  grafana-image-renderer-service:
    environment:
      - COMPOSE_PROJECT_NAME=pmm-project
      - IGNORE_HTTPS_ERRORS=true
      - GF_RENDERING_IGNORE_HTTPS_ERRORS=true
      - ENABLE_METRICS=true
      - GF_AUTH_TOKEN=12345
      - GF_LOG_LEVEL=debug
      - GF_LOG_FILTERS=rendering:debug
    build:
      context: ./renderer_build
      dockerfile: Dockerfile
    image: grafana/grafana-image-renderer:3.12.1
    container_name: renderer
    networks:
      - pmm-network
    restart: unless-stopped

  pmm-service:
    environment:
      - COMPOSE_PROJECT_NAME=pmm-project
      - GF_RENDERING_SERVER_URL=http://renderer:8081/render
      - GF_RENDERING_CALLBACK_URL=https://[name-redacted]/graph/
      - GF_RENDERER_TOKEN=12345
      - GF_AUTH_ANONYMOUS_ENABLED=true
      - GF_EXTERNAL_IMAGE_STORAGE_PROVIDER=local
      - GF_ALERTING_ENABLED=false
      - GF_UNIFIED_ALERTING_ENABLED=true
      - GF_UNIFIED_ALERTING_SCREENSHOTS_CAPTURE=true
      - GF_UNIFIED_ALERTING_SCREENSHOTS_CAPTURE_TIMEOUT=30s
      - GF_UNIFIED_ALERTING_EVALUATION_TIMEOUT=90s
      - GF_LOG_LEVEL=debug
      - GF_LOG_FILTERS=rendering:debug
    build:
      context: ./pmm_build
      dockerfile: Dockerfile
    depends_on:
      - grafana-image-renderer-service
    ports:
      - "443:443"
    image: pmm-server:latest
    container_name: pmm-server
    networks:
      - pmm-network
    volumes:
      - pmm-data:/srv
    restart: unless-stopped
    stdin_open: true
    tty: true
    command: /opt/entrypoint.sh

networks:
  pmm-network:
    name: pmm-network
    external: true

volumes:
  pmm-data:
    name: pmm-data
    external: true

Can you try clearing your browser cache and trying again?
What does the error message screenshot contain?
Do you have Access Control configured for metrics?
Do you see anything else besides those 401 errors?

Hi Nurlan,

The issue I’m experiencing doesn’t involve the use of a browser. Everything that involves the use of the browser with PMM is working well.

The issue is that the graph images embedded in email alerts generated by PMM contain no data: the words “No data” appear in the center of the image and “Unexpected error” appears in the upper right-hand corner.

So the images are being embedded in the email alerts; they are just incomplete (they do not include the data expected in the graph).

When I go to the graph in question in PMM I can view the graph and its data without any problem. I can also verify the image is properly rendered by “Sharing” the graph as matthewb mentioned previously.

In the Docker logs for the renderer container I see nothing else in the logs aside from the 401 errors shown above.

Sorry, I should have specified: I’m using Docker Compose (docker compose up -d) with the docker-compose.yml file provided above.

The reason I asked you to clear the cache is that Grafana forwards cookies from your browser to the Grafana Image Renderer, which could lead to the problem you’re facing. And since we changed the cookie path during a PMM upgrade, it could be sending outdated cookies.

OK, thanks for explaining. I did do what you asked, although I still don’t understand its relevance.

Sorry if I’m missing something, but let me describe a scenario that might better illustrate what I’m experiencing.

If I was only a recipient of email alerts from PMM and never opened the PMM site in my browser, my experience would be the following:

  1. I would receive an alert sent to my email with the alert condition.
  2. Upon opening the email I see a graph image in the body of the email, but the graph contains no data. There are no lines on the chart indicating the metrics, the words “No data” are embedded in the center of the image, and the words “Unexpected error” are embedded in the upper right-hand corner.
  3. In this scenario no browser is involved. I do not have to open a browser to view the email and the embedded image; I can view them in my email program.

Thanks for your reply.

Hello Jay,
Using your docker compose file, we were able to configure the alert and get the dashboard image as well.
Can you confirm that you have the dashboard and that both of your Docker containers can communicate with each other?
You can use the container names here instead, like this:

      - GF_RENDERING_SERVER_URL=http://renderer:8081/render
      - GF_RENDERING_CALLBACK_URL=https://pmm-server:443/graph/
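
To confirm the two containers can actually reach each other, something like the following can be run. This assumes the container names renderer and pmm-server from your compose file and that curl is available inside both images; any HTTP status code in the output (even 404) proves the connection itself works.

      # From the PMM container, check that the renderer answers on port 8081
      docker exec pmm-server curl -s -o /dev/null -w '%{http_code}\n' http://renderer:8081/render/version

      # From the renderer container, check that the callback URL resolves and responds
      docker exec renderer curl -sk -o /dev/null -w '%{http_code}\n' https://pmm-server:443/graph/login

It may also be worth double-checking the auth token variables. As far as we know, Grafana reads GF_RENDERING_RENDERER_TOKEN and the renderer reads AUTH_TOKEN, while your compose file sets GF_RENDERER_TOKEN and GF_AUTH_TOKEN; the values the containers actually see can be listed with:

      docker exec pmm-server env | grep -iE 'rendering|renderer'
      docker exec renderer env | grep -iE 'auth_token|ignore_https'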

Have you configured any external authentication with Grafana?
Can you try rendering the graph image from PMM itself?
Here is an example of what you see when you view the graph and click Share:

Here, we would like you to test whether you see the graph when you click on “Direct link rendered image”.
Along with that, please check the Grafana logs:
Exec into the pmm-server container and open /srv/logs/grafana.log. Check if there are any errors there.
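For example (assuming the container names from your compose file), the Grafana log can be followed while the alert fires, alongside the renderer’s own log:

      # Follow Grafana's log inside the PMM container while the alert rule fires
      docker exec -it pmm-server tail -f /srv/logs/grafana.log

      # In another terminal, follow the renderer container's log at the same time
      docker logs -f renderer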
Let us know if that works.
Regards,
Yunus Shaikh.

I can open the dashboard and both docker instances can communicate with one another.

I have changed the names of my containers according to the names you have shown here, and that hasn’t changed anything.

I am using LDAP external authentication for accessing PMM.

I can render the graph image from PMM itself.

I am able to render an image by clicking on “Direct link rendered image”. The image includes the graph and the expected data.

I’ve also checked the grafana.log file while an alert was firing and there are no errors logged. The only “errors” I see are from the renderer container when running the docker logs command. Those are the errors initially reported above.

Here’s an example of the image I see in an email alert:

I have some more details from the logs I can share:

No change in behavior, but notice the 404s just prior to the screenshot towards the end of the log. Is this expected?

logger=datasources t=2025-03-18T14:02:00.001706522-06:00 level=debug msg="Querying for data source via SQL store" uid=PA58DA793C7250F1B orgId=1
logger=secrets.kvstore t=2025-03-18T14:02:00.00319574-06:00 level=debug msg="got secret value" orgId=1 type=datasource namespace=Metrics
logger=tsdb.prometheus t=2025-03-18T14:02:00.003260172-06:00 level=debug msg="Sending query" start=2025-03-18T13:52:00-06:00 end=2025-03-18T14:02:00-06:00 step=1s query="(node_filesystem_avail_bytes * 100) / node_filesystem_size_bytes < 25 and ON (instance, device, mountpoint) node_filesystem_readonly == 0"
logger=ngalert rule_uid=0klFlUpNk org_id=1 version=23 attempt=0 now=2025-03-18T14:02:00-06:00 t=2025-03-18T14:02:00.012169228-06:00 level=debug msg="alert rule evaluated" results="[{Instance:datasource_uid=PA58DA793C7250F1B, ref_id=A State:NoData Error:<nil> EvaluatedAt:2025-03-18 14:02:00 -0600 MDT EvaluationDuration:12.164619ms EvaluationString: Values:map[]}]" duration=10.541542ms
logger=ngalert rule_uid=0klFlUpNk org_id=1 t=2025-03-18T14:02:00.012214763-06:00 level=debug msg="state manager processing evaluation results" resultCount=1
logger=ngalert t=2025-03-18T14:02:00.012447657-06:00 level=debug msg="setting alert state" uid=0klFlUpNk
logger=ngalert rule_uid=0klFlUpNk org_id=1 t=2025-03-18T14:02:00.012455603-06:00 level=debug msg="saving new states to the database" count=1
logger=accesscontrol.service t=2025-03-18T14:02:00.29832152-06:00 level=debug msg="using cached permissions" key=rbac-permissions-1-apikey-17
logger=datasources t=2025-03-18T14:02:05.002424884-06:00 level=debug msg="Querying for data source via SQL store" uid=PA58DA793C7250F1B orgId=1
logger=secrets.kvstore t=2025-03-18T14:02:05.003859778-06:00 level=debug msg="got secret value" orgId=1 type=datasource namespace=Metrics
logger=tsdb.prometheus t=2025-03-18T14:02:05.003932682-06:00 level=debug msg="Sending query" start=2025-03-18T13:52:00-06:00 end=2025-03-18T14:02:00-06:00 step=1s query="(1 - avg by(node_name) (rate(node_cpu_seconds_total{mode=\"idle\"}[30s])))\n* 100\n> bool 5"
logger=ngalert rule_uid=Ayf2v5FNz org_id=1 version=102 attempt=0 now=2025-03-18T14:02:00-06:00 t=2025-03-18T14:02:05.006278687-06:00 level=debug msg="alert rule evaluated" results="[{Instance:node_name=pmm-server State:Alerting Error:<nil> EvaluatedAt:2025-03-18 14:02:00 -0600 MDT EvaluationDuration:5.006264405s EvaluationString:[ var='A' labels={node_name=pmm-server} value=1 ] Values:map[A:{Var:A Labels:node_name=pmm-server Value:0xc001e89510}]} {Instance:node_name=[MyMonitoredServerName] State:Alerting Error:<nil> EvaluatedAt:2025-03-18 14:02:00 -0600 MDT EvaluationDuration:5.006272324s EvaluationString:[ var='A' labels={node_name=[MyMonitoredServerName]} value=1 ] Values:map[A:{Var:A Labels:node_name=[MyMonitoredServerName] Value:0xc001e89520}]}]" duration=3.929719ms
logger=ngalert rule_uid=Ayf2v5FNz org_id=1 t=2025-03-18T14:02:05.006325658-06:00 level=debug msg="state manager processing evaluation results" resultCount=2
logger=ngalert t=2025-03-18T14:02:05.006541422-06:00 level=debug msg="setting alert state" uid=Ayf2v5FNz
logger=ngalert.image dashboard=node-cpu panel=22 dashboard=node-cpu panel=22 t=2025-03-18T14:02:05.006559309-06:00 level=debug msg="Requesting screenshot"
logger=rendering renderer=http t=2025-03-18T14:02:05.007993531-06:00 level=info msg=Rendering path="d-solo/node-cpu/cpu-utilization-details?orgId=1&panelId=22"
logger=rendering renderer=http t=2025-03-18T14:02:05.009004826-06:00 level=debug msg="calling remote rendering service" url="http://renderer:8081/render?deviceScaleFactor=1.000000&domain=pmm-server&encoding=&height=500&renderKey=GsXf2VigupqgMK1PllBI3ZAYnYZOuJpj&timeout=10&timezone=&url=https%3A%2F%2Fpmm-server%3A443%2Fgraph%2Fd-solo%2Fnode-cpu%2Fcpu-utilization-details%3ForgId%3D1%26panelId%3D22%26render%3D1&width=1000"
logger=ngalert t=2025-03-18T14:02:05.361532427-06:00 level=debug msg="recording state cache metrics" now=2025-03-18T14:02:05.361527234-06:00
logger=accesscontrol.service t=2025-03-18T14:02:05.365220716-06:00 level=debug msg="fetch permissions from store" key=rbac-permissions-1-user-3
logger=accesscontrol.service t=2025-03-18T14:02:05.365298066-06:00 level=debug msg="fetch permissions from store" key=rbac-permissions-1-user-3
logger=accesscontrol.service t=2025-03-18T14:02:05.366681303-06:00 level=debug msg="cache permissions" key=rbac-permissions-1-user-3
logger=accesscontrol.service t=2025-03-18T14:02:05.366837886-06:00 level=debug msg="cache permissions" key=rbac-permissions-1-user-3
logger=accesscontrol.evaluator t=2025-03-18T14:02:05.374746915-06:00 level=debug msg="matched scope" userscope=datasources:* targetscope=datasources:uid:PA58DA793C7250F1B
logger=accesscontrol.evaluator t=2025-03-18T14:02:05.374766431-06:00 level=debug msg="matched scope" userscope=datasources:* targetscope=datasources:uid:PA58DA793C7250F1B
logger=accesscontrol.service t=2025-03-18T14:02:05.376762699-06:00 level=debug msg="using cached permissions" key=rbac-permissions-1-user-3
logger=accesscontrol.evaluator t=2025-03-18T14:02:05.376896826-06:00 level=debug msg="matched scope" userscope=datasources:* targetscope=datasources:uid:PA58DA793C7250F1B
logger=datasource t=2025-03-18T14:02:05.376925043-06:00 level=debug msg="Applying default URL parsing for this data source type" type=prometheus url=http://127.0.0.1:8430/
logger=accesscontrol.evaluator t=2025-03-18T14:02:05.383886253-06:00 level=debug msg="matched scope" userscope=datasources:* targetscope=datasources:uid:PA58DA793C7250F1B
logger=accesscontrol.evaluator t=2025-03-18T14:02:05.383905873-06:00 level=debug msg="matched scope" userscope=datasources:* targetscope=datasources:uid:PA58DA793C7250F1B
logger=accesscontrol.evaluator t=2025-03-18T14:02:05.608681169-06:00 level=debug msg="matched scope" userscope=plugins:* targetscope=plugins:id:pmm-app
logger=infra.kvstore.sql t=2025-03-18T14:02:05.609445022-06:00 level=debug msg="kvstore value not found" orgId=1 namespace=serviceaccounts key=hideApiKeys
logger=accesscontrol.service t=2025-03-18T14:02:06.302650027-06:00 level=debug msg="fetch permissions from store" key=rbac-permissions-1-apikey-17
logger=accesscontrol.service t=2025-03-18T14:02:06.304187423-06:00 level=debug msg="cache permissions" key=rbac-permissions-1-apikey-17
logger=context userId=0 orgId=1 uname= t=2025-03-18T14:02:06.864433937-06:00 level=info msg="Request Completed" method=GET path=/api/live/ws status=-1 remote_addr=127.0.0.1 time_ms=1 duration=1.441576ms size=0 referer= handler=/api/live/ws
logger=context userId=0 orgId=1 uname= t=2025-03-18T14:02:06.901179614-06:00 level=info msg="Request Completed" method=GET path=/api/user status=404 remote_addr=127.0.0.1 time_ms=1 duration=1.948181ms size=52 referer= handler=/api/user/
logger=context userId=0 orgId=1 uname= t=2025-03-18T14:02:06.903870181-06:00 level=info msg="Request Completed" method=GET path=/api/user status=404 remote_addr=127.0.0.1 time_ms=1 duration=1.631771ms size=52 referer= handler=/api/user/
logger=context userId=0 orgId=1 uname= t=2025-03-18T14:02:06.928769003-06:00 level=info msg="Request Completed" method=GET path=/api/user status=404 remote_addr=127.0.0.1 time_ms=2 duration=2.041954ms size=52 referer= handler=/api/user/
logger=context userId=0 orgId=1 uname= t=2025-03-18T14:02:06.939512574-06:00 level=info msg="Request Completed" method=GET path=/api/user status=404 remote_addr=127.0.0.1 time_ms=1 duration=1.933081ms size=52 referer= handler=/api/user/
logger=live t=2025-03-18T14:02:06.982713146-06:00 level=debug msg="Client connected" user=0 client=6c5d6baf-b5b3-44f8-b009-03ee2eb045e6
logger=context userId=0 orgId=1 uname= t=2025-03-18T14:02:07.008839712-06:00 level=info msg="Request Completed" method=GET path=/api/user status=404 remote_addr=127.0.0.1 time_ms=2 duration=2.071271ms size=52 referer= handler=/api/user/
logger=context userId=0 orgId=1 uname= t=2025-03-18T14:02:07.011930669-06:00 level=info msg="Request Completed" method=GET path=/api/user status=404 remote_addr=127.0.0.1 time_ms=1 duration=1.797652ms size=52 referer= handler=/api/user/
logger=context userId=0 orgId=1 uname= t=2025-03-18T14:02:07.115447787-06:00 level=info msg="Request Completed" method=GET path=/api/user status=404 remote_addr=127.0.0.1 time_ms=1 duration=1.939977ms size=52 referer= handler=/api/user/
logger=context userId=0 orgId=1 uname= t=2025-03-18T14:02:07.118594185-06:00 level=info msg="Request Completed" method=GET path=/api/user status=404 remote_addr=127.0.0.1 time_ms=1 duration=1.726977ms size=52 referer= handler=/api/user/
logger=context userId=0 orgId=1 uname= t=2025-03-18T14:02:07.164928485-06:00 level=info msg="Request Completed" method=GET path=/api/user status=404 remote_addr=127.0.0.1 time_ms=1 duration=1.717517ms size=52 referer= handler=/api/user/
logger=context userId=0 orgId=1 uname= t=2025-03-18T14:02:07.168726568-06:00 level=info msg="Request Completed" method=GET path=/api/user status=404 remote_addr=127.0.0.1 time_ms=2 duration=2.774377ms size=52 referer= handler=/api/user/
logger=context userId=0 orgId=1 uname= t=2025-03-18T14:02:07.178090246-06:00 level=info msg="Request Completed" method=GET path=/api/user status=404 remote_addr=127.0.0.1 time_ms=1 duration=1.663641ms size=52 referer= handler=/api/user/
logger=context userId=0 orgId=1 uname= t=2025-03-18T14:02:07.180354792-06:00 level=info msg="Request Completed" method=GET path=/api/user status=404 remote_addr=127.0.0.1 time_ms=1 duration=1.349048ms size=52 referer= handler=/api/user/
logger=accesscontrol.evaluator t=2025-03-18T14:02:07.296065128-06:00 level=debug msg="matched scope" userscope=dashboards:* targetscope=dashboards:uid:node-cpu
logger=accesscontrol.evaluator t=2025-03-18T14:02:07.296093104-06:00 level=debug msg="matched scope" userscope=dashboards:* targetscope=dashboards:uid:node-cpu
logger=accesscontrol.evaluator t=2025-03-18T14:02:07.296099091-06:00 level=debug msg="matched scope" userscope=dashboards:* targetscope=dashboards:uid:node-cpu
logger=accesscontrol.evaluator t=2025-03-18T14:02:07.296105198-06:00 level=debug msg="matched scope" userscope=dashboards:* targetscope=dashboards:uid:node-cpu
logger=accesscontrol.evaluator t=2025-03-18T14:02:07.296109805-06:00 level=debug msg="matched scope" userscope=dashboards:* targetscope=dashboards:uid:node-cpu
logger=accesscontrol.evaluator t=2025-03-18T14:02:07.296114347-06:00 level=debug msg="matched scope" userscope=dashboards:* targetscope=dashboards:uid:node-cpu
logger=accesscontrol.evaluator t=2025-03-18T14:02:07.296702891-06:00 level=debug msg="matched scope" userscope=annotations:* targetscope=annotations:type:organization
logger=accesscontrol.evaluator t=2025-03-18T14:02:07.296716593-06:00 level=debug msg="matched scope" userscope=annotations:* targetscope=annotations:type:organization
logger=accesscontrol.evaluator t=2025-03-18T14:02:07.296721584-06:00 level=debug msg="matched scope" userscope=annotations:* targetscope=annotations:type:organization
logger=secrets.kvstore t=2025-03-18T14:02:07.43768063-06:00 level=debug msg="got secret value from cache" orgId=1 type=datasource namespace=Metrics
logger=tsdb.prometheus t=2025-03-18T14:02:07.437741694-06:00 level=debug msg="Sending resource query" URL=api/v1/series
logger=secrets.kvstore t=2025-03-18T14:02:07.441972732-06:00 level=debug msg="got secret value from cache" orgId=1 type=datasource namespace=Metrics
logger=tsdb.prometheus t=2025-03-18T14:02:07.442023075-06:00 level=debug msg="Sending resource query" URL=api/v1/series
logger=secrets.kvstore t=2025-03-18T14:02:07.447966775-06:00 level=debug msg="got secret value from cache" orgId=1 type=datasource namespace=Metrics
logger=tsdb.prometheus t=2025-03-18T14:02:07.448020137-06:00 level=debug msg="Sending resource query" URL=api/v1/series
logger=secrets.kvstore t=2025-03-18T14:02:07.460025888-06:00 level=debug msg="got secret value from cache" orgId=1 type=datasource namespace=Metrics
logger=tsdb.prometheus t=2025-03-18T14:02:07.460094651-06:00 level=debug msg="Sending resource query" URL=api/v1/series
logger=secrets.kvstore t=2025-03-18T14:02:07.463241063-06:00 level=debug msg="got secret value from cache" orgId=1 type=datasource namespace=Metrics
logger=tsdb.prometheus t=2025-03-18T14:02:07.463299265-06:00 level=debug msg="Sending resource query" URL=api/v1/series
logger=secrets.kvstore t=2025-03-18T14:02:07.477055483-06:00 level=debug msg="got secret value from cache" orgId=1 type=datasource namespace=Metrics
logger=tsdb.prometheus t=2025-03-18T14:02:07.477110267-06:00 level=debug msg="Sending resource query" URL=api/v1/series
logger=secrets.kvstore t=2025-03-18T14:02:07.489700665-06:00 level=debug msg="got secret value from cache" orgId=1 type=datasource namespace=Metrics
logger=tsdb.prometheus t=2025-03-18T14:02:07.489754457-06:00 level=debug msg="Sending resource query" URL=api/v1/series
logger=secrets.kvstore t=2025-03-18T14:02:07.502808715-06:00 level=debug msg="got secret value from cache" orgId=1 type=datasource namespace=Metrics
logger=tsdb.prometheus t=2025-03-18T14:02:07.502874402-06:00 level=debug msg="Sending resource query" URL=api/v1/series
logger=secrets.kvstore t=2025-03-18T14:02:07.520960472-06:00 level=debug msg="got secret value from cache" orgId=1 type=datasource namespace=Metrics
logger=tsdb.prometheus t=2025-03-18T14:02:07.521019974-06:00 level=debug msg="Sending resource query" URL=api/v1/series
logger=secrets.kvstore t=2025-03-18T14:02:07.521181372-06:00 level=debug msg="got secret value from cache" orgId=1 type=datasource namespace=Metrics
logger=tsdb.prometheus t=2025-03-18T14:02:07.521209164-06:00 level=debug msg="Sending resource query" URL=api/v1/series
logger=secrets.kvstore t=2025-03-18T14:02:07.540494577-06:00 level=debug msg="got secret value from cache" orgId=1 type=datasource namespace=Metrics
logger=tsdb.prometheus t=2025-03-18T14:02:07.540553026-06:00 level=debug msg="Sending resource query" URL=api/v1/series
logger=secrets.kvstore t=2025-03-18T14:02:07.550274336-06:00 level=debug msg="got secret value from cache" orgId=1 type=datasource namespace=Metrics
logger=tsdb.prometheus t=2025-03-18T14:02:07.5503365-06:00 level=debug msg="Sending resource query" URL=api/v1/series
logger=live t=2025-03-18T14:02:07.57139574-06:00 level=debug msg="Client wants to subscribe" user=0 client=6c5d6baf-b5b3-44f8-b009-03ee2eb045e6 channel=1/grafana/dashboard/uid/node-cpu
logger=live t=2025-03-18T14:02:07.571452927-06:00 level=debug msg="Found cached channel handler" channel=grafana/dashboard/uid/node-cpu
logger=accesscontrol.evaluator t=2025-03-18T14:02:07.581106357-06:00 level=debug msg="matched scope" userscope=datasources:* targetscope=datasources:uid:PA58DA793C7250F1B
logger=accesscontrol.evaluator t=2025-03-18T14:02:07.585522515-06:00 level=debug msg="matched scope" userscope=dashboards:* targetscope=dashboards:uid:node-cpu
logger=live t=2025-03-18T14:02:07.585540925-06:00 level=debug msg="Client subscribed" user=0 client=6c5d6baf-b5b3-44f8-b009-03ee2eb045e6 channel=1/grafana/dashboard/uid/node-cpu
logger=context userId=0 orgId=1 uname= t=2025-03-18T14:02:07.666101351-06:00 level=info msg="Request Completed" method=GET path=/api/user status=404 remote_addr=127.0.0.1 time_ms=1 duration=1.884256ms size=52 referer= handler=/api/user/
logger=context userId=0 orgId=1 uname= t=2025-03-18T14:02:07.66852782-06:00 level=info msg="Request Completed" method=GET path=/api/user status=404 remote_addr=127.0.0.1 time_ms=1 duration=1.33977ms size=52 referer= handler=/api/user/
logger=live t=2025-03-18T14:02:08.446458724-06:00 level=debug msg="Client disconnected" user=0 client=6c5d6baf-b5b3-44f8-b009-03ee2eb045e6 reason="connection closed" elapsed=1.463723696s
logger=ngalert.image dashboard=node-cpu panel=22 dashboard=node-cpu panel=22 t=2025-03-18T14:02:08.532013919-06:00 level=debug msg="Took screenshot" path=/srv/grafana/png/w9iaaxoapN2Qmserpibz.png
logger=ngalert.image dashboard=node-cpu panel=22 dashboard=node-cpu panel=22 t=2025-03-18T14:02:08.534471479-06:00 level=debug msg="Saved image" token=f11faefe-d260-4f12-8fad-270ee116bc4e
logger=ngalert t=2025-03-18T14:02:08.534710421-06:00 level=debug msg="setting alert state" uid=Ayf2v5FNz
logger=ngalert.image dashboard=node-cpu panel=22 dashboard=node-cpu panel=22 t=2025-03-18T14:02:08.534723829-06:00 level=debug msg="Found cached image" token=f11faefe-d260-4f12-8fad-270ee116bc4e
logger=ngalert rule_uid=Ayf2v5FNz org_id=1 t=2025-03-18T14:02:08.534731962-06:00 level=debug msg="saving new states to the database" count=2
logger=ngalert t=2025-03-18T14:02:08.534825506-06:00 level=debug msg="alert state changed creating annotation" alertRuleUID=Ayf2v5FNz newState=Alerting oldState=Pending
logger=ngalert t=2025-03-18T14:02:08.5348247-06:00 level=debug msg="alert state changed creating annotation" alertRuleUID=Ayf2v5FNz newState=Alerting oldState=Pending
logger=alerts-router rule_uid=Ayf2v5FNz org=1 t=2025-03-18T14:02:08.543246841-06:00 level=debug msg="sending alerts to local notifier" count=2 alerts="[{Annotations:map[__alertImageToken__:f11faefe-d260-4f12-8fad-270ee116bc4e __dashboardUid__:node-cpu __orgId__:1 __panelId__:22 __value_string__:[ var='A' labels={node_name=pmm-server} value=1 ] description:pmm-server CPU load is more than 5%. summary:Node high CPU load (pmm-server)] EndsAt:2025-03-18T14:08:00.000-06:00 StartsAt:2025-03-18T14:02:00.000-06:00 Alert:{GeneratorURL:https://[MyPMMServerName]/graph/alerting/grafana/Ayf2v5FNz/view Labels:map[__alert_rule_namespace_uid__:f1TZ_qJVz __alert_rule_uid__:Ayf2v5FNz alertname:pmm_node_high_cpu_load Alerting Rule grafana_folder:OS node_name:pmm-server percona_alerting:1 severity:warning template_name:pmm_node_high_cpu_load]}} {Annotations:map[__alertImageToken__:f11faefe-d260-4f12-8fad-270ee116bc4e __dashboardUid__:node-cpu __orgId__:1 __panelId__:22 __value_string__:[ var='A' labels={node_name=[MyMonitoredServerName]} value=1 ] description:[MyMonitoredServerName] CPU load is more than 5%. summary:Node high CPU load ([MyMonitoredServerName])] EndsAt:2025-03-18T14:08:00.000-06:00 StartsAt:2025-03-18T14:02:00.000-06:00 Alert:{GeneratorURL:https://[MyPMMServerName]/graph/alerting/grafana/Ayf2v5FNz/view Labels:map[__alert_rule_namespace_uid__:f1TZ_qJVz __alert_rule_uid__:Ayf2v5FNz alertname:pmm_node_high_cpu_load Alerting Rule grafana_folder:OS node_name:[MyMonitoredServerName] percona_alerting:1 severity:warning template_name:pmm_node_high_cpu_load]}}]"
logger=alertmanager org=1 t=2025-03-18T14:02:08.543333895-06:00 level=debug component=dispatcher msg="Received alert" alert="pmm_node_high_cpu_load Alerting Rule[fa606b2][active]"
logger=alertmanager org=1 t=2025-03-18T14:02:08.543358752-06:00 level=debug component=dispatcher msg="Received alert" alert="pmm_node_high_cpu_load Alerting Rule[3d6389d][active]"
logger=ngalert t=2025-03-18T14:02:10.000988288-06:00 level=debug msg="No changes detected. Skip updating"
logger=accesscontrol.service t=2025-03-18T14:02:13.298515243-06:00 level=debug msg="using cached permissions" key=rbac-permissions-1-apikey-17