Integrated Alerting - Slack

I am a bit stumped on what I am doing wrong… As far as I can tell I have done the following:

- Enabled Alerts
- Set up Notification Channels (under Integrated Alerting)
- Used the Alert Templates to create an alert
- Confirmed the alert's state is "Firing" in the Alerts tab
- Made sure the Alert Rule points to the proper notification channel (Slack)

Despite all that, I am unable to get anything to post into Slack.

The same holds true for e-mail… I have been looking around in the Docker image to see where the log file for this part of it might be, but so far I have been unable to find it…

Can anyone help out?

PMM Version: 2.20.0

Side note… if Telemetry is disabled you cannot turn on Integrated Alerting; not sure why :slight_smile:

So the logs you're looking for are likely in either pmm-managed.log or alertmanager.log in the /srv/logs directory of your PMM server. I am going to guess (because I just ran into this) that you are going to see a STARTTLS error, which I'm guessing has not been implemented yet :man_facepalming: but I'm confirming with the team.
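For reference, a quick way to follow both of those logs from the Docker host (assuming your container is named pmm-server; substitute your own container name):

```bash
# Tail the pmm-managed and Alertmanager logs inside the PMM Server
# container, using the /srv/logs paths mentioned above.
docker exec -it pmm-server \
  tail -f /srv/logs/pmm-managed.log /srv/logs/alertmanager.log
```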

As for the correlation with Telemetry: right now you get some number of checks shipped with the default container, but we are continuously adding new ones, which get downloaded via our telemetry service.
We currently send you all of them regardless of whether they apply to your environment, but in an upcoming release we'll know that you only have MySQL and Mongo nodes (we don't today) and can reduce bandwidth and load by not also sending the Postgres templates until it's detected that they're needed. We will change it so that telemetry is not required, but then you won't get any checks from our service; you'll get whatever is shipped in the container and whatever you wrote yourself.


Hmmm, doesn't seem to be that…

I did capture this in the alertmanager log, though… now to figure out what it means. (Yes, I set up a simple test rule of any CPU usage over 1 for one second, so it would have triggered 23 times.)

```
level=error ts=2021-10-27T17:17:23.994Z caller=dispatch.go:309 component=dispatcher msg="Notify for alerts failed" num_alerts=23 err="/channel_id/8de4b3cd-f98b-47e9-9cf4-ae40539f3995/slack[0]: notify retry canceled after 2 attempts: Post \"\": context deadline exceeded"
```

```yaml
# Managed by pmm-managed. DO NOT EDIT.
---
global:
  resolve_timeout: 0s
  smtp_from: grafana@xxxx
  smtp_hello: xxx
  smtp_smarthost: xxxx:25
  smtp_require_tls: false
  slack_api_url: https://hooks.slack.com/services/xxx/xxxx
route:
  receiver: empty
  continue: false
  routes:
    - receiver: /channel_id/8de4b3cd-f98b-47e9-9cf4-ae40539f3995
      match:
        rule_id: /rule_id/3a2d6920-b33b-41d3-b09e-843acd65c0df
      continue: false
receivers:
  - name: empty
  - name: disabled
  - name: /channel_id/8de4b3cd-f98b-47e9-9cf4-ae40539f3995
    slack_configs:
      - send_resolved: false
        channel: https://hooks.slack.com/services/xxx/xxxx
        short_fields: false
        link_names: false
templates:
```
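If you want to sanity-check a file like that, Alertmanager ships an amtool binary with a config validator; I have not verified that it is present in the PMM Server image, but if it is:

```bash
# Hypothetical check, assuming amtool exists inside the container:
# validate the generated Alertmanager config for syntax errors.
docker exec -it pmm-server amtool check-config /etc/alertmanager.yml
```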


Update: I toned it down to 7 alerts at a time and got the same error…


Hmm… well, the error is simply a connection timeout, I assume to hooks.slack.com. Can your PMM server talk to that DNS name on port 443? (You can yum install telnet to see if telnet hooks.slack.com 443 at least connects.) When I see that come up here on the forums it's typically a corporate firewall or corporate proxy that's clobbering the connection.
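A minimal version of that connectivity check from inside the container (telnet as suggested above, plus curl as an alternative since it is already present in the image):

```bash
# Test that an outbound TCP/TLS connection to Slack's webhook endpoint
# can be established on port 443.
yum install -y telnet
telnet hooks.slack.com 443

# Or with curl, which also exercises the TLS handshake:
curl -v https://hooks.slack.com -o /dev/null
```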

Also, for the benefit of the internet: the STARTTLS error I was getting is known, but the workaround is to edit /etc/alertmanager.yml and change the line smtp_require_tls: false to true, then issue a supervisorctl restart alertmanager to reread the config… mail delivered in seconds!
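Spelled out as commands (run inside the container; I am using sed for brevity where the post above says to edit the file, and note that pmm-managed may regenerate this file, as discussed further down):

```bash
# Flip smtp_require_tls from false to true in the generated config,
# then restart Alertmanager so it rereads it.
sed -i 's/smtp_require_tls: false/smtp_require_tls: true/' /etc/alertmanager.yml
supervisorctl restart alertmanager
```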


From inside the container (webhook URL redacted):

```
[root@55fca1488a0b logs]# curl https://hooks.slack.com/services/xxx/xxxx
invalid_payload
[root@55fca1488a0b logs]#
```
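For what it's worth, invalid_payload from a bare curl GET is expected: the incoming webhook wants a JSON POST. A quick positive test (webhook URL redacted as above) should return ok instead:

```bash
# POST a minimal JSON payload to the incoming webhook; a working
# webhook replies with "ok" rather than "invalid_payload".
curl -X POST -H 'Content-type: application/json' \
  --data '{"text": "PMM test message"}' \
  https://hooks.slack.com/services/xxx/xxxx
```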

Question (and the docs do not say this): in the Settings/Communication area I notice only one Slack URL can be configured, which leads me to think that alerts can only ever be sent to one Slack channel?? Should the Settings/Communication field be the full webhook URL or just hooks.slack.com, and then in Integrated Alerting/Notification Channels, when selecting the type Slack, should it be the remaining part of the URL?

Basically: the root URL in the Communication area, then something like /services/xxx/xxxx in the notification channel?


I may have figured it out, but I haven't quite traced down where to change things yet… I am behind a corporate firewall/proxy. I'm trying to figure out which config file to add the proxy to, but every file I look at says "managed by pmm-managed, don't touch". All env variables are set and the Docker image can connect; I need to get proxy_url into the alertmanager.yml file without it being overwritten.
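One thing that might be worth checking (a sketch, assuming the container is named pmm-server and pgrep is available in the image): whether the proxy variables are actually visible to the alertmanager process that supervisord started, not just to your interactive shell. Go binaries often pick up HTTP_PROXY/HTTPS_PROXY from the environment, though whether Alertmanager's HTTP client does so for Slack posts is worth verifying.

```bash
# Dump the environment of the running alertmanager process and look
# for proxy-related variables (entries are NUL-separated, hence tr).
docker exec pmm-server sh -c \
  'tr "\0" "\n" < /proc/$(pgrep -o alertmanager)/environ | grep -i proxy'
```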


It's a Slack limitation that a webhook is good for only one channel (at least I thought it was, from about a year ago), and we currently don't support multiple Slack configurations… In our next release I believe we add more generic webhook support, so you can create a webhook per Slack channel, name them accordingly in "Notification Channels", and choose one or many when creating the alert rule.

I'll tag our product manager to consider moving the Slack config right into "Notification Channels" for exactly what you outlined above. Then you could just choose type 'Slack', name it (like the Slack channel name), and add the webhook URL; then do that each time for multiple Slack channels.

As for changes inside the container: yes, many of them will be clobbered by pmm-managed on upgrades. We've got a few things in flight that will allow us to have a user config section in /srv that will marry our defaults to your additions/overrides to avoid that in the future, but that's still in the future :frowning: . The best ways to implement changes that persist are through env variables passed in from the Docker container, or making a backup of your config changes so you can easily restore them after an upgrade.
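For the proxy question above, a minimal sketch of the env-variable route (the proxy host is a placeholder, and the pmm-data volume container reflects a typical PMM 2 Docker deployment; adjust to yours):

```bash
# Start PMM Server with corporate proxy settings in its environment;
# being part of the container definition, these survive upgrades done
# by replacing the container.
docker run -d --name pmm-server -p 443:443 \
  --volumes-from pmm-data \
  -e HTTP_PROXY=http://proxy.corp.example:3128 \
  -e HTTPS_PROXY=http://proxy.corp.example:3128 \
  -e NO_PROXY=localhost,127.0.0.1 \
  percona/pmm-server:2
```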
