Email alerts are durable; chat alerts are immediate. For engineering groups that live in Slack or Microsoft Teams during working hours, an Uptime Kuma alert that lands in the right chat channel reaches the right people within seconds — not the minutes or hours an email might sit before being noticed. This guide walks through wiring Uptime Kuma into both Slack and Microsoft Teams for UK teams, the routing patterns that keep chat-based alerting useful rather than overwhelming, and the operational practices that prevent "alert channel" from quietly degrading into "channel everyone mutes". If you are new to Uptime Kuma, start with our plain-English introduction.
Why chat alerts complement email
Chat and email solve different halves of the notification problem. Each is strong where the other is weak.
Chat is fast and collaborative. When an alert lands in a channel where the engineering team is already watching, the first response usually arrives in seconds — "I'm on it", "looks like that payment provider again", "ignore, I'm deploying". The shared visibility makes coordination automatic; everyone sees the same alert at the same moment.
Email is durable and individual. An email sits in an inbox until read. A 3am Saturday alert is still there on Monday morning. An email from two weeks ago can be searched for and referenced during an incident review. Chat is terrible at long-term memory; email is good at it.
The productive pattern for UK teams is to send the same alerts to both, but to expect chat to drive the first response and email to serve as the durable record. Start with email working first — our SMTP notifications guide covers that — then layer Slack or Teams on top for real-time collaboration. In a mature setup the two are inseparable: the chat alert gets the team on it in seconds, the email copy gets filed into the ticketing system or incident archive for later reference.
Setting up a Slack webhook
Slack uses incoming webhooks to accept external messages. Generating one takes about three minutes.
- In Slack, open the workspace's app directory and search for "Incoming Webhooks". Add the app to the workspace if it is not already installed.
- Click "Add New Webhook to Workspace" and select the channel you want alerts to post to. Start with a dedicated channel —
#alertsor#ops-alerts— rather than a general engineering channel. - Slack generates a webhook URL of the form
https://hooks.slack.com/services/T.../B.../.... Copy this. Treat it as a secret; anyone with the URL can post to the channel. - Optionally configure the webhook's default name and icon. Uptime Kuma overrides both per message, so the defaults only matter if you later use the same webhook for anything else.
Slack-side setup is now complete. The webhook is a stateless HTTPS endpoint — you POST JSON to it, it posts the content to the channel. Uptime Kuma does the rest.
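Before involving Uptime Kuma at all, you can confirm the webhook works by POSTing to it directly. A minimal sketch in Python, with the webhook URL as a placeholder for the one Slack generated:

```python
# Post a test message to a Slack incoming webhook. The URL below is a
# placeholder -- substitute the one generated for your workspace.
import json
import urllib.request

WEBHOOK_URL = "https://hooks.slack.com/services/T00000000/B00000000/XXXXXXXXXXXXXXXX"

payload = {"text": "Test message: Uptime Kuma webhook check"}

req = urllib.request.Request(
    WEBHOOK_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    # Slack replies with a plain-text "ok" body and HTTP 200 on success.
    print(resp.status, resp.read().decode())
```

If this prints "200 ok" but the Uptime Kuma test fails, the problem is on the Uptime Kuma side rather than with the webhook itself.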
Configuring Uptime Kuma for Slack
In Uptime Kuma, go to Settings → Notifications → Add New Notification and select Slack. The form asks for:
- Friendly Name: whatever you want to call this notification channel in the Uptime Kuma UI.
- Webhook URL: the URL from Slack.
- Username: optional display name for alert messages. "Uptime Kuma" is sensible.
- Icon Emoji: optional, for example :warning: or :red_circle:.
- Channel: optional channel override. Leaving blank posts to the webhook's default channel; filling this lets one webhook post to several channels.
Save, then use the Test button. A test message should appear in the target Slack channel within a second or two. If it does not, the most common cause is a mistyped webhook URL — copy it fresh from Slack and try again. A second common cause is an over-aggressive corporate network firewall blocking outbound HTTPS to hooks.slack.com; checking that the Uptime Kuma host can reach that hostname over HTTPS confirms whether that is the problem.
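If you suspect the firewall rather than the URL, a quick check from the Uptime Kuma host is to attempt a TLS connection to hooks.slack.com directly. A rough sketch:

```python
# Quick outbound-connectivity check from the Uptime Kuma host: can we
# complete a TLS handshake with hooks.slack.com on port 443?
import socket
import ssl

HOST, PORT = "hooks.slack.com", 443

try:
    ctx = ssl.create_default_context()
    with socket.create_connection((HOST, PORT), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
            print(f"Connected: {tls.version()} to {HOST}")
except OSError as exc:
    # A timeout or refusal here points at the network or firewall,
    # not at the webhook URL itself.
    print(f"Connection failed: {exc}")
```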
Once the notification channel exists, attach it to monitors. On each monitor's configuration, tick the notification channel under Notifications. You can attach several channels per monitor — one Slack channel for team visibility, one email address for durable record.
Setting up a Microsoft Teams webhook
Microsoft Teams uses an "Incoming Webhook" connector for the same purpose. Setup is very similar to Slack.
- In Teams, navigate to the channel you want alerts to post to. Click the three-dot menu on the channel and select "Connectors" (or "Workflows" if your tenant has migrated to the newer connector model).
- Find the Incoming Webhook connector and add it to the channel.
- Give the connector a name ("Uptime Kuma") and optionally upload an avatar image.
- Teams generates a webhook URL. Copy and save it.
The newer Workflows-based approach uses Power Automate and produces a URL with a slightly different structure, but the principle is identical: a stateless HTTPS endpoint that posts content to the connected channel.
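As with Slack, you can verify the Teams webhook directly before wiring it into Uptime Kuma. The sketch below assumes a classic connector URL, which accepts a simple JSON body with a text field; Workflows-generated URLs generally expect an Adaptive Card payload instead. The URL is a placeholder.

```python
# Post a test message to a classic Teams incoming-webhook connector.
# The URL is a placeholder -- use the one Teams generated for the channel.
import json
import urllib.request

WEBHOOK_URL = "https://example.webhook.office.com/webhookb2/...placeholder..."

payload = {"text": "Test message: Uptime Kuma webhook check"}

req = urllib.request.Request(
    WEBHOOK_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    print(resp.status, resp.read().decode())
```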
Tenant-level controls can block connector creation in some enterprise Microsoft 365 deployments. If the Connectors menu is missing or disabled, check with your tenant administrator. For Workflows, individual users need the appropriate Power Automate licence — usually included in E3 and E5 but absent in some lower tiers.
One more Teams-specific consideration: Microsoft has been quietly migrating the old Office 365 Connectors to the newer Power Automate-based Workflows, with an announced deprecation path for the classic model. Webhooks generated today on either model still work; new webhooks in some tenants may default to the Workflow path. Uptime Kuma accepts either URL format, so you do not need to change anything on the Uptime Kuma side when Microsoft eventually flips the default.
Configuring Uptime Kuma for Teams
In Uptime Kuma, select Microsoft Teams as the notification type. The form asks for:
- Friendly Name: as for Slack.
- Webhook URL: from the Teams connector.
Teams is slightly less configurable than Slack in terms of per-message overrides — the connector's default avatar and name are used on every message. If you want different senders for different alert types, you need separate connectors.
Test the notification. Teams occasionally delays the first test message by a few seconds while the connector initialises; subsequent messages are instantaneous.
A managed Uptime Kuma plan on smartxhosting.uk gives you a fresh Uptime Kuma instance on UK infrastructure. You log in, create the admin account and configure Slack or Teams notification channels using webhook URLs you obtain from your own workspace — the application is the standard Uptime Kuma release. Chat integrations work identically to a self-hosted installation; platform management sits with the provider.
Routing alerts to the right channel
Posting every alert to a single channel works for a team of three. At five or more people, with more than a handful of monitored services, it rapidly becomes noise. The pattern that scales is multiple channels with targeted routing.
By severity. One channel for critical alerts (revenue-impacting services, customer-facing outages), another for warnings (certificate expiry, non-urgent capacity warnings), a third for informational notices (scheduled maintenance, deploy announcements). Critical is the channel the on-call engineer watches; the others are the channels everyone glances at during normal work.
By service ownership. If your organisation splits services across teams — frontend, backend, data, infrastructure — each team can have its own alert channel. Alerts for services that team owns go there; cross-cutting alerts go to a shared channel.
By environment. Production alerts to the team-wide critical channel; staging alerts to a dedicated staging channel where flapping is acceptable; development-only alerts to an individual's DMs or a private channel. Mixing environments in one alert stream is the fastest route to alert fatigue.
By external/internal audience. Customer-visible incidents (things that should also go on your status page) in one channel; internal-only issues (failed cron job, database backup slow) in another.
Uptime Kuma supports all of this through multiple notification channels — you can have several Slack channels, several Teams channels, or a mix, each attached to the specific monitors that should alert there. Deciding which alerts deserve which channel is the harder part; once that is settled, configuring the plumbing is a ten-minute exercise.
A pragmatic starting point for a UK SME: two Slack or Teams channels. One is #alerts-critical — revenue-impacting, customer-facing, on-call-paging alerts. The other is #alerts-info — certificate expiry warnings, maintenance starts and ends, non-urgent reachability issues. Everyone in engineering has both channels in their sidebar; only the first one has notifications enabled by default. That one small piece of discipline — critical gets pings, informational gets an occasional glance — stops the channels degrading into noise and preserves the signal where it matters.
Making alerts actionable
An alert that says "Monitor is down" tells you something is wrong. An alert that says "Monitor is down: checkout.your-brand.co.uk, owner @ops-team, runbook https://wiki/checkout-outage" tells you what to do next. The difference is worth deliberate effort.
Uptime Kuma's message templates for Slack and Teams support the same placeholders as the email templates — monitor name, URL, status, duration, tags, certificate details. Use them. The practical rule is to pack as much context as possible into the first alert message, because that is the one that drives first-response time.
Useful elements to include:
- Monitor name in a bold or large format (so it is visible without clicking)
- The URL being monitored
- The failure reason (status code, timeout, keyword missing)
- A team ownership tag (@ops-team, @platform, etc.) that pages the right people
- A link to the service's runbook
- A link to the monitor's history in Uptime Kuma
- The duration of the outage if the alert is for recovery, so the team sees at a glance whether it was a blip or a real incident
- Tags associated with the monitor (environment, team, criticality) so triage can happen at a glance
The anti-pattern to avoid is the single-line "Monitor Down: Example" alert with no context. Anyone seeing it has to open Uptime Kuma, find the monitor, read the history and decide what to do — work that should have been in the original alert message. Ten extra seconds of effort on the template saves minutes of effort on every future incident.
Slack in particular supports block-formatted messages, though Uptime Kuma's default integration uses a simpler format that is good enough for most teams. If you need richer formatting — coloured sidebars, interactive buttons — route alerts through a webhook and a small intermediary script that converts Uptime Kuma's output into Slack Block Kit.
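As an illustration of that intermediary approach, the sketch below receives Uptime Kuma's generic Webhook notification payload and re-posts it to Slack as Block Kit. The msg, monitor and heartbeat field names are assumptions to check against what your Uptime Kuma version actually sends, and the Slack webhook and runbook URLs are placeholders.

```python
# A small intermediary: Uptime Kuma's generic "Webhook" notification POSTs
# here, and we re-post the alert to Slack using Block Kit formatting.
# Assumptions: the payload exposes "msg", "monitor" and "heartbeat" keys
# (adjust to what your Uptime Kuma version actually sends); the Slack
# webhook URL and runbook base URL are placeholders.
import json
import urllib.request

from flask import Flask, request

SLACK_WEBHOOK = "https://hooks.slack.com/services/T.../B.../..."  # placeholder
RUNBOOK_BASE = "https://wiki.example.internal/runbooks"           # placeholder

app = Flask(__name__)


@app.route("/uptime-kuma", methods=["POST"])
def relay():
    event = request.get_json(force=True)
    monitor = event.get("monitor") or {}
    heartbeat = event.get("heartbeat") or {}

    name = monitor.get("name", "unknown monitor")
    url = monitor.get("url", "")
    status = "UP" if heartbeat.get("status") == 1 else "DOWN"
    reason = event.get("msg", "")

    blocks = [
        {"type": "header", "text": {"type": "plain_text", "text": f"{status}: {name}"}},
        {
            "type": "section",
            "text": {
                "type": "mrkdwn",
                "text": f"*URL:* {url}\n*Reason:* {reason}\n"
                        f"*Runbook:* {RUNBOOK_BASE}/{name}",
            },
        },
    ]

    body = json.dumps({"blocks": blocks}).encode("utf-8")
    req = urllib.request.Request(
        SLACK_WEBHOOK, data=body, headers={"Content-Type": "application/json"}
    )
    urllib.request.urlopen(req)
    return "", 204


if __name__ == "__main__":
    app.run(port=8081)
```

Point a generic Webhook notification in Uptime Kuma at this script's URL and attach it to the monitors that need the richer formatting; the standard Slack integration can stay in place for everything else.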
Acknowledgement and ownership
Slack and Teams do not natively support "acknowledge" button interactions with Uptime Kuma — alerts post into the channel and any response is a human message in the thread. That is fine for small teams: the person taking the incident says "on it" in the thread, and the rest of the team knows.
For larger teams or formal on-call rotations, there are three good patterns.
Emoji-based acknowledgement. Agree as a team that a specific emoji reaction on an alert message means "I am handling this". Typical choices are :eyes: (watching), :raising_hand: (taking this) or :wrench: (fixing). Everyone in the channel can see who responded first.
Thread-based coordination. First person to respond replies in a thread; subsequent coordination and resolution notes go in the same thread. The channel stays tidy, and the thread is a self-documenting incident record.
PagerDuty/Opsgenie overlay. For true on-call rotations with escalation and formal acknowledgement, route the critical alerts through PagerDuty or Opsgenie as the primary channel, with Slack/Teams as the notification channel. Uptime Kuma supports both of these as native notification types. The paging tool handles escalation; chat handles visibility.
Which pattern you pick depends on team size and urgency. For teams under ten people, emoji plus thread coordination is usually enough. Above that, formal paging tooling is worth the overhead.
One failure mode worth calling out: alerts that nobody acknowledges during working hours. If the engineering team is in the office and an alert sits in the channel for ten minutes with no response, either the alert was ignored (bad) or the channel was muted (worse). Both of those outcomes are better caught in real time than discovered during an incident review. A simple daily practice some UK teams adopt: the first engineer into the office scrolls the alert channel briefly, reacts to anything overnight with the acknowledge emoji, and raises anything concerning with the team. Takes two minutes, prevents the channel from becoming a silent dump.
Pitfalls that kill chat alerting
The failure mode that kills chat-based alerting faster than any other is alert volume overwhelming the channel. A channel that fires twenty alerts a day gets muted by everyone within two weeks. Once muted, it is no longer reaching anyone. The monitoring is effectively silent.
Three practices that keep volume manageable:
Retry to filter noise. Every monitor should have retries configured — typically 2-3 — so transient blips do not fire alerts. An alert channel that fires every time a network hiccup happens somewhere on the internet is a worthless alert channel.
Suppress during maintenance. Uptime Kuma's maintenance windows prevent alerts during scheduled downtime. Use them. A single deploy that fires eight alerts because eight monitors went red for thirty seconds trains the team to ignore the channel.
Route informational noise elsewhere. Certificate expiry warnings, scheduled-job completions, soft capacity alerts — these are all useful but do not belong in the channel where the on-call engineer watches for real incidents. Create a secondary channel for them and accept that it will be checked daily rather than instantly.
For the broader framing of how to prevent your alerting from degrading over time, our alert fatigue guide walks through the strategy in detail.
Another pitfall worth flagging: webhook rotation. Slack and Teams occasionally invalidate webhook URLs — during app re-authorisation, connector reshuffles, or deliberate administrative rotation. When a webhook stops working, the alert is dropped silently; Uptime Kuma records the failure to notify in its own logs, but no external message reaches anyone. The mitigation is to test notification channels regularly — monthly at least — so a broken webhook is discovered before a real incident finds it.
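One way to make that regular test less reliant on memory is a small script run from cron, say monthly, that posts a short health-check message to every alert webhook. A sketch, with placeholder URLs and a simple text payload that suits Slack and classic Teams connectors:

```python
# Periodic webhook health check: post a short test message to each alert
# webhook so a silently revoked URL is noticed before a real incident.
# Intended to run from cron (e.g. monthly); all URLs are placeholders.
import json
import urllib.request
from datetime import date

WEBHOOKS = {
    "#alerts-critical (Slack)": "https://hooks.slack.com/services/T.../B.../...",
    "#alerts-info (Teams)": "https://example.webhook.office.com/webhookb2/...",
}

for label, url in WEBHOOKS.items():
    payload = {"text": f"Webhook health check {date.today().isoformat()} - please ignore"}
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            print(f"OK   {label}: HTTP {resp.status}")
    except OSError as exc:
        # A failure here means the webhook is broken or unreachable --
        # exactly the condition we want to discover outside an incident.
        print(f"FAIL {label}: {exc}")
```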
If you are considering commercial chat-based monitoring tools rather than wiring Uptime Kuma into Slack yourself, our Uptime Kuma vs Uptime Robot comparison covers the relevant trade-offs.
Summary
Chat alerting via Slack or Microsoft Teams is the right complement to email for UK engineering teams that spend their working hours in a chat tool. The webhook setup is straightforward on both platforms. The discipline that separates useful chat alerting from channel-muting noise is about routing — separate channels for severity, ownership and environment — and about volume — retries, maintenance windows and routing informational alerts away from the critical channel. Make alerts actionable with context packed into the first message, and agree a team convention for acknowledgement, whether that is an emoji reaction, a thread response or a formal paging tool overlay.
None of this is complicated. All of it rewards a little deliberate thought up front with significant operational effectiveness afterwards. The alternative — a single firehose channel full of every alert from every monitor — is the commonest reason small UK teams give up on chat alerting altogether. Done well, chat alerts are the difference between finding out about an outage from a customer and finding out from a colleague already ten seconds into investigating it.