How to Build a Multi-Platform AI Agent with Vercel Chat SDK
Most teams building AI agents hit the same wall: the agent works great in one place, but the moment you want it on Slack and WhatsApp and Telegram, you're maintaining three separate codebases with three different APIs, three auth flows, and three sets of formatting quirks.
Vercel just released the Chat SDK — a unified TypeScript library that solves exactly this. You write your bot logic once, snap in platform adapters, and deploy everywhere from a single codebase.
We paired it with the Vercel AI SDK to build an open-source starter template that gets a working multi-platform AI agent running in under 30 minutes.
Here's how it works, and how to build your own.
What We're Building
A single AI agent that:
- Responds to messages on Slack, WhatsApp, and Telegram
- Streams AI-generated responses in real time with platform-native formatting
- Maintains conversation state across platforms using Redis
- Runs on a single Next.js deployment
By the end of this guide, you'll have a working agent you can fork, customize, and deploy.
The Architecture
The stack has three layers:
Messages in (Slack, WhatsApp, Telegram)
↓
Chat SDK (unified message handling + state)
↓
AI SDK + LLM (reasoning + responses)
↓
Messages out (platform-native formatting)
Chat SDK handles the messaging plumbing — receiving webhooks from each platform, managing threads and subscriptions, and sending responses back in the right format.
AI SDK handles the AI layer — streaming responses, managing tool calls, and handling multi-turn conversations.
This separation matters. Your agent logic lives in one place. Platform quirks stay at the edges.
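To make that separation concrete, here's a simplified sketch of the adapter idea — these are illustrative types and names, not the Chat SDK's actual interface. The agent logic only ever talks to a neutral interface; each adapter owns its platform's formatting quirks:

```typescript
// Illustrative only: a minimal adapter shape, not the Chat SDK's real types.
interface MessageAdapter {
  name: string;
  // Convert the agent's generic reply into the platform's wire format.
  formatReply(text: string): string;
}

const slackLike: MessageAdapter = {
  name: "slack",
  // Slack renders bold as *text*, so rewrite **text** into that form.
  formatReply: (text) => text.replace(/\*\*(.+?)\*\*/g, "*$1*"),
};

const genericLike: MessageAdapter = {
  name: "generic",
  // A platform that accepts the agent's markdown unchanged.
  formatReply: (text) => text,
};

// The agent logic stays platform-agnostic — it never branches on platform.
function reply(adapter: MessageAdapter, text: string): string {
  return adapter.formatReply(text);
}

console.log(reply(slackLike, "**done**"));   // prints: *done*
console.log(reply(genericLike, "**done**")); // prints: **done**
```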
Prerequisites
- Node.js
- An LLM API key (we use Anthropic in this guide — the AI SDK supports 50+ providers if you prefer a different model)
- A Redis instance (for conversation state)
- Platform credentials for whichever platforms you want to deploy to (we'll cover Slack, WhatsApp, and Telegram)
Step 1: Set Up the Project
npx create-next-app@latest my-agent --typescript --app
cd my-agent
Install the dependencies:
pnpm add ai @ai-sdk/anthropic chat @chat-adapter/slack @chat-adapter/telegram @chat-adapter/whatsapp @chat-adapter/state-redis
Set up your environment variables. Each adapter auto-detects credentials from environment variables, so you just need to set them:
# .env.local
# AI
ANTHROPIC_API_KEY=sk-ant-...
# State (Redis)
REDIS_URL=redis://...
# Slack (auto-detected by @chat-adapter/slack)
SLACK_BOT_TOKEN=xoxb-...
SLACK_SIGNING_SECRET=...
# Telegram (auto-detected by @chat-adapter/telegram)
TELEGRAM_BOT_TOKEN=...
# WhatsApp (auto-detected by @chat-adapter/whatsapp)
WHATSAPP_ACCESS_TOKEN=...
WHATSAPP_VERIFY_TOKEN=...
WHATSAPP_PHONE_NUMBER_ID=...
Step 2: Initialize the Chat SDK
Create lib/bot.ts — this is the core of your agent:
import { Chat } from "chat";
import { createSlackAdapter } from "@chat-adapter/slack";
import { createTelegramAdapter } from "@chat-adapter/telegram";
import { createWhatsAppAdapter } from "@chat-adapter/whatsapp";
import { createRedisState } from "@chat-adapter/state-redis";
export const bot = new Chat({
userName: "my-agent",
adapters: {
slack: createSlackAdapter(),
telegram: createTelegramAdapter(),
whatsapp: createWhatsAppAdapter(),
},
state: createRedisState(),
});
Notice that no config objects are passed to the adapters — they auto-detect credentials from the environment variables we set above. Pass explicit values only when you need to override defaults.
Step 3: Register Event Handlers
Still in lib/bot.ts, add the handlers that fire when messages come in:
import { streamText } from "ai";
import { anthropic } from "@ai-sdk/anthropic";
const SYSTEM_PROMPT = `You are a helpful assistant deployed across multiple
communication platforms. Be concise and direct. When answering questions,
focus on being actionable rather than verbose.`;
// When someone @mentions the agent in a new thread
bot.onNewMention(async (thread, message) => {
await thread.subscribe();
await thread.startTyping();
const result = streamText({
model: anthropic("claude-sonnet-4-6"),
system: SYSTEM_PROMPT,
messages: [{ role: "user", content: message.text }],
});
// Post the stream directly — Chat SDK handles
// chunked updates and platform-native formatting
await thread.post(result.fullStream);
});
// When someone replies in a thread the agent is watching
bot.onSubscribedMessage(async (thread, message) => {
if (message.author.isBot) return;
await thread.startTyping();
// Build conversation history from the thread
const history: Array<{ role: "user" | "assistant"; content: string }> = [];
for await (const msg of thread.messages) {
history.push({
role: msg.author.isBot ? "assistant" : "user",
content: msg.text,
});
}
const result = streamText({
model: anthropic("claude-sonnet-4-6"),
system: SYSTEM_PROMPT,
messages: history,
});
await thread.post(result.fullStream);
});
The key thing happening here: thread.post() accepts an AI SDK stream directly. The Chat SDK handles chunking the streamed response into real-time message updates on each platform. On Slack and Telegram, the message edits in place as chunks arrive. On WhatsApp, the SDK waits for the full response, since WhatsApp doesn't support message editing.
The same onNewMention and onSubscribedMessage handlers fire regardless of whether the message came from Slack, WhatsApp, or Telegram. The Chat SDK abstracts the platform away.
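To see what that chunked-update behavior looks like, here's a self-contained sketch — not the Chat SDK's internals — of flushing a token stream into progressive message edits, with a single-final-message path for platforms that can't edit:

```typescript
// A stand-in for an AI SDK token stream, so this sketch runs on its own.
async function* fakeStream(): AsyncGenerator<string> {
  for (const tok of ["Hello", ", ", "world", "!"]) yield tok;
}

// Accumulate tokens; on edit-capable platforms, update the message as each
// chunk arrives; otherwise send the full text once at the end.
async function postStream(
  stream: AsyncIterable<string>,
  edit: (full: string) => void, // e.g. a message-update call on Slack
  supportsEditing: boolean,
): Promise<string> {
  let full = "";
  for await (const tok of stream) {
    full += tok;
    if (supportsEditing) edit(full); // progressive in-place edits
  }
  if (!supportsEditing) edit(full); // one final message (WhatsApp-style)
  return full;
}

const edits: string[] = [];
postStream(fakeStream(), (s) => edits.push(s), true).then((final) => {
  console.log(edits.length, final); // prints: 4 Hello, world!
});
```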
Step 4: Wire Up the Webhook Routes
Each platform sends events to your app via webhooks. The Chat SDK provides type-safe webhook handlers keyed by adapter name:
// app/api/slack/route.ts
import { bot } from "@/lib/bot";
export async function POST(request: Request) {
return bot.webhooks.slack(request);
}
// app/api/telegram/route.ts
import { bot } from "@/lib/bot";
export async function POST(request: Request) {
return bot.webhooks.telegram(request);
}
// app/api/whatsapp/route.ts
import { bot } from "@/lib/bot";
export async function GET(request: Request) {
// WhatsApp webhook verification
return bot.webhooks.whatsapp(request);
}
export async function POST(request: Request) {
return bot.webhooks.whatsapp(request);
}
Step 5: Add Per-Platform Formatting
This is where multi-platform agents get interesting. A code block looks great on Slack but breaks on WhatsApp. A long response works in a thread but overwhelms a Telegram or WhatsApp chat.
You can adjust the system prompt per platform using the thread's adapter:
const platformInstructions: Record<string, string> = {
slack:
"Use Slack markdown: *bold*, `code`, ```code blocks```. Use threaded replies naturally.",
telegram:
"Use Telegram markdown: **bold**, `code`, ```code blocks```. Keep responses focused and under 4096 characters.",
whatsapp:
"Keep responses short and concise. No code blocks. Use *bold* sparingly. Break long responses into short paragraphs.",
};
bot.onNewMention(async (thread, message) => {
await thread.subscribe();
await thread.startTyping();
const adapterName = thread.adapter.name;
const result = streamText({
model: anthropic("claude-sonnet-4-6"),
system: `${SYSTEM_PROMPT}\n\nFormatting: ${platformInstructions[adapterName] ?? "Be concise."}`,
messages: [{ role: "user", content: message.text }],
});
await thread.post(result.fullStream);
});
Give the model platform-specific formatting instructions and it adapts its output accordingly. No post-processing needed.
Step 6: Add Tools (Optional but Powerful)
The real power of an AI agent comes from tool use. Here's an example that gives your agent access to the top stories on Hacker News — no API key needed:
import { streamText, tool, stepCountIs } from "ai";
import { anthropic } from "@ai-sdk/anthropic";
import { z } from "zod";
const tools = {
getTopStories: tool({
description: "Get the current top stories from Hacker News",
inputSchema: z.object({
count: z
.number()
.min(1)
.max(10)
.default(5)
.describe("Number of stories to return"),
}),
execute: async ({ count }) => {
const idsRes = await fetch(
"https://hacker-news.firebaseio.com/v0/topstories.json",
);
const ids = (await idsRes.json()).slice(0, count);
const stories = await Promise.all(
ids.map(async (id: number) => {
const res = await fetch(
`https://hacker-news.firebaseio.com/v0/item/${id}.json`,
);
const story = await res.json();
return {
title: story.title,
url: story.url,
score: story.score,
by: story.by,
};
}),
);
return { stories };
},
}),
};
bot.onNewMention(async (thread, message) => {
await thread.subscribe();
await thread.startTyping();
const result = streamText({
model: anthropic("claude-sonnet-4-6"),
system: SYSTEM_PROMPT,
messages: [{ role: "user", content: message.text }],
tools,
stopWhen: stepCountIs(3),
});
// fullStream handles intermediate tool call steps automatically
await thread.post(result.fullStream);
});No API key, no setup — just ask "what's trending on Hacker News?" from any platform and the agent fetches live data. In production, you'd add tools for your own APIs — CRM lookups, order tracking, database queries — using the same pattern.
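A production tool follows the same shape: a description plus an execute function over your own data. Here's a hypothetical order-lookup example — ORDERS and lookupOrder are made-up names, and a real version would wrap this in the AI SDK's tool() with a zod inputSchema like the Hacker News example above:

```typescript
// A stand-in for your real database or API.
const ORDERS: Record<string, { status: string; eta: string }> = {
  "A-1001": { status: "shipped", eta: "2024-06-03" },
};

const lookupOrder = {
  description: "Look up the status of an order by its ID",
  // The model calls this when a user asks about an order.
  execute: async ({ orderId }: { orderId: string }) => {
    const order = ORDERS[orderId];
    // Return structured data either way; the model turns it into prose.
    return order ? { orderId, ...order } : { error: `No order ${orderId}` };
  },
};

lookupOrder.execute({ orderId: "A-1001" }).then(console.log);
// prints: { orderId: 'A-1001', status: 'shipped', eta: '2024-06-03' }
```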
Step 7: Deploy
This is a standard Next.js app — deploy it anywhere you'd host a Node.js server: Vercel, Railway, Fly.io, AWS, a VPS, etc. The only requirement is a publicly accessible URL for webhooks.
pnpm build
Once deployed, configure your webhook URLs in each platform:
- Slack: https://your-domain.com/api/slack — configure under Event Subscriptions in your Slack App settings
- Telegram: https://your-domain.com/api/telegram — set via the Bot API's setWebhook method
- WhatsApp: https://your-domain.com/api/whatsapp — configure in your Meta App Dashboard under WhatsApp > Configuration
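For Telegram, for example, webhook registration is a single Bot API call (replace your-domain.com with your actual deployment URL):

```shell
# Uses TELEGRAM_BOT_TOKEN from your environment to register the webhook.
curl -X POST "https://api.telegram.org/bot${TELEGRAM_BOT_TOKEN}/setWebhook" \
  -d "url=https://your-domain.com/api/telegram"
```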
That's it. One codebase, three platforms, one deployment.
Where to Go From Here
You've got a working agent on three platforms. Here are the most common next steps:
Add a knowledge base. Right now the agent only knows what's in its system prompt. Connect it to your docs, help center, or internal wiki using RAG (retrieval-augmented generation) so it can answer questions grounded in your actual content. The AI SDK's tool system makes this straightforward — create a retrieval tool that searches your vector store and returns context to the model.
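The retrieval step itself can be a simple similarity search. Here's a minimal sketch assuming you already have embedded chunks — the 2-d vectors are made-up stand-ins for real embeddings (which you'd produce with an embedding model, e.g. via the AI SDK):

```typescript
type Doc = { text: string; vector: number[] };

// Cosine similarity between two equal-length vectors.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Return the top-k most similar docs to the query vector; this is the
// context a retrieval tool would hand back to the model.
function retrieve(queryVector: number[], docs: Doc[], k = 3): string[] {
  return [...docs]
    .sort((x, y) => cosine(queryVector, y.vector) - cosine(queryVector, x.vector))
    .slice(0, k)
    .map((d) => d.text);
}

const docs: Doc[] = [
  { text: "Refunds are processed within 5 days.", vector: [1, 0] },
  { text: "Our office is in Berlin.", vector: [0, 1] },
];
console.log(retrieve([0.9, 0.1], docs, 1));
// prints: [ 'Refunds are processed within 5 days.' ]
```

In production you'd swap the in-memory array for a vector store and expose retrieve as a tool, exactly like the Hacker News example in Step 6.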
Connect to your systems. The tool example in Step 6 is a starting point. In production, you'd add tools for your CRM, database, ticketing system, or whatever your users need the agent to interact with. Each tool is just a function with a schema — the model decides when to call it.
Add more platforms. The Chat SDK has adapters for Teams, Google Chat, GitHub, Linear, and more. Adding a new platform is one import and one adapter line — no changes to your agent logic.
Set up monitoring. Track which platforms get the most usage, what questions the agent can't answer, and where it's getting stuck. This data tells you what to improve next. The AI SDK supports OpenTelemetry for tracing model calls, and your state adapter already has the conversation history.
Get the Template
The full starter template is open source and available on GitHub:
github.com/SoftwareSavants/multi-platform-agent-starter
Fork it, customize it, deploy it.
If you hit the edges of what a starter can do — custom integrations, per-channel tuning, monitoring, production hardening — we can help.