Redis Streams on Upstash with the TypeScript SDK
Most data structures in Redis answer the question "what is the current value?" Streams answer a different one: "what happened, in what order, and have I seen it yet?" They're an append-only log baked into Redis, with the same primitives Kafka gives you — producers, consumers, consumer groups, acknowledgements — but reachable from a single xadd call.
Upstash ships full Redis Streams support over its serverless HTTP API, and the @upstash/redis SDK exposes every stream command directly. This post walks through how streams actually work on Upstash, what's different from vanilla Redis, and the TypeScript you'll write day to day.
What a stream actually is
A Redis stream is an append-only log keyed by a single Redis key. Every entry has:
- a unique ID of the form <millisecondsTime>-<sequenceNumber> (e.g. 1638360173533-0)
- one or more field/value pairs — basically a small hash
IDs are monotonically increasing, so a stream is a totally-ordered timeline. You can append to the tail with XADD, scan ranges with XRANGE, tail new entries with XREAD, and coordinate multiple workers over the same stream with consumer groups (XGROUP + XREADGROUP + XACK).
What's different about Upstash Streams
Upstash doesn't run stock Redis — they wrote their own Redis-compatible server, which lets them tweak implementation details. The big one for streams: Upstash Streams is disk-backed, not memory-bound.
In standard Redis, a stream lives entirely in RAM. That's fine for short queues, but streaming data tends to be big and never-ending, which forces you to aggressively trim entries you'd rather keep. Upstash stores entries on disk, sorted by ID, and keeps only hot metadata (lastTrimmedId, lastWrittenId, length, consumer-group state, the Pending Entries List) in memory. The wire protocol is unchanged — XADD, XREAD, XRANGE, consumer groups, all of it — you just don't have to throw data away to keep your bill sane.
The other thing worth knowing: the SDK talks to Upstash over HTTP, not the Redis binary protocol. That's why it works in Vercel, Cloudflare Workers, and other edge runtimes — and it's why there's no long-lived BLOCK-style subscription. You poll, and you tail by passing the last ID you saw.
Setting up
import { Redis } from "@upstash/redis";

const redis = new Redis({
  url: process.env.UPSTASH_REDIS_REST_URL!,
  token: process.env.UPSTASH_REDIS_REST_TOKEN!,
});
Same client you'd use for get/set — every stream command lives on it.
Producing events with XADD
XADD appends one entry. The TypeScript shape is xadd(key, id, entries, options?), where id is "*" for "let the server pick" or an explicit <ms>-<seq> if you need to control timestamps yourself.
const id = await redis.xadd("events", "*", {
  type: "page_view",
  userId: "u_123",
  path: "/pricing",
});
// → "1731412034871-0"
The entries argument is a plain Record<string, unknown> — much friendlier than the flat field/value list you'd write at the redis-cli prompt.
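To see what the SDK is smoothing over, here's a small sketch (my own helper, not part of the SDK) that flattens a record into the alternating field/value list the underlying XADD wire command actually carries:

```typescript
// Hypothetical helper: flatten an entry record into the alternating
// field/value list that the raw XADD command expects at the wire level.
function toFieldValueList(entry: Record<string, unknown>): string[] {
  const flat: string[] = [];
  for (const [field, value] of Object.entries(entry)) {
    flat.push(field, String(value));
  }
  return flat;
}

// { type: "page_view", path: "/pricing" }
//   → ["type", "page_view", "path", "/pricing"]
```

The SDK accepts the record form directly, so you never write this yourself; it's just useful for reading redis-cli output or raw MONITOR logs.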
Trimming as you write
Even with disk-backed streams, you usually don't want infinite retention. XADD can trim atomically on every write. The two strategies are MAXLEN (keep at most N entries) and MINID (drop everything older than a given ID), and you can opt into approximate trimming with ~ for cheaper writes:
await redis.xadd(
  "events",
  "*",
  { type: "click", target: "cta" },
  { trim: { type: "MAXLEN", threshold: 1000, comparison: "~" } }
);
The ~ tells Upstash "around 1000 is fine" — it trims in efficient chunks rather than guaranteeing exact length, which is almost always what you want. Use = if you need the count to be exact. The full options shape is documented on the XADD reference.
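MINID pairs naturally with time-based retention: since every stream ID starts with a millisecond timestamp, you can compute a cutoff ID from a wall-clock age. A sketch (the helper name is mine, and the MINID threshold shape is an assumption based on the XADD reference):

```typescript
// Hypothetical helper: build a MINID threshold that drops entries older
// than `maxAgeMs`. Stream IDs begin with a millisecond timestamp, so the
// cutoff is just "now minus the retention window", sequence 0.
function minIdForAge(maxAgeMs: number, now: number = Date.now()): string {
  return `${now - maxAgeMs}-0`;
}

// Usage sketch (assumes the `redis` client from earlier):
// await redis.xadd("events", "*", { type: "click" }, {
//   trim: {
//     type: "MINID",
//     threshold: minIdForAge(7 * 24 * 60 * 60 * 1000), // keep ~7 days
//     comparison: "~",
//   },
// });
```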
Reading by range with XRANGE
XRANGE is the "give me a slice" command. The special IDs - and + mean "the very beginning" and "the very end":
// first 50 entries, oldest → newest
// (for the newest 50, use xrevrange with "+" and "-" instead)
const entries = await redis.xrange("events", "-", "+", 50);
This is what you reach for when rendering a feed, paginating an audit log, or backfilling a downstream system. The IDs are sortable strings, so paginating is just "start the next page at the last ID you saw with its sequence number bumped by one" (or prefix the last ID with ( for an exclusive range start).
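That "bump the sequence number" step can be made concrete with a tiny helper (my own, not an SDK function):

```typescript
// Hypothetical helper: given the last ID of a page, compute an inclusive
// start ID for the next XRANGE page. IDs are <ms>-<seq>, so the smallest
// ID strictly greater than "1731-4" is "1731-5".
function nextPageStart(lastId: string): string {
  const [ms, seq] = lastId.split("-");
  return `${ms}-${Number(seq) + 1}`;
}

// Usage sketch (assumes the `redis` client from earlier):
// const nextPage = await redis.xrange("events", nextPageStart(lastId), "+", 50);
```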
Tailing with XREAD
XREAD is for "give me everything newer than the ID I'm holding." You pass it a key and the last ID you've already processed:
async function tail(lastId: string) {
  const result = await redis.xread("events", lastId, { count: 100 });
  return result;
}
The response shape is documented as an array of `[streamKey, entries]` pairs, where each entry is `[id, [field, value, field, value, ...]]` — see the XREAD docs. You can also pass arrays for key and id to fan out across multiple streams in a single round trip, and $ as the ID to mean "only entries added after now."
A tailing loop is just: read, remember the last ID, sleep a bit, read again. No persistent connection, no ordering surprises.
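The flat field/value arrays are awkward to consume directly. A small helper (mine, written against the entry shape documented above) turns each entry back into an object:

```typescript
type StreamEntry = [id: string, fields: string[]];

// Hypothetical helper: convert the flat [field, value, field, value, ...]
// array from XREAD back into a plain object keyed by field name.
function entryToRecord([id, fields]: StreamEntry): {
  id: string;
  data: Record<string, string>;
} {
  const data: Record<string, string> = {};
  for (let i = 0; i < fields.length; i += 2) {
    data[fields[i]] = fields[i + 1];
  }
  return { id, data };
}

// ["1-0", ["type", "click", "target", "cta"]]
//   → { id: "1-0", data: { type: "click", target: "cta" } }
```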
Consumer groups: the part that makes streams interesting
Single-reader tailing is fine for analytics. The reason to actually use streams over a list is consumer groups: multiple workers cooperate on one stream, each entry is delivered to exactly one of them, and unacknowledged messages stay in a per-consumer Pending Entries List until somebody acks them. That's at-least-once delivery without you writing the bookkeeping.
Three commands carry the workflow:
- XGROUP CREATE — register the group once.
- XREADGROUP — read new (or pending) entries as a specific consumer.
- XACK — mark an entry as processed so it leaves the PEL.
Creating a group
await redis.xgroup("events", {
  type: "CREATE",
  group: "workers",
  id: "$",
  options: { MKSTREAM: true },
});
id: "$" says "start from messages added after this group exists" (use "0" to replay from the beginning). MKSTREAM: true creates the stream if it doesn't exist yet, which is handy on first deploy. The full subcommand surface — CREATECONSUMER, DELCONSUMER, DESTROY, SETID — is on the XGROUP reference.
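One wrinkle for deploy scripts: creating a group that already exists fails with a BUSYGROUP error, so idempotent setup usually swallows exactly that error. A sketch (the message check is an assumption based on the standard Redis reply text, not a documented SDK contract):

```typescript
// Hypothetical helper: detect the "group already exists" error from
// XGROUP CREATE. Assumes the standard Redis "BUSYGROUP ..." reply text
// survives into the thrown error's message.
function isBusyGroupError(err: unknown): boolean {
  return err instanceof Error && err.message.includes("BUSYGROUP");
}

// Usage sketch (assumes the `redis` client from earlier):
// try {
//   await redis.xgroup("events", {
//     type: "CREATE", group: "workers", id: "$", options: { MKSTREAM: true },
//   });
// } catch (err) {
//   if (!isBusyGroupError(err)) throw err; // group already existed — fine
// }
```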
A worker loop
type StreamRead = [stream: string, messages: [id: string, fields: string[]][]][];

async function worker(consumerName: string) {
  const result = (await redis.xreadgroup(
    "workers",
    consumerName,
    "events",
    ">",
    { count: 10 }
  )) as StreamRead | null;

  if (!result) return;

  for (const [stream, messages] of result) {
    for (const [id /*, fields */] of messages) {
      // ... process the message ...
      await redis.xack(stream, "workers", id);
    }
  }
}
A few things going on here:
- ">" means "give me messages this group has never delivered to anyone." Pass an explicit ID instead and you'll get messages from this consumer's pending list — that's how you recover after a crash.
- The SDK types xreadgroup as Promise<unknown[]>, so you cast to the documented shape. Future SDK versions may tighten this.
- XACK is what removes the entry from the Pending Entries List. Skip it and the message will be redelivered (to this same consumer) on the next pending-list read.
If a consumer dies mid-process, XAUTOCLAIM and XCLAIM let another consumer take over its pending entries — that's the recovery path for at-least-once. Combine with XPENDING to see who's stuck.
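A recovery job built on those commands mostly comes down to "find entries that have sat idle too long, then claim them." A sketch of the selection step (the per-entry [id, consumer, idleMs, deliveryCount] shape is an assumption based on the Redis XPENDING docs, and the helper is my own):

```typescript
// Assumed shape of one row in XPENDING's detailed reply:
// [entryId, owningConsumer, idleTimeMs, deliveryCount]
type PendingEntry = [id: string, consumer: string, idleMs: number, deliveries: number];

// Hypothetical helper: pick the IDs of pending entries whose idle time
// exceeds a threshold — candidates for XCLAIM by a rescuer consumer.
function stuckEntryIds(pending: PendingEntry[], minIdleMs: number): string[] {
  return pending
    .filter(([, , idleMs]) => idleMs >= minIdleMs)
    .map(([id]) => id);
}

// Usage sketch (assumes the `redis` client from earlier):
// const ids = stuckEntryIds(pendingDetails, 60_000);
// if (ids.length) {
//   await redis.xclaim("events", "workers", "rescuer", 60_000, ids);
// }
```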
There's also a NOACK option on XREADGROUP for fire-and-forget consumption where you don't care about retry semantics. Skip it unless you're sure.
A mental model
If you've used Kafka, the mapping is direct: a stream is a (single-partition) topic, a consumer group is a consumer group, an entry ID is the offset, and XACK is your commit. The thing you give up vs. Kafka is partitioning — one stream is one ordered log — and the thing you gain is that it's just Redis, with no broker to run, and on Upstash it's an HTTP call that works from any runtime.
For most "I need a durable queue with replay" jobs — webhook fan-out, event sourcing for a small service, background work with retries — streams on Upstash hit a sweet spot: stronger guarantees than a list, less infrastructure than Kafka, no memory ceiling to worry about.
Where to go next
The full command surface is on the TypeScript SDK overview — including XAUTOCLAIM, XCLAIM, XPENDING, XINFO, and XTRIM, which are the operational commands you'll need once a stream is in production. Start with XADD + XRANGE to feel out the data model, then graduate to consumer groups when you actually need work distribution.