Distributed Systems · March 10, 2025 · 9 min read

Scaling WebSocket Clusters with Redis Pub/Sub

A look at handling millions of concurrent connections with horizontal scaling and a shared message bus — without the footguns.

WebSockets are stateful. That’s the core problem when you try to scale them horizontally. A connection lands on Server A, but the message it needs to receive was published by a client connected to Server B. Without a shared broadcast layer, those two clients will never talk.

Redis Pub/Sub is the standard answer — and it’s the right one, if you instrument it correctly.

The Architecture

                 ┌──────────────┐
Client A ──────► │  WS Server 1 │──┐
                 └──────────────┘  │  SUBSCRIBE/PUBLISH
                                   ▼
                              ┌─────────┐
                              │  Redis  │
                              └─────────┘
                                   ▲
                 ┌──────────────┐  │  SUBSCRIBE/PUBLISH
Client B ──────► │  WS Server 2 │──┘
                 └──────────────┘

Each WebSocket server maintains its own in-memory map of active connections. When a message arrives, the server publishes it to a Redis channel. Every server — including the one that published — receives it, looks up which of its own connections are subscribed to that channel, and fans out locally.

The Go Implementation

import (
    "context"
    "sync"

    "github.com/redis/go-redis/v9" // assuming go-redis v9
)

type Client struct {
    send chan []byte // buffered outbound queue, drained by the connection's write pump
}

type Hub struct {
    clients map[string]*Client
    mu      sync.RWMutex
    redis   *redis.Client
}

func (h *Hub) ListenAndBroadcast(ctx context.Context, channel string) {
    sub := h.redis.Subscribe(ctx, channel)
    defer sub.Close()

    // sub.Channel() closes when the PubSub is closed.
    for msg := range sub.Channel() {
        h.mu.RLock()
        // Single-channel hub: every local client gets the message.
        // A multi-channel setup would filter by subscription here.
        for _, c := range h.clients {
            select {
            case c.send <- []byte(msg.Payload):
            default:
                // Client is slow; drop or buffer — your call
            }
        }
        h.mu.RUnlock()
    }
}

The important thing here is the sync.RWMutex — reads (fan-out) are far more frequent than writes (client joins/leaves), so an RWMutex avoids contention on the hot path.
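The write path the RWMutex guards is the join/leave bookkeeping. A minimal, self-contained sketch of that side (the `Client` fields and method names here are assumptions, not part of the snippet above):

```go
package main

import (
	"fmt"
	"sync"
)

type Client struct {
	id   string
	send chan []byte
}

type Hub struct {
	mu      sync.RWMutex
	clients map[string]*Client
}

// Register takes the exclusive lock; this path runs only on connect.
func (h *Hub) Register(c *Client) {
	h.mu.Lock()
	defer h.mu.Unlock()
	h.clients[c.id] = c
}

// Unregister removes the client and closes its send channel so the
// write pump exits. Also an exclusive-lock path, but rare.
func (h *Hub) Unregister(id string) {
	h.mu.Lock()
	defer h.mu.Unlock()
	if c, ok := h.clients[id]; ok {
		delete(h.clients, id)
		close(c.send)
	}
}

func main() {
	h := &Hub{clients: make(map[string]*Client)}
	h.Register(&Client{id: "a", send: make(chan []byte, 8)})
	fmt.Println(len(h.clients)) // 1
	h.Unregister("a")
	fmt.Println(len(h.clients)) // 0
}
```

Because connects and disconnects are orders of magnitude rarer than fan-outs, the exclusive lock here barely matters; it's the RLock on the broadcast path that you're protecting.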

What Redis Pub/Sub Does Not Give You

Redis Pub/Sub is fire and forget. There is no persistence, no acknowledgement, and no replay. If a subscriber is momentarily disconnected, those messages are gone.

For traffic where missed messages are acceptable (live cursor positions, ephemeral notifications), this is fine. For anything where delivery guarantees matter, you need Redis Streams or a proper message broker like Kafka.
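If you step up to Streams, you stay inside Redis. A hedged sketch against go-redis v9 (the stream key scheme and field names are illustrative; this needs a live Redis to run): XAdd persists each message to a capped log, and XRead lets a reconnecting consumer replay everything after the last ID it saw — the exact thing Pub/Sub cannot do.

```go
import (
	"context"
	"time"

	"github.com/redis/go-redis/v9"
)

// appendMessage writes to a capped stream; unlike PUBLISH, this persists.
// It returns the entry ID, which consumers track to resume later.
func appendMessage(ctx context.Context, rdb *redis.Client, room string, payload []byte) (string, error) {
	return rdb.XAdd(ctx, &redis.XAddArgs{
		Stream: "room:" + room, // illustrative key scheme
		MaxLen: 10_000,         // cap retained history
		Approx: true,           // allow efficient ~MAXLEN trimming
		Values: map[string]interface{}{"payload": payload},
	}).Result()
}

// readSince resumes from the last ID this consumer saw; "0" replays
// from the start of the stream.
func readSince(ctx context.Context, rdb *redis.Client, room, lastID string) ([]redis.XStream, error) {
	return rdb.XRead(ctx, &redis.XReadArgs{
		Streams: []string{"room:" + room, lastID},
		Block:   5 * time.Second, // wait up to 5s for new entries
	}).Result()
}
```

Plain XRead with a remembered ID is the simplest consumer; consumer groups (XReadGroup/XAck) add per-consumer acknowledgement on top if you need it.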

Connection Lifecycle and Memory Pressure

The second footgun is connection lifecycle. WebSocket connections are long-lived, and Go’s goroutine-per-connection model is cheap — but not free. A goroutine’s stack starts at just 2 KB, but once you add read/write buffers, TLS state, and kernel socket memory, 100k connections per node can mean roughly 8–10 GB before you’ve stored a byte of application state.
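The "cheap but not free" claim is easy to check yourself. A stdlib-only sketch that parks goroutines (like idle connection read loops) and reports the approximate per-goroutine growth in OS-reserved memory — the exact number varies by Go version and platform:

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
)

// measure spawns n parked goroutines and reports the approximate
// growth in memory obtained from the OS, per goroutine.
func measure(n int) uint64 {
	var before, after runtime.MemStats
	runtime.GC()
	runtime.ReadMemStats(&before)

	release := make(chan struct{})
	var wg sync.WaitGroup
	wg.Add(n)
	for i := 0; i < n; i++ {
		go func() {
			defer wg.Done()
			<-release // park, like an idle connection's read loop
		}()
	}

	runtime.ReadMemStats(&after)
	close(release)
	wg.Wait()
	return (after.Sys - before.Sys) / uint64(n)
}

func main() {
	fmt.Printf("approx bytes per parked goroutine: %d\n", measure(100_000))
}
```

This only measures bare goroutine overhead; real connections add buffers and application state on top, which is why profiling under realistic load matters.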

Profile early. Use pprof with a realistic connection count in staging, not a synthetic benchmark:

import _ "net/http/pprof" // blank import registers /debug/pprof handlers on http.DefaultServeMux

go func() {
    log.Println(http.ListenAndServe("localhost:6060", nil))
}()
// Then: go tool pprof http://localhost:6060/debug/pprof/heap

Key Takeaways

  • Redis Pub/Sub solves the cross-node fan-out problem cleanly for fire-and-forget messaging.
  • Keep your in-memory connection map behind an RWMutex; reads dominate.
  • Profile heap allocation early — WebSocket connections are cheap, but not invisible.
  • If you need delivery guarantees, step up to Redis Streams or Kafka before you’re in production.