Backend · Cloud

Threat Intelligence Pipeline

High-throughput social media event ingestion at scale

Key Impact
10x
Latency reduction
Go · GCP · Pub/Sub · Microservices · Python Migration

Overview

Replaced a legacy Python API with a Go microservice on Google Cloud Pub/Sub, processing millions of social media events daily for a threat intelligence platform. Cut ingestion latency by 10x and accelerated downstream review and analysis workflows.

The Problem

The existing Python service was a bottleneck. It could not keep pace with the ingestion rate at peak volume, causing queue backpressure that delayed analyst workflows by minutes. A backlog of unprocessed events directly impacted the platform’s ability to surface real-time threats.

The Solution

The service was rewritten in Go, leveraging Pub/Sub's push model and Go's concurrency primitives to process events in parallel with controlled fan-out. Key design decisions:

  • Structured concurrency — a worker pool with bounded goroutines to prevent runaway memory under burst load
  • Idempotent processing — event IDs used as deduplication keys so redelivered messages don’t cause double-processing
  • Dead-letter routing — malformed events are routed to a DLQ for inspection rather than silently dropped
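The three decisions above can be sketched together in one place. This is an illustrative, self-contained simulation, not the production service: the `event` type, an in-memory event slice, and the slice-based DLQ stand in for real Pub/Sub messages and topics, and the worker bound is enforced with a semaphore channel.

```go
package main

import (
	"fmt"
	"sync"
)

// event stands in for a Pub/Sub message; ID doubles as the dedup key.
type event struct {
	ID    string
	Valid bool
}

// process drains events through a bounded worker pool. maxWorkers caps
// concurrent goroutines so burst load cannot exhaust memory; previously
// seen IDs are skipped (idempotency); malformed events go to the returned
// dlq slice instead of being dropped.
func process(events []event, maxWorkers int) (processed, dlq []string) {
	var (
		mu   sync.Mutex
		wg   sync.WaitGroup
		seen = make(map[string]bool)
		sem  = make(chan struct{}, maxWorkers) // bounded concurrency
	)
	for _, ev := range events {
		wg.Add(1)
		sem <- struct{}{} // block when the pool is full
		go func(ev event) {
			defer wg.Done()
			defer func() { <-sem }()

			mu.Lock()
			dup := seen[ev.ID]
			seen[ev.ID] = true
			mu.Unlock()
			if dup {
				return // redelivered message: already handled, skip
			}

			// ... handle the event (parse, enrich, persist) ...
			mu.Lock()
			defer mu.Unlock()
			if !ev.Valid {
				dlq = append(dlq, ev.ID) // dead-letter, don't drop silently
				return
			}
			processed = append(processed, ev.ID)
		}(ev)
	}
	wg.Wait()
	return processed, dlq
}

func main() {
	events := []event{
		{ID: "a", Valid: true},
		{ID: "b", Valid: false},
		{ID: "a", Valid: true}, // redelivery of "a"
		{ID: "c", Valid: true},
	}
	ok, dead := process(events, 2)
	fmt.Println(len(ok), len(dead)) // prints: 2 1
}
```

In the real service the semaphore pattern would be replaced by the Pub/Sub client's own flow-control settings, but the invariant is the same: a hard upper bound on in-flight work.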

Results

  • 10x reduction in p99 ingestion latency
  • Zero data loss during migration via dual-write transition window
  • Downstream analyst review time cut from minutes to seconds at peak load
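The dual-write transition window mentioned above amounts to publishing every event to both the legacy pipeline and the new one, with the legacy path staying authoritative until cutover. A minimal sketch of that pattern, with `memSink` standing in for either pipeline (all names here are illustrative, not from the original system):

```go
package main

import (
	"errors"
	"fmt"
)

type publisher interface {
	Publish(event string) error
}

// memSink is an in-memory stand-in for a pipeline endpoint.
type memSink struct {
	events []string
	fail   bool
}

func (m *memSink) Publish(e string) error {
	if m.fail {
		return errors.New("publish failed")
	}
	m.events = append(m.events, e)
	return nil
}

// dualWriter sends every event to both pipelines during the transition
// window. The legacy write remains authoritative: a new-pipeline failure
// is recorded but never loses the event.
type dualWriter struct {
	legacy, next publisher
	shadowErrs   int
}

func (d *dualWriter) Publish(e string) error {
	if err := d.legacy.Publish(e); err != nil {
		return err // a legacy failure still fails the write
	}
	if err := d.next.Publish(e); err != nil {
		d.shadowErrs++ // shadow failure: count it, don't drop the event
	}
	return nil
}

func main() {
	legacy, next := &memSink{}, &memSink{fail: true}
	dw := &dualWriter{legacy: legacy, next: next}
	for _, e := range []string{"e1", "e2"} {
		_ = dw.Publish(e)
	}
	fmt.Println(len(legacy.events), dw.shadowErrs) // prints: 2 2
}
```

Once the shadow-error count stays at zero and the two pipelines' outputs match, reads can be cut over to the new service and the legacy write path retired.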

Technical Stack

Go · Google Cloud Pub/Sub · gRPC · PostgreSQL · Docker · CI/CD