
Stack

Every service the platform depends on. Each one earned its place against alternatives; the per-service pages cover what we use it for, why we picked it, and what it costs us.

The stack divides into four groups: compute (where the code runs), storage (what we persist and where), async (how work crosses request boundaries), and AI providers (the LLM and embedding services).
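The four-group taxonomy can be sketched as a simple typed registry. This is an illustrative assumption, not the platform's actual code: the group names come from the text above, but the service names and roles here are placeholders (the real roster lives on the per-service pages).

```typescript
// Hypothetical sketch of the four stack groups described above.
// Group names are from the text; the example services are
// placeholder assumptions, not the platform's actual roster.
type StackGroup = "compute" | "storage" | "async" | "ai";

interface ServiceEntry {
  group: StackGroup;
  role: string; // what the service does for the platform
}

const stack: Record<string, ServiceEntry> = {
  // compute: where the code runs
  workers: { group: "compute", role: "request handling" },
  // storage: what we persist and where
  database: { group: "storage", role: "durable state" },
  // async: how work crosses request boundaries
  queue: { group: "async", role: "background jobs" },
  // ai: the LLM and embedding services
  llm: { group: "ai", role: "completions and embeddings" },
};

// List every service registered under a given group.
function servicesIn(group: StackGroup): string[] {
  return Object.entries(stack)
    .filter(([, entry]) => entry.group === group)
    .map(([name]) => name);
}

console.log(servicesIn("compute")); // → ["workers"]
```

A flat registry like this keeps the grouping explicit in one place; adding a service means adding one entry, and each per-service page can map to a key.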

Two principles drive every choice on this page:

Project-first, not tech-stack-first. Every tool earns its place against alternatives. There’s no vendor loyalty. If a Cloudflare service is wrong for the job, we use something else. Today every stack choice happens to be Cloudflare or a major AI provider — but that’s because each call was made on its own merits, not because we started with “let’s use Cloudflare for everything.”

Own the unique parts; reuse the rest. The platform’s value is the agent runtime, the six-layer context model, the security boundary, and the YAML agent format. Everything else on this page is bought, not built: we don’t reinvent queues, vector indexes, or LLM SDKs.

The result is a stack that’s small, focused, and demonstrably production-capable — Phase 1 ran end-to-end against every service on this page during the demo verification.