Glossary

A flat list of terms, alphabetized. Each entry is short — under three sentences. Click through to the page that introduces the concept properly when you want depth.

ADR (Architecture Decision Record). A short document capturing one design decision: what was decided, why, and what trade-offs were accepted. The platform has 33 ADRs as of Phase 1; the index is in the Tech > ADRs section (commit 3).

Agent. A YAML-defined entity that an LLM “plays” during a run. An agent has its own system prompt, tools, sub-agents, memory configuration, and autonomy bounds. See Concepts > Agents.

The TypeScript type representing a parsed YAML agent. Loaded once per Worker isolate, frozen via Object.freeze, shared across requests.
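The load-once-and-freeze pattern described above can be sketched as follows. `loadAgentDefinition` and the `AgentDef` shape are illustrative stand-ins, not the platform's actual exports:

```typescript
// Illustrative shape for a parsed YAML agent; the real type is richer.
interface AgentDef {
  name: string;
  systemPrompt: string;
  tools: string[];
}

// Module-level cache: one parse per Worker isolate, not one per request.
const cache = new Map<string, Readonly<AgentDef>>();

function loadAgentDefinition(
  name: string,
  parse: () => AgentDef
): Readonly<AgentDef> {
  let def = cache.get(name);
  if (!def) {
    // Freeze so no request handler can mutate state shared across requests.
    def = Object.freeze(parse());
    cache.set(name, def);
  }
  return def;
}
```

Every subsequent call for the same agent returns the same frozen object, so concurrent requests can safely share it.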

The structured output of a completed agent turn: status, summary, metadata, optional tool results. Returned from /run.

The @agent-platform/llm-anthropic package. Wraps Anthropic’s SDK and implements the platform’s ModelAdapter interface so agents can use Claude without coupling to the SDK’s specifics.

Business pack. A vertical-specific bundle of agents and tools. The first business pack is e-commerce (@agent-platform/shopify plus the order-triage agent YAMLs). The platform itself is business-agnostic; packs are added per vertical.

Layer 2 of an agent’s context. Personality, decision style, tone. Immutable per the priority model. See Concepts > Memory.

Cloudflare D1. A SQL database service. The platform uses D1 as the source-of-truth store for long-term memory rows. Vectorize holds the embeddings; D1 holds the content.

Cloudflare Durable Objects. A stateful, single-instance Worker. The platform uses a Durable Object (AgentJob) to run async jobs queued via /jobs.

Cloudflare Pages. Static-site hosting on Cloudflare’s edge. This documentation site is hosted on Pages.

Cloudflare Queues. Async message queues. The platform uses two: human-review (for escalations) and shopify-actions (for mutations).

Cloudflare Vectorize. A vector index service. The platform uses it for semantic search over long-term memory entries.

Cloudflare Worker. An edge-deployed serverless function. The platform’s main runtime (apps/worker) is a Worker.

Layer 1 of an agent’s context. The system prompt and hard constraints from the YAML. Immutable; the highest priority layer.

Delegation. Calling a sub-agent. Implemented as a tool — the parent agent’s runtime exposes a delegate_to_<sub_agent_name> tool whose input is the task spec for the sub-agent.

Delegation chain. The path of agent calls from a top-level entry to a leaf agent. In the order-triage scenario: triage → refund_decision → communication. Maximum chain depth is bounded per agent via the YAML’s autonomy.max_delegation_depth field.
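One plausible way to combine the delegation-tool naming convention with the depth bound is sketched below; `delegationTools` is a hypothetical helper, not the platform's real API:

```typescript
// Expose one delegation tool per configured sub-agent, and refuse to go
// deeper than the YAML's autonomy.max_delegation_depth bound.
interface DelegationTool {
  name: string; // e.g. "delegate_to_refund_decision"
  subAgent: string;
}

function delegationTools(
  subAgents: string[],
  currentDepth: number,
  maxDelegationDepth: number
): DelegationTool[] {
  // At the bound, the agent simply sees no delegation tools at all.
  if (currentDepth >= maxDelegationDepth) return [];
  return subAgents.map((s) => ({ name: `delegate_to_${s}`, subAgent: s }));
}
```

Hiding the tools entirely at the depth limit (rather than returning an error when one is called) keeps the bound out of the LLM's decision space; whether the platform does it this way is an assumption.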

The @agent-platform/embeddings interface plus its implementations. Today the only production implementation is OpenAI’s text-embedding-3-small via @agent-platform/embeddings-openai.

Event. A structured intent emitted by an agent for downstream processing. Always crosses a queue boundary; never executes inline. Two topics exist today: human_review and shopify_actions. See Concepts > Events.

Event bus. The @agent-platform/event-bus package. Wraps Cloudflare Queues with a typed publish interface and per-topic schema validation.
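A minimal sketch of a typed publish interface with per-topic validation. The payload shapes and the `publish` signature are illustrative, simple predicate functions stand in for the real schema validation, and an in-memory array stands in for the Cloudflare Queue bindings:

```typescript
// The two topics the glossary names; payload shapes are assumptions.
type Topic = "human_review" | "shopify_actions";

interface TopicPayloads {
  human_review: { runId: string; reason: string };
  shopify_actions: { orderId: string; action: string };
}

// Stand-in for per-topic schema validation.
const validators: { [T in Topic]: (p: TopicPayloads[T]) => boolean } = {
  human_review: (p) => p.runId.length > 0 && p.reason.length > 0,
  shopify_actions: (p) => p.orderId.length > 0 && p.action.length > 0,
};

const published: Array<{ topic: Topic; payload: unknown }> = [];

function publish<T extends Topic>(topic: T, payload: TopicPayloads[T]): void {
  if (!validators[topic](payload)) {
    throw new Error(`invalid payload for ${topic}`);
  }
  // The real event bus would enqueue to the matching Cloudflare Queue here.
  published.push({ topic, payload });
}
```

The generic `publish<T extends Topic>` makes a mismatched topic/payload pair a compile-time error, while the validator catches malformed payloads at runtime.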

Long-term memory. Layer 6 of an agent’s context. Persistent vector search per agent, backed by Vectorize plus D1. Opt-in via the agent’s YAML (memory_config.long_term_enabled).

The six-layer priority model the runtime assembles for every agent turn. See Concepts > Memory.
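The assembly mechanism can be sketched generically: sections are ordered by priority and lower-priority sections are dropped first when the context budget runs out. The concrete six layers are described in Concepts > Memory; `assembleContext` and the character budget are illustrative assumptions:

```typescript
interface ContextLayer {
  priority: number; // 1 = highest (system prompt), 6 = lowest
  content: string;
}

function assembleContext(layers: ContextLayer[], maxChars: number): string {
  // Highest-priority layers first.
  const ordered = [...layers].sort((a, b) => a.priority - b.priority);
  const kept: string[] = [];
  let used = 0;
  for (const layer of ordered) {
    // Once the budget is exceeded, this layer and all lower ones are dropped.
    if (used + layer.content.length > maxChars) break;
    kept.push(layer.content);
    used += layer.content.length;
  }
  return kept.join("\n");
}
```

The key property: the system prompt (Layer 1) always survives truncation, and lower layers degrade gracefully as the budget shrinks.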

ModelAdapter. A typed interface for LLM providers. An Anthropic implementation exists; OpenAI is on the roadmap. Lets the platform swap LLM providers without touching agent or runtime code.
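The shape of such an interface might look like the sketch below. The method names and message type are assumptions, not the platform's actual definition; the point is only that provider specifics stay behind the boundary:

```typescript
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

interface ModelAdapter {
  // One completion call; SDK-specific details live behind this boundary.
  complete(messages: ChatMessage[]): Promise<string>;
}

// Provider-agnostic caller: swapping Anthropic for OpenAI later means
// passing a different adapter, not changing this code.
async function runTurn(adapter: ModelAdapter, prompt: string): Promise<string> {
  return adapter.complete([{ role: "user", content: prompt }]);
}

// Fake adapter standing in for a real provider package in this sketch.
const echoAdapter: ModelAdapter = {
  complete: async (messages) =>
    `echo: ${messages[messages.length - 1].content}`,
};
```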

Phase. The platform ships in phases. Phase 1 delivered the order-triage demo end-to-end (this milestone). Phase 2 is Shopify mutations plus human-in-the-loop approval gates.

Runtime. The @agent-platform/runtime package. Drives an agent’s turn: assembles context, decides which tool to call (or whether to stop), invokes the LLM, parses the response, executes tools, recurses on delegation. The “engine” of an agent.

Scenario. A complete multi-agent flow. The order-triage scenario is one; the merchandising agent (running on a weekly cron) is another. A scenario is the unit of business value — what the platform does, not how it’s built.

Shared context. Layer 3 of an agent’s context. Read-only data shared across agents in a single request (e.g., the current date, the tenant id).

Sub-agent. An agent referenced from another agent’s sub_agents list. A sub-agent receives delegated tasks, runs its own loop, and returns a structured report to its parent. Sub-agents have their own independent context, tools, and memory configuration.

Task. The structured input passed to an agent: instructions, payload, constraints, expected output schema. The runtime constructs a Task from the HTTP request body for top-level agents, and from the parent’s delegation-tool call for sub-agents.

Tool. A typed function an agent can invoke. Has a name, description, input schema (Zod), and a handler. Three categories: built-in, business-pack, delegation. See Concepts > Tools.

Tool registry. The @agent-platform/tools package. The lookup layer between an agent’s tool name and its handler. Immutable per request.
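A minimal sketch of the name-to-handler lookup. The real registry validates input with Zod schemas; a plain predicate stands in for that here, and `buildRegistry` is a hypothetical helper:

```typescript
type ToolHandler = (input: unknown) => unknown;

interface RegisteredTool {
  description: string;
  validate: (input: unknown) => boolean; // stand-in for a Zod schema
  handler: ToolHandler;
}

function buildRegistry(tools: Record<string, RegisteredTool>) {
  const registry = new Map(Object.entries(tools));
  // Immutable per request: lookups and invocation only, no late registration.
  return {
    invoke(name: string, input: unknown): unknown {
      const tool = registry.get(name);
      if (!tool) throw new Error(`unknown tool: ${name}`);
      if (!tool.validate(input)) throw new Error(`bad input for ${name}`);
      return tool.handler(input);
    },
  };
}
```

Validating before dispatch means a handler never sees malformed input, which keeps handler code free of defensive checks.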

Vectorize. See Cloudflare Vectorize.

Layer 5 of an agent’s context. The current conversation: messages exchanged so far in this turn, tool calls and their results. A sliding window; size is configured per agent in YAML.

Worker. See Cloudflare Worker.

The .yaml files in apps/worker/agents/ that define each agent. ADR-0031 covers the format spec. Loaded by @agent-platform/agent-loader and bundled into the Worker at deploy time per ADR-0033.
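A sketch of what such a file might contain. Only sub_agents, autonomy.max_delegation_depth, and memory_config.long_term_enabled are field names this glossary itself confirms; the rest (name, system_prompt, tools) are illustrative guesses at the format, which ADR-0031 actually specifies:

```yaml
# Hypothetical agent file; see ADR-0031 for the real format spec.
name: order-triage
system_prompt: |
  You triage incoming e-commerce orders and delegate where needed.
tools:
  - lookup_order
sub_agents:
  - refund_decision
  - communication
autonomy:
  max_delegation_depth: 2
memory_config:
  long_term_enabled: true
```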