Every AI invocation is a typed loop.

LSEM is an open specification for typed agent loop invocation, composition, and tracing. LSR is the reference TypeScript runtime — open source, single-container, runs in 60 seconds.

$ docker run -p 3000:3000 ghcr.io/loopstacks/lsr:latest

No API keys required — LSR ships with a mock backend so you can see the runtime work immediately.

A runtime primitive, not a framework.

Most agent frameworks let you wire up LLM calls, tools, and orchestration logic in code. That works for prototypes. It breaks down when you need to trace a production failure through five nested calls, swap the model for one step without breaking the rest, or compose loops written by someone else without rewriting them to fit your framework's idioms.

LSEM (Loop Stack Execution Model) takes a different approach. It identifies a single primitive — the typed loop — and treats it as first-class. Every prompt, every tool call, every RAG lookup, every agent handoff is a loop with a defined input schema, a defined output schema, a lifecycle, and a tracing surface. Composition, observability, and reliability become runtime concerns rather than glue-code concerns.
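The primitive can be sketched in a few lines of TypeScript. All names here (`Loop`, `runLoop`, the stand-in `Schema`) are illustrative, not the actual LSR API — the point is the shape: declared input and output schemas, validated on both sides of the call.

```typescript
type Schema = { required: string[] }; // stand-in for a real JSON Schema

interface Loop<I, O> {
  id: string;
  inputSchema: Schema;
  outputSchema: Schema;
  body: (input: I) => O; // the backend call (LLM, tool, sub-loop, ...)
}

function validate(schema: Schema, value: Record<string, unknown>): void {
  for (const key of schema.required) {
    if (!(key in value)) throw new Error(`missing field: ${key}`);
  }
}

// The runtime wraps every invocation: validate -> run -> validate.
function runLoop<I extends Record<string, unknown>, O extends Record<string, unknown>>(
  loop: Loop<I, O>,
  input: I,
): O {
  validate(loop.inputSchema, input);   // input side of the contract
  const output = loop.body(input);     // backend does the actual work
  validate(loop.outputSchema, output); // output side of the contract
  return output;
}

const hello: Loop<{ name: string }, { greeting: string }> = {
  id: "hello",
  inputSchema: { required: ["name"] },
  outputSchema: { required: ["greeting"] },
  body: ({ name }) => ({ greeting: `hello, ${name}` }),
};

console.log(runLoop(hello, { name: "world" }).greeting); // "hello, world"
```

Because every call goes through the same wrapper, a malformed input or output fails loudly at the loop boundary instead of surfacing three calls later.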

LSEM is intentionally small. The spec defines the primitive and its lifecycle. It does not define a coordination protocol, a deployment model, or a user interface — those belong to layers above. The reference runtime, LSR, makes opinionated choices about each.

Three surfaces, one runtime.

Define a loop in JSON. Run it from the CLI, the REST API, or the web UI. Watch every event in the lifecycle stream in real time.
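A definition for the `hello` loop used in the examples might look like this — a hypothetical sketch only; the field names are assumptions, not the exact LSR schema:

```json
{
  "id": "hello",
  "backend": "mock",
  "inputSchema": {
    "type": "object",
    "required": ["name"],
    "properties": { "name": { "type": "string" } }
  },
  "outputSchema": {
    "type": "object",
    "required": ["greeting"],
    "properties": { "greeting": { "type": "string" } }
  }
}
```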

Web UI

Define loops, run them, watch traces stream over WebSocket.


[ runner view ]

CLI

Run loops from your terminal with full trace output.

$ lsr load examples/hello.json
$ lsr run hello --input '{"name":"world"}' --watch

REST + WebSocket API

Embed LSR in any system. POST a loop, subscribe to its trace.

$ curl -X POST localhost:3000/api/runs \
    -d '{"loopId":"hello","input":{"name":"world"}}'

What you get when the loop is the primitive.

Composition by construction

RAG is a composite loop. Multi-agent is a composite loop. Tool-calling is a composite loop. One primitive, many patterns. Loops calling loops, validated end to end.
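Composition by construction can be sketched like this: a RAG-shaped pipeline is just a loop whose body invokes two inner loops. The names and mock bodies are illustrative, not LSR's API — the key property is that the composite has the same shape as its parts, so it composes further.

```typescript
type Loop<I, O> = { id: string; run: (input: I) => O };

const retrieve: Loop<{ query: string }, { docs: string[] }> = {
  id: "retrieve",
  run: ({ query }) => ({ docs: [`doc about ${query}`] }), // mock retrieval
};

const answer: Loop<{ query: string; docs: string[] }, { text: string }> = {
  id: "answer",
  run: ({ query, docs }) => ({ text: `${query}: ${docs[0]}` }), // mock LLM step
};

// The composite is itself a loop, indistinguishable from a primitive one.
const rag: Loop<{ query: string }, { text: string }> = {
  id: "rag",
  run: ({ query }) => {
    const { docs } = retrieve.run({ query });
    return answer.run({ query, docs });
  },
};

console.log(rag.run({ query: "loops" }).text); // "loops: doc about loops"
```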

First-class tracing

Every loop call emits a structured event stream — call.started, input.validated, backend.requested, backend.responded, output.validated, call.completed. Stream it to a UI, persist to OpenTelemetry, or render as a tree in your terminal.
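The lifecycle stream above can be sketched as an executor that emits each event in order. The event names match the ones listed; the emitter and executor shapes are illustrative, not LSR's actual interfaces.

```typescript
type TraceEvent = { type: string; loopId: string; at: number };

function tracedRun<I, O>(
  loopId: string,
  body: (input: I) => O,
  input: I,
  emit: (e: TraceEvent) => void, // sink: WebSocket, OTel exporter, terminal...
): O {
  const ev = (type: string) => emit({ type, loopId, at: Date.now() });
  ev("call.started");
  ev("input.validated");   // schema check would happen here
  ev("backend.requested");
  const out = body(input); // e.g. an LLM call in a real backend
  ev("backend.responded");
  ev("output.validated");
  ev("call.completed");
  return out;
}

const events: TraceEvent[] = [];
tracedRun(
  "hello",
  ({ name }: { name: string }) => ({ greeting: name }),
  { name: "world" },
  (e) => events.push(e),
);
console.log(events.map((e) => e.type).join(" -> "));
```

Because the emitter is just a callback, the same trace can fan out to a live UI and a persistent store at once.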

Schema-enforced contracts

Input and output schemas are part of the loop definition, not an afterthought. The runtime validates both. You stop guessing what your agent returned.

Pluggable backends

OpenAI, Anthropic, and a Mock backend ship by default. Add your own with a small interface. Swap the backend on a single step of a multi-step pipeline without touching anything else.
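A minimal sketch of what a "small interface" for backends could look like, with per-step backend selection. This is an assumption about the shape, not LSR's real interface — real backends would presumably be async and streaming; the sketch is synchronous for brevity.

```typescript
interface Backend {
  name: string;
  complete(prompt: string): string; // sync here; a real backend would be async
}

// A stand-in for the Mock backend that makes keyless local runs possible.
const mockBackend: Backend = {
  name: "mock",
  complete: (prompt) => `mock: ${prompt}`,
};

// Each step names its backend, so one step of a pipeline can be swapped
// (say, from one provider to another) without touching the other steps.
function runStep(
  backends: Record<string, Backend>,
  step: { prompt: string; backend: string },
): string {
  const b = backends[step.backend];
  if (!b) throw new Error(`unknown backend: ${step.backend}`);
  return b.complete(step.prompt);
}

console.log(runStep({ mock: mockBackend }, { prompt: "hi", backend: "mock" })); // "mock: hi"
```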

What's next.

v0.1 (today)
  • Core types, executor, tracer
  • OpenAI / Anthropic / Mock backends
  • CLI, REST + WebSocket server
  • Web UI for define / run / trace
  • Composite loops
v0.2 (next)
  • Visual loop composer (drag-and-drop)
  • Persistent run history
  • OpenTelemetry trace export
v0.3 (after that)
  • Eval harness — define test cases, run against any loop, compare across model backends
  • Policy-based routing primitives

Coordination patterns at scale — cross-process, cross-cluster — live in LoopStacks Platform, a separate project.