This guide shows how to integrate ReasonBlocks from a custom harness — your own agent loop, an in-house evaluation runner, or any non-Python stack — using only HTTP calls. If you’re on LangChain, LangGraph, the OpenAI Agents SDK, or the Claude Messages API, use the Python SDK instead. This guide is for everyone else.
A run of your harness produces a sequence of agent steps. ReasonBlocks plugs in at three points around them:
```
(pre-task)    ┌──────────────────────────────┐
              │ POST /v1/traces/retrieve     │  patterns to inject
              └──────────────────────────────┘  into the prompt
                            │
                            ▼
                     ┌──────────┐
        per-step ─►  │ step 1   │  POST /v1/monitor/runs/{id}/steps
                     ├──────────┤  (telemetry + server-side scoring)
                     │ step 2   │
                     ├──────────┤
                     │ step N   │
                     └──────────┘
                            │
                            ▼
(post-task)   POST /v1/traces               store full trace for distillation
    ─ or ─    POST /v1/monitors/evaluate    (mid-task)
```
You don’t have to use all three. Start with retrieval; add telemetry when your loop is stable.
fields is the source of truth — render it into your prompt however suits you. An empty traces: [] is a normal response (no patterns matched, or you’re over your monthly intervention cap).
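The retrieval call and prompt rendering can be sketched in Python for concreteness (any HTTP client in your stack works the same way). The base URL, the request body, and the auth header here are assumptions — check the live OpenAPI spec for the real contract; only the endpoint path, the traces list, and the fields object come from this guide.

```python
import json
import urllib.request

API = "https://api.reasonblocks.example/v1"  # hypothetical base URL


def retrieve_patterns(task: str, api_key: str) -> list:
    """POST /v1/traces/retrieve; returns the matched traces (may be empty)."""
    req = urllib.request.Request(
        f"{API}/traces/retrieve",
        data=json.dumps({"task": task}).encode(),  # assumed body shape
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["traces"]


def render_patterns(traces: list) -> str:
    """Render each trace's fields into a plain-text prompt block.
    An empty traces list renders to an empty string: inject nothing."""
    blocks = []
    for trace in traces:
        blocks.append("\n".join(f"{k}: {v}" for k, v in trace["fields"].items()))
    return "\n\n".join(blocks)
```

How you render fields is up to you — a flat `key: value` block as above, XML-ish tags, or a bulleted list all work; the server does not prescribe a format.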
When fired is non-empty, the run has tripped a monitor. That is the signal to call /v1/monitors/evaluate (next section) for an intervention to inject on the next step.
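The per-step telemetry call and the fired check can be sketched as follows. The fired field comes from this guide; the base URL, step body, and auth header are assumptions to be checked against the OpenAPI spec.

```python
import json
import urllib.request

API = "https://api.reasonblocks.example/v1"  # hypothetical base URL


def post_step(run_id: str, step: dict, api_key: str) -> dict:
    """POST one agent step to /v1/monitor/runs/{id}/steps;
    returns the server-side scoring response."""
    req = urllib.request.Request(
        f"{API}/monitor/runs/{run_id}/steps",
        data=json.dumps(step).encode(),  # step body shape: see OpenAPI spec
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


def monitors_tripped(response: dict) -> bool:
    """A non-empty fired list means a monitor tripped --
    the cue to call /v1/monitors/evaluate for an intervention."""
    return bool(response.get("fired"))
```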
Submitting the completed trace is what trains the reasoning library on your runs: the server distills it asynchronously and may generate new patterns for future retrieval. There are two body shapes; most custom harnesses want the legacy shape:
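A structural sketch of assembling a legacy-shape body — the field names task, steps, and outcome are placeholders, not the confirmed contract; consult the live OpenAPI spec for the real legacy trace schema:

```python
def legacy_trace_body(task: str, steps: list, outcome: str) -> dict:
    # Placeholder field names -- check the live OpenAPI spec
    # before shipping this against POST /v1/traces.
    return {
        "task": task,        # what the run was asked to do
        "steps": steps,      # the agent steps, in order
        "outcome": outcome,  # how the run ended
    }
```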
Use the v2 shape (manifest + calls) if you can capture full LLM call records — it preserves more context for the distiller. See the TraceManifest and TraceCallRecord schemas in the live OpenAPI spec for field details. v2 requests return 202 Accepted, since the distillation pipeline runs asynchronously.
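A v2 submission, sketched: the top-level manifest and calls keys come from this guide, their inner fields follow the TraceManifest and TraceCallRecord schemas (not reproduced here), and the base URL is hypothetical.

```python
import json
import urllib.request


def v2_trace_body(manifest: dict, calls: list) -> dict:
    """v2 shape: a TraceManifest plus the full list of TraceCallRecords."""
    return {"manifest": manifest, "calls": calls}


def post_v2_trace(manifest: dict, calls: list, api_key: str) -> bool:
    """POST the v2 body and confirm the 202 Accepted that signals
    the async distillation pipeline took the trace."""
    req = urllib.request.Request(
        "https://api.reasonblocks.example/v1/traces",  # hypothetical base URL
        data=json.dumps(v2_trace_body(manifest, calls)).encode(),
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status == 202
```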
Instead of waiting for the trace to finish, ask the server to score the trajectory so far and, if a failure is forming, hand back a rendered intervention to inject as a system message:
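The mid-task loop can be sketched as: evaluate, and if an intervention comes back, append it as a system message before the next step. The intervention key and the request body below are assumed field names — verify them against the OpenAPI spec.

```python
import json
import urllib.request

API = "https://api.reasonblocks.example/v1"  # hypothetical base URL


def evaluate_run(run_id: str, api_key: str) -> dict:
    """Ask the server to score the trajectory so far."""
    req = urllib.request.Request(
        f"{API}/monitors/evaluate",
        data=json.dumps({"run_id": run_id}).encode(),  # assumed body shape
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


def inject_intervention(messages: list, evaluation: dict) -> list:
    """If a rendered intervention came back, append it as a system
    message so the next step sees it; otherwise leave messages alone."""
    text = evaluation.get("intervention")  # assumed response key
    if not text:
        return messages
    return messages + [{"role": "system", "content": text}]
```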