`run_messages_agent_loop` is a batteries-included driver for Anthropic's Messages tool-use loop. Pair it with a SteeringSession from `rb.claude_messages_session()` to run the full ReasonBlocks pipeline at every turn (FSM step scoring, server-side monitor steering, E1/E2/E3 injection, model routing, and live telemetry) without LangChain in the dependency graph.
This is the parity path: the same pipeline the LangChain middleware provides, driven against an Anthropic-native client.
Walkthrough
Initialize the client and an Anthropic SDK client
`model_routing` may use the LangChain-style `anthropic:...` prefix for parity; the loop strips it before forwarding to the Messages API. Bare slugs like `claude-haiku-4-5-20251001` work too.
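The prefix handling can be sketched in a few lines. This is illustrative only: the helper name is invented, and only the strip-before-forwarding behavior comes from the description above.

```python
def strip_provider_prefix(model_id: str) -> str:
    """Hypothetical helper: drop a LangChain-style "provider:" prefix.

    "anthropic:claude-haiku-4-5-20251001" becomes
    "claude-haiku-4-5-20251001"; a bare slug passes through unchanged.
    """
    _provider, sep, slug = model_id.partition(":")
    return slug if sep else model_id
```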
Build the codebase memory tools
`make_claude_tools` returns Anthropic-shaped tool specs plus a dispatch callable that runs the tools when the agent invokes them.
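The (specs, dispatch) shape can be illustrated with a toy tool. Only the overall shape follows the description above; the `read_file` tool and its backing dict are invented for this sketch.

```python
def make_toy_tools():
    """Toy stand-in for the (tool specs, dispatch callable) pair that
    make_claude_tools is described as returning. The tool is invented."""
    specs = [{
        # Anthropic Messages API tool spec shape: name / description / input_schema.
        "name": "read_file",
        "description": "Read a file from the codebase memory.",
        "input_schema": {
            "type": "object",
            "properties": {"path": {"type": "string"}},
            "required": ["path"],
        },
    }]
    files = {"README.md": "# demo"}  # fake codebase memory

    def dispatch(name, tool_input):
        # Run the named tool with the model-supplied input and return its result.
        if name == "read_file":
            return files.get(tool_input["path"], "<not found>")
        raise KeyError(f"unknown tool: {name}")

    return specs, dispatch
```

The dispatch callable is what the loop invokes when the model emits a `tool_use` block, feeding the result back as the tool result message.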
Create a steering session and run the loop
`rb.claude_messages_session(...)` builds a SteeringSession wired with the same FSM, monitors, and injections that `rb.middleware()` would produce. Pass it through `run_messages_agent_loop(..., session=...)`. At each turn the loop:
- Pulls the last assistant text out of the message history (the "thought").
- Calls `session.begin_step(...)` to score it, advance the FSM, and run server-side monitor evaluation plus E-trace retrieval.
- Appends a `[REASONBLOCKS]` block to `system` if anything fired.
- Strips any `provider:` prefix from the routed model id.
- Calls `client.messages.create(...)`.
- Calls `session.end_step(...)` with the response's token count, tool-call names, and per-call latency.
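The per-turn sequence above can be sketched end to end. Everything here is an illustrative stub: the real SteeringSession and client APIs will differ; only the ordering of the six steps follows the list.

```python
class StubSession:
    """Illustrative stand-in for SteeringSession (not the real API)."""
    def __init__(self):
        self.step_log = []

    def begin_step(self, thought):
        # A real session scores the thought, advances the FSM, and may return
        # monitor-driven injection text; this stub just pattern-matches.
        return {"injection": "stay on plan" if "stuck" in thought else None}

    def end_step(self, **telemetry):
        self.step_log.append(telemetry)


class StubClient:
    class messages:
        @staticmethod
        def create(**kwargs):
            return {"model": kwargs["model"], "usage": {"output_tokens": 3}}


def run_turn(client, session, messages, system, routed_model):
    # 1. The last assistant text is the "thought" to score.
    thought = next(
        (m["content"] for m in reversed(messages) if m["role"] == "assistant"), ""
    )
    # 2. Score it, advance the FSM, run monitors / E-trace retrieval.
    step = session.begin_step(thought)
    # 3. Append a [REASONBLOCKS] block to system if anything fired.
    if step["injection"]:
        system += "\n\n[REASONBLOCKS]\n" + step["injection"]
    # 4. Strip any provider: prefix from the routed model id.
    model = routed_model.split(":", 1)[-1]
    # 5. Call the Messages API.
    resp = client.messages.create(model=model, system=system, messages=messages)
    # 6. Report token count, tool-call names, and latency back to the session.
    session.end_step(tokens=resp["usage"]["output_tokens"], tool_calls=[],
                     latency_ms=0.0)
    return resp
```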
What run_messages_agent_loop returns
If `session=` is passed, the same data is mirrored into `session.step_log` per step.
Without steering
`session=None` (the default) runs the loop as a plain Messages API driver: no scoring, no monitors, no injection. This was the original behavior and remains supported for callers who only want the tool-use convenience.
Streaming
Streaming responses are not supported in the first cut; `client.messages.create` is called synchronously per turn. If you need streaming, drop down to the LangChain middleware (which supports streaming responses through LangChain's runtime) or hand-roll the loop and call `session.begin_step` / `session.end_step` yourself.
Related
Claude tools reference
`make_claude_tools` and `run_messages_agent_loop` API surface.
SteeringSession reference
The shared core driving every framework integration.

