ReasonBlocks integrates with the Claude Agent SDK through the make_claude_agent_sdk_tools factory. The Claude Agent SDK runs the agent loop inside the Claude Code CLI, so the per-step LangChain middleware (FSM scoring, monitor steering, E-trace injection, model routing) does not apply on this path. What you get is the codebase memory layer — recall_findings, store_finding, and an optional impact_analysis — registered as Claude Agent SDK tools.
For a Claude Messages API integration with a hand-rolled agent loop (where ReasonBlocks ships a turn-by-turn driver too), see the Claude tools reference.
Prerequisites
- Python 3.10+
- pip install reasonblocks claude-agent-sdk
- A working claude CLI installation (Claude Code)
- ANTHROPIC_API_KEY set in the environment
- A reachable rb-api endpoint (default https://rb-api.reasonblocks.com; set REASONBLOCKS_BASE_URL to point elsewhere)
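Concretely, the environment pieces of that list might be set up like this; the API key value is a placeholder, and the URL shown is just the documented default:

```shell
# Used by the Claude Code CLI that runs the agent loop
export ANTHROPIC_API_KEY="sk-ant-..."  # placeholder value

# Only needed when rb-api is not at the default endpoint;
# shown here set to the documented default for illustration
export REASONBLOCKS_BASE_URL="https://rb-api.reasonblocks.com"
```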
Walkthrough
Create a CodebaseMemory client
CodebaseMemory is the per-repo findings store. Pick a stable codebase_id for your repository (commit-pinned or branch-pinned, depending on your invalidation strategy).
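As a minimal sketch, assuming CodebaseMemory is importable from the top-level reasonblocks package and takes codebase_id as a constructor argument (neither is confirmed by this page):

```python
from reasonblocks import CodebaseMemory  # import path assumed

# A branch-pinned codebase_id: findings persist across commits on this
# branch until you invalidate them yourself; pin to a commit SHA instead
# if you want findings to go stale on every commit.
memory = CodebaseMemory(codebase_id="my-service@main")
```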
Build the tool list

make_claude_agent_sdk_tools returns a list of @tool-decorated async callables ready to pass to claude_agent_sdk.query. Supply an ImportGraph to add impact_analysis:
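For example (import paths assumed; memory, graph, and enable_impact are the factory parameters documented below):

```python
from reasonblocks import (  # import paths assumed
    CodebaseMemory,
    ImportGraph,
    make_claude_agent_sdk_tools,
)

memory = CodebaseMemory(codebase_id="my-service@main")
graph = ImportGraph()  # constructor arguments assumed

# recall_findings (and store_finding) come from `memory`;
# impact_analysis is added because graph and enable_impact=True are both supplied.
tools = make_claude_agent_sdk_tools(memory=memory, graph=graph, enable_impact=True)
```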
Run a query

Pass the tools through the options dict on claude_agent_sdk.query. The agent loop runs inside Claude Code.
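Putting the pieces together, a run might look like the sketch below. The reasonblocks import paths, the codebase_id value, and the option key used to hand over the tools are all assumptions — the page only says the tools go through the options dict on claude_agent_sdk.query:

```python
import asyncio

from claude_agent_sdk import query  # the SDK's agent entry point
from reasonblocks import CodebaseMemory, make_claude_agent_sdk_tools  # paths assumed

async def main() -> None:
    memory = CodebaseMemory(codebase_id="my-service@main")
    tools = make_claude_agent_sdk_tools(memory=memory)

    # The tools ride along in the options dict; the exact option key
    # is an assumption here, not something this page spells out.
    async for message in query(
        prompt="What do we already know about the auth module?",
        options={"tools": tools},
    ):
        print(message)

asyncio.run(main())
```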
- memory: Required. The findings-store client. Without it, no tools are returned.
- graph: Optional. When supplied alongside enable_impact=True, adds an impact_analysis tool that calls graph.format_impact(file_path).
- Top-k cutoff passed through to memory.format_recall(...).
- Minimum similarity score for a result to be included in recall_findings output.
- Whether to register the store_finding tool. Set False for a read-only recall workflow.
- enable_impact: Whether to register impact_analysis when graph is supplied.

Unlike make_langchain_tools and make_openai_tools, this factory has no enable_recall flag. recall_findings is always registered when memory is provided.

Telemetry to the dashboard
rb.claude_agent_telemetry(...) returns an adapter you wrap around query() to emit run_start, per-tool step, and run_finish events to the dashboard. No steering injection happens — the agent loop is owned by the Claude Code CLI process — but you get visibility into which tools fired, in what order, with what observations, and how long each took.
Exceptions raised inside the async with block are recorded as failure: <ExceptionType> on run_finish. To override the default success outcome on a clean exit, call tele.mark_failure(reason="...") before leaving the block.
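A sketch of that wiring, assuming the adapter is an async context manager that wraps query as a callable. rb.claude_agent_telemetry and tele.mark_failure(reason=...) come from this page; everything else (the import alias, the keyword arguments, the wrap method) is assumed:

```python
import asyncio

import reasonblocks as rb  # import alias assumed
from claude_agent_sdk import query

async def main() -> None:
    tele = rb.claude_agent_telemetry(run_name="nightly-audit")  # kwargs assumed

    # run_start is emitted on entry; run_finish on exit, tagged success
    # unless an exception escapes or mark_failure is called first.
    async with tele:
        async for message in tele.wrap(query)(prompt="Audit the auth module."):
            print(message)
        # To report a failure despite a clean exit:
        # tele.mark_failure(reason="audit found regressions")

asyncio.run(main())
```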
What you don’t get on this path
The Claude Agent SDK runs the agent loop inside the Claude Code CLI, which does not expose the per-step hooks the steering pipeline needs. On this path, ReasonBlocks does not:
- Score the agent’s reasoning steps for difficulty
- Advance the difficulty FSM
- Evaluate trajectory monitors and inject steering text
- Retrieve E1, E2, or E3 patterns from the pattern store
- Route the model based on FSM state
If you need those capabilities, run_messages_agent_loop gives you full turn-by-turn control inside Python and runs the entire steering pipeline. The LangChain middleware also drives Anthropic models if you want to layer LangChain's tool-binding shape on top.
Related
Claude tools reference
make_claude_tools, make_claude_agent_sdk_tools, and run_messages_agent_loop API surface.

Codebase memory
Storing, recalling, and invalidating findings across runs.

