This guide walks you through adding ReasonBlocks to an existing LangChain agent. By the end, your agent will have step scoring, E-trace injection, and health monitoring active on every run, and each run will appear in the ReasonBlocks dashboard.

## Documentation Index
Fetch the complete documentation index at: https://reasonblocks.mintlify.app/llms.txt
Use this file to discover all available pages before exploring further.
## Install the SDK
Install ReasonBlocks from PyPI. It requires Python 3.10 or later. ReasonBlocks depends on `langchain>=1.0` and `httpx>=0.27`; these are installed automatically.
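A typical installation command, assuming the package is published on PyPI under the name `reasonblocks` (check the dashboard's Quickstart page for the exact name):

```shell
pip install reasonblocks
```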
## Get your API key

Log in to the ReasonBlocks dashboard and copy your API key from the Quickstart page. It starts with `rb_live_`. Set it as an environment variable so you don't hardcode it in your source:
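For example, in your shell profile or deployment config. The variable name `REASONBLOCKS_API_KEY` is an assumption; use whatever name your initialization code reads:

```shell
# Placeholder value; paste your real rb_live_ key from the dashboard.
export REASONBLOCKS_API_KEY="rb_live_..."
```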
## Initialize ReasonBlocks

Import and initialize the ReasonBlocks client. Pass your API key directly or read it from the environment.

The `ReasonBlocks` object is reusable across runs. Create it once at startup, then call `rb.middleware()` once per agent invocation.
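A minimal initialization sketch, assuming the client is importable as `ReasonBlocks` from a `reasonblocks` package, accepts an `api_key` argument, and that the key lives in a `REASONBLOCKS_API_KEY` environment variable (all three are assumptions; check the SDK reference for exact names):

```python
import os

from reasonblocks import ReasonBlocks  # assumed import path

# Create the client once at startup and reuse it across runs.
rb = ReasonBlocks(api_key=os.environ["REASONBLOCKS_API_KEY"])
```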
## Add middleware to your agent

Pass `rb.middleware()` in the middleware list when you create your agent. No other changes are required. ReasonBlocks hooks into two points in the agent loop:

- `before_model` — scores the last step, updates FSM state, runs monitors, retrieves E-traces, and injects steering signals
- `wrap_model_call` — optionally overrides the model based on FSM state and tracks token usage
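A sketch of wiring the middleware into a LangChain 1.0 agent, assuming `create_agent` from `langchain.agents` with its `middleware` parameter; the model id and tool list are placeholders for your own:

```python
from langchain.agents import create_agent  # LangChain >= 1.0

# `rb` is the ReasonBlocks client created at startup; `my_tools` is your
# existing tool list. rb.middleware() supplies the before_model and
# wrap_model_call hooks described above.
agent = create_agent(
    model="openai:gpt-4o",          # placeholder model id
    tools=my_tools,
    middleware=[rb.middleware()],   # the only ReasonBlocks-specific change
)
```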
## Tag the run for the dashboard
Call `rb.middleware()` with metadata parameters to label the run in the dashboard. All parameters are optional — omit any you don't need. These values appear on the Runs table in the dashboard. Use `run_id` to correlate a dashboard row with a specific invocation in your logs.

When your API key is a per-customer `rb_live_*` key bound to an org, the ReasonBlocks API automatically overrides `org_id` and `project_id` with the key's authoritative scope. Most users can leave those two fields at their defaults.
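A tagging sketch, assuming `rb.middleware()` accepts the metadata fields named in this guide as keyword arguments (the exact signature may differ):

```python
import uuid

agent = create_agent(
    model="openai:gpt-4o",          # placeholder model id
    tools=my_tools,
    middleware=[
        rb.middleware(
            # Use run_id to correlate this dashboard row with your logs.
            run_id=str(uuid.uuid4()),
            # org_id and project_id can stay at their defaults when your
            # rb_live_* key is already bound to an org.
        )
    ],
)
```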
## Run your agent

Invoke your agent as you normally would. ReasonBlocks operates transparently — your agent code is unchanged, and steering happens inside the middleware layer.

After the run completes, open the dashboard to see the scored steps, FSM state transitions, and any monitor signals that fired.
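The invocation itself is plain LangChain; a sketch assuming the standard messages-style input that LangChain agents accept:

```python
# No ReasonBlocks-specific code here: scoring and steering happen
# inside the middleware while the agent runs.
result = agent.invoke(
    {"messages": [{"role": "user", "content": "What's the weather in Paris?"}]}
)
```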
## Complete example
The following is a minimal but complete working example using a LangChain agent with mock tools:
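A sketch of such an end-to-end example, under the same assumptions as the snippets above (the `reasonblocks` package name, `ReasonBlocks(api_key=...)` constructor, `REASONBLOCKS_API_KEY` variable, and `run_id` keyword are all unconfirmed; adjust to match the SDK reference):

```python
import os
import uuid

from langchain.agents import create_agent  # LangChain >= 1.0
from langchain_core.tools import tool
from reasonblocks import ReasonBlocks  # assumed import path


@tool
def lookup_weather(city: str) -> str:
    """Mock tool: return a canned weather report for a city."""
    return f"It is sunny in {city}."


# Create the client once at startup.
rb = ReasonBlocks(api_key=os.environ["REASONBLOCKS_API_KEY"])

agent = create_agent(
    model="openai:gpt-4o",  # placeholder model id
    tools=[lookup_weather],
    middleware=[rb.middleware(run_id=str(uuid.uuid4()))],
)

result = agent.invoke(
    {"messages": [{"role": "user", "content": "What's the weather in Paris?"}]}
)
print(result["messages"][-1].content)
```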
## Next steps

- **Installation options**: Configure the base URL for self-hosted deployments and explore all init parameters
- **How it works**: Understand FSM states, E-traces, and the monitoring pipeline in depth
- **Model routing**: Route to cheaper models on easy steps and more powerful models when the agent is stuck
- **Full LangChain guide**: In-depth guide covering LangGraph agents, async patterns, and configuration