SteeringSession is the shared core that runs the per-step ReasonBlocks pipeline outside of LangChain’s middleware lifecycle. The Claude Messages helper (run_messages_agent_loop(..., session=...)) and the OpenAI Agents Model adapter (rb.openai_model(...)) both wrap a session and call into it on each turn.
Most users don’t construct one directly — call rb.claude_messages_session() or rb.openai_model(...) and the SDK builds it for you. Construct one yourself only when you’re hand-rolling a third-party agent loop and want the same scoring + injection + telemetry shape.
Construction
- Heuristic that returns a [0, 1] difficulty score for a thought string. Most callers pass ReasonBlocks.score_step.
- The difficulty FSM. Construct with DifficultyFSM(**fsm_thresholds) to apply caller-supplied thresholds.
- Per-run state tracker. Hosts the difficulty history, the current FSM state, the recorded StepRecord list, and total tokens.
- The list of injection sources. Build with reasonblocks.injections.create_injections(api, monitors, e_traces_enabled=...).
- Optional FSM-state-to-model-id map. The session surfaces an override on each StepDecision; the calling integration is responsible for actually swapping the model.
- Optional live-telemetry emitter. When set, the session emits run_start once, a step event on each end_step, and run_finish on context exit.
- Identifier for this run. Defaults to the trace id on state_manager.
- Caller-supplied identifying tags (agent_name, task, framework, model, codebase_id, org_id, project_id, task_profile, plus arbitrary extras). Forwarded into the run_start payload.

begin_step
Returns a StepDecision carrying the resolved FSM state, pending injections, an optional model override, the rendered injection text, and a fresh StepLogEntry to be finalized by end_step.
Pass thought=None on the very first call (when no assistant turn has happened yet) to take the first-call path — only E3 universal injections fire, the FSM stays at INIT, and no scoring runs.
end_step
Record model_id, tokens, tool_calls, and latency_ms onto the entry the decision carried, append it to step_log, and emit a live step telemetry event. Pass observation= if you only learned the tool result after the LLM call returned (it lands on the most recent StepRecord).
Lifecycle
start()
Emit the run_start telemetry event. Idempotent. Called automatically by begin_step on first use and by the context-manager protocol on __enter__.

finish(*, outcome_status='success')

Emit the run_finish event. Idempotent. mark_failure and a propagating exception in __exit__ both override the default 'success'.

close(*, timeout=5.0)
Stop the live telemetry worker thread and drain pending events. Safe to call multiple times. Skip it in normal use — the emitter thread is a daemon.
mark_failure(*, reason='failure')
Override the default outcome on clean exit. Useful when the agent returned successfully but the caller knows the run logically failed.
Properties
step_log

Append-only list of finalized step entries. One entry per end_step call.

Context manager
SteeringSession is both a sync and async context manager:
__exit__ fires run_finish with outcome success on a clean exit, failure: <ExceptionType> if an exception escapes, or whatever was set via mark_failure. It also calls close() to drain the emitter thread.
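The outcome rules above can be sketched with a tiny stand-in class (the real SteeringSession lives in the reasonblocks SDK; this stub only mirrors the documented __exit__ precedence, and omits telemetry and close()):

```python
# Stand-in sketch of the documented outcome resolution on context exit.
class StubSession:
    def __init__(self):
        self.outcome = None
        self._failure = None

    def mark_failure(self, *, reason="failure"):
        # Override the default 'success' outcome on clean exit.
        self._failure = reason

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        if exc_type is not None:
            # A propagating exception wins over everything else.
            self.outcome = f"failure: {exc_type.__name__}"
        elif self._failure is not None:
            self.outcome = self._failure
        else:
            self.outcome = "success"
        return False  # never swallow the caller's exception

s1 = StubSession()
with s1:
    pass
assert s1.outcome == "success"

s2 = StubSession()
with s2:
    s2.mark_failure(reason="agent gave wrong answer")
assert s2.outcome == "agent gave wrong answer"

s3 = StubSession()
try:
    with s3:
        raise ValueError("boom")
except ValueError:
    pass
assert s3.outcome == "failure: ValueError"
```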
StepDecision
Returned by begin_step. The integration’s job is to read it, compose the system prompt, optionally swap the model, and pass it back to end_step.
The resolved FSM state for this step.
All retrieved-but-not-yet-rendered injections (monitor steering + E1 + E2 + E3, gated by the same rules as the LangChain middleware).
The model id from model_routing[state] if mapped, else None. The integration uses this to actually swap the model.

The mutable per-step entry, populated through end_step.

The pre-rendered [REASONBLOCKS] body: every pending injection joined into one block. Empty string when nothing fired.

compose_system_prompt
Combine base_system with the [REASONBLOCKS]\n{rendered_injection_text} block when injection text is present; otherwise return the base unchanged. Useful for plain-string system prompts (Claude Messages, OpenAI Agents). Returns an empty string when base_system is empty and no injection fired.
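A minimal sketch of that contract, not the library source; whether the injection block lands before or after base_system, and the separator between them, are assumptions here:

```python
def compose_system_prompt(base_system: str, rendered_injection_text: str) -> str:
    """Sketch of the documented contract; block placement is assumed."""
    if not rendered_injection_text:
        # No injection fired: base passes through unchanged, so an empty
        # base yields an empty string.
        return base_system
    block = f"[REASONBLOCKS]\n{rendered_injection_text}"
    return f"{base_system}\n\n{block}" if base_system else block

assert compose_system_prompt("", "") == ""
assert compose_system_prompt("You are helpful.", "") == "You are helpful."
assert "[REASONBLOCKS]" in compose_system_prompt("You are helpful.", "slow down")
```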
Why a session and not a middleware
LangChain has its own middleware lifecycle (before_agent, before_model, wrap_model_call, after_agent) — ReasonBlocksMiddleware hooks those. The Anthropic Messages API loop and the OpenAI Agents Model interface have different lifecycles, so reaching for an AgentMiddleware shape there would be awkward. SteeringSession is the same pipeline expressed as a plain Python object the integration calls into manually.
The two paths are equivalent. Step entries from a session look the same as step_log entries from the middleware; telemetry payloads are identical; the FSM, monitors, and injection pipeline are bit-for-bit the same code.
