

make_openai_tools wraps a CodebaseMemory instance and an optional ImportGraph in @function_tool-decorated callables that the OpenAI Agents SDK understands natively. Pass the returned list directly to Agent(tools=[...]) alongside your own tools. The tools share the same memory and graph objects you provide, so findings stored during one step are immediately available on the next recall.
from reasonblocks.integrations import make_openai_tools

Installation

The OpenAI Agents integration requires the openai-agents package:
pip install reasonblocks openai-agents

make_openai_tools

make_openai_tools(
    memory: CodebaseMemory | None = None,
    graph: ImportGraph | None = None,
    *,
    recall_top_k: int = 5,
    recall_threshold: float = 0.25,
    enable_recall: bool = True,
    enable_store: bool = True,
    enable_impact: bool = True,
) -> list
Returns a list of @function_tool-decorated callables ready for Agent(tools=[...]). The list contains up to three tools depending on the arguments and flags you provide.

Parameters

memory
CodebaseMemory | None
default:"None"
The CodebaseMemory instance the tools read from and write to. When None, both recall_findings and store_finding are omitted from the returned list regardless of enable_recall and enable_store.
graph
ImportGraph | None
default:"None"
Optional ImportGraph for the repository. When provided and enable_impact=True, an impact_analysis tool is added to the returned list.
recall_top_k
int
default:"5"
Maximum number of findings to return from a single recall_findings call. Increasing this value provides more context at the cost of additional tokens.
recall_threshold
float
default:"0.25"
Minimum similarity score (0–1) a finding must reach to appear in recall results. Lower values return more results with potentially lower relevance.
enable_recall
bool
default:"true"
Include the recall_findings tool. Set to False to produce a write-only or impact-only tool set.
enable_store
bool
default:"true"
Include the store_finding tool. Set to False for read-only scenarios where the agent should not persist new observations.
enable_impact
bool
default:"true"
Include the impact_analysis tool when a graph is provided. Set to False to suppress it even when graph is not None.
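The inclusion rules above can be sketched in plain Python. This is a simplified model of which tools end up in the returned list, not the library's actual implementation; tool name strings stand in for the @function_tool-decorated callables:

```python
def select_tools(memory=None, graph=None, *, enable_recall=True,
                 enable_store=True, enable_impact=True):
    """Model which tools make_openai_tools would include."""
    tools = []
    if memory is not None:
        # memory=None omits both, regardless of the enable_* flags
        if enable_recall:
            tools.append("recall_findings")
        if enable_store:
            tools.append("store_finding")
    if graph is not None and enable_impact:
        tools.append("impact_analysis")
    return tools

print(select_tools(graph=object()))                   # ['impact_analysis']
print(select_tools(memory=object(), graph=object()))  # all three tools
```

Note that a graph without memory still yields impact_analysis, while memory=None silently drops recall and store even when their flags are True.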

Returns

tools
list
A list of @function_tool-decorated callables. Safe to spread into an Agent tool list: tools=[*rb_tools, *your_tools].

Tools

recall_findings(query)

Searches CodebaseMemory for findings relevant to query. The agent should call this before reading a file — if findings already exist, it can skip the file read entirely and avoid wasting tokens.
query
str
Natural-language description of what you are looking for
Returns a formatted string of matching findings, or a message indicating nothing was found.
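How recall_top_k and recall_threshold shape the results can be illustrated with a stdlib-only sketch. The scoring and storage internals here are assumptions for illustration, not CodebaseMemory's real code:

```python
def filter_recall(scored_findings, top_k=5, threshold=0.25):
    """Keep findings whose similarity clears the threshold,
    best first, capped at top_k results."""
    hits = [(score, text) for score, text in scored_findings
            if score >= threshold]
    hits.sort(key=lambda pair: pair[0], reverse=True)
    return [text for _, text in hits[:top_k]]

scored = [(0.91, "session tokens rotate on login"),
          (0.40, "auth uses bcrypt for password hashing"),
          (0.10, "README mentions Docker")]
print(filter_recall(scored))  # the 0.10 entry falls below the 0.25 threshold
```

Lowering threshold admits the weaker matches; raising top_k only matters once more than top_k findings clear the threshold.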

store_finding(content, file_path, finding_type)

Persists a new finding to CodebaseMemory so future agent runs can recall it. Store small, self-contained facts. Avoid storing long paragraphs; keep each entry tight and factual.
content
str
The finding text (under 8,000 characters)
file_path
str
default:""
Repo-relative path the finding is about, if applicable
finding_type
str
default:"note"
Short tag: bug, behavior, pattern, or note
Returns "stored (id=<fid>)" on success or "store failed" on error.
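The constraints above (length cap, tag set, defaults) can be mirrored in a small validation helper. The limits come from this page, but the helper itself is illustrative and not part of reasonblocks:

```python
ALLOWED_TYPES = {"bug", "behavior", "pattern", "note"}

def validate_finding(content, file_path="", finding_type="note"):
    """Return (ok, message), loosely mimicking store_finding's
    success/failure strings."""
    if not content or len(content) >= 8000:
        return False, "store failed"
    if finding_type not in ALLOWED_TYPES:
        return False, "store failed"
    return True, f"stored (type={finding_type}, path={file_path or '<none>'})"

ok, msg = validate_finding("SessionStore.get() returns None on expiry",
                           file_path="auth/session.py",
                           finding_type="behavior")
print(ok, msg)
```

Keeping each entry short and tagged makes later recall_findings calls cheaper and more precise.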

impact_analysis(file_path)

Queries the ImportGraph to return the dependents (files that import this file) and dependencies (files this file imports). Use it to judge the blast radius of a proposed change before modifying a file.
file_path
str
Repo-relative path, e.g. "pydantic/main.py"
Returns a formatted string listing dependents and dependencies.
impact_analysis is only present when you pass a non-None graph and enable_impact=True. Check len(rb_tools) rather than assuming a fixed index if you build your tool list conditionally.
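The dependents/dependencies split can be seen on a toy import graph built from plain dicts. ImportGraph's real internals are not shown here; this only illustrates the idea of blast radius:

```python
# file -> files it imports
imports = {
    "auth/session.py": ["auth/tokens.py", "db/models.py"],
    "auth/views.py":   ["auth/session.py"],
    "api/routes.py":   ["auth/session.py", "auth/views.py"],
}

def impact(file_path):
    """Dependents import this file; dependencies are what it imports."""
    dependencies = imports.get(file_path, [])
    dependents = [f for f, deps in imports.items() if file_path in deps]
    return {"dependents": dependents, "dependencies": dependencies}

print(impact("auth/session.py"))
# changing session.py can break views.py and routes.py (its dependents)
```

A file with many dependents is high-risk to edit; a file with many dependencies is sensitive to changes elsewhere.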

Complete example

import asyncio
import pathlib

from agents import Agent, Runner
from reasonblocks import ReasonBlocks
from reasonblocks.codebase_memory import CodebaseMemory
from reasonblocks.import_graph import ImportGraph
from reasonblocks.integrations import make_openai_tools

rb     = ReasonBlocks(api_key="rb_live_...")
memory = CodebaseMemory(codebase_id="my-repo")
graph  = ImportGraph().build_from_files(
    {str(p): p.read_text() for p in pathlib.Path("myrepo").rglob("*.py")}
)

rb_tools = make_openai_tools(memory, graph)

your_tools = []  # your own @function_tool-decorated callables, if any

agent = Agent(
    name="code-reviewer",
    instructions="You are a senior engineer reviewing Python codebases.",
    tools=[*rb_tools, *your_tools],
)

hooks = rb.openai_hooks(
    run_id="run-1",
    agent_name="code-reviewer",
    task="Review auth module for security issues",
)

async def main():
    async with hooks:
        result = await Runner.run(
            agent,
            input="Review auth/session.py for security issues",
            hooks=hooks,
        )
    print(result.final_output)

asyncio.run(main())

Telemetry: openai_hooks

make_openai_tools handles tool wiring only. ReasonBlocks telemetry — step scoring, FSM state tracking, and E-trace injection — is provided separately by rb.openai_hooks(), which returns a ReasonBlocksHooks object you pass to Runner.run(hooks=...).
If you pass tools to Agent but omit hooks from Runner.run, your agent will use CodebaseMemory normally but ReasonBlocks will not emit any telemetry or inject E-trace guidance. Both are needed for full ReasonBlocks functionality.
For the full openai_hooks reference and lifecycle details, see the OpenAI Agents integration guide.