
Commit f8031bb

docs: sync code blocks and generate API reference (#317)
Synced from agent-sdk ref: main
Co-authored-by: xingyaoww <[email protected]>
1 parent 9a0f583 commit f8031bb

5 files changed: +169, -5 lines changed

sdk/api-reference/openhands.sdk.agent.mdx

Lines changed: 3 additions & 2 deletions
@@ -6,13 +6,14 @@ description: API reference for openhands.sdk.agent module

 ### class Agent

-Bases: [`AgentBase`](#class-agentbase)
+Bases: `CriticMixin`, [`AgentBase`](#class-agentbase)

 Main agent implementation for OpenHands.

 The Agent class provides the core functionality for running AI agents that can
 interact with tools, process messages, and execute actions. It inherits from
-AgentBase and implements the agent execution logic.
+AgentBase and implements the agent execution logic. Critic-related functionality
+is provided by CriticMixin.

 #### Example

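As a quick orientation (not part of this commit's diff), here is a minimal sketch of how the reworked `Agent` is typically wired up; the import paths and constructor arguments are assumptions modeled on the sdk/guides/critic.mdx example further down, where critic support (provided by `CriticMixin`) is shown in full.

```python
# Hypothetical sketch: imports and paths assumed from the SDK guides below.
import os

from openhands.sdk import LLM, Agent, Conversation, Tool
from openhands.tools.terminal import TerminalTool

llm = LLM(
    model="anthropic/claude-haiku-4-5",
    api_key=os.environ["LLM_API_KEY"],
)

# Agent now mixes in CriticMixin; passing a critic is optional and is shown
# in the critic.mdx guide changes below.
agent = Agent(llm=llm, tools=[Tool(name=TerminalTool.name)])

conversation = Conversation(agent=agent, workspace="/tmp/agent_demo")
conversation.send_message("Echo hello from the terminal tool.")
conversation.run()
```
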
sdk/api-reference/openhands.sdk.conversation.mdx

Lines changed: 1 addition & 0 deletions
@@ -237,6 +237,7 @@ Bases: `OpenHandsModel`

 - `activated_knowledge_skills`: list[str]
 - `agent`: AgentBase
+- `agent_state`: dict[str, Any]
 - `blocked_actions`: dict[str, str]
 - `blocked_messages`: dict[str, str]
 - `confirmation_policy`: ConfirmationPolicyBase

sdk/api-reference/openhands.sdk.llm.mdx

Lines changed: 61 additions & 0 deletions
@@ -305,6 +305,67 @@ Whether this model uses the OpenAI Responses API path.

 #### vision_is_active()

+### class LLMProfileStore
+
+Bases: `object`
+
+Standalone utility for persisting LLM configurations.
+
+#### Methods
+
+#### __init__()
+
+Initialize the profile store.
+
+* Parameters:
+  `base_dir` – Path to the directory where the profiles are stored.
+  If None is provided, the default directory is used, i.e.,
+  ~/.openhands/profiles.
+
+#### delete()
+
+Delete an existing profile.
+
+If the profile is not present in the profile directory, it does nothing.
+
+* Parameters:
+  `name` – Name of the profile to delete.
+* Raises:
+  `TimeoutError` – If the lock cannot be acquired.
+
+#### list()
+
+Returns a list of all profiles stored.
+
+* Returns:
+  List of profile filenames (e.g., [“default.json”, “gpt4.json”]).
+
+#### load()
+
+Load an LLM instance from the given profile name.
+
+* Parameters:
+  `name` – Name of the profile to load.
+* Returns:
+  An LLM instance constructed from the profile configuration.
+* Raises:
+  * `FileNotFoundError` – If the profile name does not exist.
+  * `ValueError` – If the profile file is corrupted or invalid.
+  * `TimeoutError` – If the lock cannot be acquired.
+
+#### save()
+
+Save a profile to the profile directory.
+
+Note that if a profile name already exists, it will be overwritten.
+
+* Parameters:
+  * `name` – Name of the profile to save.
+  * `llm` – LLM instance to save
+  * `include_secrets` – Whether to include the profile secrets. Defaults to False.
+* Raises:
+  `TimeoutError` – If the lock cannot be acquired.
+
 ### class LLMRegistry

 Bases: `object`

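For readers skimming the new `LLMProfileStore` reference above, a minimal usage sketch follows (not part of this commit); it assumes the imports and the `usage_id`/`api_key` handling shown in the sdk/guides/llm-profile-store.mdx changes below.

```python
# Hypothetical sketch tying together the LLMProfileStore methods documented above.
import os
import tempfile

from openhands.sdk import LLM, LLMProfileStore

store = LLMProfileStore(base_dir=tempfile.mkdtemp())

api_key = os.getenv("LLM_API_KEY")
assert api_key is not None, "LLM_API_KEY environment variable is not set."

llm = LLM(usage_id="fast", model="anthropic/claude-haiku-4-5", api_key=api_key)

# save() overwrites an existing profile of the same name; secrets are
# excluded unless include_secrets=True.
store.save(name="fast", llm=llm, include_secrets=False)

print(store.list())          # e.g. ["fast.json"]

loaded = store.load("fast")  # returns an LLM built from the stored profile
assert isinstance(loaded, LLM)

store.delete("fast")         # does nothing if the profile does not exist
```
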
sdk/guides/critic.mdx

Lines changed: 100 additions & 2 deletions
@@ -193,8 +193,8 @@ from openhands.tools.terminal import TerminalTool


 # Configuration
-# Higher threshold (70%) makes it more likely the agent needs multiple iterations
-# to demonstrate the how iterative refinement works.
+# Higher threshold (70%) makes it more likely the agent needs multiple iterations,
+# which better demonstrates how iterative refinement works.
 # Adjust as needed to see different behaviors.
 SUCCESS_THRESHOLD = float(os.getenv("CRITIC_SUCCESS_THRESHOLD", "0.7"))
 MAX_ITERATIONS = int(os.getenv("MAX_ITERATIONS", "3"))
@@ -330,6 +330,104 @@ The task is complete ONLY when:
 """


+llm_api_key = get_required_env("LLM_API_KEY")
+llm = LLM(
+    # Use a weaker model to increase likelihood of needing multiple iterations
+    model="anthropic/claude-haiku-4-5",
+    api_key=llm_api_key,
+    top_p=0.95,
+    base_url=os.getenv("LLM_BASE_URL", None),
+)
+
+# Setup critic with iterative refinement config
+# The IterativeRefinementConfig tells Conversation.run() to automatically
+# retry the task if the critic score is below the threshold
+iterative_config = IterativeRefinementConfig(
+    success_threshold=SUCCESS_THRESHOLD,
+    max_iterations=MAX_ITERATIONS,
+)
+
+# Auto-configure critic for All-Hands proxy or use explicit env vars
+critic = get_default_critic(llm)
+if critic is None:
+    print("⚠️ No All-Hands LLM proxy detected, trying explicit env vars...")
+    critic = APIBasedCritic(
+        server_url=get_required_env("CRITIC_SERVER_URL"),
+        api_key=get_required_env("CRITIC_API_KEY"),
+        model_name=get_required_env("CRITIC_MODEL_NAME"),
+        iterative_refinement=iterative_config,
+    )
+else:
+    # Add iterative refinement config to the auto-configured critic
+    critic = critic.model_copy(update={"iterative_refinement": iterative_config})
+
+# Create agent with critic (iterative refinement is built into the critic)
+agent = Agent(
+    llm=llm,
+    tools=[
+        Tool(name=TerminalTool.name),
+        Tool(name=FileEditorTool.name),
+        Tool(name=TaskTrackerTool.name),
+    ],
+    critic=critic,
+)
+
+# Create workspace
+workspace = Path(tempfile.mkdtemp(prefix="critic_demo_"))
+print(f"📁 Created workspace: {workspace}")
+
+# Create conversation - iterative refinement is handled automatically
+# by Conversation.run() based on the critic's config
+conversation = Conversation(
+    agent=agent,
+    workspace=str(workspace),
+)
+
+print("\n" + "=" * 70)
+print("🚀 Starting Iterative Refinement with Critic Model")
+print("=" * 70)
+print(f"Success threshold: {SUCCESS_THRESHOLD:.0%}")
+print(f"Max iterations: {MAX_ITERATIONS}")
+
+# Send the task and run - Conversation.run() handles retries automatically
+conversation.send_message(INITIAL_TASK_PROMPT)
+conversation.run()
+
+# Print additional info about created files
+print("\nCreated files:")
+for path in sorted(workspace.rglob("*")):
+    if path.is_file():
+        relative = path.relative_to(workspace)
+        print(f" - {relative}")
+
+# Report cost
+cost = llm.metrics.accumulated_cost
+print(f"\nEXAMPLE_COST: {cost:.4f}")
+```
+Hello world!
+This is a well-known test file.
+
+It has 5 lines, including empty ones.
+Numbers like 42 and 3.14 don't count as words.
+```
+
+2. Run: `python wordstats/cli.py sample.txt`
+Expected output:
+- Lines: 5
+- Words: 21
+- Chars: 130
+- Unique words: 21
+
+3. Run the tests: `python -m pytest wordstats/tests/ -v`
+ALL tests must pass.
+
+The task is complete ONLY when:
+- All files exist
+- The CLI outputs the correct stats for sample.txt
+- All 5+ tests pass
+"""
+
 llm_api_key = get_required_env("LLM_API_KEY")
 llm = LLM(
     # Use a weaker model to increase likelihood of needing multiple iterations

sdk/guides/llm-profile-store.mdx

Lines changed: 4 additions & 1 deletion
@@ -113,7 +113,7 @@ from openhands.sdk import LLM, LLMProfileStore
 store = LLMProfileStore(base_dir=tempfile.mkdtemp())


-# 1. Create to LLM profiles with different usage
+# 1. Create two LLM profiles with different usage

 api_key = os.getenv("LLM_API_KEY")
 assert api_key is not None, "LLM_API_KEY environment variable is not set."

@@ -152,6 +152,7 @@ print(f"Stored profiles: {store.list()}")
 # 4. Load a profile

 loaded = store.load("fast")
+assert isinstance(loaded, LLM)
 print(
     "Loaded profile. "
     f"usage:{loaded.usage_id}, "

@@ -163,6 +164,8 @@ print(

 store.delete("creative")
 print(f"After deletion: {store.list()}")
+
+print("EXAMPLE_COST: 0")
 ```

 <RunExampleCode path_to_script="examples/01_standalone_sdk/37_llm_profile_store.py"/>

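As a hedged aside (not part of this commit), the `include_secrets` flag documented in the new `save()` reference above controls whether credentials are written to the profile file; a small sketch, reusing the `store` and `llm` objects from the guide:

```python
# Hypothetical follow-on to the guide above; `store` and `llm` come from its
# earlier steps, and the profile names here are made up for illustration.
store.save(name="fast_no_secrets", llm=llm)                          # default: secrets omitted
store.save(name="fast_with_secrets", llm=llm, include_secrets=True)  # secrets persisted to disk

print(store.list())  # both new profiles now appear alongside the guide's others
```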