Conversation
…sation startup

- ProfileManager manages ~/.openhands/llm-profiles/*.json (load/save/list/register)
- LocalConversation now calls ProfileManager.register_all to eagerly populate LLMRegistry

Co-authored-by: openhands <[email protected]>
- embed profile lifecycle APIs into the registry
- update persistence helpers, docs, and examples to use registry
- replace profile manager tests with registry profile coverage

- note that LLMRegistry is the unified entry point for disk and runtime profiles
- mention how to override the profile directory when embedding the SDK

- rename payload helpers to resolve_llm_profiles/compact_llm_profiles
- update conversation state to use clearer helper names
- drop the optional agent_settings convenience module and its tests

- replace the _transform flag with dedicated _compact/_resolve helpers
- make compact_llm_profiles/resolve_llm_profiles easier to follow by delegating to the new helpers
Bring in new package layout and port LLM profile switching support.
Revert the in-progress switch_llm helpers and tests; agent-sdk-18 branch now only contains LLM profile persistence.
Example 25 now performs a read/write/delete workflow and verifies the persisted profile reference.
- move inline/profile compaction into LLM serializer/validator
- use model_dump_json context in ConversationState persistence
- add persistence settings module and cover profile reference tests
- document persistence comparison and recommendations
Follow-up on /codereview-roasted + applied fixes

This PR started with the right direction (persist LLM configs as profiles), but it had a few contract/constraint issues that would make it brittle in practice. Based on the review + discussion, the implementation now establishes explicit invariants and aligns with existing SDK patterns.

What we changed (mapping to the earlier review items)

(1) Stop overloading
OpenHands-Opus Code Review 🔥

Hi there! I'm OpenHands-Opus, and I've been asked to review this PR with the engineering mindset of Linus Torvalds. Brace yourselves.

Taste Rating: 🟡 Acceptable - Works but could be cleaner

Linus's Three Questions Applied:
[CRITICAL ISSUES] (Must fix - these break fundamental principles)

[openhands-sdk/openhands/sdk/llm/llm.py, Lines 393-412] 🔴 Hidden Control Flow Explosion

```python
@model_validator(mode="before")
def _coerce_inputs(cls, data: Any, info: ValidationInfo):
    if d.get("kind") == "profile_ref":
        if info.context is None or "llm_registry" not in info.context:
            raise ValueError(
                "LLM registry required in context to load profile references."
            )
        registry = info.context["llm_registry"]
        llm = registry.load_profile(profile_id)
        ...
```

You've turned a Pydantic validator into a file I/O operation with side effects. This is magic hiding behind model validation. When someone does

The fundamental issue: Validation should validate. Loading should load. You've conflated two distinct responsibilities into one opaque validator. Someone debugging "why is my LLM instantiation slow" will never guess it's doing file I/O inside

[openhands-sdk/openhands/sdk/conversation/state.py, Lines 262-310] 🔴 Complexity Explosion During Resume

The resume path now has 34 lines of conditional logic with multiple nested

If you need more than 3 levels of indentation conceptually, you're screwed. This needs redesign - pull the profile resolution into a named helper so the resume path reads like English.

[openhands-sdk/openhands/sdk/llm/llm_registry.py, Lines 67-81] 🔴 Silent chmod Failure

```python
try:
    path.chmod(0o600)
except Exception as e:  # best-effort on non-POSIX systems
    logger.debug(f"Unable to chmod profile file {path}: {e}")
```

"Best effort" permission setting on files containing API keys? Really? If you can't secure the file, you should fail loudly, not debug-log it into oblivion. A user on Windows with an insecure profile containing their $500/month API key won't know their credentials are world-readable.

[IMPROVEMENT OPPORTUNITIES] (Should fix - violates good taste)

[openhands-sdk/openhands/sdk/llm/llm_registry.py]
OpenHands-Opus Follow-up 🙇

After reviewing the conversation history and getting additional context, I need to revise my earlier assessment.

What I Missed

My review came in after @smolpaws had already done a roasted review and applied fixes. Several of my "critical issues" were actually criticizing deliberate design improvements that addressed earlier concerns:

Context I Didn't Have

Revised Taste Rating: 🟢 Good taste

The design correctly separates concerns:

Remaining Minor Notes

The

The validator-does-I/O pattern is acceptable when it's:

Updated Verdict: ✅ Worth merging

The post-fix implementation establishes clear contracts and aligns with SDK patterns. My earlier architectural criticism was based on misunderstanding the feature's purpose. Apologies for the noise - lesson learned to check conversation history before going full Torvalds. 😅

-- OpenHands-Opus 🤖
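For readers unfamiliar with the pattern being debated, here is a minimal, self-contained sketch of resolving a profile reference through Pydantic's validation context. The `FakeRegistry` and the LLM fields are simplified stand-ins for the SDK's real classes, not its actual API:

```python
from typing import Any

from pydantic import BaseModel, ValidationInfo, model_validator


class FakeRegistry:
    """Stand-in for LLMRegistry: maps profile ids to stored configs."""

    def __init__(self) -> None:
        self._profiles = {"gpt-fast": {"model": "gpt-4o-mini", "temperature": 0.0}}

    def load_profile(self, profile_id: str) -> dict:
        return self._profiles[profile_id]


class LLM(BaseModel):
    model: str
    temperature: float = 1.0

    @model_validator(mode="before")
    @classmethod
    def _resolve_profile_ref(cls, data: Any, info: ValidationInfo) -> Any:
        # Expand {"kind": "profile_ref", ...} into a full config via the registry
        if isinstance(data, dict) and data.get("kind") == "profile_ref":
            ctx = info.context or {}
            if "llm_registry" not in ctx:
                raise ValueError(
                    "LLM registry required in context to load profile references."
                )
            return ctx["llm_registry"].load_profile(data["profile_id"])
        return data


llm = LLM.model_validate(
    {"kind": "profile_ref", "profile_id": "gpt-fast"},
    context={"llm_registry": FakeRegistry()},
)
```

The registry arrives via `context=` rather than a global, which is what makes the caller's dependency explicit even though the expansion happens inside validation.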
@OpenHands CI is failing. Please check what we can do for those two jobs, and fix it.
I'm on it! enyst can track my progress at all-hands.dev |
all-hands-bot
left a comment
Comprehensive implementation of LLM profiles with good test coverage. Found several security and maintainability concerns that should be addressed.
```python
return self._profile_store.profile_dir


def subscribe(self, callback: Callable[[RegistryEvent], None]) -> None:
    """Subscribe to registry events.
```
🔴 Critical: Default include_secrets=True is dangerous. Users may inadvertently persist API keys to disk without realizing it. Consider:

- Defaulting to include_secrets=False for better security
- Adding a prominent warning in the docstring about secret persistence
- Requiring explicit opt-in for secret storage

The example file comment says "New profiles include API keys by default", which could lead to leaked credentials if users aren't careful.
I think this is the status quo for credentials in `~/.openhands/` though. It's the user home directory, and many applications simply save credentials there.
```diff
 except FileNotFoundError:
     base_text = None

 context: dict[str, object] = {}
 registry = llm_registry
 if registry is None:
     from openhands.sdk.llm.llm_registry import LLMRegistry

     registry = LLMRegistry()
 context["llm_registry"] = registry

 # Ensure we have a registry available during both dump and validate.
 #
 # We do NOT implicitly write profile files here. Instead, persistence will
 # store a profile reference only when the runtime LLM already has an
 # explicit ``profile_id``.

 # ---- Resume path ----
 if base_text:
-    # Use cipher context for decrypting secrets if provided
-    context = {"cipher": cipher} if cipher else None
-    state = cls.model_validate(json.loads(base_text), context=context)
+    base_payload = json.loads(base_text)
+    # Add cipher context for decrypting secrets if provided
+    if cipher:
+        context["cipher"] = cipher

     # Restore the conversation with the same id
-    if state.id != id:
+    persisted_id = ConversationID(base_payload.get("id"))
+    if persisted_id != id:
         raise ValueError(
             f"Conversation ID mismatch: provided {id}, "
-            f"but persisted state has {state.id}"
+            f"but persisted state has {persisted_id}"
         )

+    persisted_agent_payload = base_payload.get("agent")
+    if persisted_agent_payload is None:
+        raise ValueError("Persisted conversation is missing agent state")

+    # Attach event log early so we can read history for tool verification
+    event_log = EventLog(file_store, dir_path=EVENTS_DIR)

+    persisted_agent = AgentBase.model_validate(
+        persisted_agent_payload,
+        context={"llm_registry": registry},
+    )
+    agent.verify(persisted_agent, events=event_log)

+    # Use runtime-provided Agent directly (PR #1542 / issue #1451)
+    #
+    # Persist LLMs as profile references only when an explicit profile_id is
+    # set on the runtime LLM.
+    agent_payload = agent.model_dump(
+        mode="json",
+        exclude_none=True,
+        context={"expose_secrets": True},
+    )
+    llm_payload = agent_payload.get("llm")
+    if isinstance(llm_payload, dict) and llm_payload.get("profile_id"):
+        llm = agent.llm
+        agent_payload["llm"] = llm.to_profile_ref()

+    base_payload["agent"] = agent_payload
+    base_payload["workspace"] = workspace.model_dump(mode="json")
+    base_payload["max_iterations"] = max_iterations
```
🟠 Important: The create() method has become quite complex with the profile reference logic. Consider extracting the resume logic into a separate _resume_from_persistence() method to improve readability.
The multiple payload mutations (expanding profile refs, injecting runtime agent, converting back to profile refs) make this hard to follow and maintain.
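One way to tame those payload mutations is a small pure helper for the profile-ref compaction step. This is a sketch of what the suggested extraction might look like; `compact_llm_payload` is a hypothetical name, not the SDK's real method:

```python
def compact_llm_payload(agent_payload: dict) -> dict:
    """Replace an inline LLM dump with a profile reference when profile_id is set.

    Returns a new dict rather than mutating the caller's payload, which keeps
    the resume path's data flow easy to follow.
    """
    llm_payload = agent_payload.get("llm")
    if isinstance(llm_payload, dict) and llm_payload.get("profile_id"):
        agent_payload = dict(agent_payload)
        agent_payload["llm"] = {
            "kind": "profile_ref",
            "profile_id": llm_payload["profile_id"],
        }
    return agent_payload
```

With the compaction isolated, `create()` can read as a sequence of named steps instead of inline dict surgery.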
Set ``LLM_PROFILE_NAME`` to choose which profile file to load.

Notes on credentials:
- New profiles include API keys by default when saved
- To omit secrets on disk, pass include_secrets=False to LLMRegistry.save_profile
🔴 Critical: This documentation is misleading and dangerous. The current default behavior (include_secrets=True) could lead users to accidentally commit API keys.
Suggested change, from:

```
Set ``LLM_PROFILE_NAME`` to choose which profile file to load.

Notes on credentials:
- New profiles include API keys by default when saved
- To omit secrets on disk, pass include_secrets=False to LLMRegistry.save_profile
```

to:

```
Set ``LLM_PROFILE_NAME`` to choose which profile file to load.

Security Best Practice:
- Profiles should be saved WITHOUT secrets (include_secrets=False)
- Provide API keys via environment variables (LLM_API_KEY, AWS_ACCESS_KEY_ID, etc.)
- Never commit profile files containing secrets to version control
- Add *.json to .gitignore if storing profiles in your project directory
```
```python
_PROFILE_ID_PATTERN = re.compile(r"^[A-Za-z0-9._-]+$")


class _LLMProfileStore:
```
This is because of an attempt to separate two responsibilities:
LLMRegistry remains primarily a runtime registry; persistence behavior is explicitly contained.
Personally, I'm not sure it's necessary. They're close enough, while separating them means that below (line 220 and below) we are going to need to delegate to the LLMProfileStore methods, because the public API is still in the LLMRegistry.
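For context on why the store validates ids at all: restricting profile ids to that character class keeps a user-supplied id from escaping the profile directory when it is turned into a filename. A sketch of the idea (the `profile_path` helper is hypothetical):

```python
import re
from pathlib import Path

_PROFILE_ID_PATTERN = re.compile(r"^[A-Za-z0-9._-]+$")


def profile_path(profile_dir: Path, profile_id: str) -> Path:
    """Map a profile id to its JSON file, rejecting ids that could escape the dir."""
    # The pattern forbids "/" and "\\"; "." and ".." still match it, so
    # reject those explicitly as well.
    if not _PROFILE_ID_PATTERN.match(profile_id) or profile_id in {".", ".."}:
        raise ValueError(f"Invalid profile id: {profile_id!r}")
    return profile_dir / f"{profile_id}.json"
```

Whether this lives in a private `_LLMProfileStore` or directly on `LLMRegistry` is the delegation question raised above; the validation itself is needed either way.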
…on.py Co-authored-by: OpenHands Bot <[email protected]>
| context={"expose_secrets": True}, | ||
| ) | ||
| llm_payload = agent_payload.get("llm") | ||
| if isinstance(llm_payload, dict) and llm_payload.get("profile_id"): |
@OpenHands Why are we using profile_id, didn't we have a kind profile_ref? Please investigate the entire diff, and understand the code patterns and fix.
I'm on it! enyst can track my progress at all-hands.dev
Final update:
Summary (new since last update):
- No additional changes since the previous summary; branch is up to date and already pushed.
Checklist:
- Confirmed the request is fully addressed.
- Verified there are no extra or unrelated changes (diff vs. remote is empty).
- Changes are concise and already pushed to the PR branch.
If you want any further adjustments or additional tests run, just let me know.
Break the main function into script-level code to save a level of indentation. Also document this preference in AGENTS.md.

Co-authored-by: openhands <[email protected]>
Looks like there are a few issues preventing this PR from being merged!
If you'd like me to help, just leave a comment, like

Feel free to include any additional details that might help me get this PR into a better state.
HUMAN:
LLM Profiles behavior

- `usage_id`
- `profile_id` is set by the user, and corresponds to `llm_profiles/{profile_id}.json` in `persistence_dir` (if set) or `LLM_PROFILES_DIR`

Summary
LLMRegistry, exposing list/load/save/register/validate helpers with configurable profile directories

Testing
Related
Agent Server images for this PR

• GHCR package: https://github.com/OpenHands/agent-sdk/pkgs/container/agent-server

Variants & Base Images

- eclipse-temurin:17-jdk
- nikolaik/python-nodejs:python3.13-nodejs22
- golang:1.21-bookworm

Pull (multi-arch manifest)

```
# Each variant is a multi-arch manifest supporting both amd64 and arm64
docker pull ghcr.io/openhands/agent-server:b4c376a-python
```

Run

All tags pushed for this build

About Multi-Architecture Support

- Each variant tag (e.g. b4c376a-python) is a multi-arch manifest supporting both amd64 and arm64
- Architecture-specific tags (e.g. b4c376a-python-amd64) are also available if needed