
[FEATURE] Agent-Specific Reasoning Level Configuration #103

@NamedIdentity

Description


Feature Request: Agent-Specific Reasoning Level Configuration

Summary

On modern OpenCode (v1.0.210+), there is no clear or working method to configure specific agents to use specific reasoning levels (e.g., reasoningEffort: "high"). The documentation and codebase were reviewed by both the user and an AI assistant (Claude Code), and neither could determine a working approach.

Environment

  • OpenCode version: 1.1.12 (modern, v1.0.210+)
  • Plugin version: [email protected]
  • Platform: Windows

Problem Description

Goal

Configure agents in opencode.json to always use a specific reasoning level when invoked as subagents (where TUI variant selection is not available).

Example use case:

"principal-chatgpt": {
  "model": "openai/gpt-5.2",
  "prompt": "{file:./agent/principal-chatgpt.md}"
  // Need this agent to ALWAYS use reasoningEffort: "high"
}

What We Tried

  1. Created separate model entry with options block (per docs/configuration.md Pattern 2):

    "gpt-5.2-high": {
      "name": "GPT 5.2 High (OAuth)",
      "limit": { "context": 272000, "output": 128000 },
      "options": {
        "reasoningEffort": "high",
        "reasoningSummary": "detailed"
      }
    }

    Then referenced it from the agent as "model": "openai/gpt-5.2-high" (see the combined config sketch after this list)

    Result: OpenCode reports model "is not valid"

  2. Added id field to map custom model to base model (similar to working Google provider pattern):

    "gpt-5.2-high": {
      "id": "gpt-5.2",
      "name": "GPT 5.2 High (OAuth)",
      ...
    }

    Result: Still reports model "is not valid"

  3. Reviewed config/opencode-legacy.json, which defines separate model entries like gpt-5.2-high with options blocks, but the documentation states this config is for OpenCode v1.0.209 and below only.
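
For reference, here is roughly the combined configuration from attempts 1 and 2 (the provider/models nesting is reproduced from memory, so treat it as a sketch rather than a verbatim copy of our opencode.json):

"provider": {
  "openai": {
    "models": {
      "gpt-5.2-high": {
        "id": "gpt-5.2",
        "name": "GPT 5.2 High (OAuth)",
        "limit": { "context": 272000, "output": 128000 },
        "options": {
          "reasoningEffort": "high",
          "reasoningSummary": "detailed"
        }
      }
    }
  }
},
"agent": {
  "principal-chatgpt": {
    "model": "openai/gpt-5.2-high",
    "prompt": "{file:./agent/principal-chatgpt.md}"
  }
}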

Documentation Findings

The documentation provides conflicting or unclear guidance:

  1. config/README.md states:

    • Modern OpenCode (v1.0.210+) should use opencode-modern.json with variants
    • Legacy OpenCode (v1.0.209-) should use opencode-legacy.json with separate model entries
    • "Use the config file appropriate for your OpenCode version"
  2. docs/configuration.md shows "Per-Agent Models" example:

    "agent": {
      "commit": { "model": "openai/gpt-5.1-codex-low" },
      "review": { "model": "openai/gpt-5.1-codex-high" }
    }

    But doesn't specify which OpenCode version this works on.

  3. docs/configuration.md Pattern 2 shows per-model options overrides, but these don't work on modern OpenCode when referenced by agents.

  4. The variants system works for interactive use (--variant=high or TUI selection), but agents/subagents cannot access the TUI and there is no variant field in agent configuration.

Codebase Review

Reviewed the following files:

  • index.ts - Plugin loader extracts userConfig from provider config
  • lib/request/request-transformer.ts - getModelConfig() merges global + model-specific options
  • lib/request/helpers/model-map.ts - Maps model names, including gpt-5.2-high → gpt-5.2
  • lib/types.ts - UserConfig structure supports per-model options

The plugin code appears to support per-model options, but OpenCode's validation rejects custom model names before the plugin can process them.

Request

One of the following:

  1. Documentation clarification: If there IS a working method to configure agent-specific reasoning levels on modern OpenCode, please document it clearly with a complete example.

  2. Feature addition: If no method currently exists, please add support for configuring reasoning levels per-agent. Implementation approach is at developer discretion.
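
As a purely hypothetical illustration of option 2 (neither of these fields exists today, as far as we can tell), something along these lines at the agent level would cover the use case:

"agent": {
  "principal-chatgpt": {
    "model": "openai/gpt-5.2",
    "prompt": "{file:./agent/principal-chatgpt.md}",
    // hypothetical field: pin the reasoning level for this agent
    "reasoningEffort": "high"
    // or, alternatively, reuse the existing variant names:
    // "variant": "high"
  }
}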

Impact

This limitation affects users who:

  • Use multi-agent configurations with different reasoning requirements
  • Run agents as subagents (no TUI access for variant selection)
  • Want consistent, predictable reasoning behavior for specific agents
  • Are on modern OpenCode and cannot downgrade to legacy

Additional Context

The Google provider's authentication plugin (opencode-google-antigravity-auth) appears to work with custom model entries that have an id field mapping to base models (e.g., gemini-3-flash-high with id: "gemini-3-flash"). The same pattern does not work for the OpenAI provider.
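
For comparison, the working Google-side entry is shaped roughly like this (the name and the options contents below are placeholders; the relevant part is the id field mapping the custom model name back to the base model):

"gemini-3-flash-high": {
  "id": "gemini-3-flash",
  "name": "Gemini 3 Flash High",
  "options": { ... }
}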
