Releases: numman-ali/opencode-openai-codex-auth
v4.4.0
v4.3.1 (2026-01-08)
Installer safety release: JSONC support, safe uninstall, and a `minimal` reasoning-effort clamp.
Added
- JSONC-aware installer with comment/formatting preservation and `.jsonc` priority.
- Safe uninstall: `--uninstall` removes only plugin entries + our model presets; `--all` removes tokens/logs/cache.
- Installer tests covering JSONC parsing, precedence, uninstall safety, and artifact cleanup.
Changed
- Default config path when creating new configs: `~/.config/opencode/opencode.jsonc`.
- Added `jsonc-parser` (MIT, 0 deps) for robust JSONC handling.
Fixed
- Normalizes `minimal` → `low` for GPT‑5.x requests to avoid backend rejection.
v4.3.0 (2026-01-04)
Feature + reliability release: variants support, one-command installer, and auth/error handling fixes.
Added
- One-command installer/update: `npx -y opencode-openai-codex-auth@latest` (global config, backup, cache clear) with `--legacy` for OpenCode v1.0.209 and below.
- Modern variants config: `config/opencode-modern.json` for OpenCode v1.0.210+; legacy presets remain in `config/opencode-legacy.json`.
- Installer CLI bundled as package bin for cross-platform use (Windows/macOS/Linux).
Changed
- Variants-aware request config respects host-supplied `body.reasoning` / `providerOptions.openai` before falling back to defaults.
- OpenCode prompt source updated to the current upstream repository (`anomalyco/opencode`).
- Docs/README reorganized to an install-first layout with an explicit legacy path.
Fixed
- Headless login fallback when `xdg-open` is missing; manual URL paste remains available.
- Error handling alignment: refresh failures throw; usage-limit 404s map to retryable 429s where appropriate.
- AGENTS.md preservation via protected instruction markers.
- Tool-call integrity: orphan outputs match `local_shell_call` and `custom_tool_call` (Codex CLI parity); unmatched outputs preserved as assistant messages.
- Logging noise gated behind debug flags.
v4.2.0
Feature release: GPT 5.2 Codex support and prompt alignment with latest Codex CLI.
Added
- GPT 5.2 Codex model family: Full support for `gpt-5.2-codex` with presets:
  - `gpt-5.2-codex-low` - Fast GPT 5.2 Codex responses
  - `gpt-5.2-codex-medium` - Balanced GPT 5.2 Codex tasks
  - `gpt-5.2-codex-high` - Complex GPT 5.2 Codex reasoning & tools
  - `gpt-5.2-codex-xhigh` - Deep GPT 5.2 Codex long-horizon work
- New model family prompt: `gpt-5.2-codex_prompt.md` fetched from the latest Codex CLI release with its own cache file.
- Test coverage: Added unit tests for GPT 5.2 Codex normalization, family selection, and reasoning behavior.
Changed
- Prompt selection alignment: GPT 5.2 general now uses `gpt_5_2_prompt.md` (Codex CLI parity).
- Reasoning configuration: GPT 5.2 Codex supports `xhigh` but does not support `"none"`; `"none"` auto-upgrades to `"low"` and `"minimal"` normalizes to `"low"`.
- Config presets: `config/full-opencode.json` now includes 22 pre-configured variants (adds GPT 5.2 Codex).
- Docs: Updated README/AGENTS/config docs to include GPT 5.2 Codex and new model family behavior.
v4.1.1
What's New
"None" Reasoning Effort Support
GPT-5.2 and GPT-5.1 general purpose models now support reasoning_effort: "none" which disables the reasoning phase entirely. This can result in faster responses when reasoning is not needed.
- gpt-5.2-none - GPT-5.2 with reasoning disabled
- gpt-5.1-none - GPT-5.1 with reasoning disabled
Note: Codex variants do NOT support "none" - it auto-converts to "low" for Codex/Codex Max, or "medium" for Codex Mini.
Bug Fixes
- Fixed orphaned `function_call_output` 400 errors - Previously, when conversation history contained `item_reference` items pointing to stored function calls, orphaned `function_call_output` items could cause API errors. Now handles orphans regardless of tools presence and converts them to assistant messages to preserve context while avoiding errors.
- Fixed OAuth HTML version display - Updated the version shown in the OAuth success page from 1.0.4 to 4.1.0.
Reasoning Effort Support Matrix
| Model | none | low | medium | high | xhigh |
|---|---|---|---|---|---|
| GPT-5.2 | ✅ | ✅ | ✅ | ✅ | ✅ |
| GPT-5.1 | ✅ | ✅ | ✅ | ✅ | ❌ |
| GPT-5.1-Codex | ❌→low | ✅ | ✅ | ✅ | ❌ |
| GPT-5.1-Codex-Max | ❌→low | ✅ | ✅ | ✅ | ✅ |
| GPT-5.1-Codex-Mini | ❌→medium | ❌→medium | ✅ | ✅ | ❌→high |
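The conversions in the matrix can be sketched as a small clamp function. This is an illustrative sketch only (the function and type names are not the plugin's actual exports); combinations marked with a plain ❌ have no documented fallback and are left untouched here.

```typescript
// Illustrative sketch of the effort conversions listed in the matrix above.
type Effort = "none" | "low" | "medium" | "high" | "xhigh";

function clampEffort(model: string, effort: Effort): Effort {
  if (model.includes("codex-mini")) {
    // Codex Mini: none/low -> medium, xhigh -> high
    if (effort === "none" || effort === "low") return "medium";
    if (effort === "xhigh") return "high";
    return effort;
  }
  if (model.includes("codex")) {
    // Codex and Codex Max: "none" auto-converts to "low"
    if (effort === "none") return "low";
    return effort;
  }
  // GPT-5.1/5.2 general models pass the value through unchanged.
  return effort;
}
```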
Model Presets
This release includes 18 pre-configured model variants in the full configuration:
- GPT-5.2: none, low, medium, high, xhigh
- GPT-5.1-Codex-Max: low, medium, high, xhigh
- GPT-5.1-Codex: low, medium, high
- GPT-5.1-Codex-Mini: medium, high
- GPT-5.1: none, low, medium, high
Upgrade
Update your opencode.json:
"plugin": ["[email protected]"]Then copy the updated configuration from config/full-opencode.json.
Test Coverage
- 197 unit tests (4 new tests for "none" reasoning behavior)
- All tests passing
v4.1.0 - GPT 5.2 Support & Full Image Input
🚀 GPT 5.2 Support & Full Image Input Capabilities
This release adds support for OpenAI's latest GPT 5.2 model and enables full multimodal image input across all 16 model variants.
✨ New Features
GPT 5.2 Model Family
- 4 new model presets with full reasoning support:
  - `gpt-5.2-low` - Fast responses with light reasoning
  - `gpt-5.2-medium` - Balanced reasoning for general tasks
  - `gpt-5.2-high` - Complex reasoning and analysis
  - `gpt-5.2-xhigh` - Deep multi-hour analysis (same capabilities as Codex Max)
🖼️ Full Image Input Support
- All 16 models now support image input via `modalities.input: ["text", "image"]`
- Read screenshots, diagrams, UI mockups, and any image directly in OpenCode
- No additional configuration required - just use the full config
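For illustration, the `modalities.input` setting named above appears in each model entry of the full config roughly as follows (the surrounding model-entry nesting is omitted here; copy the real entries from `config/full-opencode.json` rather than hand-writing them):

```json
{
  "modalities": {
    "input": ["text", "image"]
  }
}
```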
📝 Changes
- Model ordering: Config now prioritizes newer models (GPT 5.2 → Codex Max → Codex → Codex Mini → GPT 5.1)
- Explicit reasoning levels: Removed default presets without reasoning suffix to enforce explicit selection
- Test coverage: 193 unit tests + 16 integration tests (all passing)
- Security: Updated `@opencode-ai/plugin` to `^1.0.150` (0 vulnerabilities)
📦 Installation
Update your opencode.json to use the new version:
```json
{
  "plugin": ["opencode-openai-codex-auth@4.1.0"]
}
```

Then copy the full config from `config/full-opencode.json`.
🔧 Usage
```bash
# GPT 5.2 models
opencode run "analyze this" --model=openai/gpt-5.2-high
opencode run "deep research" --model=openai/gpt-5.2-xhigh

# With image input (automatic - no extra config needed)
# Just reference images in your prompts and OpenCode will handle them
```

Full Changelog: v4.0.2...v4.1.0
v4.0.2 - Fix Compaction & Agent Creation
Bugfix Release
Fixes compaction context loss, agent creation, and SSE/JSON response handling.
Fixed
- Compaction losing context: v4.0.1 was too aggressive in filtering tool calls - it removed ALL `function_call`/`function_call_output` items when tools weren't present. Now only orphaned outputs (without matching calls) are filtered, preserving matched pairs for compaction context.
- Agent creation failing: The `/agent create` command was failing with "Invalid JSON response" because we were returning SSE streams instead of JSON for `generateText()` requests.
- SSE/JSON response handling: Properly detect original request intent - `streamText()` requests get SSE passthrough, `generateText()` requests get SSE→JSON conversion.
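The SSE→JSON conversion can be sketched as follows. This is a simplified illustration, not the plugin's actual code; it assumes the Responses API streaming format, where the final `response.completed` event carries the complete response object.

```typescript
// Illustrative sketch: collapse an SSE stream body into the single JSON
// response a non-streaming caller expects, by extracting the complete
// response object from the final "response.completed" event.
function sseToJson(sseBody: string): unknown {
  for (const line of sseBody.split("\n")) {
    if (!line.startsWith("data:")) continue;
    const payload = line.slice(5).trim();
    if (payload === "[DONE]") break;
    const event = JSON.parse(payload);
    if (event.type === "response.completed") return event.response;
  }
  return undefined;
}
```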
Added
- `gpt-5.1-chat-latest` model support: Added to model map, normalizes to `gpt-5.1`.
Technical Details
- Compaction fix: OpenCode sends `item_reference` with `fc_*` IDs for function calls. We filter these for stateless mode, but v4.0.1 then removed ALL tool items. Now we only remove orphaned `function_call_output` items (where no matching `function_call` exists).
- Agent creation fix: We were forcing `stream: true` for all requests and returning SSE for all responses. Now we capture the original `stream` value before transformation and convert SSE→JSON only when the original request wasn't streaming.
- The Codex API always receives `stream: true` (required), but response handling is based on original intent.
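The orphan filter described above can be sketched like this (types simplified and names illustrative - not the plugin's exact code): keep a `function_call_output` only when a `function_call` with the same `call_id` exists in the same input array.

```typescript
// Illustrative sketch of the v4.0.2 orphan filter: drop only
// function_call_output items whose call_id has no matching function_call,
// preserving matched pairs for compaction context.
interface Item {
  type: string;
  call_id?: string;
}

function dropOrphanedOutputs(input: Item[]): Item[] {
  const callIds = new Set(
    input.filter((i) => i.type === "function_call").map((i) => i.call_id),
  );
  return input.filter(
    (i) => i.type !== "function_call_output" || callIds.has(i.call_id),
  );
}
```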
Upgrade
Update your opencode.json:
```json
{
  "plugin": ["opencode-openai-codex-auth@4.0.2"]
}
```

If stuck on an old version, clear the cache:

```bash
rm -rf ~/.cache/opencode/node_modules ~/.cache/opencode/bun.lock
```

Full Changelog: v4.0.1...v4.0.2
v4.0.1 - Bugfix Release
Bugfix Release
Fixes API errors during summary/compaction and GitHub rate limiting.
Fixed
- Orphaned `function_call_output` errors: Fixed 400 errors during summary/compaction requests when OpenCode sends `item_reference` pointers to server-stored function calls. The plugin now filters out `function_call` and `function_call_output` items when no tools are present in the request.
- GitHub API rate limiting: Added fallback mechanism when fetching Codex instructions from GitHub. If the API returns 403 (rate limit), the plugin now falls back to parsing the HTML releases page.
Technical Details
- Root cause: OpenCode's secondary model (gpt-5-nano) uses `item_reference` with `fc_*` IDs to reference stored function calls. Our plugin filters `item_reference` for stateless mode (`store: false`), leaving `function_call_output` orphaned. The Codex API rejects requests with orphaned outputs.
- Fix: When `hasTools === false`, filter out all `function_call` and `function_call_output` items from the input array.
- GitHub fallback chain: API endpoint → HTML page → redirect URL parsing → HTML regex parsing.
Upgrade
Update your opencode.json:
```json
{
  "plugin": ["opencode-openai-codex-auth@4.0.1"]
}
```

If stuck on an old version, clear the cache:

```bash
rm -rf ~/.cache/opencode/node_modules ~/.cache/opencode/bun.lock
```

Full Changelog: v4.0.0...v4.0.1
v4.0.0 - 🎉 Major Release: Full Codex Max Support & Prompt Engineering Overhaul
This release brings full GPT-5.1 Codex Max support with dedicated prompts, plus complete parity with Codex CLI's prompt selection logic.
🚀 Highlights
- Full Codex Max support with dedicated prompt including frontend design guidelines
- Model-specific prompts matching Codex CLI's prompt selection logic
- GPT-5.0 → GPT-5.1 migration as legacy models are phased out by OpenAI
✨ Model-Specific System Prompts
The plugin now fetches the correct Codex prompt based on model family, matching Codex CLI's model_family.rs logic:
| Model Family | Prompt File | Lines | Use Case |
|---|---|---|---|
| `gpt-5.1-codex-max*` | `gpt-5.1-codex-max_prompt.md` | 117 | Codex Max with frontend design guidelines |
| `gpt-5.1-codex*`, `codex-*` | `gpt_5_codex_prompt.md` | 105 | Focused coding prompt |
| `gpt-5.1*` | `gpt_5_1_prompt.md` | 368 | Full behavioral guidance |
🔄 Legacy GPT-5.0 → GPT-5.1 Migration
All legacy GPT-5.0 models automatically normalize to GPT-5.1 equivalents:
- `gpt-5-codex` → `gpt-5.1-codex`
- `gpt-5` → `gpt-5.1`
- `gpt-5-mini`, `gpt-5-nano` → `gpt-5.1`
- `codex-mini-latest` → `gpt-5.1-codex-mini`
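The migration table above amounts to a simple lookup map; a minimal sketch (the function name is illustrative, not the plugin's exported API):

```typescript
// Illustrative sketch of the legacy GPT-5.0 -> GPT-5.1 migration map.
const LEGACY_MAP: Record<string, string> = {
  "gpt-5-codex": "gpt-5.1-codex",
  "gpt-5": "gpt-5.1",
  "gpt-5-mini": "gpt-5.1",
  "gpt-5-nano": "gpt-5.1",
  "codex-mini-latest": "gpt-5.1-codex-mini",
};

function normalizeLegacyModel(model: string): string {
  // Already-current identifiers pass through unchanged.
  return LEGACY_MAP[model] ?? model;
}
```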
🔧 Technical Improvements
- New `ModelFamily` type: `"codex-max" | "codex" | "gpt-5.1"`
- Lazy instruction loading: Instructions fetched per-request based on model
- Separate caching per family: Better cache efficiency
- Model family logging: Debug with `modelFamily` field in logs
🧪 Test Coverage
- 191 unit tests (16 new for model family detection)
- 13 integration tests with family verification
- All tests passing ✅
📝 Full Changelog
See CHANGELOG.md for complete details.
Installation:
```bash
npm install opencode-openai-codex-auth@4.0.0
```
v3.3.0: GPT 5.1 Enforcement + Configuration Standardization
This release enforces GPT 5.1 model identifiers across all configurations and documentation, removes deprecated GPT 5.0 models, and establishes config/full-opencode.json as the only officially supported configuration. These changes address GPT 5 model temperamental behavior and ensure users have a reliable, tested setup that works consistently with OpenCode features.
🏷️ Model Naming & Deprecation
GPT 5.1 Standardization
Impact: 🟡 MEDIUM - Configuration update required
Changes:
All model identifiers updated to GPT 5.1 naming convention:
- ✅ `gpt-5-codex-low` → `gpt-5.1-codex-low`
- ✅ `gpt-5-codex-medium` → `gpt-5.1-codex-medium`
- ✅ `gpt-5-codex-high` → `gpt-5.1-codex-high`
- ✅ `gpt-5-codex-mini-medium` → `gpt-5.1-codex-mini-medium`
- ✅ `gpt-5-codex-mini-high` → `gpt-5.1-codex-mini-high`
- ✅ `gpt-5-low` → `gpt-5.1-low`
- ✅ `gpt-5-medium` → `gpt-5.1-medium`
- ✅ `gpt-5-high` → `gpt-5.1-high`
File: config/full-opencode.json
Display names updated:
- "GPT 5 Codex Low (OAuth)" → "GPT 5.1 Codex Low (OAuth)"
- All variants now clearly show "5.1" in the TUI
Deprecated Models Removed
Impact: 🔴 HIGH - Breaking change for users on GPT 5.0 models
Removed from config:
- ❌ `gpt-5-minimal` - No longer supported
- ❌ `gpt-5-mini` - No longer supported
- ❌ `gpt-5-nano` - No longer supported
Reason:
OpenAI is phasing out GPT 5.0 models. These models exhibited unreliable behavior and are being replaced by the GPT 5.1 family.
Migration:
Users on deprecated models should switch to:
- `gpt-5-minimal` → `gpt-5.1-low` (similar fast performance)
- `gpt-5-mini` → `gpt-5.1-low` (lightweight reasoning)
- `gpt-5-nano` → `gpt-5.1-low` (minimal reasoning)
File: config/full-opencode.json - Now ships with 8 verified GPT 5.1 variants instead of 11 mixed 5.0/5.1 models
Codex Mini Context Limits Corrected
Impact: 🟢 LOW - Improves accuracy
Problem:
Codex Mini was configured with incorrect context limits (200k/100k), which didn't match actual API specifications.
Fix:
Updated Codex Mini limits to correct values:
- Context: 200k → 272k tokens
- Output: 100k → 128k tokens
Impact:
- ✅ OpenCode now displays accurate token usage for Codex Mini variants
- ✅ Auto-compaction works correctly with proper limits
- ✅ Matches actual API behavior
File: config/full-opencode.json:69-70, 85-86
⚠️ Configuration Enforcement
Full Config Now Required
Impact: 🔴 CRITICAL - Affects all users
What Changed:
The plugin now strongly enforces config/full-opencode.json as the only officially supported configuration.
Why This Matters:
GPT 5 models have proven to be temperamental:
- Some variants work reliably
- Some don't respond correctly
- Some may give errors unexpectedly
The full configuration has been thoroughly tested and verified to work consistently. Minimal configurations lack critical metadata and may fail unpredictably.
Documentation Updates:
README.md:
- Changed "Recommended: Full Configuration" → "
⚠️ REQUIRED: Full Configuration (Only Supported Setup)" - Added explicit warning: "GPT 5 models can be temperamental - some work, some don't, some may error"
- Marked minimal config section as "❌ NOT RECOMMENDED - DO NOT USE"
- Added detailed "Why this doesn't work" section explaining:
- Missing model metadata breaks OpenCode features
- No support for usage limits or context compaction
- Cannot guarantee stable operation
docs/getting-started.md:
- Removed "Option B: Minimal Configuration"
- Replaced with "❌ Minimal Configuration (NOT SUPPORTED - DO NOT USE)"
- Added comprehensive warnings about GPT 5 models requiring proper configuration
docs/configuration.md:
- Added warnings throughout about using the official `full-opencode.json`
- Updated "Recommended" → "⚠️ REQUIRED: Use Pre-Configured File"
- Added migration guide showing GPT 5.0 → GPT 5.1 upgrade path
config/README.md:
- Complete restructure from "Configuration Examples" → "Configuration"
- Added "
⚠️ REQUIRED Configuration File" section - Marked
minimal-opencode.jsonas NOT SUPPORTED - Marked
full-opencode-gpt5.jsonas DEPRECATED - Clear "❌ Other Configurations (NOT SUPPORTED)" section
Impact:
- ✅ Users get reliable, tested configuration
- ✅ OpenCode features (auto-compaction, usage sidebar) work properly
- ✅ Reduces support issues from misconfiguration
⚠️ Users must migrate from minimal configs to full config
Why Minimal Configs Don't Work
Missing Metadata:
Minimal configs lack per-model limit metadata that OpenCode requires for:
- Token usage display
- Automatic context compaction
- Usage sidebar widgets
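For illustration, a single model entry carrying the metadata OpenCode needs might look roughly like the following. The key names here are approximate and the entry is hypothetical - copy the real entries from `config/full-opencode.json` rather than hand-writing them; the limits and options shown are the values this release documents elsewhere (272k/128k, `store: false`, encrypted reasoning include):

```json
{
  "gpt-5.1-codex-low": {
    "name": "GPT 5.1 Codex Low (OAuth)",
    "limit": { "context": 272000, "output": 128000 },
    "options": {
      "store": false,
      "include": ["reasoning.encrypted_content"]
    }
  }
}
```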
GPT 5 Temperamental Behavior:
Without proper configuration:
- Some model variants may fail
- Error messages may be unclear
- Behavior is unpredictable
No Support Guarantee:
The plugin team cannot guarantee stable operation with custom or minimal configs. Only full-opencode.json is tested and verified.
📝 Documentation Overhaul
Comprehensive Warning System
All documentation now includes:
- ⚠️ Prominent warnings about GPT 5 model temperamental behavior
- 🔴 Clear "DO NOT USE" sections for unsupported configs
- ✅ Migration paths from deprecated models to GPT 5.1
- 📋 Detailed explanations of why full config is required
Files Updated:
- `README.md` - Main plugin documentation
- `docs/getting-started.md` - Installation guide
- `docs/configuration.md` - Configuration reference
- `docs/index.md` - Documentation home
- `config/README.md` - Configuration directory guide
- `AGENTS.md` - Developer agent guidance
Model Variant Table Updated
README.md Model Table:
All 8 GPT 5.1 variants now clearly listed:
| CLI Model ID | TUI Display Name | Reasoning Effort | Best For |
|---|---|---|---|
| `gpt-5.1-codex-low` | GPT 5.1 Codex Low (OAuth) | Low | Fast code generation |
| `gpt-5.1-codex-medium` | GPT 5.1 Codex Medium (OAuth) | Medium | Balanced code tasks |
| `gpt-5.1-codex-high` | GPT 5.1 Codex High (OAuth) | High | Complex code & tools |
| `gpt-5.1-codex-mini-medium` | GPT 5.1 Codex Mini Medium (OAuth) | Medium | Latest Codex mini tier |
| `gpt-5.1-codex-mini-high` | GPT 5.1 Codex Mini High (OAuth) | High | Codex Mini with maximum reasoning |
| `gpt-5.1-low` | GPT 5.1 Low (OAuth) | Low | Faster responses with light reasoning |
| `gpt-5.1-medium` | GPT 5.1 Medium (OAuth) | Medium | Balanced general-purpose tasks |
| `gpt-5.1-high` | GPT 5.1 High (OAuth) | High | Deep reasoning, complex problems |
Added warning:
⚠️ Important: GPT 5 models can be temperamental - some variants may work better than others, some may give errors, and behavior may vary. Stick to the presets above configured in `full-opencode.json` for best results.
Usage Examples Updated
All code examples now use GPT 5.1 naming:
```bash
# Use different reasoning levels for gpt-5.1-codex
opencode run "simple task" --model=openai/gpt-5.1-codex-low
opencode run "complex task" --model=openai/gpt-5.1-codex-high

# Use different reasoning levels for gpt-5.1
opencode run "quick question" --model=openai/gpt-5.1-low
opencode run "deep analysis" --model=openai/gpt-5.1-high

# Use Codex Mini variants
opencode run "balanced task" --model=openai/gpt-5.1-codex-mini-medium
opencode run "complex code" --model=openai/gpt-5.1-codex-mini-high
```

Files Updated:
- `README.md` - All examples use 5.1 naming
- `docs/getting-started.md` - Installation examples updated
- `docs/configuration.md` - Configuration examples updated
- `config/README.md` - Usage examples updated
🔧 Technical Improvements
Model Normalization Map
New file: lib/request/helpers/model-map.ts
Purpose:
Centralized model normalization logic for consistent handling of all GPT 5.1 variants.
Features:
- Explicit mapping of all known model variants
- Fallback pattern matching for custom names
- Support for both 5.0 (deprecated) and 5.1 families
- Handles provider prefixes (`openai/gpt-5.1-codex-low`)
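The flow described above (explicit map first, then fallback pattern matching, with provider prefixes stripped) can be sketched as follows. Names and map contents are illustrative - they are not the exported API or full contents of `model-map.ts`:

```typescript
// Illustrative sketch: strip a provider prefix, try the explicit map,
// then fall back to prefix-based pattern matching for custom names.
const MODEL_MAP: Record<string, string> = {
  "gpt-5.1-codex-low": "gpt-5.1-codex",
  "gpt-5.1-codex-medium": "gpt-5.1-codex",
  "gpt-5.1-codex-high": "gpt-5.1-codex",
  "gpt-5-codex-low": "gpt-5.1-codex", // deprecated 5.0 family
};

function normalizeModel(id: string): string {
  const bare = id.includes("/") ? id.slice(id.indexOf("/") + 1) : id;
  if (MODEL_MAP[bare]) return MODEL_MAP[bare];
  // Fallback pattern matching for custom names.
  if (bare.startsWith("gpt-5.1-codex")) return "gpt-5.1-codex";
  if (bare.startsWith("gpt-5.1")) return "gpt-5.1";
  return bare;
}
```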
Impact:
- ✅ More maintainable code
- ✅ Easier to add new model variants
- ✅ Clear documentation of supported models
Model Validation Script
New file: scripts/validate-model-map.sh
Purpose:
Automated validation that ensures:
- All models in config are recognized by normalization
- No orphaned model definitions
- Config and code stay in sync
Usage:
```bash
./scripts/validate-model-map.sh
```

Impact:
- ✅ Catches configuration errors early
- ✅ Prevents regression when adding new models
- ✅ Automated quality assurance
Enhanced Test Coverage
File: test/request-transformer.test.ts
New tests added:
- GPT 5.1 model normalization
- Deprecated model handling
- Codex Mini limit verification
- Model variant recognition
Impact:
- ✅ Ensures 5.1 migration works correctly
- ✅ Verifies limit metadata accuracy
- ✅ Prevents regression
📋 CHANGELOG Updates
File: CHANGELOG.md
Added comprehensive entry for v3.3.0 documenting:
- GPT 5.1 standardization
- Deprecated model removal
- Configuration enforcement
- Documentation overhaul
🎯 Verification
Configuration verified:
- ✅ All 8 GPT 5.1 variants defined in `full-opencode.json`
- ✅ Correct context limits (272k/128k) for all models
- ✅ Proper reasoning effort settings per variant
- ✅ Required options (`store: false`, `include: ["reasoning.encrypted_content"]`)
Documentation verified:
- ✅ All examples use GPT 5.1 naming
- ✅ Warnings about temperamental behavior consistent across docs
- ✅ Migration paths from GPT 5.0 to GPT 5.1 clear
- ✅ Full config enf...