# Changelog

All notable changes to this project are documented here. Dates use the ISO format.

### Changed

- **Prompt selection alignment**: GPT 5.2 general now uses `gpt_5_2_prompt.md` (Codex CLI parity).
- **Reasoning configuration**: GPT 5.2 Codex supports `xhigh` but does **not** support `"none"`; `"none"` auto-upgrades to `"low"` and `"minimal"` normalizes to `"low"`.
- **Config presets**: `config/opencode-legacy.json` includes the 22 pre-configured presets (adds GPT 5.2 Codex); `config/opencode-modern.json` provides the variant-based setup.
- **Docs**: Updated README/AGENTS/config docs to include GPT 5.2 Codex and new model family behavior.

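The effort normalization described above can be sketched as follows. This is an illustrative sketch only: the `Effort` type and `normalizeCodexEffort` function are invented names, not the plugin's actual API.

```typescript
// Illustrative sketch of the effort normalization described above; the
// function and type names are invented here, not taken from the plugin.
type Effort = "none" | "minimal" | "low" | "medium" | "high" | "xhigh";

function normalizeCodexEffort(requested: Effort): Effort {
  // GPT 5.2 Codex rejects "none" and "minimal"; both collapse to "low".
  if (requested === "none" || requested === "minimal") return "low";
  // "low" through "xhigh" pass through unchanged.
  return requested;
}
```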
## [4.1.1] - 2025-12-17
## [3.2.0] - 2025-11-14
### Added

- GPT 5.1 model family support: normalization for `gpt-5.1`, `gpt-5.1-codex`, and `gpt-5.1-codex-mini` plus new GPT 5.1-only presets in the canonical `config/opencode-legacy.json`.
- Documentation updates (README, docs, AGENTS) describing the 5.1 families, their reasoning defaults, and how they map to ChatGPT slugs and token limits.

### Changed

- Model normalization docs and tests now explicitly cover both 5.0 and 5.1 Codex/general families and the two Codex Mini tiers.
- The legacy GPT 5.0 full configuration is now published separately; new installs should prefer the 5.1 presets in `config/opencode-legacy.json`.

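The preset naming convention used throughout these releases (a base model ID plus a reasoning-effort suffix) can be illustrated with a small parser. This is a sketch of the convention only; `splitPresetId` is a hypothetical helper, not the plugin's real normalization code.

```typescript
// Sketch of the legacy preset-ID convention (hypothetical helper, not the
// plugin's actual code): "gpt-5.1-codex-low" = base model + effort suffix.
const EFFORTS = ["minimal", "low", "medium", "high", "xhigh"] as const;

function splitPresetId(id: string): { model: string; effort: string | null } {
  for (const effort of EFFORTS) {
    if (id.endsWith(`-${effort}`)) {
      // Strip the "-<effort>" suffix to recover the base model ID.
      return { model: id.slice(0, -(effort.length + 1)), effort };
    }
  }
  return { model: id, effort: null }; // no suffix: the model's default applies
}
```

For example, `splitPresetId("gpt-5.1-codex-mini-high")` yields base model `gpt-5.1-codex-mini` with effort `high`.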
## [3.1.0] - 2025-11-11
### Added
## [3.0.0] - 2025-11-04
### Added

- Codex-style usage-limit messaging that mirrors the 5-hour and weekly windows reported by the Codex CLI.
- Documentation guidance noting that OpenCode's context auto-compaction and usage sidebar require the canonical `config/opencode-legacy.json`.

### Changed

- Prompt caching now relies solely on the host-supplied `prompt_cache_key`; conversation/session headers are forwarded only when OpenCode provides one.

> **Note**: If using a project-local config, replace the target path with `<project>/.opencode.json`.

---

#### ⚠️ REQUIRED: Use the Supported Configuration

**Pick the config file that matches your OpenCode version:**

All appear in the opencode model selector as "GPT 5.1 Codex Low (OAuth)", "GPT 5.1 High (OAuth)", etc.

> **⚠️ IMPORTANT:** Use the config file above. Minimal configs are NOT supported and may fail unpredictably.
### Prompt caching & usage limits
Codex backend caching is enabled automatically. When OpenCode supplies a `prompt_cache_key` (its session identifier), the plugin forwards it unchanged so Codex can reuse work between turns. The plugin no longer synthesizes its own cache IDs—if the host omits `prompt_cache_key`, Codex will treat the turn as uncached. The bundled CODEX_MODE bridge prompt is synchronized with the latest Codex CLI release, so opencode and Codex stay in lock-step on tool availability. When your ChatGPT subscription nears a limit, opencode surfaces the plugin's friendly error message with the 5-hour and weekly windows, mirroring the Codex CLI summary.
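The cache-key pass-through above can be sketched like this. The request shape and function name are assumptions for illustration, not the plugin's actual types.

```typescript
// Minimal sketch (invented names and shapes) of the cache-key pass-through
// described above: forward the host's key unchanged, never synthesize one.
interface HostRequest {
  prompt_cache_key?: string;
}

function buildCachingFields(host: HostRequest): { prompt_cache_key?: string } {
  if (host.prompt_cache_key !== undefined) {
    // OpenCode supplied a session identifier: forward it verbatim.
    return { prompt_cache_key: host.prompt_cache_key };
  }
  return {}; // no key from the host: Codex treats the turn as uncached
}
```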
## Usage

If using the supported configuration, select from the model picker in opencode, or specify via command line.

```bash
# Modern config (v1.0.210+): use --variant
opencode run "simple task" --model=openai/gpt-5.1-codex --variant=low
opencode run "complex task" --model=openai/gpt-5.1-codex --variant=high
opencode run "large refactor" --model=openai/gpt-5.1-codex-max --variant=high
opencode run "research-grade analysis" --model=openai/gpt-5.1-codex-max --variant=xhigh

# Legacy config: use model names
opencode run "quick question" --model=openai/gpt-5.1-low
opencode run "deep analysis" --model=openai/gpt-5.1-high
```

### Available Model Variants (Legacy Config)

When using [`config/opencode-legacy.json`](./config/opencode-legacy.json), you get these pre-configured variants:

For the modern config (`opencode-modern.json`), use the same variant names via `--variant` or `Ctrl+T` in the TUI (e.g., `--model=openai/gpt-5.2 --variant=high`).

| CLI Model ID | TUI Display Name | Reasoning Effort | Best For |
| --- | --- | --- | --- |

> **Note**: GPT 5.2, GPT 5.2 Codex, and Codex Max all support `xhigh` reasoning. Use explicit reasoning levels (e.g., `gpt-5.2-high`, `gpt-5.2-codex-xhigh`, `gpt-5.1-codex-max-xhigh`) for precise control.

> **⚠️ Important**: GPT 5 models can be temperamental - some variants may work better than others, some may give errors, and behavior may vary. Stick to the presets above configured in `opencode-legacy.json` or the variants in `opencode-modern.json` for best results.

All accessed via your ChatGPT Plus/Pro subscription.
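To make the legacy-versus-modern split concrete, here is a rough sketch of the two styles. The key names below are invented purely for illustration; consult the shipped `config/opencode-legacy.json` and `config/opencode-modern.json` for the actual schema.

```json
{
  "legacy_style_example": {
    "gpt-5.1-codex-low": { "note": "one preset entry per model + effort pair" },
    "gpt-5.1-codex-high": { "note": "each preset appears in the model picker" }
  },
  "modern_style_example": {
    "gpt-5.1-codex": { "note": "one entry; effort chosen at runtime via --variant" }
  }
}
```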

These defaults are tuned for Codex CLI-style usage and can be customized (see Configuration).

## Configuration

### ⚠️ REQUIRED: Use a Supported Config File

Choose the config file that matches your OpenCode version: