Add qwen3-coder-next to expected models #1892
Conversation
Add model litellm_proxy/together_ai/Qwen/Qwen3-Coder-Next-FP8 with id qwen3-coder-next to the MODELS dictionary in resolve_model_config.py and add it to the EXPECTED_MODELS list in the test file. Fixes #1891
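The change described above can be sketched as follows. This is a hypothetical, minimal reconstruction of the new `MODELS` entry in `resolve_model_config.py`, assuming the `id` / `display_name` / `llm_config` structure visible in the diff snippets below; the real dictionary contains many more entries.

```python
# Hypothetical sketch of the MODELS dictionary in resolve_model_config.py,
# reduced to the entry this PR adds (field names taken from the diff).
MODELS = {
    "qwen3-coder-next": {
        "id": "qwen3-coder-next",
        "display_name": "Qwen3 Coder Next",
        "llm_config": {
            "model": "litellm_proxy/together_ai/Qwen/Qwen3-Coder-Next-FP8"
        },
    },
}
```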
all-hands-bot
left a comment
Code Review
Taste Rating: 🟡 Acceptable - Works correctly, minor naming inconsistency noted
The PR correctly adds the new model configuration and includes proper test coverage. The structure follows existing patterns perfectly. One minor naming inconsistency to consider.
Verdict: ✅ Worth merging - Core change is solid, minor improvement suggested below
        "llm_config": {"model": "litellm_proxy/openrouter/z-ai/glm-4.7"},
    },
    "qwen3-coder-next": {
        "id": "qwen3-coder-next",
🟡 Suggestion: Naming inconsistency detected.
This PR uses "qwen3-coder-next" (no hyphen between "qwen" and "3"), but line 102 has "qwen-3-coder" (with hyphen). This creates inconsistency:
- Existing: qwen-3-coder
- New: qwen3-coder-next
Options:
- Keep as-is if "Qwen3" is the official model name from Together.ai (which appears to be the case from the model path)
- Change to qwen-3-coder-next for consistency with the existing entry
The model path shows Qwen3-Coder-Next-FP8, suggesting "Qwen3" is indeed the official name. However, users might find it confusing when browsing the model list. Consider whether consistency with the existing entry or consistency with the provider's naming is more important.
    },
    "qwen3-coder-next": {
        "id": "qwen3-coder-next",
        "display_name": "Qwen3 Coder Next",
🟢 Nit: Same naming inconsistency in display name.
Compare:
- Line 104: "Qwen 3 Coder" (space between Qwen and 3)
- Line 124: "Qwen3 Coder Next" (no space)
Consider "Qwen 3 Coder Next" for consistency, unless "Qwen3" is the official branding.
@@ -135,6 +135,7 @@ def test_find_models_by_id_preserves_full_config():
    "deepseek-v3.2-reasoner",
    "qwen-3-coder",
    "glm-4.7",
✅ Good: Test coverage included.
The model ID was correctly added to EXPECTED_MODELS, ensuring the new model config will be validated by existing tests (test_all_expected_models_present, test_expected_models_have_required_fields, etc.).
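A hedged sketch of how such validation typically works: the test names below mirror those mentioned in this review (`test_all_expected_models_present`, `test_expected_models_have_required_fields`), but their bodies and the trimmed-down `MODELS` / `EXPECTED_MODELS` data are assumptions, not the repository's actual test code.

```python
# Assumed minimal shape of the config and its expected-models test fixture;
# only two entries shown, the real lists are longer.
MODELS = {
    "qwen-3-coder": {"id": "qwen-3-coder", "display_name": "Qwen 3 Coder"},
    "qwen3-coder-next": {"id": "qwen3-coder-next", "display_name": "Qwen3 Coder Next"},
}

EXPECTED_MODELS = [
    "qwen-3-coder",
    "qwen3-coder-next",  # entry added by this PR
]

def test_all_expected_models_present():
    # Every expected ID must resolve to a configured model.
    for model_id in EXPECTED_MODELS:
        assert model_id in MODELS

def test_expected_models_have_required_fields():
    # Each config must carry a matching "id" and a "display_name".
    for model_id in EXPECTED_MODELS:
        config = MODELS[model_id]
        assert config["id"] == model_id
        assert "display_name" in config
```

Because the tests iterate over EXPECTED_MODELS, appending the new ID is enough to pull the new entry into coverage without writing a dedicated test.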
Summary
Add model litellm_proxy/together_ai/Qwen/Qwen3-Coder-Next-FP8 with id qwen3-coder-next to the expected models in resolve_model_config.py. Fixes #1891
Checklist
Agent Server images for this PR
• GHCR package: https://github.com/OpenHands/agent-sdk/pkgs/container/agent-server
Variants & Base Images
- eclipse-temurin:17-jdk
- nikolaik/python-nodejs:python3.13-nodejs22
- golang:1.21-bookworm
Pull (multi-arch manifest)
# Each variant is a multi-arch manifest supporting both amd64 and arm64
docker pull ghcr.io/openhands/agent-server:c0698be-python
Run
All tags pushed for this build
About Multi-Architecture Support
- The variant tag (c0698be-python) is a multi-arch manifest supporting both amd64 and arm64
- Architecture-specific tags (c0698be-python-amd64) are also available if needed