feat: upgrade MiniMax default model to M2.7#1291

Merged
CaralHsi merged 9 commits into MemTensor:dev-20260323-v2.0.11 from octo-patch:feature/upgrade-minimax-m27
Mar 26, 2026

Conversation

@octo-patch
Contributor

Summary

  • Upgrade MiniMax default model from M2.5 to M2.7 (latest flagship with enhanced reasoning and coding)
  • Add MiniMax-M2.7-highspeed as the fast variant for low-latency scenarios
  • Keep all previous models (M2.5, M2.5-highspeed) as available alternatives

Changes

  • Update default model in API config (minimax_config()) to MiniMax-M2.7
  • Update example code (Scenario 7) to use M2.7 as default
  • Update unit tests to reference M2.7 and M2.7-highspeed models
  • All 10 related tests passing

Why

MiniMax-M2.7 is the latest flagship model with enhanced reasoning and coding capabilities, superseding M2.5 as the recommended default.

Testing

  • All unit tests updated and passing (10/10)
  • Backward compatible: users can still specify M2.5 models via config
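To make the backward-compatibility claim concrete, here is a hedged sketch of pinning the previous model versus taking the new default. The exact config schema is an assumption for illustration; only the backend name (`minimax`) and the model identifiers (`MiniMax-M2.5`, `MiniMax-M2.7`) come from this PR.

```python
# Hypothetical config shapes -- the real schema in the repository may differ.

# Users who prefer the previous default can pin it explicitly:
pinned_config = {
    "backend": "minimax",             # provider key registered in LLMFactory (per this PR)
    "config": {
        "model": "MiniMax-M2.5",      # explicit override of the new default
        "api_base": "https://api.minimax.io/v1",
    },
}

# Omitting the model would fall through to the new default after this PR:
default_config = {
    "backend": "minimax",
    "config": {
        "model": "MiniMax-M2.7",      # new default introduced by this PR
    },
}
```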

octo-patch and others added 3 commits March 16, 2026 08:48
Add MiniMax LLM support via the OpenAI-compatible API, following the
same pattern as the existing Qwen and DeepSeek providers.

Changes:
- Add MinimaxLLMConfig with api_key, api_base, extra_body fields
- Add MinimaxLLM class inheriting from OpenAILLM
- Register minimax backend in LLMFactory and LLMConfigFactory
- Add minimax_config() to APIConfig with env var support
  (MINIMAX_API_KEY, MINIMAX_API_BASE)
- Add minimax to backend_model dicts in product/user config
- Add MiniMax example scenario in examples/basic_modules/llm.py
- Add unit tests for config and LLM (generate, stream, think prefix)
- Update .env.example and README with MiniMax provider info

MiniMax API: https://api.minimax.io/v1 (OpenAI-compatible)
Models: MiniMax-M2.5, MiniMax-M2.5-highspeed (204K context)
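The config-and-env-var pattern this commit describes can be sketched as follows. The names `MinimaxLLMConfig` and `minimax_config()`, the fields `api_key`/`api_base`/`extra_body`, the env vars, and the default endpoint are all taken from the commit message; the exact signatures and defaults in the repository may differ.

```python
import os
from dataclasses import dataclass, field


@dataclass
class MinimaxLLMConfig:
    # Fields named in the commit message; defaults are illustrative assumptions.
    api_key: str = ""
    api_base: str = "https://api.minimax.io/v1"   # OpenAI-compatible endpoint
    model: str = "MiniMax-M2.5"                   # default prior to the M2.7 bump
    extra_body: dict = field(default_factory=dict)


def minimax_config() -> MinimaxLLMConfig:
    """Build a provider config from env vars, mirroring the
    MINIMAX_API_KEY / MINIMAX_API_BASE support described above."""
    return MinimaxLLMConfig(
        api_key=os.environ.get("MINIMAX_API_KEY", ""),
        api_base=os.environ.get("MINIMAX_API_BASE", "https://api.minimax.io/v1"),
    )
```

Reading env vars with explicit fallbacks keeps the provider usable out of the box while letting deployments redirect to a proxy or regional endpoint, which is the same pattern the existing Qwen and DeepSeek providers follow per the commit message.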

- Update default model from MiniMax-M2.5 to MiniMax-M2.7 in API config
- Update example code to use MiniMax-M2.7 as default with M2.7-highspeed listed
- Update unit tests to reference M2.7 and M2.7-highspeed models
- Keep all previous models (M2.5, M2.5-highspeed) as available alternatives
@CaralHsi CaralHsi changed the base branch from main to dev-20260323-v2.0.11 March 25, 2026 13:18
Collaborator

@CaralHsi CaralHsi left a comment


Thanks for adding MiniMax as a first-class LLM provider! The implementation is clean — inheriting from OpenAILLM is the right call, and the config/factory/test coverage all follow existing patterns nicely.

I made a small fix to keep the backup_* fields in OpenAILLMConfig — they were unintentionally removed during the dev branch merge, which broke OpenAILLM.__init__() for MinimaxLLM and the related tests. All 3 failing tests pass now.

Great contribution!
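The backup_* fix illustrates a general hazard worth noting: when a subclass config inherits fields from a parent, deleting parent fields breaks subclass construction even though the subclass never names them. A generic sketch (all names hypothetical, not the repository's actual classes):

```python
from dataclasses import dataclass


@dataclass
class BaseLLMConfig:
    api_key: str = ""
    backup_api_key: str = ""   # if this inherited field were deleted upstream,
                               # any caller passing backup_api_key= to a subclass
                               # would raise TypeError at construction time


@dataclass
class ProviderLLMConfig(BaseLLMConfig):
    api_base: str = "https://api.example.com/v1"


# Subclass construction silently depends on the parent's fields being present:
cfg = ProviderLLMConfig(api_key="k", backup_api_key="k2")
```

This is why the failure surfaced in MinimaxLLM's tests rather than in OpenAILLMConfig itself: the dependency only shows up at the point where a subclass is instantiated with the inherited field.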

@CaralHsi CaralHsi merged commit 83ea72e into MemTensor:dev-20260323-v2.0.11 Mar 26, 2026
16 checks passed
