Update default model to gemini-3-pro-preview (fixes #51)
#54
base: main
Conversation
- Switch default pro model to `gemini-3-pro-preview`
- Update docs
- Update for new default model
- Updated the default model to `gemini-3-pro-preview` in the changelog
Summary of Changes (Gemini Code Assist): Hello @lukehutch, I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request focuses on upgrading the application's default AI model to `gemini-3-pro-preview`.
Code Review
This pull request successfully updates the default model to gemini-3-pro-preview across the codebase and documentation. The changes are largely correct and align with the PR's goal. However, I've identified a critical issue where the quota-exceeded fallback logic will likely fail due to some hardcoded constants not being updated. Additionally, I've noted a few inconsistencies in the documentation updates that should be addressed. Please see my detailed comments for specifics.
```diff
 // Models
 export const MODELS = {
-  PRO: "gemini-2.5-pro",
+  PRO: "gemini-3-pro-preview",
```
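For context, this constant is presumably consumed wherever the CLI invocation is built, so every caller that omits a model now silently picks up the new default. A hedged sketch (the `-m`/`-p` flags match the Gemini CLI; the helper name `buildGeminiArgs` is invented for illustration):

```typescript
const MODELS = {
  PRO: "gemini-3-pro-preview",
  FLASH: "gemini-2.5-flash",
} as const;

// Hypothetical helper: any call that omits `model` now gets the new default.
function buildGeminiArgs(prompt: string, model: string = MODELS.PRO): string[] {
  return ["-m", model, "-p", prompt];
}
```

Callers passing an explicit model are unaffected; only the default changes.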
While you've updated the default PRO model here, the related constants for handling quota errors in ERROR_MESSAGES and STATUS_MESSAGES are still hardcoded with "Gemini 2.5 Pro". This will break the automatic fallback to the Flash model when the gemini-3-pro-preview quota is exceeded, as the error message check in src/utils/geminiExecutor.ts will fail.
To fix this, you should make these constants more generic. For example:

```typescript
// in src/constants.ts
export const ERROR_MESSAGES = {
  QUOTA_EXCEEDED: "RESOURCE_EXHAUSTED",
  QUOTA_EXCEEDED_SHORT: "⚠️ Gemini Pro daily quota exceeded. Please retry with model: 'gemini-2.5-flash'",
  // ...
};

export const STATUS_MESSAGES = {
  QUOTA_SWITCHING: "🚫 Gemini Pro quota exceeded, switching to Flash model...",
  // ...
};
```

Using `"RESOURCE_EXHAUSTED"` for `QUOTA_EXCEEDED` would align with the check already present in `src/utils/commandExecutor.ts` and make the fallback logic robust.
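The fallback the review describes can be sketched as follows. This is a minimal illustration of the suggested approach, not the actual `src/utils/geminiExecutor.ts` code: the function names `isQuotaError` and `pickModel` are invented, and the key point is that matching on the raw API error string `"RESOURCE_EXHAUSTED"` keeps the check independent of which model name is the default.

```typescript
const MODELS = {
  PRO: "gemini-3-pro-preview",
  FLASH: "gemini-2.5-flash",
} as const;

const ERROR_MESSAGES = {
  // Matches the raw API error, so it survives future default-model renames.
  QUOTA_EXCEEDED: "RESOURCE_EXHAUSTED",
} as const;

// True when stderr indicates the daily quota was exhausted.
function isQuotaError(stderr: string): boolean {
  return stderr.includes(ERROR_MESSAGES.QUOTA_EXCEEDED);
}

// Fall back to Flash only when Pro was requested and the quota error fired.
function pickModel(stderr: string, requested: string): string {
  return isQuotaError(stderr) && requested === MODELS.PRO
    ? MODELS.FLASH
    : requested;
}
```

Had the check remained `stderr.includes("Gemini 2.5 Pro")`, it would never match errors produced under the new default, and the automatic Flash fallback would silently stop working.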
```diff
 **Problem**: Error shows response of 45,735 tokens even for small prompts

-**Root cause**: Model-specific bug in `gemini-2.5-pro` (default model)
+**Root cause**: Model-specific bug in `gemini-3-pro-preview` (default model)
```
You've correctly updated the model name in this line. However, other parts of this document still refer to gemini-2.5-pro in the context of the same token limit bug. For consistency, please update these other occurrences as well.
Specifically:

- Line 214: `❌ gemini-2.5-pro - Always returns 45k+ tokens` should probably refer to `gemini-3-pro-preview`.
- Line 226: The example command still uses `gemini-2.5-pro`.
- Line 295: The section heading `### Gemini-2.5-Pro Issues` should be updated.
Pull request overview
This PR updates the default Gemini model from gemini-2.5-pro to gemini-3-pro-preview in response to issue #51. The change updates the model constant and propagates this change through documentation and code comments.
Changes:
- Updated the default PRO model constant from `gemini-2.5-pro` to `gemini-3-pro-preview`
- Updated documentation across README, tool descriptions, and troubleshooting guides to reflect the new default model
- Updated a comment in the command executor to be more generic ("Gemini Pro" instead of "gemini-2.5-pro")
Reviewed changes
Copilot reviewed 8 out of 8 changed files in this pull request and generated 1 comment.
| File | Description |
|---|---|
| src/constants.ts | Changed the default PRO model constant to gemini-3-pro-preview |
| src/tools/ask-gemini.tool.ts | Updated model parameter description to reflect new default |
| src/tools/brainstorm.tool.ts | Updated model parameter description to reflect new default |
| src/utils/commandExecutor.ts | Made comment more generic (removed specific version reference) |
| docs/resources/troubleshooting.md | Updated troubleshooting documentation to reference new default model |
| docs/index.md | Updated feature description to mention new model name |
| README.md | Updated tool documentation to reflect new default model |
| CHANGELOG.md | Added changelog entry for the model update |
```diff
 **Problem**: Error shows response of 45,735 tokens even for small prompts

-**Root cause**: Model-specific bug in `gemini-2.5-pro` (default model)
+**Root cause**: Model-specific bug in `gemini-3-pro-preview` (default model)
```
Copilot (AI), Jan 24, 2026
The troubleshooting documentation states that the bug is now in `gemini-3-pro-preview` (line 210), but line 214 still lists `gemini-2.5-pro` as a model with issues. This is inconsistent: if `gemini-2.5-pro` was the problematic model and has been replaced by `gemini-3-pro-preview`, then line 214 should be updated to reflect the new model name. Additionally, line 226 shows an example using the old model name.
- Make `ERROR_MESSAGES` and `STATUS_MESSAGES` generic by using `"RESOURCE_EXHAUSTED"` and removing version-specific model names
- Update troubleshooting.md to reference `gemini-3-pro-preview` consistently (lines 214, 226, 295)
- Update models.md heading to Gemini-3-Pro-Preview
- Update CHANGELOG.md to use generic "Gemini Pro" reference

Fixes review comments from PR jamubc#54 regarding inconsistent documentation and hardcoded constants that would break automatic fallback to Flash.
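After a rename like this, a recursive grep is a quick way to confirm no stale references to the old model name remain. A self-contained sketch using a toy file tree (the file names mirror the PR's file list; the contents are invented for illustration):

```shell
# Toy repository tree with one stale reference (contents are illustrative only)
dir=$(mktemp -d)
printf 'PRO: "gemini-3-pro-preview"\n' > "$dir/constants.ts"
printf 'Model-specific bug in gemini-2.5-pro\n' > "$dir/troubleshooting.md"

# List files still mentioning the old default model
grep -rl "gemini-2.5-pro" "$dir"
```

Note that the pattern does not match `gemini-2.5-flash`, so intentional references to the Flash fallback model (and historical CHANGELOG entries, if kept) would need to be excluded or reviewed by hand.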