Conversation

@lukehutch

Description

Brief description of changes

Type of Change

  • Bug fix
  • New feature
  • Breaking change
  • Documentation update

Testing

  • I have tested these changes locally
  • My code follows the project's style
  • I have updated the documentation if needed

Copilot AI review requested due to automatic review settings January 24, 2026 09:46
@gemini-code-assist

Summary of Changes

Hello @lukehutch, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request focuses on upgrading the application's default AI model to gemini-3-pro-preview. This change ensures that the system utilizes the most current preview version of Gemini, impacting how tools interact with the model and how it's referenced in documentation and internal configurations. The update aims to keep the application aligned with the latest advancements in Google's Gemini models.

Highlights

  • Default Model Update: The default Gemini model used throughout the application has been updated from gemini-2.5-pro to gemini-3-pro-preview to leverage the latest preview capabilities.
  • Documentation Alignment: All relevant documentation, including the README.md, docs/index.md, and docs/resources/troubleshooting.md, has been updated to reflect the new default model and its implications.
  • Codebase Consistency: Internal constants and tool descriptions have been adjusted to reference gemini-3-pro-preview as the primary 'PRO' model, ensuring consistency across the codebase.
  • Minor Formatting Fixes: Several files had missing newlines at the end, which have been added for better code hygiene and consistency.


Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

| Feature | Command | Description |
| --- | --- | --- |
| Code Review | `/gemini review` | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | `/gemini summary` | Provides a summary of the current pull request in its current state. |
| Comment | `@gemini-code-assist` | Responds in comments when explicitly tagged, both in pull request comments and review comments. |
| Help | `/gemini help` | Displays a list of available commands. |

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double-check it and use code with caution.


@gemini-code-assist bot left a comment


Code Review

This pull request successfully updates the default model to gemini-3-pro-preview across the codebase and documentation. The changes are largely correct and align with the PR's goal. However, I've identified a critical issue where the quota-exceeded fallback logic will likely fail due to some hardcoded constants not being updated. Additionally, I've noted a few inconsistencies in the documentation updates that should be addressed. Please see my detailed comments for specifics.

```diff
 // Models
 export const MODELS = {
-  PRO: "gemini-2.5-pro",
+  PRO: "gemini-3-pro-preview",
```


**critical**

While you've updated the default PRO model here, the related constants for handling quota errors in ERROR_MESSAGES and STATUS_MESSAGES are still hardcoded with "Gemini 2.5 Pro". This will break the automatic fallback to the Flash model when the gemini-3-pro-preview quota is exceeded, as the error message check in src/utils/geminiExecutor.ts will fail.

To fix this, you should make these constants more generic. For example:

```ts
// in src/constants.ts
export const ERROR_MESSAGES = {
  QUOTA_EXCEEDED: "RESOURCE_EXHAUSTED",
  QUOTA_EXCEEDED_SHORT:
    "⚠️ Gemini Pro daily quota exceeded. Please retry with model: 'gemini-2.5-flash'",
  // ...
};

export const STATUS_MESSAGES = {
  QUOTA_SWITCHING: "🚫 Gemini Pro quota exceeded, switching to Flash model...",
  // ...
};
```

Using "RESOURCE_EXHAUSTED" for QUOTA_EXCEEDED would align with the check already present in src/utils/commandExecutor.ts and make the fallback logic robust.
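To make the point concrete, here is a minimal, self-contained sketch of why matching on the provider's error code (`RESOURCE_EXHAUSTED`) survives a default-model rename, while matching on a hardcoded model name does not. The helper names `shouldFallbackToFlash` and `brokenCheck` are hypothetical illustrations, not code from this repository:

```typescript
// Hypothetical sketch (not repository code): a generic quota check vs.
// a model-name-specific one.
const ERROR_MESSAGES = {
  // Generic provider error code, as suggested in the review above.
  QUOTA_EXCEEDED: "RESOURCE_EXHAUSTED",
};

// Matches the provider's error code, so it works for any Pro model name.
function shouldFallbackToFlash(stderr: string): boolean {
  return stderr.includes(ERROR_MESSAGES.QUOTA_EXCEEDED);
}

// Hardcoded model-name check: silently stops matching once the default
// model changes from gemini-2.5-pro to gemini-3-pro-preview.
function brokenCheck(stderr: string): boolean {
  return stderr.includes("Gemini 2.5 Pro daily quota exceeded");
}

const err =
  "Error: RESOURCE_EXHAUSTED: quota exceeded for gemini-3-pro-preview";
console.log(shouldFallbackToFlash(err)); // true  -> fallback triggers
console.log(brokenCheck(err)); // false -> fallback silently skipped
```

The design point is that error-detection strings should key on stable provider error codes rather than human-readable messages that embed a model version.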

```diff
 **Problem**: Error shows response of 45,735 tokens even for small prompts

-**Root cause**: Model-specific bug in `gemini-2.5-pro` (default model)
+**Root cause**: Model-specific bug in `gemini-3-pro-preview` (default model)
```


**medium**

You've correctly updated the model name in this line. However, other parts of this document still refer to gemini-2.5-pro in the context of the same token limit bug. For consistency, please update these other occurrences as well.

Specifically:

  • Line 214: ❌ gemini-2.5-pro - Always returns 45k+ tokens should probably refer to gemini-3-pro-preview.
  • Line 226: The example command still uses gemini-2.5-pro.
  • Line 295: The section heading ### Gemini-2.5-Pro Issues should be updated.


Copilot AI left a comment


Pull request overview

This PR updates the default Gemini model from gemini-2.5-pro to gemini-3-pro-preview in response to issue #51. The change updates the model constant and propagates this change through documentation and code comments.

Changes:

  • Updated the default PRO model constant from gemini-2.5-pro to gemini-3-pro-preview
  • Updated documentation across README, tool descriptions, and troubleshooting guides to reflect the new default model
  • Updated a comment in the command executor to be more generic ("Gemini Pro" instead of "gemini-2.5-pro")

Reviewed changes

Copilot reviewed 8 out of 8 changed files in this pull request and generated 1 comment.

| File | Description |
| --- | --- |
| src/constants.ts | Changed the default PRO model constant to gemini-3-pro-preview |
| src/tools/ask-gemini.tool.ts | Updated model parameter description to reflect new default |
| src/tools/brainstorm.tool.ts | Updated model parameter description to reflect new default |
| src/utils/commandExecutor.ts | Made comment more generic (removed specific version reference) |
| docs/resources/troubleshooting.md | Updated troubleshooting documentation to reference new default model |
| docs/index.md | Updated feature description to mention new model name |
| README.md | Updated tool documentation to reflect new default model |
| CHANGELOG.md | Added changelog entry for the model update |


```diff
 **Problem**: Error shows response of 45,735 tokens even for small prompts

-**Root cause**: Model-specific bug in `gemini-2.5-pro` (default model)
+**Root cause**: Model-specific bug in `gemini-3-pro-preview` (default model)
```

Copilot AI Jan 24, 2026


The troubleshooting documentation states that the bug is now in "gemini-3-pro-preview" (line 210), but line 214 still lists "gemini-2.5-pro" as a model with issues. This is inconsistent: if gemini-2.5-pro was the problematic model and has been replaced by gemini-3-pro-preview, then line 214 should be updated to reflect the new model name. Additionally, line 226 shows an example using the old model name.

Copilot uses AI. Check for mistakes.
- Make ERROR_MESSAGES and STATUS_MESSAGES generic by using
  "RESOURCE_EXHAUSTED" and removing version-specific model names
- Update troubleshooting.md to reference gemini-3-pro-preview
  consistently (lines 214, 226, 295)
- Update models.md heading to Gemini-3-Pro-Preview
- Update CHANGELOG.md to use generic "Gemini Pro" reference

Fixes review comments from PR jamubc#54 regarding inconsistent documentation
and hardcoded constants that would break automatic fallback to Flash.
