
Conversation

@G26karthik

Summary

Fixes #3111
Fixes #3162

Enhanced the FinalResponseMatchV2Evaluator LLM-as-judge prompt to explicitly support non-English languages, addressing evaluation failures for Thai, Chinese, and other non-Latin scripts.

Problem

The evaluator was returning score=0 for identical strings in non-English languages (Thai, Chinese, Japanese, Korean, Arabic, etc.), even when the agent response and expected response were byte-for-byte identical. This occurred because the LLM judge was not explicitly instructed to handle Unicode characters and language-specific conventions.

Solution

Enhanced the evaluation prompt with:

  • Explicit support for ALL languages including Chinese, Thai, Japanese, Korean, Arabic, Hebrew, Hindi, and other non-Latin scripts
  • Instructions to treat identical strings in ANY language as valid matches
  • Recognition of language-specific punctuation variations (e.g., 。vs. . in Chinese/Japanese, ؟ in Arabic)
  • Guidance on Unicode and character encoding awareness
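
The exact nine lines added to the prompt are not reproduced in this description. Purely as an illustration, the guidance bullets above could be captured in a string appended to the judge prompt; the constant name and wording below are hypothetical, not the actual diff:

```python
# Hypothetical sketch of the kind of i18n guidance added to the judge prompt.
# The actual lines added to _FINAL_RESPONSE_MATCH_V2_PROMPT are not shown in
# this PR description; this only illustrates the categories listed above.
I18N_GUIDANCE = """
Language handling:
- Evaluate responses in ALL languages, including Chinese, Thai, Japanese,
  Korean, Arabic, Hebrew, Hindi, and other non-Latin scripts.
- If the agent response and the expected response are identical strings in
  ANY language, treat them as a valid match.
- Accept language-specific punctuation variants (e.g. the ideographic full
  stop \u3002 vs. ".", or the Arabic question mark \u061f vs. "?").
- Differences that stem only from Unicode encoding or normalization should
  not lower the score.
"""


def build_judge_prompt(base_prompt: str) -> str:
  """Append the i18n guidance to a base judge prompt (illustrative only)."""
  return base_prompt + I18N_GUIDANCE
```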

Changes

  • Modified src/google/adk/evaluation/final_response_match_v2.py
  • Added 9 lines to the _FINAL_RESPONSE_MATCH_V2_PROMPT template with i18n guidance
  • No changes to code logic or evaluation algorithm

Testing Plan

This fix is a prompt-only change: it adds explicit i18n instructions to the LLM-as-judge prompt so the evaluator handles non-English text correctly.

Manual Testing:
The fix can be verified by reproducing the original issues (#3111 and #3162) and confirming that identical non-English responses are now scored as matches.

Unit Tests:
Existing test suite in tests/unittests/evaluation/test_final_response_match_v2.py verifies the evaluator's core functionality. The prompt enhancement preserves existing English evaluation behavior while adding i18n support.
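
Independent of the existing suite, the reported failure mode suggests one invariant worth checking directly: byte-for-byte identical strings must always be treated as a match, in any script. A minimal sketch with hypothetical cases and plain asserts (NFC normalization is an assumption here; this is not the actual test file):

```python
import unicodedata

# Hypothetical sanity cases: byte-for-byte identical (agent, expected) pairs
# in several scripts -- the exact failure mode reported in #3111 and #3162.
IDENTICAL_PAIRS = [
    ("สวัสดีครับ", "สวัสดีครับ"),        # Thai
    ("你好,世界。", "你好,世界。"),      # Chinese
    ("こんにちは。", "こんにちは。"),      # Japanese
    ("안녕하세요.", "안녕하세요."),        # Korean
    ("مرحبا بالعالم", "مرحبا بالعالم"),   # Arabic
]


def must_match(agent: str, expected: str) -> bool:
  """Invariant the judge must satisfy: strings that are identical (after
  Unicode NFC normalization) are always a valid match, in any language."""
  return (unicodedata.normalize("NFC", agent)
          == unicodedata.normalize("NFC", expected))


for agent, expected in IDENTICAL_PAIRS:
  assert must_match(agent, expected)
```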

Impact

Enhanced the LLM-as-judge prompt to explicitly handle non-English languages including Chinese, Thai, Japanese, Korean, Arabic, Hebrew, Hindi, and other non-Latin scripts.

The evaluator now:
- Recognizes identical strings in any language as valid matches
- Handles Unicode and character encoding differences
- Accepts language-specific punctuation variations (e.g., 。 vs . in Chinese)
- Treats all languages with equal evaluation standards
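
The punctuation point can be made concrete with a small equivalence table. Note that the real evaluator relies on the prompt-instructed LLM judge rather than code like this; the table below is only an illustrative sketch of the intended equivalences:

```python
# Hypothetical equivalence table illustrating the punctuation variants the
# enhanced prompt asks the judge to accept. The evaluator itself performs no
# such folding in code; this only demonstrates the intended behavior.
PUNCT_EQUIV = {
    "。": ".",  # ideographic full stop (Chinese/Japanese)
    "、": ",",  # ideographic comma
    "؟": "?",  # Arabic question mark
    "،": ",",  # Arabic comma
    "!": "!",  # fullwidth exclamation mark
}


def fold_punctuation(text: str) -> str:
  """Map language-specific punctuation to its ASCII counterpart."""
  return "".join(PUNCT_EQUIV.get(ch, ch) for ch in text)


# "你好。" and "你好." differ only in the full stop, so they fold equal.
assert fold_punctuation("你好。") == fold_punctuation("你好.")
```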

Fixes google#3111
Fixes google#3162
@gemini-code-assist
Contributor

Summary of Changes

Hello @G26karthik, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request improves the FinalResponseMatchV2Evaluator by updating its underlying LLM prompt to explicitly handle and correctly evaluate responses in non-English languages. This change resolves issues where the evaluator failed to recognize identical strings in various international scripts, ensuring consistent and accurate evaluation across languages without altering any core functionality.

Highlights

  • Internationalization Support: The FinalResponseMatchV2Evaluator now explicitly supports all languages, including Chinese, Thai, Japanese, Korean, Arabic, Hebrew, Hindi, and other non-Latin scripts, by enhancing its LLM-as-judge prompt.
  • Evaluation Accuracy for Non-English Text: The evaluator will now correctly score identical strings in any language as valid matches, addressing previous failures where non-English text (e.g., Thai, Chinese) received a score=0 even when byte-for-byte identical.
  • Language-Specific Nuances: The prompt includes instructions to recognize language-specific punctuation variations (e.g., Chinese/Japanese '。' vs. English '.', Arabic '؟') and guidance on Unicode and character encoding awareness.
  • Minimal Code Changes: The enhancement is purely a prompt modification, adding 9 lines to the _FINAL_RESPONSE_MATCH_V2_PROMPT template, with no changes to core code logic or the evaluation algorithm.

@adk-bot added the eval ([Component] This issue is related to evaluation) label on Nov 12, 2025
Contributor

@gemini-code-assist (bot) left a comment

Code Review

This pull request is a great enhancement to the FinalResponseMatchV2Evaluator prompt, adding explicit support for non-English languages. The new instructions are comprehensive and should effectively address the reported issues with evaluating strings in languages like Thai and Chinese. I have one minor suggestion to further improve the clarity of the prompt for the LLM.

@ryanaiagent
Collaborator

Hi @G26karthik, thank you for your contribution! We appreciate you taking the time to submit this pull request.
Your PR has been received by the team and is currently under review. We will provide feedback as soon as we have an update to share.

@G26karthik
Author

> Hi @G26karthik, thank you for your contribution! We appreciate you taking the time to submit this pull request. Your PR has been received by the team and is currently under review. We will provide feedback as soon as we have an update to share.

Thanks for the update!
Let me know if there's anything I should adjust or clarify during the review; happy to iterate quickly.

@ryanaiagent
Collaborator

Hi @G26karthik, your PR has been received by the team and is currently under review. We will provide feedback as soon as we have an update to share.

@ryanaiagent added the needs-review ([Status] The PR is awaiting review from the maintainer) label on Dec 3, 2025
@ryanaiagent
Collaborator

ryanaiagent commented Dec 3, 2025

Hi @ankursharmas, can you please review this? LGTM.

@G26karthik
Author

@ankursharmas
Please review.


Labels

  • eval: [Component] This issue is related to evaluation
  • needs-review: [Status] The PR is awaiting review from the maintainer


Development

Successfully merging this pull request may close these issues.

  • Eval fails for non-English languages
  • FinalResponseMatchV2Evaluator returns a score of 0 for identical Chinese strings

3 participants