Conversation

@Kylie-dot-s
Contributor

  • Remove conditional import of ModelRes (now deleted)
  • Use EvalDetail as the only return type
  • Update LLMKeywordMatcher to return EvalDetail
  • Update LLMResumeOptimizer to return EvalDetail
  • Remove USE_EVAL_DETAIL flag and related branching logic
  • Simplify code by removing backward compatibility code

This fixes import errors after ModelRes was removed from the codebase.
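
For orientation, here is a minimal sketch of the post-migration return path, assuming the EvalDetail fields that appear in the review comments below (metric, score, status, label, reason); the class, threshold value, and label strings are illustrative, not the exact dingo definitions:

    from typing import List, Optional
    from pydantic import BaseModel


    # Illustrative stand-in mirroring the EvalDetail fields referenced in this PR;
    # the real model lives in dingo/io/output/eval_detail.py.
    class EvalDetail(BaseModel):
        metric: str
        status: bool = False            # True when the input is flagged as bad quality
        label: List[str] = []
        reason: List[str] = []
        score: Optional[float] = None


    def eval_keyword_match(score: float, reason: str, threshold: float = 0.6) -> EvalDetail:
        """After this PR, EvalDetail is the only return type (no ModelRes branch)."""
        status = score < threshold
        label = ["QUALITY_BAD.LLMKeywordMatcher"] if status else ["QUALITY_GOOD"]
        return EvalDetail(
            metric="LLMKeywordMatcher",
            score=score,
            status=status,
            label=label,
            reason=[reason],
        )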

@gemini-code-assist
Contributor

Summary of Changes

Hello @Kylie-dot-s, I'm Gemini Code Assist [1]! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request focuses on streamlining the resume processing modules by fully migrating them to use the EvalDetail return type. It systematically removes all references to the deprecated ModelRes class, along with the associated conditional logic and feature flags. This results in a cleaner, more consistent, and stable codebase, resolving critical import errors and enhancing maintainability.

Highlights

  • Deprecation of ModelRes: The ModelRes class, which was previously used as a return type in resume modules, has been completely removed from the codebase.
  • Standardization to EvalDetail: All resume-related modules, specifically LLMKeywordMatcher and LLMResumeOptimizer, now exclusively use EvalDetail as their return type, ensuring consistency.
  • Code Simplification: The conditional import logic for EvalDetail and ModelRes, along with the USE_EVAL_DETAIL flag and its associated branching, has been removed, simplifying the codebase and eliminating backward-compatibility overhead.
  • Import Error Resolution: This change directly addresses and fixes import errors that arose after the prior removal of the ModelRes class.
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

Feature              | Command             | Description
---------------------|---------------------|-------------------------------------------------------------------------------------------------
Code Review          | /gemini review      | Performs a code review for the current pull request in its current state.
Pull Request Summary | /gemini summary     | Provides a summary of the current pull request in its current state.
Comment              | @gemini-code-assist | Responds in comments when explicitly tagged, both in pull request comments and review comments.
Help                 | /gemini help        | Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder at the root of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request does a great job of removing the deprecated ModelRes references and standardizing on EvalDetail as the return type. The changes simplify the code by removing backward compatibility logic, which is a solid improvement. My review includes a few suggestions to further enhance maintainability by reducing code duplication and using Pydantic models more idiomatically. I've also identified a potential high-severity issue where data assigned to an extra field on an EvalDetail instance might be silently lost, which could cause issues downstream.

I am having trouble creating individual review comments, so my feedback is consolidated below.

dingo/model/llm/llm_resume_optimizer.py (207-215)

Severity: high

There's a potential issue here. The EvalDetail model (as defined in dingo/io/output/eval_detail.py) does not have an optimized_content field. Since Pydantic's BaseModel by default ignores extra fields, the data assigned to result.optimized_content will be lost. This could lead to silent failures downstream where this data is expected.

To fix this, you should consider adding optimized_content: Optional[Any] = None to the EvalDetail model definition.

Additionally, for better readability, you can initialize the EvalDetail object using its constructor.

        # Return EvalDetail result
        return EvalDetail(
            metric=cls.__name__,
            status=False,
            label=[QualityLabel.QUALITY_GOOD],
            reason=[reason],
            # Store full response for downstream use (using extra field)
            optimized_content=response_json,
        )
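
For context on why the declared field matters, here is a small standalone sketch (assuming Pydantic v2 defaults, where extra fields are ignored rather than stored); the model names are illustrative, not dingo's actual classes:

    from typing import Any, Optional
    from pydantic import BaseModel


    class WithoutField(BaseModel):
        metric: str  # undeclared keyword args are silently dropped under the default extra='ignore'


    class WithField(BaseModel):
        metric: str
        optimized_content: Optional[Any] = None  # declared, so the data is preserved


    lost = WithoutField(metric="LLMResumeOptimizer", optimized_content={"summary": "..."})
    kept = WithField(metric="LLMResumeOptimizer", optimized_content={"summary": "..."})

    print(hasattr(lost, "optimized_content"))  # False: the value never made it onto the model
    print(kept.optimized_content)              # {'summary': '...'}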

dingo/model/llm/llm_keyword_matcher.py (210-221)

Severity: medium

For better readability and to follow a more idiomatic Pydantic usage, you can initialize the EvalDetail object directly with its attributes using the constructor, rather than setting them one by one. This also allows for simplifying the conditional logic.

        # Return EvalDetail result
        status = score < cls.threshold
        label = [f"QUALITY_BAD.{cls.__name__}"] if status else [QualityLabel.QUALITY_GOOD]
        return EvalDetail(
            metric=cls.__name__,
            score=score,
            reason=[reason],
            status=status,
            label=label,
        )

dingo/model/llm/llm_keyword_matcher.py (331-348)

Severity: medium

The validation logic for input_data.content and input_data.prompt is very similar and creates some code duplication. You can refactor this method to be more DRY (Don't Repeat Yourself) by combining the checks and creating the EvalDetail object only once for any validation failure.

        # Validate that content (resume) and prompt (JD) are provided
        if not input_data.content:
            reason = "Resume text (content) is required but was not provided"
        elif not input_data.prompt:
            reason = "Job description (prompt) is required but was not provided"
        else:
            # Call parent eval method
            return super().eval(input_data)

        return EvalDetail(
            metric=cls.__name__,
            status=True,
            label=[f"QUALITY_BAD.{cls.__name__}"],
            reason=[reason],
        )

dingo/model/llm/llm_resume_optimizer.py (277-282)

Severity: medium

To make the code more concise and idiomatic, you can initialize the EvalDetail object directly using its constructor instead of creating an instance and then setting its attributes.

        if not input_data.content:
            return EvalDetail(
                metric=cls.__name__,
                status=True,
                label=[f"QUALITY_BAD.{cls.__name__}"],
                reason=["Resume text (content) is required but was not provided"],
            )

@shijinpjlab merged commit eb02338 into MigoXLab:main on Dec 31, 2025