fix: Remove ModelRes references from resume modules #328
Conversation
- Remove conditional import of ModelRes (now deleted)
- Use EvalDetail as the only return type
- Update LLMKeywordMatcher to return EvalDetail
- Update LLMResumeOptimizer to return EvalDetail
- Remove USE_EVAL_DETAIL flag and related branching logic
- Simplify code by removing backward compatibility code

This fixes import errors after ModelRes was removed from the codebase.
Summary of Changes

Hello @Kylie-dot-s, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed: this pull request focuses on streamlining the resume processing modules by fully migrating them to the EvalDetail return type.
Code Review
This pull request does a great job of removing the deprecated ModelRes references and standardizing on EvalDetail as the return type. The changes simplify the code by removing backward-compatibility logic, which is a solid improvement. My review includes a few suggestions to further enhance maintainability by reducing code duplication and using Pydantic models more idiomatically. I've also identified a potential high-severity issue: data assigned to an extra field on an EvalDetail instance may be silently lost, which could cause problems downstream.
I am having trouble creating individual review comments, so my feedback is consolidated below.
dingo/model/llm/llm_resume_optimizer.py (207-215)
There's a potential issue here. The EvalDetail model (as defined in dingo/io/output/eval_detail.py) does not have an optimized_content field. Since Pydantic's BaseModel by default ignores extra fields, the data assigned to result.optimized_content will be lost. This could lead to silent failures downstream where this data is expected.
To fix this, you should consider adding optimized_content: Optional[Any] = None to the EvalDetail model definition.
Additionally, for better readability, you can initialize the EvalDetail object using its constructor.
```python
# Return EvalDetail result
return EvalDetail(
    metric=cls.__name__,
    status=False,
    label=[QualityLabel.QUALITY_GOOD],
    reason=[reason],
    # Store full response for downstream use
    # (requires optimized_content to be declared on EvalDetail)
    optimized_content=response_json,
)
```
dingo/model/llm/llm_keyword_matcher.py (210-221)
For better readability and to follow a more idiomatic Pydantic usage, you can initialize the EvalDetail object directly with its attributes using the constructor, rather than setting them one by one. This also allows for simplifying the conditional logic.
```python
# Return EvalDetail result
status = score < cls.threshold
label = [f"QUALITY_BAD.{cls.__name__}"] if status else [QualityLabel.QUALITY_GOOD]
return EvalDetail(
    metric=cls.__name__,
    score=score,
    reason=[reason],
    status=status,
    label=label,
)
```
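As a sanity check on the status/label pairing above, the logic can be exercised in isolation. Here make_label is a hypothetical helper written for this sketch, with plain strings standing in for the QualityLabel enum and an illustrative threshold:

```python
def make_label(score: float, threshold: float, name: str):
    # Flag as bad quality only when the score falls below the threshold,
    # mirroring the `status = score < cls.threshold` line in the suggestion
    status = score < threshold
    label = [f"QUALITY_BAD.{name}"] if status else ["QUALITY_GOOD"]
    return status, label

print(make_label(0.4, 0.6, "LLMKeywordMatcher"))  # (True, ['QUALITY_BAD.LLMKeywordMatcher'])
print(make_label(0.9, 0.6, "LLMKeywordMatcher"))  # (False, ['QUALITY_GOOD'])
```

Note that a score exactly equal to the threshold counts as good quality under this comparison.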
dingo/model/llm/llm_keyword_matcher.py (331-348)
The validation logic for input_data.content and input_data.prompt is very similar and creates some code duplication. You can refactor this method to be more DRY (Don't Repeat Yourself) by combining the checks and creating the EvalDetail object only once for any validation failure.
```python
# Validate that content (resume) and prompt (JD) are provided
if not input_data.content:
    reason = "Resume text (content) is required but was not provided"
elif not input_data.prompt:
    reason = "Job description (prompt) is required but was not provided"
else:
    # Call parent eval method
    return super().eval(input_data)
return EvalDetail(
    metric=cls.__name__,
    status=True,
    label=[f"QUALITY_BAD.{cls.__name__}"],
    reason=[reason],
)
```
dingo/model/llm/llm_resume_optimizer.py (277-282)
To make the code more concise and idiomatic, you can initialize the EvalDetail object directly using its constructor instead of creating an instance and then setting its attributes.
```python
if not input_data.content:
    return EvalDetail(
        metric=cls.__name__,
        status=True,
        label=[f"QUALITY_BAD.{cls.__name__}"],
        reason=["Resume text (content) is required but was not provided"],
    )
```