Conversation

@kunalkushwahatg

Description

This PR implements a ReAct agent that analyzes resumes based on a given job description and suggests improvements.

Key Changes

  • Updated the analyze_resume tool to use a JSON-based prompt that extracts a structured title and description for each improvement.
  • Modified the search_yt_videos tool to return the top 3 YouTube video links in JSON format (both tools are sketched after this list).
  • Integrated a prebuilt ReAct template from LangChainHub.
  • Added logging and exception handling to track errors effectively.
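
The snippet below is a rough sketch of how these two tools could be defined; the prompt wording, the model, and the youtube-search package are assumptions rather than the exact code in this PR.

import json
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
from youtube_search import YoutubeSearch

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

@tool
def analyze_resume(resume_and_jd: str) -> str:
    """Compare a resume against a job description and return a JSON array of
    improvement areas, each with a Title and a Description."""
    # Hypothetical prompt; the PR's actual JSON-based prompt may differ.
    prompt = (
        "Given the resume and job description below, list the top improvement "
        "areas as a JSON array of objects with keys 'Title' and 'Description'.\n\n"
        + resume_and_jd
    )
    return llm.invoke(prompt).content  # expected to be a JSON string

@tool
def search_yt_videos(improvement_titles: str) -> str:
    """For each improvement title, return the top 3 YouTube video links as JSON."""
    links = {}
    for title in json.loads(improvement_titles):
        # youtube-search is an assumption; any search backend would work here.
        results = YoutubeSearch(title, max_results=3).to_dict()
        links[title] = ["https://www.youtube.com" + r["url_suffix"] for r in results]
    return json.dumps(links)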

Workflow

  1. The helper functions extract and combine the text from both the resume and the job description.
  2. The ReAct agent calls the analyze_resume tool, which returns structured JSON containing the title and description of each suggested improvement.
  3. The extracted list of improvement titles is then passed to the search_yt_videos tool.
  4. The search_yt_videos tool retrieves the top 3 YouTube videos for each suggested improvement.
  5. The dictionary outputs of both tools are read from the agent response's intermediate_steps (see the sketch after this list).
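
A minimal sketch of how this workflow might be wired together with LangChain's prebuilt ReAct helpers follows; extract_text is an assumed helper, and the variable names are illustrative rather than the PR's exact code.

from langchain import hub
from langchain.agents import AgentExecutor, create_react_agent

prompt = hub.pull("hwchase17/react")   # prebuilt ReAct template from LangChainHub
tools = [analyze_resume, search_yt_videos]

agent = create_react_agent(llm, tools, prompt)
executor = AgentExecutor(
    agent=agent,
    tools=tools,
    return_intermediate_steps=True,     # expose each tool call and its output
    handle_parsing_errors=True,
)

# extract_text is an assumed helper that pulls plain text out of a PDF
combined_text = extract_text("resume.pdf") + "\n\n" + extract_text("jd.pdf")
response = executor.invoke({"input": combined_text})

# intermediate_steps is a list of (AgentAction, observation) tuples, one per tool call
for action, observation in response["intermediate_steps"]:
    if action.tool == "analyze_resume":
        improvement_areas = observation
    elif action.tool == "search_yt_videos":
        youtube_links = observation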

Example Output

This output is for the job description "Microsoft AI and Systems Specialist with a Focus on AI Projects".

Improvement Areas

[
  {
    "Title": "Improvement Area 1: AI and Microsoft Ecosystem",
    "Description": "User's experience with Microsoft applications (e.g., CRM Dynamics, SharePoint, Azure) and a keen interest in AI developments."
  },
  {
    "Title": "Improvement Area 2: Large Language Models and Chatbots",
    "Description": "User's proficiency in programming languages, particularly Python and React, and experience with large language models (LLMs) from providers like OpenAI, Anthropic, or Gemini."
  },
  {
    "Title": "Improvement Area 3: Data Architectures and ETL Processes",
    "Description": "User's strong understanding of data architectures, including ETL processes and relational database management systems (e.g., PostgreSQL)."
  }
]

YouTube Links

{
  "AI and Microsoft Ecosystem": [
    "https://www.youtube.com/watch?v=acI_B8akL5o",
    "https://www.youtube.com/watch?v=OS3qhkfToQY",
    "https://www.youtube.com/watch?v=SsxD59Dycug"
  ],
  "Large Language Models and Chatbots": [
    "https://www.youtube.com/watch?v=X-AWdfSFCHQ",
    "https://www.youtube.com/watch?v=5sLYAQS9sWQ",
    "https://www.youtube.com/watch?v=LPZh9BOjkQs"
  ],
  "Data Architectures and ETL Processes": [
    "https://www.youtube.com/watch?v=Tq8oCFjP6kQ",
    "https://www.youtube.com/watch?v=_Nk0v9qUWk4",
    "https://www.youtube.com/watch?v=kGT4PcTEPP8"
  ]
}

@OmkarAmlan
Contributor

Convert this to a Flask endpoint instead of the console output and we're good to go.

@kunalkushwahatg force-pushed the feature-ReAct-agent/123EE0291 branch from b3898fe to 6991bee on March 20, 2025 at 14:07
@kunalkushwahatg
Author

Changes

  • Added a Flask endpoint and removed the console output.
  • Updated requirements.txt to include Flask.
  • The API now exposes an /analyze endpoint that returns a JSON response with the keys improvement_areas and youtube_links (see the sketch below).
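
Roughly, the endpoint might look like the sketch below; run_agent and extract_text are placeholder names, and only the /analyze route and the two response keys come from this PR.

from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/analyze", methods=["POST"])
def analyze():
    # Both files arrive as multipart form uploads named "resume" and "jd"
    resume_file = request.files["resume"]
    jd_file = request.files["jd"]
    # extract_text and run_agent are assumed helpers wrapping the agent above
    combined_text = extract_text(resume_file) + "\n\n" + extract_text(jd_file)
    improvement_areas, youtube_links = run_agent(combined_text)
    return jsonify({
        "improvement_areas": improvement_areas,
        "youtube_links": youtube_links,
    })

if __name__ == "__main__":
    app.run(debug=True)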

To test the API, start it and run the code below:

import requests

url = "http://127.0.0.1:5000/analyze"

# Use context managers so the file handles are closed after the request
with open("resume.pdf", "rb") as resume, open("jd.pdf", "rb") as jd:
    files = {"resume": resume, "jd": jd}
    response = requests.post(url, files=files)

response.raise_for_status()
data = response.json()
improvement_areas = data["improvement_areas"]
youtube_links = data["youtube_links"]

print("\nIMPROVEMENT AREAS:\n", improvement_areas)
print("\nYOUTUBE LINKS:\n", youtube_links)
