A clean, production-style FastAPI + GenAI microservice for delivering personalized scholarship recommendations using a simple vector search and plug-and-play LLM pipeline (RAG pattern).
- Accepts a student's academic profile (name, major, GPA, interests)
- Retrieves relevant scholarships using a vector-like filter (mocked, ready to swap with FAISS or Pinecone)
- Generates tailored advice using a mock LLM call (plug in OpenAI, Bedrock, or LangChain easily)
- Provides auto-generated Swagger UI for live testing and easy integration
- Python 3.13
- FastAPI – modern, async Python web framework
- Uvicorn – lightning-fast ASGI server
- Pydantic – for robust data validation
- Vector store mock – JSON-based; can be replaced with Pinecone, FAISS, or your own embeddings
- LangChain-ready logic – for future LLM orchestration
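The JSON-based vector store mock can be as simple as a keyword-overlap score standing in for real embedding similarity; a FAISS or Pinecone index would replace the `score` function. The data and function names below are a hypothetical sketch, not the project's actual code:

```python
# Mocked "vector-like" retrieval: rank scholarships by tag overlap with the
# student's interests. A real deployment would embed both sides and use
# cosine similarity via FAISS/Pinecone instead of this keyword score.
SCHOLARSHIPS = [
    {"name": "AI Futures Grant", "tags": ["ai", "computer science"]},
    {"name": "Women in STEM Award", "tags": ["stem", "engineering"]},
    {"name": "Open Source Fellowship", "tags": ["open source", "computer science"]},
]

def retrieve(interests: list[str], top_k: int = 2) -> list[dict]:
    """Return the top_k scholarships whose tags overlap the interests."""
    wanted = {i.lower() for i in interests}

    def score(s: dict) -> int:
        return len({t.lower() for t in s["tags"]} & wanted)

    ranked = sorted(SCHOLARSHIPS, key=score, reverse=True)
    return [s for s in ranked if score(s) > 0][:top_k]
```

Because the scoring is isolated behind one function, swapping in real embeddings is a one-file change.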
Clean modular structure, .gitignore for safe commits, .env.example for environment configs, and clear README for fast onboarding.
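An `.env.example` for such a service might look like the following; the key names are illustrative assumptions, not the project's actual variables:

```
# .env.example — copy to .env and fill in real values (keys are illustrative)
LLM_PROVIDER=mock            # swap to openai / bedrock when plugging in a real LLM
OPENAI_API_KEY=changeme
VECTOR_STORE_PATH=data/scholarships.json
```

Committing only the example file (and ignoring `.env` via `.gitignore`) keeps secrets out of version control.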
Well suited for demonstrating RAG-style design, GenAI microservices, and practical API engineering in interviews or real-world proofs of concept.