Aristotle is a documentation- and codebase-grounded agentic developer assistant that delivers precise, up-to-date answers about software libraries and codebases. The project targets a persistent industry pain point: developers lose significant time context-switching between official docs, forums, and scattered examples, especially as frameworks evolve rapidly and teams juggle both public and internal repositories. Aristotle combines retrieval-augmented generation (RAG) with dual sources of truth, documentation pages and the source code itself, to return concise, citation-backed responses such as required function arguments, error fixes, and idiomatic usage patterns (Python first, language-extensible). The system continuously live-updates its index to reflect edits in the user's working tree and can ingest local projects or any public Git repository, minimizing hallucinations by grounding outputs in verifiable artifacts. Delivered as a VS Code extension, it provides an accessible chat UI and example snippets tailored to the active workspace. While enterprise privacy controls (e.g., air-gapped or firewall deployments) are outside this project's current scope, the architecture is compatible with such setups for organizations that require private code search. By replacing manual exploration with targeted, evidence-linked retrieval, Aristotle improves developer productivity, shortens time-to-fix, and preserves flow, offering a pragmatic path from "vibe coding" to trustworthy, documentation-backed assistance.
We had an amazing time working on this project, and every minute detail is taken care of. You can run this repository straight away against your own private codebase as a developer, or deploy it as a personal, local, offline coding tutor to learn programming as a student, all through a VS Code extension.
Generic non-code-generating model (Llama3.1:8B): response to a sample question.
Aristotle agentic framework's response to the same question, connected to the same generic model (Llama3.1:8B), which is not pre-trained for code generation: graph-search-based result (bottom) with Aristotle's accurate response (right), and vector-search-based result (bottom) with Aristotle's accurate response (right).

| Official Full Name (Alphabetical Order) | Student ID | Work Items | Email |
|---|---|---|---|
| Nobert Oliver | A0328685M | Complete design, implementation and code review of all modules | [email protected] |
| Sharvesh Subhash | A0327428Y | Complete design, implementation and code review of all modules | [email protected] |
Google Drive links:
- Business and demo video: https://drive.google.com/file/d/1vEvXu0z60OLU17FVT2GGo2MKZWNI1SU2/view?usp=sharing
- System design explanation: https://drive.google.com/file/d/1dGQHlmblqo_PG_7xxAmArm4Koo0XAOLG/view?usp=sharing
Spin up the server, install the VS Code extension, and start contributing in minutes. If this helps you, please ⭐ star the repo and share feedback!
```shell
# Server (Windows + WSL2)
# 1) Install prerequisites: Docker Desktop (with WSL2 integration), Ollama, Neo4j
# 2) Open a terminal in the repo's `system/` folder
cp .env.example .env   # then edit values as needed
docker build -t aristotle:latest .
docker run -p 8000:8000 --add-host=host.docker.internal:host-gateway aristotle:latest
# Visit the API: http://localhost:8000

# UI (VS Code Extension)
# Requires Node v20+ (tip: use nvm), VS Code, and `vsce`
cd ui
npm i
npm i -g vsce
vsce package
# In VS Code: Extensions → … menu → Install from VSIX → pick the generated .vsix
```

Then in VS Code chat, press Ctrl + Shift + I and type:

```
@aristotle Explain this file
```
Aristotle is an agentic developer assistant built for Python-centric workflows. It combines vector search and knowledge graphs with lightweight LLMs to minimize hallucinations and give accurate, grounded answers from codebases and docs.
- ⚡ Works locally with 8B-class models via Ollama
- 🧠 Uses Neo4j for graph-enhanced understanding
- 🐳 One-command Docker server
- 🧩 VS Code extension for in-editor chat and commands
If you like the approach, please star the repository; it really helps the project grow!
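The vector-plus-graph grounding described above can be sketched roughly as follows. This is a simplified illustration, not Aristotle's actual internals: the `Chunk` type, the merge strategy, and the prompt format are all illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Chunk:
    source: str   # file path or docs URL used for the citation
    text: str
    score: float

def hybrid_retrieve(vector_hits, graph_hits, k=3):
    """Merge vector-search and graph-search hits, keeping the best
    score per source so each citation appears at most once."""
    best = {}
    for c in list(vector_hits) + list(graph_hits):
        if c.source not in best or c.score > best[c.source].score:
            best[c.source] = c
    return sorted(best.values(), key=lambda c: c.score, reverse=True)[:k]

def grounded_prompt(question, chunks):
    """Build an LLM prompt that forces the answer to cite retrieved evidence."""
    evidence = "\n".join(f"[{i+1}] ({c.source}) {c.text}"
                         for i, c in enumerate(chunks))
    return (f"Answer using ONLY the evidence below, citing [n].\n"
            f"{evidence}\n\nQ: {question}")
```

Deduplicating by source before ranking is one simple way to keep citations unique when the vector and graph retrievers return overlapping evidence.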
- Docker Desktop for Windows with WSL2 integration enabled
- Ollama installed and running (verify with `ollama list`)
- Neo4j installed, started, and reachable (note your URI, user, and password)
From the repo's `system/` directory:

```shell
cp .env.example .env
```

Open `.env` and fill in the required values. Typical variables include (examples):

```
NEO4J_URI=bolt://localhost:7687
NEO4J_USER=neo4j
NEO4J_PASSWORD=your_password
# Add any model/provider keys/settings your setup requires
```

Make sure your Neo4j DB is running and its credentials match what you put here.
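As a sanity check, the required variables above can be validated with a few lines of standard-library Python. A real setup might simply use `python-dotenv`; this parser is a simplified sketch that only handles plain `KEY=VALUE` lines.

```python
REQUIRED_KEYS = ("NEO4J_URI", "NEO4J_USER", "NEO4J_PASSWORD")

def parse_env(text):
    """Parse simple KEY=VALUE lines, ignoring blanks and # comments."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip()
    return env

def missing_keys(env):
    """Return the required Neo4j settings that are absent or empty."""
    return [k for k in REQUIRED_KEYS if not env.get(k)]
```

Running `missing_keys(parse_env(open(".env").read()))` before starting the server gives an early, readable error instead of a failed Neo4j connection later.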
```shell
docker build -t aristotle:latest .
docker run -p 8000:8000 \
  --add-host=host.docker.internal:host-gateway \
  aristotle:latest
```

- This exposes the API at http://localhost:8000.
- If your app reads environment variables at runtime and you prefer not to bake them into the image, you can run with:

```shell
docker run -p 8000:8000 --env-file .env \
  --add-host=host.docker.internal:host-gateway \
  aristotle:latest
```

Open http://localhost:8000.
- If a health route or docs page is available (e.g., `/health`, `/docs`), check them to confirm everything is wired up.
- Ensure Ollama is running locally (the default service listens on port 11434) and that the model(s) you intend to use are available.
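A small readiness probe for the checks above can be written with the standard library alone. The `/health` path is taken from the example routes mentioned here and may not exist in your deployment; adjust it to whatever your server actually exposes.

```python
import urllib.request
import urllib.error

def check_health(base_url, path="/health", timeout=2.0):
    """Return True if GET base_url+path answers with HTTP 200.

    Any connection error (server down, wrong port, bad host) yields
    False instead of raising, so this is safe to call in a retry loop
    while the containers are still starting.
    """
    try:
        with urllib.request.urlopen(base_url.rstrip("/") + path,
                                    timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False
```

For example, `check_health("http://localhost:8000")` probes the API, and `check_health("http://localhost:11434", path="/api/tags")` probes Ollama (`/api/tags` lists the locally pulled models).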
- Node.js v20+ (recommended via `nvm install --lts`)
- VS Code
- `vsce` packager installed globally: `npm i -g vsce`
```shell
cd ui
npm i
vsce package   # produces a .vsix
```

Open VS Code → Extensions panel → … (top-right) → Install from VSIX… → select the generated .vsix file.
1. Login with GitHub Copilot (if your workflow requires it)
2. Open the Chat panel (or press Ctrl + Shift + I)
3. Start interacting:

```
@aristotle "Explain this function"
@aristotle "Summarize this module and list its public APIs"
@aristotle "Suggest tests for this file"
@aristotle "Use this codebase and tell me how to resolve this using the Pandas library documentation"
```
- Grounded answers from your code & docs (vector + graph retrieval)
- Low compute, high utility: runs with 8B-class models via Ollama
- Familiar tools: Docker, Neo4j, VS Code
- Privacy-first local workflows
If this resonates, star the repo and help more developers discover it!
We welcome issues, discussions, and pull requests.
Best way to start:

1. Star ⭐ the repo and watch for updates
2. Try the quickstart and open issues for any rough edges
3. Look for `good first issue` or ask for one
4. Submit a PR with:
   - Clear scope & checklist
   - Before/After notes or screenshots where relevant
Tip: If you’re proposing architectural changes (e.g., retriever choice, indexing strategy, model routing), consider opening a short design proposal first so we can collaborate early.
Uvicorn doesn't start / server exits
- Check container logs: `docker logs <container>`
- Confirm `.env` values and that Neo4j is reachable
- Ensure Ollama is running and models are available
- Port 8000 in use? Try `-p 8080:8000` and visit http://localhost:8080
Ollama not reachable from container
- Make sure the Ollama service is running on the host (default port 11434)
- The flag `--add-host=host.docker.internal:host-gateway` lets the container reach the host
- If your app expects a base URL, set it accordingly in `.env`
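The host-vs-container addressing above can be captured in a small helper: inside Docker, `localhost` is the container itself, so the host's Ollama must be reached via `host.docker.internal` (mapped to the host gateway by the `--add-host` flag). Note that the `OLLAMA_BASE_URL` variable name here is an illustrative assumption, not a documented Aristotle setting.

```python
import os

def ollama_base_url(in_container=False, env=None):
    """Pick the Ollama base URL.

    An explicit OLLAMA_BASE_URL environment variable (an assumed,
    illustrative name) always wins; otherwise fall back to localhost
    on the host, or host.docker.internal from inside a container.
    Ollama's default service port is 11434.
    """
    env = os.environ if env is None else env
    explicit = env.get("OLLAMA_BASE_URL")
    if explicit:
        return explicit.rstrip("/")
    host = "host.docker.internal" if in_container else "localhost"
    return f"http://{host}:11434"
```

Centralizing this choice in one function keeps the rest of the app agnostic to where it is running.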
Neo4j auth/connection errors
- Verify `NEO4J_URI`, `NEO4J_USER`, `NEO4J_PASSWORD`
- Ensure the DB is running and accepts Bolt connections (port 7687 by default)
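Before blaming credentials, it can help to confirm the Bolt port is even reachable. This sketch parses the URI with the standard library and attempts a plain TCP connection; it checks network reachability only, not authentication, which the real Neo4j driver would verify.

```python
import socket
from urllib.parse import urlsplit

def parse_bolt_uri(uri):
    """Split a bolt:// or neo4j:// URI into (host, port), defaulting
    to Bolt's standard port 7687 when none is given."""
    parts = urlsplit(uri)
    if parts.scheme not in ("bolt", "neo4j", "bolt+s", "neo4j+s"):
        raise ValueError(f"unexpected scheme: {parts.scheme!r}")
    return parts.hostname, parts.port or 7687

def bolt_port_open(uri, timeout=2.0):
    """True if a TCP connection to the Bolt port succeeds."""
    host, port = parse_bolt_uri(uri)
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

If `bolt_port_open(env["NEO4J_URI"])` is False, the database is not running or the URI is wrong; if it is True but the driver still fails, the problem is most likely the user/password pair.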
vsce not found / packaging fails
- Re-open the terminal after `npm i -g vsce`
- Ensure Node v20+ (`node -v`)
Do I need a GPU? No. Aristotle targets small local models; CPU works, GPU is optional for speed.
Can I use my own models? If they run on Ollama (or your adapter), point the config to them.
Where do I put secrets?
Use the .env file for local dev. For deployments, prefer your platform’s secret store.
- Improved graph-based retrieval ops and evaluators
- More language server-style actions in the VS Code UI
- Templates for common project setups and datasets
If you want any of these, please upvote or comment on the corresponding issues, or open a PR!
Every star, issue, and PR helps Aristotle get better. If you shipped something with it, we’d love to hear your story!
Refer to the project report in the GitHub folder: ProjectReport
Recommended sections for the project report / paper:
- Executive Summary / Paper Abstract
- Sponsor Company Introduction (if applicable)
- Business Problem Background
- Market Research
- Project Objectives & Success Measurements
- Project Solution (detailing domain modelling & system design)
- Project Implementation (detailing system development & testing approach)
- Project Performance & Validation (proving project objectives are met)
- Project Conclusions: Findings & Recommendation
- Appendix of report: Project Proposal
- Appendix of report: Mapped System Functionalities against knowledge, techniques and skills of modular courses: MR, RS, CGS
- Appendix of report: Installation and User Guide
- Appendix of report: 1-2 page individual project report per project member, including an individual reflection on the project journey: (1) personal contribution to the group project, (2) what you learnt that is most useful for you, (3) how you can apply the knowledge and skills in other situations or your workplace
- Appendix of report: List of Abbreviations (if applicable)
- Appendix of report: References (if applicable)