AI Interview Coach is a full-stack practice environment that pairs a FastAPI backend with a modern React/Vite frontend. Candidates can run mock interviews by role, record their answers, receive rubric-driven AI feedback, and review previous sessions.
- Role-aware interview flows for Software Developer, Full-Stack Developer, Data Engineer, and Cyber Analyst.
- AI evaluation service (OpenAI-compatible) with an offline heuristics fallback and tiered readiness scoring (see the sketch after this list).
- Session history with detailed summaries, timing metrics, and improvement suggestions.
- Consent & privacy guardrails (local storage disclosure, educational disclaimer, no hiring promises).
- Tooling: FastAPI + SQLAlchemy + Alembic, React + TypeScript + Tailwind, React Query, Vitest, PyTest, Ruff, Black, ESLint, Prettier, GitHub Actions, Docker Compose.
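For illustration only, tiered readiness scoring boils down to mapping a rubric score onto a label. The tier names and cut-offs below are invented for the sketch, not the backend's actual rubric:

```python
# Illustrative only: map a 0-100 rubric score to a readiness tier.
# The backend defines the real tier names and thresholds.
def readiness_tier(score: int) -> str:
    if score >= 85:
        return "interview-ready"
    if score >= 60:
        return "nearly there"
    return "keep practicing"


print(readiness_tier(72))  # "nearly there"
```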
[ React + Vite + Tailwind ]
        | (REST via Axios / React Query)
        v
[ FastAPI Service ] --> domain services --> [ LLM Evaluation (OpenAI) ]
        |
        v
[ SQLAlchemy ORM ]
        |
        v
[ SQLite (dev) / PostgreSQL (prod) ]
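Read top to bottom: a route handler delegates to a domain service, which talks to the ORM and, when configured, the LLM evaluator. A minimal sketch of that flow with hypothetical names (`get_session` and the route path are illustrative, not the repo's actual modules):

```python
# Hypothetical sketch of the request flow shown above:
# FastAPI route -> domain service -> SQLAlchemy session.
from fastapi import Depends, FastAPI
from sqlalchemy.orm import Session

app = FastAPI()


def get_session() -> Session:
    # Placeholder dependency; the repo wires up its own session factory.
    ...


@app.post("/api/v1/answers/{answer_id}/evaluate")
def evaluate_answer(answer_id: int, db: Session = Depends(get_session)):
    # A domain service would load the answer via the ORM, call the LLM
    # evaluator (or the heuristic fallback), persist feedback, and return it.
    ...
```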
- Python 3.10+ (for backend).
- Node.js 18.17+ (Node 20 recommended for modern tooling).
- Docker + Docker Compose (optional but recommended for one-command startup).
# 1) Clone repo & install prerequisites
git clone <repo>
# 2) Copy env files
cp .env.example .env
cp backend/.env.example backend/.env
cp frontend/.env.example frontend/.env
# 3) One-command dev stack (backend + frontend + live reload)
make dev
# or, manually:
# make backend
# make frontend

The frontend runs on http://127.0.0.1:5173 and the API on http://127.0.0.1:8000.
cd backend
python -m pip install .[dev] # requires Python 3.10+
python -m app.db.init_db # seed roles & questions
uvicorn app.main:app --reload

cd frontend
npm install
npm run dev -- --host 0.0.0.0 --port 5173

# Frontend
npm run lint # ESLint + Tailwind rules
npm run test # Vitest + Testing Library
npm run build # Type-check + production build
# Backend (requires Python >= 3.10)
python -m pytest
ruff check . # static analysis
black . # formatting

Note: The repository was authored against Python 3.11. Running the backend on Python < 3.10 is not supported.
- Seed data lives in `backend/app/seeds/seed_roles_and_questions.json`. To add new roles or questions, edit that file and rerun `python -m app.db.init_db` (idempotent; see the sketch below).
- Alembic migrations are stored in `backend/alembic/versions`. Run `alembic upgrade head` for schema changes.
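Because seeding only inserts rows that are missing, rerunning it is safe. A rough sketch of that idempotent pattern, where the `Role` model and the JSON layout are assumptions rather than the repo's actual schema:

```python
# Illustrative idempotent seeding: insert a role only when it is missing.
# The Role model and the JSON shape here are assumptions, not the repo's schema.
import json

from sqlalchemy import String, select
from sqlalchemy.orm import DeclarativeBase, Mapped, Session, mapped_column


class Base(DeclarativeBase):
    pass


class Role(Base):  # stand-in for the repo's actual SQLAlchemy model
    __tablename__ = "roles"
    id: Mapped[int] = mapped_column(primary_key=True)
    name: Mapped[str] = mapped_column(String(100), unique=True)


def seed_roles(db: Session, path: str) -> None:
    with open(path) as f:
        data = json.load(f)
    for entry in data.get("roles", []):  # top-level key is an assumption
        if db.scalar(select(Role).where(Role.name == entry["name"])) is None:
            db.add(Role(name=entry["name"]))
    db.commit()  # rerunning inserts nothing new, keeping the step idempotent
```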
| Variable | Location | Description |
|---|---|---|
| `OPENAI_API_KEY` | backend `.env` | Optional. Enables full LLM evaluation via the OpenAI SDK. |
| `LLM_PROVIDER` | backend `.env` | `"openai"` (default) or `"custom"` for LM Studio/Ollama-style endpoints. |
| `LLM_BASE_URL` | backend `.env` | Base URL for the custom provider (e.g. `http://127.0.0.1:1234/v1`). |
| `LLM_API_KEY` | backend `.env` | Optional key/token for the custom provider (falls back to `OPENAI_API_KEY`). |
| `DATABASE_URL` | backend `.env` | Defaults to SQLite. Use `postgresql+psycopg://...` for PostgreSQL. |
| `APP_ENV` | backend `.env` | `development`, `local`, `production`, or `ci`. |
| `EVAL_MODEL` | backend `.env` | Model alias passed to the provider (default: `gpt-4o-mini`). |
| `EVAL_MAX_OUTPUT_TOKENS` | backend `.env` | Max tokens returned from the evaluator (default: 256). |
| `EVAL_MAX_INPUT_CHARS` | backend `.env` | Max characters sent to the evaluator prompt (default: 4000). |
| `EVAL_COOLDOWN_SECONDS` | backend `.env` | Cooldown after quota errors before retrying the LLM (default: 300). |
| `ALLOW_LLM_FOR_CODE` | backend `.env` | Allow LLM grading even when answers include code snippets (default: true). |
| `CODE_DETECTION_THRESHOLD` | backend `.env` | Heuristic score threshold for treating content as code when LLM-for-code is disabled. |
| `DEBUG_FORCE_LLM` | backend `.env` | Force LLM usage locally for a single evaluation (do not enable in production). |
| `VITE_API_BASE_URL` | frontend `.env` | Base URL for REST calls (default: local FastAPI). |
Without an `OPENAI_API_KEY`, the evaluation service falls back to a deterministic heuristic so the app remains usable offline.
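In outline, the fallback works like this (a minimal sketch; `call_llm` and `heuristic_feedback` are illustrative stand-ins, not the repo's service names):

```python
# Minimal sketch of the LLM-with-heuristic-fallback behaviour described above.
import logging

logger = logging.getLogger(__name__)


def heuristic_feedback(answer: str) -> dict:
    """Deterministic offline scoring so the app works without a key."""
    score = min(100, 10 + 2 * len(answer.split()))  # toy heuristic
    return {"evaluation_path": "heuristic", "score": score}


def call_llm(answer: str) -> dict:
    raise RuntimeError("no provider configured")  # placeholder for the SDK call


def evaluate(answer: str, api_key: str | None) -> dict:
    if not api_key:
        return heuristic_feedback(answer)
    try:
        return call_llm(answer)
    except Exception:
        logger.warning("LLM evaluation failed; falling back to heuristics")
        return heuristic_feedback(answer)
```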
To evaluate against LM Studio or another OpenAI-compatible gateway, set:
LLM_PROVIDER=custom
LLM_BASE_URL=http://127.0.0.1:1234/v1
LLM_API_KEY=<optional token>
EVAL_MODEL=<model name exposed by the gateway>
The backend logs and the `GET /api/v1/diagnostics` endpoint confirm which provider is active.
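For reference, this is the generic pattern for talking to such a gateway with the OpenAI SDK (not the repo's internal client; the model name must match what the gateway exposes):

```python
# The official OpenAI SDK can target any OpenAI-compatible gateway by
# overriding base_url. Values mirror the env settings above.
from openai import OpenAI

client = OpenAI(
    base_url="http://127.0.0.1:1234/v1",  # LLM_BASE_URL
    api_key="local-placeholder-token",    # LLM_API_KEY
)

reply = client.chat.completions.create(
    model="your-local-model",  # EVAL_MODEL as exposed by the gateway
    messages=[{"role": "user", "content": "Give me one interview tip."}],
    max_tokens=64,
)
print(reply.choices[0].message.content)
```

Most local gateways accept any placeholder token, which is why `LLM_API_KEY` is optional.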
GitHub Actions (`.github/workflows/ci.yml`) runs:
- Backend lint (ruff), format check (black), and pytest.
- Frontend lint, vitest, and production build.
Pre-commit (`.pre-commit-config.yaml`) mirrors these checks locally (`pre-commit install` recommended).
- Speech-to-text integration (Whisper or similar).
- Optional authentication to persist history across devices.
- Analytics dashboard (progress charts, trending tiers).
- Export session summary to PDF.
- Interface prominently states: "Educational practice tool. No guarantee of hiring decisions. Feedback may be imperfect."
- Consent modal explains local storage usage and optional API key handling before first use.
- Ready tier wording ("Would you be hired?" prompt) is framed as a non-binding readiness signal.
- Question bank contains original, generic prompts (no proprietary interview content).
- Node 20 warning: Vite 5 targets Node >=18.17. Warnings can appear on older Node 18 minors; upgrading to the latest LTS resolves them.
- Backend install fails: ensure Python 3.10+ and an up-to-date pip (`python -m pip install --upgrade pip`).
- LLM errors: without an API key the evaluator logs a warning and falls back to heuristics. Supply a key to restore full AI feedback.
- Evaluator path unclear? Call `GET /api/v1/diagnostics` to confirm the active path (`llm`, `llm_code_allowed`, `heuristic`, `code_question_forced`), then tail the backend logs for `evaluation_path=` entries; see the snippet below.
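A quick way to hit that endpoint from Python (assumes the backend is running locally and `httpx` is installed; the response shape is whatever the backend's diagnostics route returns):

```python
# Check which evaluator path is active on a locally running backend.
import httpx

print(httpx.get("http://127.0.0.1:8000/api/v1/diagnostics").json())
```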
- Fork & branch from `main`.
- Run `pre-commit run --all-files` before pushing.
- Open a PR with screenshots or recordings if the UI changes.
- Personalized landing screen: select interview level, role, and focus areas before starting a new mock session.
- Technical prompt deep-dive: technical interview response with structured evaluation and coaching tips.
- Interview recap modal: end-of-interview recap with aggregate scores and recommendations for next steps.
- Session analytics dashboard: running summary of all mock interviews in the current session, highlighting trends.