AI Interview Coach

AI Interview Coach is a full-stack practice environment that pairs a FastAPI backend with a modern React/Vite frontend. Candidates can run mock interviews by role, record their answers, receive rubric-driven AI feedback, and review previous sessions.

Highlights

  • Role-aware interview flows for Software Developer, Full-Stack Developer, Data Engineer, and Cyber Analyst.
  • AI evaluation service (OpenAI-compatible) with offline heuristics fallback and tiered readiness scoring.
  • Session history with detailed summaries, timing metrics, and improvement suggestions.
  • Consent & privacy guardrails (local storage disclosure, educational disclaimer, no hiring promises).
  • Tooling: FastAPI + SQLAlchemy + Alembic, React + TypeScript + Tailwind, React Query, Vitest, PyTest, Ruff, Black, ESLint, Prettier, GitHub Actions, Docker Compose.

Architecture

[ React + Vite + Tailwind ]
          |  (REST via Axios / React Query)
          v
[ FastAPI Service ] --> domain services --> [ LLM Evaluation (OpenAI) ]
          |
          v
[ SQLAlchemy ORM ]
          |
          v
[ SQLite (dev) / PostgreSQL (prod) ]
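
The request flow is straightforward: the React client calls REST endpoints, the FastAPI layer delegates to domain services, and those services talk to the LLM provider and the ORM. A minimal sketch of that layering, using hypothetical route and service names (the project's actual modules differ):

# Hypothetical sketch of the layering above; real route and service names may differ.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class AnswerIn(BaseModel):
    question_id: int
    answer_text: str

class EvaluationOut(BaseModel):
    score: float
    feedback: str

def evaluate_answer(question_id: int, answer_text: str) -> EvaluationOut:
    # Domain service layer: call the LLM provider (or the heuristic fallback)
    # and persist the result through the SQLAlchemy ORM.
    return EvaluationOut(score=0.8, feedback="Example feedback")

@app.post("/api/v1/answers", response_model=EvaluationOut)
def submit_answer(payload: AnswerIn) -> EvaluationOut:
    # Thin HTTP layer: validation in Pydantic, logic in the domain service.
    return evaluate_answer(payload.question_id, payload.answer_text)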

Prerequisites

  • Python 3.10+ (for backend).
  • Node.js 18.17+ (Node 20 recommended for modern tooling).
  • Docker + Docker Compose (optional but recommended for one-command startup).

Quick Start

# 1) Clone repo & install prerequisites
git clone <repo>

# 2) Copy env files
cp .env.example .env
cp backend/.env.example backend/.env
cp frontend/.env.example frontend/.env

# 3) One-command dev stack (backend + frontend + live reload)
make dev
# or, manually:
#   make backend
#   make frontend

The frontend runs on http://127.0.0.1:5173 and the API on http://127.0.0.1:8000.

Backend Setup (manual)

cd backend
python -m pip install ".[dev]"  # requires Python 3.10+ (quotes avoid shell globbing of the extra)
python -m app.db.init_db      # seed roles & questions
uvicorn app.main:app --reload

Frontend Setup (manual)

cd frontend
npm install
npm run dev -- --host 0.0.0.0 --port 5173

Testing & Linting

# Frontend (run from frontend/)
npm run lint         # ESLint + Tailwind rules
npm run test         # Vitest + Testing Library
npm run build        # Type-check + production build

# Backend (run from backend/; requires Python >= 3.10)
python -m pytest
ruff check .         # static analysis
black .              # formatting

Note: The repository was authored against Python 3.11. Running the backend on Python < 3.10 is not supported.

Database & Seeding

  • Seed data lives in backend/app/seeds/seed_roles_and_questions.json. To add new roles or questions, edit that file and rerun python -m app.db.init_db (idempotent; see the sketch after this list).
  • Alembic migrations are stored in backend/alembic/versions. Run alembic upgrade head for schema changes.
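
Idempotent here means the seeder checks for existing rows before inserting, so reruns do not duplicate data. A minimal sketch of that pattern with a hypothetical Role model and JSON layout (the project's actual models and seed schema may differ):

# Hypothetical sketch of idempotent seeding; real models and JSON layout may differ.
import json

from sqlalchemy import String, select
from sqlalchemy.orm import DeclarativeBase, Mapped, Session, mapped_column

class Base(DeclarativeBase):
    pass

class Role(Base):
    __tablename__ = "roles"
    id: Mapped[int] = mapped_column(primary_key=True)
    name: Mapped[str] = mapped_column(String(100), unique=True)

def seed_roles(session: Session, seed_path: str) -> None:
    with open(seed_path) as f:
        data = json.load(f)
    for role in data.get("roles", []):
        # Skip roles that already exist so reruns stay idempotent.
        if session.scalar(select(Role).where(Role.name == role["name"])) is None:
            session.add(Role(name=role["name"]))
    session.commit()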

Environment

  • OPENAI_API_KEY (backend .env): Optional. Enables full LLM evaluation via the OpenAI SDK.
  • LLM_PROVIDER (backend .env): "openai" (default) or "custom" for LM Studio/Ollama-style endpoints.
  • LLM_BASE_URL (backend .env): Base URL for the custom provider (e.g. http://127.0.0.1:1234/v1).
  • LLM_API_KEY (backend .env): Optional key/token for the custom provider (falls back to OPENAI_API_KEY).
  • DATABASE_URL (backend .env): Defaults to SQLite. Use postgresql+psycopg://... for PostgreSQL.
  • APP_ENV (backend .env): development, local, production, or ci.
  • EVAL_MODEL (backend .env): Model alias passed to the provider (default: gpt-4o-mini).
  • EVAL_MAX_OUTPUT_TOKENS (backend .env): Max tokens returned from the evaluator (default: 256).
  • EVAL_MAX_INPUT_CHARS (backend .env): Max characters sent to the evaluator prompt (default: 4000).
  • EVAL_COOLDOWN_SECONDS (backend .env): Cooldown after quota errors before retrying the LLM (default: 300).
  • ALLOW_LLM_FOR_CODE (backend .env): Allow LLM grading even when answers include code snippets (default: true).
  • CODE_DETECTION_THRESHOLD (backend .env): Heuristic score threshold for treating content as code when LLM-for-code is disabled.
  • DEBUG_FORCE_LLM (backend .env): Force LLM usage locally for a single evaluation (do not enable in production).
  • VITE_API_BASE_URL (frontend .env): Base URL for REST calls (default: local FastAPI).

Without an OPENAI_API_KEY, the evaluation service falls back to a deterministic heuristic so the app remains usable offline.
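
Conceptually, the provider selection is equivalent to the sketch below (illustrative only; the actual service also honors LLM_PROVIDER, cooldowns, and the code-detection settings):

# Illustrative fallback logic, not the project's actual code.
import os

def choose_evaluation_path() -> str:
    api_key = os.getenv("LLM_API_KEY") or os.getenv("OPENAI_API_KEY")
    if api_key:
        return "llm"        # full LLM evaluation via the configured provider
    return "heuristic"      # deterministic offline scoring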

To evaluate against LM Studio or another OpenAI-compatible gateway, set:

LLM_PROVIDER=custom
LLM_BASE_URL=http://127.0.0.1:1234/v1
LLM_API_KEY=<optional token>
EVAL_MODEL=<model name exposed by the gateway>

The backend logs and /api/v1/diagnostics endpoint will confirm which provider is active.
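
For a quick check from Python (the endpoint path comes from this README; the response fields are not documented here, so simply inspect the returned JSON):

# Confirm which evaluation provider is active.
import requests

resp = requests.get("http://127.0.0.1:8000/api/v1/diagnostics", timeout=10)
resp.raise_for_status()
print(resp.json())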

CI/CD

GitHub Actions (.github/workflows/ci.yml) runs:

  1. Backend lint (ruff), format check (black), and pytest.
  2. Frontend lint, vitest, and production build.

Pre-commit (.pre-commit-config.yaml) mirrors these checks locally (pre-commit install recommended).

Roadmap

  1. Speech-to-text integration (Whisper or similar).
  2. Optional authentication to persist history across devices.
  3. Analytics dashboard (progress charts, trending tiers).
  4. Export session summary to PDF.

Privacy & Ethics

  • Interface prominently states: "Educational practice tool. No guarantee of hiring decisions. Feedback may be imperfect."
  • Consent modal explains local storage usage and optional API key handling before first use.
  • Ready tier wording ("Would you be hired?" prompt) is framed as a non-binding readiness signal.
  • Question bank contains original, generic prompts (no proprietary interview content).

Troubleshooting

  • Node 20 warning: Vite 5 targets Node >=18.17. Warnings can appear on older Node 18 minors; upgrading to the latest LTS resolves them.
  • Backend install fails: Ensure Python 3.10+ and up-to-date pip (python -m pip install --upgrade pip).
  • LLM errors: Without an API key the evaluator logs a warning and falls back to heuristics. Supply a key to restore full AI feedback.
  • Evaluator path unclear? Call GET /api/v1/diagnostics to confirm the active path (llm, llm_code_allowed, heuristic, code_question_forced), then tail the backend logs for evaluation_path= entries.

Contributing

  1. Fork & branch from main.
  2. Run pre-commit run --all-files before pushing.
  3. Open a PR with screenshots or recordings if the UI changes.

Screenshots

  • home_screen (Personalized landing screen): Select interview level, role, and focus areas before starting a new mock session.
  • interview_q2 (Technical prompt deep-dive): Technical interview response with structured evaluation and coaching tips.
  • interview_recap (Interview recap modal): End-of-interview recap with aggregate scores and recommendations for next steps.
  • session_summary (Session analytics dashboard): Running summary of all mock interviews in the current session, highlighting trends.
