- Structured Assessment: 2 MCQs → 1 Pseudo-code → 1 Complete Code
- Time Management: Individual time limits for each question type
- Question Types:
  - 📝 Multiple Choice (2 minutes each) - medium difficulty with instant feedback
  - 🧠 Pseudo-code Problems (5 minutes) - algorithm design and logic
  - 💻 Complete Code Challenges (10 minutes) - real-world implementation
- Real-time Timer: Visual countdown for each question
- Progress Tracking: Assessment progress bar and completion status
- Smart Input: Button-based MCQ selection + text input options
- Instant Feedback: Immediate scoring for multiple choice questions
- Auto-submission: Unanswered questions are submitted automatically when their time expires
- Performance Metrics: MCQ accuracy, average time per question
- Detailed Reporting: Question-by-question breakdown with timing data
- Smart Recommendations: AI-powered candidate assessment and suggestions
- Export Options: Detailed JSON reports with performance analysis
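As a sketch of what the exported JSON report might look like (the field names and response shape here are illustrative assumptions, not the app's actual schema):

```python
import json
from datetime import datetime, timezone

def export_report(candidate_name, responses, path="report.json"):
    """Write a question-by-question breakdown plus summary metrics to JSON.

    `responses` is assumed to be a list of dicts with keys
    "type", "correct", and "time_taken" (seconds).
    """
    mcqs = [r for r in responses if r["type"] == "mcq"]
    report = {
        "candidate": candidate_name,
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "questions": responses,
        "metrics": {
            # MCQ accuracy and average time mirror the metrics listed above
            "mcq_accuracy": (
                sum(r["correct"] for r in mcqs) / len(mcqs) if mcqs else None
            ),
            "avg_time_per_question": (
                sum(r["time_taken"] for r in responses) / len(responses)
            ),
        },
    }
    with open(path, "w") as f:
        json.dump(report, f, indent=2)
    return report
```
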
- Create Project Structure:

  ```bash
  mkdir talentscout-v2
  cd talentscout-v2
  mkdir config models services handlers ui utils
  touch config/__init__.py models/__init__.py services/__init__.py
  touch handlers/__init__.py ui/__init__.py utils/__init__.py
  ```

- Install Dependencies:

  ```bash
  # Quote the version specifiers so the shell does not treat ">=" as a redirect
  pip install "streamlit>=1.28.0" "openai>=0.28.0" "python-dotenv>=1.0.0"
  ```

- Environment Setup:

  ```bash
  # Create .env file
  echo "OPENAI_API_KEY=your_api_key_here" > .env
  ```

- File Deployment:
  - Copy each module to its respective directory
  - Ensure all `__init__.py` files are present
  - Place `main.py` in the root directory

- Run Application:

  ```bash
  streamlit run main.py
  ```

- Single Responsibility: Each module handles one specific aspect
- Easy Testing: Individual components can be unit tested
- Scalable: Add new question types or LLM providers easily
- Maintainable: Clear separation between UI, business logic, and data
- AI-Powered: LLM generates personalized questions based on tech stack
- Fallback System: Pre-built questions when AI is unavailable
- Type-Specific: Different prompts for MCQ, pseudo-code, and coding questions
- Difficulty Scaling: Questions adapt to candidate experience level
- Timer Integration: Real-time question timing and auto-submission
- Progress Tracking: Complete assessment flow monitoring
- Data Persistence: Comprehensive response storage and analytics
- Session Recovery: Robust state handling with error recovery
- Multi-format Data: JSON export with detailed metrics
- Performance Analysis: Automated strengths/weaknesses identification
- Recommendation Engine: AI-powered hiring recommendations
- Visual Feedback: Charts and progress indicators
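The timer and auto-submission behavior described above can be sketched with the standard library alone (a minimal illustration; in the real app this logic would be driven through Streamlit reruns and session state):

```python
import time

class QuestionTimer:
    """Track elapsed time for one question and flag expiry."""

    def __init__(self, time_limit_seconds):
        self.time_limit = time_limit_seconds
        self.start = time.monotonic()

    def remaining(self):
        # Seconds left, clamped at zero once the limit is reached
        return max(0.0, self.time_limit - (time.monotonic() - self.start))

    def expired(self):
        return self.remaining() == 0.0

def submit_answer(timer, answer):
    """Auto-submit a blank answer once the timer has run out."""
    if timer.expired():
        return {"answer": "", "auto_submitted": True}
    return {"answer": answer, "auto_submitted": False}
```
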
| Question Type | Count | Time Limit | Purpose |
|---|---|---|---|
| Multiple Choice | 2 | 2 min each | Test theoretical knowledge |
| Pseudo-code | 1 | 5 minutes | Assess algorithmic thinking |
| Complete Code | 1 | 10 minutes | Evaluate implementation skills |
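The table above could be mirrored in `config/settings.py` along these lines (a hedged sketch; the actual structure and enum names in the project may differ):

```python
from enum import Enum

class QuestionType(Enum):
    MCQ = "mcq"
    PSEUDO_CODE = "pseudo_code"
    COMPLETE_CODE = "complete_code"

# Time limits are in seconds: 2 min per MCQ, 5 min pseudo-code, 10 min coding
QUESTION_STRUCTURE = [
    {"type": QuestionType.MCQ, "count": 2, "time_limit": 120},
    {"type": QuestionType.PSEUDO_CODE, "count": 1, "time_limit": 300},
    {"type": QuestionType.COMPLETE_CODE, "count": 1, "time_limit": 600},
]
```
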
- Personal Info Collection (2-3 minutes)
- Technical Skills Identification (1-2 minutes)
- Structured Technical Assessment (15-17 minutes)
- AI-Generated Summary & Next Steps (1 minute)
Total Time: ~20 minutes for comprehensive technical screening
```python
# In config/settings.py
QUESTION_STRUCTURE.append({
    "type": QuestionType.NEW_TYPE,
    "count": 1,
    "time_limit": 480  # seconds
})
```

```python
# In services/llm_service.py
def setup_custom_llm(self):
    # Add your custom LLM integration
    pass
```

```python
# In models/candidate.py
def get_custom_metrics(self):
    # Add your custom performance metrics
    pass
```

This enhanced modular structure provides a professional, scalable foundation for technical hiring assessments with sophisticated question management, real-time interaction, and comprehensive candidate evaluation.