A powerful, modular AI development assistant with memory and multi-model support. Built for professional developers who need a versatile AI assistant with local-first capabilities and enterprise-grade features.
CodexContinue offers a range of powerful capabilities:
- YouTube Transcription: Convert YouTube videos to text and summaries with local processing (see the transcription sketch after this list)
  - Transcribe videos in multiple languages with automatic language detection
  - Generate summaries using Ollama models
  - Completely local processing for privacy and security
  - Simple interface for quick transcription tasks
  - High-quality transcripts using OpenAI's Whisper model (running locally)
- Custom Ollama Models: Specialized models for software development tasks (see the example Modelfile after this list)
  - Built on Llama3 and optimized for code generation
  - Technical problem-solving expertise
  - Advanced reasoning for development workflows
- Knowledge Integration: Easy integration of new knowledge and capabilities
  - Vector store for efficient knowledge retrieval
  - Custom knowledge bases for domain-specific information
  - Integration with external data sources
- Domain Adaptation: Ability to customize the system for specific domains
  - See DOMAIN_CUSTOMIZATION.md for details
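To illustrate what the local-first transcription flow looks like under the hood, here is a rough sketch using the yt-dlp and openai-whisper command-line tools together with Ollama. This is not CodexContinue's own interface; the file names and model choice are illustrative assumptions:

```bash
# 1. Download only the audio track of a YouTube video (yt-dlp must be installed)
yt-dlp -x --audio-format mp3 -o "audio.%(ext)s" "https://www.youtube.com/watch?v=VIDEO_ID"

# 2. Transcribe it locally with OpenAI's Whisper; the language is auto-detected when not specified
#    (the transcript is written next to the input, e.g. audio.txt)
whisper audio.mp3 --model base --output_format txt

# 3. Summarize the transcript with a local Ollama model (llama3 used here as an example)
ollama run llama3 "Summarize the following transcript: $(cat audio.txt)"
```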
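As a sketch of how a development-focused model can be derived from Llama3, the snippet below builds a custom model from an Ollama Modelfile. The model name, system prompt, and parameter values are illustrative assumptions, not the project's actual build configuration:

```bash
# Define a Modelfile that specializes llama3 for coding tasks (illustrative values)
cat > Modelfile <<'EOF'
FROM llama3
SYSTEM "You are a software development assistant. Prefer concise, working code with brief explanations."
PARAMETER temperature 0.2
EOF

# Build and run the custom model locally with Ollama
ollama create codexcontinue-dev -f Modelfile
ollama run codexcontinue-dev "Write a Python function that deduplicates a list while preserving order."
```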
CodexContinue follows a modern containerized microservices architecture that ensures:
- Modularity: Each component is isolated and independently deployable
- Scalability: Services can be scaled based on demand
- Maintainability: Well-defined interfaces between components
- Flexibility: Easy to add new capabilities or replace existing ones
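For example, because each component runs as its own container, an individual service can be scaled independently with Docker Compose. The service name below, ml-service, is a placeholder; use the names defined in the repository's compose files:

```bash
# Run three replicas of the ML service while leaving the other services untouched
# NOTE: "ml-service" is an assumed placeholder name; check docker-compose.yml for the real one
docker compose -f docker-compose.yml -f docker-compose.dev.yml up -d --scale ml-service=3
```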
The system consists of these core services:
- Backend API: FastAPI-based REST API handling business logic
- Frontend UI: Streamlit-based user interface
- ML Service: Machine learning service with LLM integration
- Redis: In-memory data store for caching and messaging
- Ollama: Local LLM service for privacy-focused AI capabilities
```bash
# Clone the repository
git clone https://github.com/msalsouri/CodexContinue.git
cd CodexContinue

# Start the development environment
docker compose -f docker-compose.yml -f docker-compose.dev.yml up -d
```

Once the services are running, they are available at:

- Frontend UI: http://localhost:8501
- Backend API: http://localhost:8000/docs
- ML Service API: http://localhost:5000/docs
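As a quick smoke test, you can list the running containers and hit the auto-generated OpenAPI documents of the two FastAPI services. The /openapi.json paths assume default FastAPI settings; adjust the ports if you have remapped them:

```bash
# List the containers started by docker compose
docker compose -f docker-compose.yml -f docker-compose.dev.yml ps

# Check that the backend and ML service APIs respond (default FastAPI OpenAPI routes)
curl -s http://localhost:8000/openapi.json | head -c 200
curl -s http://localhost:5000/openapi.json | head -c 200
```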
For contribution guidelines, development workflow, and best practices, see:
- CONTRIBUTING.md - How to contribute to the project
- DEVELOPMENT_WORKFLOW.md - Development workflow and processes
- troubleshooting-guide.md - Troubleshooting common issues
The following features are planned for future development:
- Batch YouTube Transcription: Process multiple YouTube videos at once
- Enhanced Summarization Options: More control over summary generation
- Knowledge Base Integration: Save transcriptions to knowledge base
- Transcription Annotation: Add notes and annotations to transcriptions
For more information on upcoming features and development roadmap, see NEXT_STEPS.md.
This project is licensed under the MIT License - see the LICENSE file for details.