A powerful Django-based web application that uses the Qwen3-VL AI model to detect and recognize license plates in images with advanced OCR capabilities.
🚨 Important Stability Notice: For production deployments, we strongly recommend using tagged releases instead of the mainline branch. The mainline branch may contain experimental features and is under active development. See the Production Deployment section for guidance on using stable tagged versions.
Try the live demo of Open LPR at: https://rest-openlpr.computedsynergy.com/
Experience the license plate recognition system in action without any installation required!
| Feature | Preview |
|---|---|
| Main Interface | ![Main Interface](docs/open-lpr-index.png) |
| Detection Results | ![Detection Results](docs/open-lpr-detection-result.png) |
| Detection Details | ![Detection Details](docs/open-lpr-detection-details.png) |
| Processed Image | ![Processed Image](docs/open-lpr-processed-image.png) |
- AI-Powered Detection: Uses the qwen3-vl-4b-instruct vision-language model for accurate license plate recognition
- Advanced OCR Integration: Extracts text from detected license plates with confidence scores
- Bounding Box Visualization: Draws colored boxes around detected plates and OCR text
- Drag & Drop Upload: Modern, user-friendly file upload interface
- Permanent Storage: All uploaded and processed images are saved permanently
- Side-by-Side Comparison: View original and processed images together
- Search & Filter: Browse and search through processing history
- Responsive Design: Works on desktop, tablet, and mobile devices
- Docker Support: Easy deployment with Docker and Docker Compose
- REST API: Full API for programmatic access
Click to expand
The quickest way to get started is with Docker using the new profile-based compose file, which includes everything needed for local inference without requiring any external API endpoints.
🚨 Stability Notice: For production environments, we strongly recommend using tagged releases instead of the mainline branch. See the Production Deployment section for stable version instructions.
🚨 Important Notice: Individual compose files (`docker-compose-llamacpp-*.yml`) are now deprecated. Please use the new profile-based approach with the main `docker-compose.yml` file.
For users with AMD GPUs that support Vulkan:
# Clone the repository
git clone https://github.com/faisalthaheem/open-lpr.git
cd open-lpr
# Create environment file from template
cp .env.llamacpp.example .env.llamacpp
# Edit the environment file with your settings
nano .env.llamacpp
# Create necessary directories
mkdir -p model_files model_files_cache container-data container-media staticfiles
# Start the application with AMD Vulkan GPU support
docker compose --profile core --profile amd-vulkan up -d
# Check the logs to ensure everything is running correctly
docker compose logs -f

For users without compatible GPUs or for testing purposes:
# Clone the repository
git clone https://github.com/faisalthaheem/open-lpr.git
cd open-lpr
# Create environment file from template
cp .env.llamacpp.example .env.llamacpp
# Edit the environment file with your settings
nano .env.llamacpp
# Create necessary directories
mkdir -p model_files model_files_cache container-data container-media staticfiles
# Start the application with CPU support
docker compose --profile core --profile cpu up -d
# Check the logs to ensure everything is running correctly
docker compose logs -f

For users with NVIDIA GPUs that support CUDA:
# Clone the repository
git clone https://github.com/faisalthaheem/open-lpr.git
cd open-lpr
# Create environment file from template
cp .env.llamacpp.example .env.llamacpp
# Edit the environment file with your settings
nano .env.llamacpp
# Create necessary directories
mkdir -p model_files model_files_cache container-data container-media staticfiles
# Start the application with NVIDIA CUDA GPU support
docker compose --profile core --profile nvidia-cuda up -d
# Check the logs to ensure everything is running correctly
docker compose logs -f

For users who want to use an external OpenAI-compatible API endpoint:
# Clone the repository
git clone https://github.com/faisalthaheem/open-lpr.git
cd open-lpr
# Create environment file from template
cp .env.example .env
# Edit the environment file with your API settings
nano .env
# Create necessary directories
mkdir -p container-data container-media staticfiles
# Start the application (core services only)
docker compose --profile core up -d
# Check the logs to ensure everything is running correctly
docker compose logs -f

🚨 Deprecation Notice: Individual compose files (`docker-compose-llamacpp-*.yml`) are now deprecated and will be removed in a future release. Please migrate to the new profile-based approach using the main `docker-compose.yml` file.
The main docker-compose.yml now uses the merge design pattern with profiles for flexible deployment:
Profiles Available:
- core: Core infrastructure (Traefik, OpenLPR, Prometheus, Grafana, Blackbox Exporter, Canary)
- cpu: CPU-based LlamaCpp inference
- amd-vulkan: AMD Vulkan GPU inference
- nvidia-cuda: NVIDIA CUDA GPU inference
Usage Examples:
# Core infrastructure + CPU inference
docker compose --profile core --profile cpu up -d
# Core infrastructure + NVIDIA inference
docker compose --profile core --profile nvidia-cuda up -d
# Core infrastructure + AMD Vulkan inference
docker compose --profile core --profile amd-vulkan up -d
# Core services only (for external API)
docker compose --profile core up -d
# Stop all services
docker compose down

Access Points:
- OpenLPR App: http://lpr.localhost
- Traefik Dashboard: http://traefik.localhost
- Prometheus: http://prometheus.localhost
- Grafana: http://grafana.localhost (admin/admin)
- Blackbox Exporter: http://blackbox.localhost
- Canary Service: http://canary.localhost
For detailed profile documentation, see README-DOCKER-PROFILES.md.
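Once the stack is up, you can also verify the application end-to-end from a script. Below is a minimal sketch that checks the `GET /health/` endpoint documented in the API section; it assumes the default `lpr.localhost` host rule shown above and uses the `requests` package.

```python
# check_openlpr.py -- minimal post-deployment check (sketch).
# Assumes the default Traefik host rule (lpr.localhost) and the /health/ endpoint
# documented in the API section of this README.
import sys

import requests

BASE_URL = "http://lpr.localhost"  # adjust if you changed the routing


def main() -> int:
    try:
        response = requests.get(f"{BASE_URL}/health/", timeout=10)
    except requests.RequestException as exc:
        print(f"OpenLPR is not reachable: {exc}")
        return 1
    print(f"GET /health/ -> HTTP {response.status_code}")
    return 0 if response.ok else 1


if __name__ == "__main__":
    sys.exit(main())
```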
⚠️ Deprecated: The following compose files are deprecated and will be removed in a future release. Please migrate to the profile-based approach above.
- `docker-compose-llamacpp-amd-vulcan.yml` (Deprecated)
  - Replaced by: `docker compose --profile core --profile amd-vulkan up -d`
  - Was: Full local deployment with AMD GPU acceleration using Vulkan
- `docker-compose-llamacpp-cpu.yml` (Deprecated)
  - Replaced by: `docker compose --profile core --profile cpu up -d`
  - Was: Full local deployment using CPU for inference
- `docker-compose.yml` (Legacy external API mode)
  - Replaced by: `docker compose --profile core up -d`
  - Was: OpenLPR deployment with external API endpoint
For development or custom deployments:
- Prerequisites
  - Python 3.8+
  - pip package manager
  - Qwen3-VL API access

- Clone the repository

  For production/stable deployments, use a tagged release:

      # List available releases
      git ls-remote --tags https://github.com/faisalthaheem/open-lpr.git

      # Clone a specific stable version (recommended for production)
      git clone --branch v1.0.0 https://github.com/faisalthaheem/open-lpr.git
      cd open-lpr

      # Or clone the latest stable release
      git clone --branch $(git ls-remote --tags https://github.com/faisalthaheem/open-lpr.git | grep -v 'rc\|beta\|alpha' | tail -n1 | sed 's/.*\///') https://github.com/faisalthaheem/open-lpr.git
      cd open-lpr

  For development/testing (may be unstable):

      git clone https://github.com/faisalthaheem/open-lpr.git
      cd open-lpr

- Create virtual environment

      python -m venv venv
      source venv/bin/activate  # On Windows: venv\Scripts\activate

- Install dependencies

      pip install -r requirements.txt

- Configure environment variables

      cp .env.example .env
      # Edit .env with your settings

- Set up database

      python manage.py makemigrations
      python manage.py migrate

- Create superuser (optional)

      python manage.py createsuperuser

- Run development server

      python manage.py runserver

- Access the application

  Open http://127.0.0.1:8000 in your browser
Click to expand
For local development (running Django directly):
Create a .env file based on .env.example:
# Django Settings
SECRET_KEY=your-secret-key-here
DEBUG=True
ALLOWED_HOSTS=localhost,127.0.0.1
# Qwen3-VL API Configuration
QWEN_API_KEY=your-qwen-api-key
QWEN_BASE_URL=https://your-open-api-compatible-endpoint.com/v1
QWEN_MODEL=qwen3-vl-4b-instruct
# File Upload Settings
UPLOAD_FILE_MAX_SIZE=10485760 # 10MB
MAX_BATCH_SIZE=10

For local LlamaCpp inference deployment:
Create a .env.llamacpp file based on .env.llamacpp.example:
# HuggingFace Token
HF_TOKEN=hf_your_huggingface_token_here
# Model Configuration
MODEL_REPO=unsloth/Qwen3-VL-4B-Instruct-GGUF
MODEL_FILE=Qwen3-VL-4B-Instruct-Q5_K_M.gguf
MMPROJ_URL=https://huggingface.co/unsloth/Qwen3-VL-4B-Instruct-GGUF/resolve/main/mmproj-BF16.gguf
# Django Settings
SECRET_KEY=your-secret-key-here
DEBUG=False
ALLOWED_HOSTS=localhost,127.0.0.1,0.0.0.0
# File Upload Settings
UPLOAD_FILE_MAX_SIZE=10485760 # 10MB
MAX_BATCH_SIZE=10
# Database Configuration
DATABASE_PATH=/app/data/db.sqlite3
# Optional: Superuser creation
DJANGO_SUPERUSER_USERNAME=admin
DJANGO_SUPERUSER_EMAIL=[email protected]
DJANGO_SUPERUSER_PASSWORD=your-secure-password
# Qwen3-VL API Configuration
QWEN_API_KEY=sk-llamacpp-local
# When using a remote OpenAI-compatible endpoint
# QWEN_BASE_URL=https://your-api-endpoint.io/v1
# When running the bundled llamacpp using CPU (default)
QWEN_BASE_URL=http://llamacpp-cpu:8000/v1
# When running the bundled llamacpp using AMD GPUs
# QWEN_BASE_URL=http://llamacpp-amd-vulkan:8000/v1
# When running the bundled llamacpp using Nvidia GPUs
# QWEN_BASE_URL=http://llamacpp-nvidia-cuda:8000/v1
QWEN_MODEL=Qwen3-VL-4B-Instruct

For detailed LlamaCpp deployment instructions, see README-llamacpp.md.
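Whichever endpoint `QWEN_BASE_URL` points at, it is consumed as an OpenAI-compatible API. The application has its own client in `lpr_app/services/qwen_client.py`; the sketch below only illustrates how the three `QWEN_*` settings map onto a chat-completion request carrying an image (the prompt text is illustrative, not the prompt the application actually sends).

```python
# Standalone illustration of how QWEN_BASE_URL / QWEN_API_KEY / QWEN_MODEL are
# consumed by an OpenAI-compatible client. Not the application's actual client code.
import base64
import os

from openai import OpenAI  # pip install openai

client = OpenAI(
    base_url=os.environ["QWEN_BASE_URL"],  # e.g. http://llamacpp-cpu:8000/v1
    api_key=os.environ["QWEN_API_KEY"],    # e.g. sk-llamacpp-local
)

with open("license_plate.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

response = client.chat.completions.create(
    model=os.environ.get("QWEN_MODEL", "Qwen3-VL-4B-Instruct"),
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Read the license plate in this image."},
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)
```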
Click to expand
- Drag & Drop: Simply drag an image file onto the upload area
- Click to Browse: Click the upload area to select a file
- File Validation:
- Supported formats: JPEG, PNG, BMP
- Maximum size: 10MB
- Processing: Click "Analyze License Plates" to start detection
After processing, you'll see:
- Detection Summary: Number of plates and OCR texts found
- Image Comparison: Side-by-side view of original and processed images
- Detection Details:
- License plate coordinates and confidence
- OCR text results with confidence scores
- Bounding box coordinates for all detections
- Download Options: Download both original and processed images
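If you consume the detections programmatically (see the REST API section below), the reported coordinates can be used to crop the plates out of the original image. A minimal sketch with Pillow, assuming `x1`/`y1`/`x2`/`y2` are pixel coordinates in the uploaded image:

```python
# Crop each detected plate from the original image using the bounding boxes
# returned by the API (sketch; assumes pixel coordinates as shown in the
# response format documented below).
from PIL import Image  # pip install Pillow


def crop_plates(image_path, detections):
    """Return one cropped PIL image per detected plate."""
    image = Image.open(image_path)
    crops = []
    for detection in detections:
        box = detection["plate"]["coordinates"]
        crops.append(image.crop((box["x1"], box["y1"], box["x2"], box["y2"])))
    return crops


# Example:
#   crops = crop_plates("license_plate.jpg", result["results"]["detections"])
#   for i, crop in enumerate(crops):
#       crop.save(f"plate_{i}.png")
```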
Access the "History" page to:
- Search: Filter by filename
- Date Range: Filter by upload date
- Status Filter: View by processing status
- Pagination: Navigate through large numbers of uploads
Click to expand
- `GET /` - Home page with upload form
- `POST /upload/` - Upload and process image
- `GET /result/<int:image_id>/` - View processing results for a specific image
- `GET /images/` - Browse image history with search and filtering
- `GET /image/<int:image_id>/` - View detailed information about a specific image
- `POST /progress/` - Check processing status (AJAX endpoint)
- `GET /download/<int:image_id>/<str:image_type>/` - Download original or processed images
- `GET /health/` - API health check endpoint
- `POST /api/v1/ocr/` - Upload an image and receive OCR results synchronously
The LPR REST API returns JSON in this format:
{
  "success": true,
  "image_id": 123,
  "filename": "example.jpg",
  "processing_time_ms": 2450,
  "results": {
    "detections": [
      {
        "plate_id": "plate1",
        "plate": {
          "confidence": 0.85,
          "coordinates": {
            "x1": 100,
            "y1": 200,
            "x2": 250,
            "y2": 250
          }
        },
        "ocr": [
          {
            "text": "ABC123",
            "confidence": 0.92,
            "coordinates": {
              "x1": 105,
              "y1": 210,
              "x2": 245,
              "y2": 240
            }
          }
        ]
      }
    ]
  },
  "summary": {
    "total_plates": 1,
    "total_ocr_texts": 1
  },
  "processing_timestamp": "2023-12-07T15:30:45.123456"
}
"success": false,
"error": "No image file provided",
"error_code": "MISSING_IMAGE"
}import requests
# API endpoint
url = "http://localhost:8000/api/v1/ocr/"
# Image file to upload
image_path = "license_plate.jpg"
# Read and upload the image
with open(image_path, 'rb') as f:
    files = {'image': f}
    response = requests.post(url, files=files)

# Check response
if response.status_code == 200:
    result = response.json()
    if result['success']:
        print(f"Found {result['summary']['total_plates']} license plates")
        for detection in result['results']['detections']:
            for ocr in detection['ocr']:
                print(f"License plate text: {ocr['text']} (confidence: {ocr['confidence']:.2f})")
    else:
        print(f"Processing failed: {result['error']}")
else:
    print(f"HTTP Error: {response.status_code}")
    print(response.text)

cURL example:

# Upload image and get OCR results
curl -X POST \
-F "image=@license_plate.jpg" \
  http://localhost:8000/api/v1/ocr/

Click to expand
The project includes automated Docker image building and publishing to GitHub Container Registry (ghcr.io).
The Docker image is automatically built and published to GitHub Container Registry when code is pushed to the main branch or when tags are created.
🚨 Production Recommendation: For production deployments, always use versioned tags instead of `latest`. The `latest` tag may contain unstable features from the mainline branch.
# Pull a specific stable version (recommended for production)
docker pull ghcr.io/faisalthaheem/open-lpr:v1.0.0
# List available versions
curl -s "https://api.github.com/repos/faisalthaheem/open-lpr/releases" | grep -o '"tag_name": "v[^"]*"' | head -10
# Pull the latest stable release (excluding pre-releases)
LATEST_STABLE=$(curl -s "https://api.github.com/repos/faisalthaheem/open-lpr/releases" | grep -o '"tag_name": "v[^"]*"' | grep -v 'rc\|beta\|alpha' | head -1 | sed 's/"tag_name": "\(.*\)"/\1/')
docker pull ghcr.io/faisalthaheem/open-lpr:$LATEST_STABLE

# Pull the latest image (mainline, may be unstable)
docker pull ghcr.io/faisalthaheem/open-lpr:latest
# Pull a specific pre-release version
docker pull ghcr.io/faisalthaheem/open-lpr:v1.1.0-beta.1

🚨 Important: Individual compose files are now deprecated. Please use the new profile-based approach with the main `docker-compose.yml` file.
This project provides a unified Docker Compose file with profiles for different deployment scenarios. For detailed deployment instructions, see the Quick Start section and Docker Deployment Guide.
# Core infrastructure + CPU inference
docker compose --profile core --profile cpu up -d
# Core infrastructure + NVIDIA inference
docker compose --profile core --profile nvidia-cuda up -d
# Core infrastructure + AMD Vulkan inference
docker compose --profile core --profile amd-vulkan up -d
# Core services only (for external API)
docker compose --profile core up -d
# Stop all services
docker compose down

For local inference deployments, copy and configure the environment file:
# Copy the example environment file
cp .env.llamacpp.example .env.llamacpp
# Edit with your settings
nano .env.llamacpp

For external API deployments:
# Copy the example environment file
cp .env.example .env
# Edit with your API settings
nano .env

After starting the services:
- OpenLPR Application: http://lpr.localhost
- Traefik Dashboard: http://traefik.localhost
- Prometheus: http://prometheus.localhost
- Grafana: http://grafana.localhost (admin/admin)
- Blackbox Exporter: http://blackbox.localhost
- Canary Service: http://canary.localhost
For comprehensive deployment instructions, including production configurations, see DOCKER_DEPLOYMENT.md and README-DOCKER-PROFILES.md.
The project includes a GitHub Actions workflow (.github/workflows/docker-publish.yml) that:
- Triggers on:
  - Push to main/master branch
  - Creation of version tags (v*)
  - Pull requests to main/master
- Builds the Docker image for multiple architectures:
  - linux/amd64
  - linux/arm64
- Publishes to GitHub Container Registry with tags:
  - Branch name (e.g., `main`)
  - Semantic version tags (e.g., `v1.0.0`, `v1.0`, `v1`)
  - `latest` tag for the main branch
- Generates SBOM (Software Bill of Materials) for security scanning
Click to expand
open-lpr/
├── manage.py  # Django management script
├── requirements.txt  # Python dependencies
├── .env.example  # Environment variables template
├── .env  # Environment variables (create from .env.example)
├── .env.llamacpp.example  # LlamaCpp environment variables template
├── .env.llamacpp  # LlamaCpp environment variables (create from .env.llamacpp.example)
├── .gitignore  # Git ignore file
├── .dockerignore  # Docker ignore file
├── API_DOCUMENTATION.md  # Detailed REST API documentation
├── README-DOCKER-PROFILES.md  # Docker profiles guide
├── README-llamacpp.md  # LlamaCpp deployment guide
├── DOCKER_DEPLOYMENT.md  # Docker deployment guide
├── PROMETHEUS_METRICS.md  # Prometheus metrics documentation
├── CHANGELOG.md  # Project changelog
├── LICENSE.md  # License file
├── test_api.py  # API testing script
├── test_setup.py  # Test setup utilities
├── test-llamacpp-integration.py  # LlamaCpp integration test script
├── test_metrics.py  # Metrics testing script
├── verify-monitoring-setup.sh  # Monitoring setup verification script
├── docker-compose.yml  # Profile-based Docker Compose configuration
├── docker-compose-llamacpp-cpu.yml  # [DEPRECATED] CPU-based LlamaCpp Docker Compose
├── docker-compose-llamacpp-amd-vulcan.yml  # [DEPRECATED] AMD Vulkan GPU LlamaCpp Docker Compose
├── docker-entrypoint.sh  # Docker entrypoint script
├── Dockerfile  # Docker image definition
├── start-llamacpp-cpu.sh  # LlamaCpp CPU startup script
├── build-docker-image.sh  # Docker image build script
├── lpr_project/  # Django project settings
│   ├── __init__.py
│   ├── settings.py  # Django configuration
│   ├── urls.py  # Project URL patterns
│   └── wsgi.py  # WSGI configuration
├── lpr_app/  # Main application
│   ├── __init__.py
│   ├── admin.py  # Django admin configuration
│   ├── apps.py  # Django app configuration
│   ├── models.py  # Database models
│   ├── views.py  # View functions and API endpoints
│   ├── views_refactored.py  # Refactored view functions
│   ├── urls.py  # App URL patterns
│   ├── forms.py  # Django forms
│   ├── metrics.py  # Application metrics
│   ├── services/  # Business logic
│   │   ├── __init__.py
│   │   ├── qwen_client.py  # Qwen3-VL API client
│   │   ├── image_processor.py  # Image processing utilities
│   │   ├── bbox_visualizer.py  # Bounding box visualization
│   │   ├── api_service.py  # API service layer
│   │   ├── file_service.py  # File handling service
│   │   └── image_processing_service.py  # Image processing service
│   ├── utils/  # Utility functions
│   │   ├── __init__.py
│   │   ├── metrics_helpers.py  # Metrics helper functions
│   │   ├── response_helpers.py  # Response helper functions
│   │   └── validators.py  # Validation utilities
│   ├── views/  # View modules
│   │   ├── __init__.py
│   │   ├── api_views.py  # API view functions
│   │   ├── file_views.py  # File handling views
│   │   └── web_views.py  # Web interface views
│   ├── management/  # Django management commands
│   │   ├── __init__.py
│   │   └── commands/
│   │       ├── __init__.py
│   │       ├── setup_project.py
│   │       └── inspect_image.py
│   ├── static/  # Static files
│   │   └── lpr_app/
│   │       └── images/
│   │           ├── favicon.ico
│   │           └── favicon.svg
│   └── migrations/  # Database migrations
│       ├── __init__.py
│       └── 0001_initial.py
├── media/  # Uploaded images
│   ├── uploads/  # Original images
│   └── processed/  # Processed images
├── container-data/  # Docker container data persistence
├── container-media/  # Docker container media persistence
├── staticfiles/  # Collected static files
├── templates/  # HTML templates
│   ├── base.html  # Base template
│   └── lpr_app/  # App-specific templates
│       ├── base.html
│       ├── image_detail.html
│       ├── image_list.html
│       ├── results.html
│       └── upload.html
├── docs/  # Documentation
│   ├── LLAMACPP_RESOURCES.md  # LlamaCpp and ROCm resources
│   ├── open-lpr-index.png
│   ├── open-lpr-detection-result.png
│   ├── open-lpr-detection-details.png
│   ├── open-lpr-processed-image.png
│   └── RELEASE_NOTES_v1.0.1.md
├── nginx/  # Nginx configuration
│   ├── nginx.conf  # Nginx reverse proxy configuration
│   └── ssl/  # SSL certificates directory
├── traefik/  # Traefik reverse proxy configuration
│   ├── traefik.yml  # Traefik static configuration
│   ├── dynamic/  # Dynamic configuration directory
│   │   └── config.yml  # Dynamic routing configuration
│   └── ssl/  # SSL certificates directory
├── prometheus/  # Prometheus monitoring configuration
│   └── prometheus.yml  # Prometheus configuration
├── grafana/  # Grafana visualization configuration
│   └── provisioning/  # Auto-provisioning configuration
│       ├── datasources/  # Data source configuration
│       │   └── prometheus.yml
│       └── dashboards/  # Dashboard definitions
│           ├── dashboards.yml
│           ├── canary/
│           │   └── lpr-canary-dashboard.json
│           └── default/
│               └── lpr-app-dashboard.json
├── blackbox/  # Blackbox exporter configuration
│   ├── blackbox.yml  # Blackbox probing configuration
│   └── jeep.jpg  # Test image for blackbox probing
├── canary/  # Canary service for monitoring
│   ├── canary.py  # Canary service implementation
│   ├── Dockerfile  # Canary service Dockerfile
│   └── jeep.jpg  # Test image for canary service
├── logs/  # Application logs
├── .github/  # GitHub workflows
│   └── workflows/  # CI/CD configurations
├── plans/  # Project planning documents
└── vllm-rocm/  # vLLM ROCm configuration
Click to expand
Use the provided test script to verify API functionality:
# Test with default image locations
python test_api.py
# Test with specific image
python test_api.py /path/to/your/image.jpg

Click to expand
# Run Django tests
python manage.py test
# Run with coverage
pip install coverage
coverage run --source='.' manage.py test
coverage report

# Create new migrations
python manage.py makemigrations lpr_app
# Apply migrations
python manage.py migrate

# Collect static files for production
python manage.py collectstatic --noinput

Click to expand
🚨 Critical Production Requirement: Always use tagged releases for production deployments. The mainline branch may contain experimental features and be unstable. Never use `latest` tags or the main branch in production environments.
# Find the latest stable release
curl -s "https://api.github.com/repos/faisalthaheem/open-lpr/releases" | grep -o '"tag_name": "v[^"]*"' | grep -v 'rc\|beta\|alpha' | head -1
# Clone a specific stable version
git clone --branch v1.0.0 https://github.com/faisalthaheem/open-lpr.git
cd open-lpr
# Or checkout an existing repository to a stable version
git fetch --tags
git checkout v1.0.0

# Automatically get the latest stable release (excluding pre-releases)
LATEST_STABLE=$(curl -s "https://api.github.com/repos/faisalthaheem/open-lpr/releases" | grep -o '"tag_name": "v[^"]*"' | grep -v 'rc\|beta\|alpha' | head -1 | sed 's/"tag_name": "\(.*\)"/\1/')
git clone --branch $LATEST_STABLE https://github.com/faisalthaheem/open-lpr.git
cd open-lpr

# Use a specific versioned Docker image (recommended)
VERSION=v1.0.0
docker pull ghcr.io/faisalthaheem/open-lpr:$VERSION
# Update your docker-compose.yml to use the versioned image
sed -i "s|ghcr.io/faisalthaheem/open-lpr:latest|ghcr.io/faisalthaheem/open-lpr:$VERSION|g" docker-compose.yml
# Or automatically use the latest stable release
LATEST_STABLE=$(curl -s "https://api.github.com/repos/faisalthaheem/open-lpr/releases" | grep -o '"tag_name": "v[^"]*"' | grep -v 'rc\|beta\|alpha' | head -1 | sed 's/"tag_name": "\(.*\)"/\1/')
docker pull ghcr.io/faisalthaheem/open-lpr:$LATEST_STABLE

- Set DEBUG=False in `.env`
- Configure ALLOWED_HOSTS with your domain
- Set up production database (PostgreSQL recommended)
- Configure static file serving (nginx/AWS S3)
- Set up media file serving (nginx/AWS S3)
- Use HTTPS with SSL certificate
- Pin to specific versions (see version selection above)
- Select a stable version (not `latest` or the main branch)
- Pin both source code and Docker images to that version
- Test thoroughly in staging environment
- Deploy to production with pinned versions
- Monitor for issues before considering upgrades
For Source Code:
# In your deployment script
VERSION=v1.0.0
git clone --branch $VERSION https://github.com/faisalthaheem/open-lpr.git

For Docker:
# In docker-compose.yml (production)
services:
  openlpr:
    image: ghcr.io/faisalthaheem/open-lpr:v1.0.0  # Pinned version, not latest
    # ... other configuration

- Development: SQLite database, DEBUG=True, mainline branch acceptable
- Staging: PostgreSQL, DEBUG=False, use same version as production
- Production: PostgreSQL, DEBUG=False, HTTPS required, always use tagged releases
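For the PostgreSQL recommendation above, the switch is ultimately a Django `DATABASES` setting. A minimal sketch of the kind of override this implies — the `POSTGRES_*` variable names are hypothetical and not defined by this project's `.env` templates, and the project's actual `settings.py` may wire this up differently:

```python
# Hypothetical production database override (sketch only; the environment
# variable names below are illustrative, not part of this project's templates).
import os

if os.environ.get("DATABASE_ENGINE") == "postgres":
    DATABASES = {
        "default": {
            "ENGINE": "django.db.backends.postgresql",
            "NAME": os.environ["POSTGRES_DB"],
            "USER": os.environ["POSTGRES_USER"],
            "PASSWORD": os.environ["POSTGRES_PASSWORD"],
            "HOST": os.environ.get("POSTGRES_HOST", "db"),
            "PORT": os.environ.get("POSTGRES_PORT", "5432"),
        }
    }
```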
- Check for new stable releases:

      curl -s "https://api.github.com/repos/faisalthaheem/open-lpr/releases" | grep -o '"tag_name": "v[^"]*"' | grep -v 'rc\|beta\|alpha' | head -5

- Review release notes for breaking changes
- Test upgrade in staging with the new version
- Backup production data
- Deploy with pinned versions following the version selection steps above
- Monitor and rollback if needed
🚨 Never use `latest` tags or the mainline branch in production. Always use specific version tags.
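If you script these upgrade checks, the shell pipeline above can also be expressed in Python against the public GitHub releases API — a minimal sketch:

```python
# Find the newest stable (non pre-release, non draft) release tag for the repo.
from typing import Optional

import requests


def latest_stable_tag(repo: str = "faisalthaheem/open-lpr") -> Optional[str]:
    releases = requests.get(
        f"https://api.github.com/repos/{repo}/releases", timeout=10
    ).json()
    for release in releases:  # the API returns newest releases first
        if not release.get("prerelease") and not release.get("draft"):
            return release["tag_name"]
    return None


if __name__ == "__main__":
    print(latest_stable_tag())
```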
Click to expand
- API Connection Failed
  - Check QWEN_API_KEY in `.env`
  - Verify QWEN_BASE_URL is accessible
  - Check network connectivity
- Image Upload Failed
  - Verify file format (JPEG/PNG/BMP only)
  - Check file size (< 10MB)
  - Ensure media directory permissions
- Processing Errors
  - Check Django logs: `tail -f django.log`
  - Verify API response format
  - Check image processing dependencies
- Static Files Not Loading
  - Run `python manage.py collectstatic`
  - Check STATIC_URL in settings
  - Verify web server static file configuration
Application logs are written to:
- Development: Console and `django.log`
- Production: Configured logging destination

Log levels:
- `INFO`: General application flow
- `ERROR`: API failures and processing errors
- `DEBUG`: Detailed debugging information
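For reference, this behaviour corresponds to a standard Django `LOGGING` configuration along these lines — a minimal sketch; the project's actual `settings.py` may differ:

```python
# Illustrative LOGGING setup matching the description above (console plus
# django.log); not necessarily identical to the project's settings.py.
LOGGING = {
    "version": 1,
    "disable_existing_loggers": False,
    "handlers": {
        "console": {"class": "logging.StreamHandler"},
        "file": {"class": "logging.FileHandler", "filename": "django.log"},
    },
    "root": {
        "handlers": ["console", "file"],
        "level": "INFO",  # raise to DEBUG for detailed debugging information
    },
}
```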
Click to expand
We welcome contributions! Please follow these guidelines:
- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Make your changes
- Add tests if applicable
- Ensure all tests pass (`python manage.py test`)
- Commit your changes (`git commit -m 'Add some amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
- Follow PEP 8 for Python code
- Use meaningful variable and function names
- Add docstrings to functions and classes
- Keep commits small and focused
When reporting issues, please include:
- Detailed description of the problem
- Steps to reproduce
- Expected vs. actual behavior
- Environment details (OS, Python version, etc.)
- Relevant logs or error messages
Click to expand
This project is licensed under the Apache License 2.0 - see the LICENSE file for details.
Click to expand
For issues and questions:
- Check the troubleshooting section
- Review application logs
- Create an issue with detailed information
- Include error messages and steps to reproduce
Click to expand
Click to expand
For specialized deployment scenarios and additional resources:
- Docker Profiles Guide - New profile-based Docker Compose setup (Recommended)
- LlamaCpp and ROCm Resources - Important URLs for local LlamaCpp deployment
- README-llamacpp.md - Local inference with LlamaCpp server
- Docker Deployment Guide - Comprehensive Docker deployment instructions
- API Documentation - Complete REST API reference
Made with ❤️ by Open LPR Team



