
πŸš— OPEN LPR - License Plate Recognition System


A powerful Django-based web application that uses the Qwen3-VL AI model to detect and recognize license plates in images, with advanced OCR capabilities.

🚨 Important Stability Notice: For production deployments, we strongly recommend using tagged releases instead of the mainline branch. The mainline may contain experimental features and be under active development. See the Production Deployment section for guidance on using stable tagged versions.

πŸ“‘ Table of Contents

πŸš€ Live Demo 🌟 Visual Showcase ✨ Features
πŸ› οΈ Technology Stack πŸš€ Quick Start βš™οΈ Configuration
πŸ“– Usage πŸ”Œ API Endpoints 🐳 Docker Deployment
πŸ“ File Structure πŸ§ͺ Testing πŸ”§ Development
πŸš€ Production Deployment πŸ› Troubleshooting 🀝 Contributing
πŸ“„ License πŸ†˜ Support πŸ™ Acknowledgments
πŸ“š Additional Documentation

πŸš€ Live Demo

Try the live demo of Open LPR at: https://rest-openlpr.computedsynergy.com/

Experience the license plate recognition system in action without any installation required!

🌟 Visual Showcase

Screenshots of the main interface, the detection results, the detection details, and the processed image with bounding boxes are available in the docs/ directory.

✨ Features

  • πŸ€– AI-Powered Detection: Uses qwen3-vl-4b-instruct vision-language model for accurate license plate recognition
  • πŸ” Advanced OCR Integration: Extracts text from detected license plates with confidence scores
  • 🎯 Bounding Box Visualization: Draws colored boxes around detected plates and OCR text
  • πŸ“€ Drag & Drop Upload: Modern, user-friendly file upload interface
  • πŸ’Ύ Permanent Storage: All uploaded and processed images are saved permanently
  • πŸ”„ Side-by-Side Comparison: View original and processed images together
  • πŸ”Ž Search & Filter: Browse and search through processing history
  • πŸ“± Responsive Design: Works on desktop, tablet, and mobile devices
  • 🐳 Docker Support: Easy deployment with Docker and Docker Compose
  • πŸ”Œ REST API: Full API for programmatic access

πŸ› οΈ Technology Stack

Backend | AI Model   | Frontend  | Database   | Deployment
--------|------------|-----------|------------|---------------
Django  | Qwen3-VL   | Bootstrap | SQLite     | Docker
Python  | OpenAI API | HTML5     | PostgreSQL | GitHub Actions

πŸš€ Quick Start


Docker Deployment (Recommended)

The quickest way to get started is with Docker using the new profile-based compose file, which includes everything needed for local inference without requiring any external API endpoints.

🚨 Stability Notice: For production environments, we strongly recommend using tagged releases instead of the mainline branch. See the Production Deployment section for stable version instructions.

🚨 Important Notice: Individual compose files (docker-compose-llamacpp-*.yml) are now deprecated. Please use the new profile-based approach with the main docker-compose.yml file.

Option 1: AMD Vulkan GPU Version (Fastest Local Inference)

For users with AMD GPUs that support Vulkan:

# Clone the repository
git clone https://github.com/faisalthaheem/open-lpr.git
cd open-lpr

# Create environment file from template
cp .env.llamacpp.example .env.llamacpp

# Edit the environment file with your settings
nano .env.llamacpp

# Create necessary directories
mkdir -p model_files model_files_cache container-data container-media staticfiles

# Start the application with AMD Vulkan GPU support
docker compose --profile core --profile amd-vulkan up -d

# Check the logs to ensure everything is running correctly
docker compose logs -f

Option 2: CPU Version (Universal Compatibility)

For users without compatible GPUs or for testing purposes:

# Clone the repository
git clone https://github.com/faisalthaheem/open-lpr.git
cd open-lpr

# Create environment file from template
cp .env.llamacpp.example .env.llamacpp

# Edit the environment file with your settings
nano .env.llamacpp

# Create necessary directories
mkdir -p model_files model_files_cache container-data container-media staticfiles

# Start the application with CPU support
docker compose --profile core --profile cpu up -d

# Check the logs to ensure everything is running correctly
docker compose logs -f

Option 3: NVIDIA CUDA GPU Version

For users with NVIDIA GPUs that support CUDA:

# Clone the repository
git clone https://github.com/faisalthaheem/open-lpr.git
cd open-lpr

# Create environment file from template
cp .env.llamacpp.example .env.llamacpp

# Edit the environment file with your settings
nano .env.llamacpp

# Create necessary directories
mkdir -p model_files model_files_cache container-data container-media staticfiles

# Start the application with NVIDIA CUDA GPU support
docker compose --profile core --profile nvidia-cuda up -d

# Check the logs to ensure everything is running correctly
docker compose logs -f

Option 4: External API Only

For users who want to use an external OpenAI-compatible API endpoint:

# Clone the repository
git clone https://github.com/faisalthaheem/open-lpr.git
cd open-lpr

# Create environment file from template
cp .env.example .env

# Edit the environment file with your API settings
nano .env

# Create necessary directories
mkdir -p container-data container-media staticfiles

# Start the application (core services only)
docker compose --profile core up -d

# Check the logs to ensure everything is running correctly
docker compose logs -f

Docker Compose Files

🚨 Deprecation Notice: Individual compose files (docker-compose-llamacpp-*.yml) are now deprecated and will be removed in a future release. Please migrate to the new profile-based approach using the main docker-compose.yml file.

πŸ†• Profile-Based Docker Compose (Recommended)

The main docker-compose.yml now uses the merge design pattern with profiles for flexible deployment:

Profiles Available:

  • core: Core infrastructure (Traefik, OpenLPR, Prometheus, Grafana, Blackbox Exporter, Canary)
  • cpu: CPU-based LlamaCpp inference
  • amd-vulkan: AMD Vulkan GPU inference
  • nvidia-cuda: NVIDIA CUDA GPU inference

Usage Examples:

# Core infrastructure + CPU inference
docker compose --profile core --profile cpu up -d

# Core infrastructure + NVIDIA inference
docker compose --profile core --profile nvidia-cuda up -d

# Core infrastructure + AMD Vulkan inference
docker compose --profile core --profile amd-vulkan up -d

# Core services only (for external API)
docker compose --profile core up -d

# Stop all services
docker compose down

Access Points:

For the service endpoints and detailed profile documentation, see README-DOCKER-PROFILES.md.

Deprecated Individual Compose Files

⚠️ Deprecated: The following compose files are deprecated and will be removed in a future release. Please migrate to the profile-based approach above.

  1. docker-compose-llamacpp-amd-vulcan.yml (Deprecated)

    • Replaced by: docker compose --profile core --profile amd-vulkan up -d
    • Was: Full local deployment with AMD GPU acceleration using Vulkan
  2. docker-compose-llamacpp-cpu.yml (Deprecated)

    • Replaced by: docker compose --profile core --profile cpu up -d
    • Was: Full local deployment using CPU for inference
  3. docker-compose.yml (Legacy external API mode)

    • Replaced by: docker compose --profile core up -d
    • Was: OpenLPR deployment with external API endpoint

Manual Installation

For development or custom deployments:

  1. Prerequisites

    • Python 3.8+
    • pip package manager
    • Qwen3-VL API access
  2. Clone the repository

    For production/stable deployments, use a tagged release:

    # List available releases
    git ls-remote --tags https://github.com/faisalthaheem/open-lpr.git
    
    # Clone a specific stable version (recommended for production)
    git clone --branch v1.0.0 https://github.com/faisalthaheem/open-lpr.git
    cd open-lpr
    
    # Or clone the latest stable release
    git clone --branch $(git ls-remote --tags --refs https://github.com/faisalthaheem/open-lpr.git | sed 's/.*\///' | grep -v 'rc\|beta\|alpha' | sort -V | tail -n1) https://github.com/faisalthaheem/open-lpr.git
    cd open-lpr

    For development/testing (may be unstable):

    git clone https://github.com/faisalthaheem/open-lpr.git
    cd open-lpr
  3. Create virtual environment

    python -m venv venv
    source venv/bin/activate  # On Windows: venv\Scripts\activate
  4. Install dependencies

    pip install -r requirements.txt
  5. Configure environment variables

    cp .env.example .env
    # Edit .env with your settings
  6. Set up database

    python manage.py makemigrations
    python manage.py migrate
  7. Create superuser (optional)

    python manage.py createsuperuser
  8. Run development server

    python manage.py runserver
  9. Access the application: open http://127.0.0.1:8000 in your browser

βš™οΈ Configuration


Development Environment

For local development (running Django directly):

Create a .env file based on .env.example:

# Django Settings
SECRET_KEY=your-secret-key-here
DEBUG=True
ALLOWED_HOSTS=localhost,127.0.0.1

# Qwen3-VL API Configuration
QWEN_API_KEY=your-qwen-api-key
QWEN_BASE_URL=https://your-open-api-compatible-endpoint.com/v1
QWEN_MODEL=qwen3-vl-4b-instruct

# File Upload Settings
UPLOAD_FILE_MAX_SIZE=10485760  # 10MB
MAX_BATCH_SIZE=10
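
For reference, here is a minimal sketch of how such variables are typically consumed in a Django settings.py. This is an illustration only and assumes python-dotenv is installed; the project's actual lpr_project/settings.py may read them differently:

# Illustrative sketch only -- not the project's actual settings code
import os
from dotenv import load_dotenv  # assumes python-dotenv is installed

load_dotenv()  # reads .env from the project root

SECRET_KEY = os.environ["SECRET_KEY"]                           # required
DEBUG = os.environ.get("DEBUG", "False") == "True"              # string -> bool
ALLOWED_HOSTS = os.environ.get("ALLOWED_HOSTS", "").split(",")

QWEN_API_KEY = os.environ.get("QWEN_API_KEY", "")
QWEN_BASE_URL = os.environ.get("QWEN_BASE_URL", "")
QWEN_MODEL = os.environ.get("QWEN_MODEL", "qwen3-vl-4b-instruct")

UPLOAD_FILE_MAX_SIZE = int(os.environ.get("UPLOAD_FILE_MAX_SIZE", 10 * 1024 * 1024))
MAX_BATCH_SIZE = int(os.environ.get("MAX_BATCH_SIZE", 10))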

Docker Environment with LlamaCpp

For local LlamaCpp inference deployment:

Create a .env.llamacpp file based on .env.llamacpp.example:

# HuggingFace Token
HF_TOKEN=hf_your_huggingface_token_here

# Model Configuration
MODEL_REPO=unsloth/Qwen3-VL-4B-Instruct-GGUF
MODEL_FILE=Qwen3-VL-4B-Instruct-Q5_K_M.gguf
MMPROJ_URL=https://huggingface.co/unsloth/Qwen3-VL-4B-Instruct-GGUF/resolve/main/mmproj-BF16.gguf

# Django Settings
SECRET_KEY=your-secret-key-here
DEBUG=False
ALLOWED_HOSTS=localhost,127.0.0.1,0.0.0.0

# File Upload Settings
UPLOAD_FILE_MAX_SIZE=10485760  # 10MB
MAX_BATCH_SIZE=10

# Database Configuration
DATABASE_PATH=/app/data/db.sqlite3

# Optional: Superuser creation
DJANGO_SUPERUSER_USERNAME=admin
DJANGO_SUPERUSER_EMAIL=[email protected]
DJANGO_SUPERUSER_PASSWORD=your-secure-password

# Qwen3-VL API Configuration
QWEN_API_KEY=sk-llamacpp-local
# When using a remote OpenAI-compatible endpoint
# QWEN_BASE_URL=https://your-api-endpoint.io/v1
# When running the bundled llamacpp on CPU (default)
QWEN_BASE_URL=http://llamacpp-cpu:8000/v1
# When running the bundled llamacpp on AMD GPUs
# QWEN_BASE_URL=http://llamacpp-amd-vulkan:8000/v1
# When running the bundled llamacpp on NVIDIA GPUs
# QWEN_BASE_URL=http://llamacpp-nvidia-cuda:8000/v1
QWEN_MODEL=Qwen3-VL-4B-Instruct
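
Whichever endpoint QWEN_BASE_URL points at is spoken to over the OpenAI-compatible chat completions API. The sketch below illustrates that protocol with a direct smoke-test call; it is an assumption-laden example (the URL, model name, and prompt are placeholders), not the project's actual services/qwen_client.py:

# Illustrative smoke test against an OpenAI-compatible endpoint
import base64
from openai import OpenAI  # pip install openai

client = OpenAI(
    api_key="sk-llamacpp-local",          # matches QWEN_API_KEY above
    base_url="http://localhost:8000/v1",  # placeholder; use the llamacpp-* service URL from inside compose
)

with open("license_plate.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

response = client.chat.completions.create(
    model="Qwen3-VL-4B-Instruct",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Read the license plate in this image."},
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)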

For detailed LlamaCpp deployment instructions, see README-llamacpp.md.

πŸ“– Usage


Uploading Images

  1. Drag & Drop: Simply drag an image file onto the upload area
  2. Click to Browse: Click the upload area to select a file
  3. File Validation (see the sketch after this list):
    • Supported formats: JPEG, PNG, BMP
    • Maximum size: 10MB
  4. Processing: Click "Analyze License Plates" to start detection
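
The validation rules above can be approximated client-side before uploading. A minimal sketch, assuming Pillow is available; the server performs its own checks (in lpr_app/utils/validators.py), which may differ:

# Illustrative pre-upload validation, mirroring the rules above
import os
from PIL import Image  # pip install Pillow

ALLOWED_FORMATS = {"JPEG", "PNG", "BMP"}
MAX_SIZE = 10 * 1024 * 1024  # 10MB, matching UPLOAD_FILE_MAX_SIZE

def validate_upload(path: str) -> None:
    if os.path.getsize(path) > MAX_SIZE:
        raise ValueError("File exceeds the 10MB limit")
    with Image.open(path) as img:
        if img.format not in ALLOWED_FORMATS:
            raise ValueError(f"Unsupported format: {img.format}")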

Viewing Results

After processing, you'll see:

  • Detection Summary: Number of plates and OCR texts found
  • Image Comparison: Side-by-side view of original and processed images
  • Detection Details:
    • License plate coordinates and confidence
    • OCR text results with confidence scores
    • Bounding box coordinates for all detections
  • Download Options: Download both original and processed images

Browsing History

Access the "History" page to:

  • Search: Filter by filename
  • Date Range: Filter by upload date
  • Status Filter: View by processing status
  • Pagination: Navigate through large numbers of uploads

πŸ”Œ API Endpoints


Web Endpoints

  • GET / - Home page with upload form
  • POST /upload/ - Upload and process image
  • GET /result/<int:image_id>/ - View processing results for a specific image
  • GET /images/ - Browse image history with search and filtering
  • GET /image/<int:image_id>/ - View detailed information about a specific image
  • POST /progress/ - Check processing status (AJAX endpoint)
  • GET /download/<int:image_id>/<str:image_type>/ - Download original or processed images
  • GET /health/ - API health check endpoint
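
The health endpoint is convenient for liveness probes. A minimal check (the exact response body is not documented here, so only the status code is asserted):

# Minimal liveness check against the health endpoint
import requests

response = requests.get("http://localhost:8000/health/", timeout=5)
assert response.status_code == 200, f"Unhealthy: {response.status_code}"
print("OK:", response.text[:200])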

REST API Endpoints

  • POST /api/v1/ocr/ - Upload an image and receive OCR results synchronously

Response Format

The LPR REST API returns JSON in this format:

{
    "success": true,
    "image_id": 123,
    "filename": "example.jpg",
    "processing_time_ms": 2450,
    "results": {
        "detections": [
            {
                "plate_id": "plate1",
                "plate": {
                    "confidence": 0.85,
                    "coordinates": {
                        "x1": 100,
                        "y1": 200,
                        "x2": 250,
                        "y2": 250
                    }
                },
                "ocr": [
                    {
                        "text": "ABC123",
                        "confidence": 0.92,
                        "coordinates": {
                            "x1": 105,
                            "y1": 210,
                            "x2": 245,
                            "y2": 240
                        }
                    }
                ]
            }
        ]
    },
    "summary": {
        "total_plates": 1,
        "total_ocr_texts": 1
    },
    "processing_timestamp": "2023-12-07T15:30:45.123456"
}

Error Response Format

{
    "success": false,
    "error": "No image file provided",
    "error_code": "MISSING_IMAGE"
}

Usage Examples

Python Example

import requests

# API endpoint
url = "http://localhost:8000/api/v1/ocr/"

# Image file to upload
image_path = "license_plate.jpg"

# Read and upload the image
with open(image_path, 'rb') as f:
    files = {'image': f}
    response = requests.post(url, files=files)

# Check response
if response.status_code == 200:
    result = response.json()
    if result['success']:
        print(f"Found {result['summary']['total_plates']} license plates")
        for detection in result['results']['detections']:
            for ocr in detection['ocr']:
                print(f"License plate text: {ocr['text']} (confidence: {ocr['confidence']:.2f})")
    else:
        print(f"Processing failed: {result['error']}")
else:
    print(f"HTTP Error: {response.status_code}")
    print(response.text)

cURL Example

# Upload image and get OCR results
curl -X POST \
  -F "image=@license_plate.jpg" \
  http://localhost:8000/api/v1/ocr/
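
The server already returns a processed image with boxes drawn, but the coordinates in the JSON response can also be rendered client-side. A hedged Pillow sketch (illustrative only; colors and layout will differ from the server's bbox_visualizer.py):

# Illustrative client-side rendering of detections from the JSON response
from PIL import Image, ImageDraw  # pip install Pillow

def draw_detections(image_path: str, result: dict, out_path: str) -> None:
    img = Image.open(image_path).convert("RGB")
    draw = ImageDraw.Draw(img)
    for det in result["results"]["detections"]:
        p = det["plate"]["coordinates"]
        draw.rectangle((p["x1"], p["y1"], p["x2"], p["y2"]), outline="red", width=3)
        for ocr in det["ocr"]:
            o = ocr["coordinates"]
            draw.rectangle((o["x1"], o["y1"], o["x2"], o["y2"]), outline="green", width=2)
            label = f"{ocr['text']} ({ocr['confidence']:.2f})"
            draw.text((o["x1"], max(0, o["y1"] - 12)), label, fill="green")
    img.save(out_path)

# Example: draw_detections("license_plate.jpg", response.json(), "annotated.jpg")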

🐳 Docker Deployment


The project includes automated Docker image building and publishing to GitHub Container Registry (ghcr.io).

Using the Pre-built Docker Image

The Docker image is automatically built and published to GitHub Container Registry when code is pushed to the main branch or when tags are created.

🚨 Production Recommendation: For production deployments, always use versioned tags instead of latest. The latest tag may contain unstable features from the mainline branch.

Production Deployment (Recommended)

# Pull a specific stable version (recommended for production)
docker pull ghcr.io/faisalthaheem/open-lpr:v1.0.0

# List available versions
curl -s "https://api.github.com/repos/faisalthaheem/open-lpr/releases" | grep -o '"tag_name": "v[^"]*"' | head -10

# Pull the latest stable release (excluding pre-releases)
LATEST_STABLE=$(curl -s "https://api.github.com/repos/faisalthaheem/open-lpr/releases" | grep -o '"tag_name": "v[^"]*"' | grep -v 'rc\|beta\|alpha' | head -1 | sed 's/"tag_name": "\(.*\)"/\1/')
docker pull ghcr.io/faisalthaheem/open-lpr:$LATEST_STABLE

Development/Testing (May be unstable)

# Pull the latest image (mainline, may be unstable)
docker pull ghcr.io/faisalthaheem/open-lpr:latest

# Pull a specific pre-release version
docker pull ghcr.io/faisalthaheem/open-lpr:v1.1.0-beta.1

Docker Compose Deployment

🚨 Important: Individual compose files are now deprecated. Please use the new profile-based approach with the main docker-compose.yml file.

This project provides a unified Docker Compose file with profiles for different deployment scenarios. For detailed deployment instructions, see the Quick Start section and Docker Deployment Guide.

Quick Reference

# Core infrastructure + CPU inference
docker compose --profile core --profile cpu up -d

# Core infrastructure + NVIDIA inference
docker compose --profile core --profile nvidia-cuda up -d

# Core infrastructure + AMD Vulkan inference
docker compose --profile core --profile amd-vulkan up -d

# Core services only (for external API)
docker compose --profile core up -d

# Stop all services
docker compose down

Environment Configuration

For local inference deployments, copy and configure the environment file:

# Copy the example environment file
cp .env.llamacpp.example .env.llamacpp

# Edit with your settings
nano .env.llamacpp

For external API deployments:

# Copy the example environment file
cp .env.example .env

# Edit with your API settings
nano .env

Access Points

After starting the services, see DOCKER_DEPLOYMENT.md and README-DOCKER-PROFILES.md for the service endpoints and comprehensive deployment instructions, including production configurations.

CI/CD Workflow

The project includes a GitHub Actions workflow (.github/workflows/docker-publish.yml) that:

  1. Triggers on:

    • Push to main/master branch
    • Creation of version tags (v*)
    • Pull requests to main/master
  2. Builds the Docker image for multiple architectures:

    • linux/amd64
    • linux/arm64
  3. Publishes to GitHub Container Registry with tags:

    • Branch name (e.g., main)
    • Semantic version tags (e.g., v1.0.0, v1.0, v1)
    • latest tag for the main branch
  4. Generates SBOM (Software Bill of Materials) for security scanning

πŸ“ File Structure

open-lpr/
β”œβ”€β”€ manage.py                    # Django management script
β”œβ”€β”€ requirements.txt              # Python dependencies
β”œβ”€β”€ .env.example                # Environment variables template
β”œβ”€β”€ .env                         # Environment variables (create from .env.example)
β”œβ”€β”€ .env.llamacpp.example       # LlamaCpp environment variables template
β”œβ”€β”€ .env.llamacpp               # LlamaCpp environment variables (create from .env.llamacpp.example)
β”œβ”€β”€ .gitignore                   # Git ignore file
β”œβ”€β”€ .dockerignore               # Docker ignore file
β”œβ”€β”€ API_DOCUMENTATION.md        # Detailed REST API documentation
β”œβ”€β”€ README-DOCKER-PROFILES.md   # Docker profiles guide
β”œβ”€β”€ README-llamacpp.md         # LlamaCpp deployment guide
β”œβ”€β”€ DOCKER_DEPLOYMENT.md        # Docker deployment guide
β”œβ”€β”€ PROMETHEUS_METRICS.md      # Prometheus metrics documentation
β”œβ”€β”€ CHANGELOG.md               # Project changelog
β”œβ”€β”€ LICENSE.md                 # License file
β”œβ”€β”€ test_api.py                 # API testing script
β”œβ”€β”€ test_setup.py               # Test setup utilities
β”œβ”€β”€ test-llamacpp-integration.py # LlamaCpp integration test script
β”œβ”€β”€ test_metrics.py             # Metrics testing script
β”œβ”€β”€ verify-monitoring-setup.sh  # Monitoring setup verification script
β”œβ”€β”€ docker-compose.yml           # Profile-based Docker Compose configuration
β”œβ”€β”€ docker-compose-llamacpp-cpu.yml    # [DEPRECATED] CPU-based LlamaCpp Docker Compose
β”œβ”€β”€ docker-compose-llamacpp-amd-vulcan.yml # [DEPRECATED] AMD Vulkan GPU LlamaCpp Docker Compose
β”œβ”€β”€ docker-entrypoint.sh         # Docker entrypoint script
β”œβ”€β”€ Dockerfile                  # Docker image definition
β”œβ”€β”€ start-llamacpp-cpu.sh     # LlamaCpp CPU startup script
β”œβ”€β”€ build-docker-image.sh      # Docker image build script
β”œβ”€β”€ lpr_project/               # Django project settings
β”‚   β”œβ”€β”€ __init__.py
β”‚   β”œβ”€β”€ settings.py             # Django configuration
β”‚   β”œβ”€β”€ urls.py                 # Project URL patterns
β”‚   └── wsgi.py                 # WSGI configuration
β”œβ”€β”€ lpr_app/                   # Main application
β”‚   β”œβ”€β”€ __init__.py
β”‚   β”œβ”€β”€ admin.py                # Django admin configuration
β”‚   β”œβ”€β”€ apps.py                 # Django app configuration
β”‚   β”œβ”€β”€ models.py               # Database models
β”‚   β”œβ”€β”€ views.py                # View functions and API endpoints
β”‚   β”œβ”€β”€ views_refactored.py     # Refactored view functions
β”‚   β”œβ”€β”€ urls.py                 # App URL patterns
β”‚   β”œβ”€β”€ forms.py                # Django forms
β”‚   β”œβ”€β”€ metrics.py              # Application metrics
β”‚   β”œβ”€β”€ services/               # Business logic
β”‚   β”‚   β”œβ”€β”€ __init__.py
β”‚   β”‚   β”œβ”€β”€ qwen_client.py      # Qwen3-VL API client
β”‚   β”‚   β”œβ”€β”€ image_processor.py  # Image processing utilities
β”‚   β”‚   β”œβ”€β”€ bbox_visualizer.py  # Bounding box visualization
β”‚   β”‚   β”œβ”€β”€ api_service.py      # API service layer
β”‚   β”‚   β”œβ”€β”€ file_service.py     # File handling service
β”‚   β”‚   └── image_processing_service.py # Image processing service
β”‚   β”œβ”€β”€ utils/                  # Utility functions
β”‚   β”‚   β”œβ”€β”€ __init__.py
β”‚   β”‚   β”œβ”€β”€ metrics_helpers.py  # Metrics helper functions
β”‚   β”‚   β”œβ”€β”€ response_helpers.py # Response helper functions
β”‚   β”‚   └── validators.py      # Validation utilities
β”‚   β”œβ”€β”€ views/                 # View modules
β”‚   β”‚   β”œβ”€β”€ __init__.py
β”‚   β”‚   β”œβ”€β”€ api_views.py       # API view functions
β”‚   β”‚   β”œβ”€β”€ file_views.py      # File handling views
β”‚   β”‚   └── web_views.py       # Web interface views
β”‚   β”œβ”€β”€ management/             # Django management commands
β”‚   β”‚   β”œβ”€β”€ __init__.py
β”‚   β”‚   └── commands/
β”‚   β”‚       β”œβ”€β”€ __init__.py
β”‚   β”‚       β”œβ”€β”€ setup_project.py
β”‚   β”‚       └── inspect_image.py
β”‚   β”œβ”€β”€ static/                # Static files
β”‚   β”‚   └── lpr_app/
β”‚   β”‚       └── images/
β”‚   β”‚           β”œβ”€β”€ favicon.ico
β”‚   β”‚           └── favicon.svg
β”‚   └── migrations/            # Database migrations
β”‚       β”œβ”€β”€ __init__.py
β”‚       └── 0001_initial.py
β”œβ”€β”€ media/                     # Uploaded images
β”‚   β”œβ”€β”€ uploads/               # Original images
β”‚   └── processed/             # Processed images
β”œβ”€β”€ container-data/             # Docker container data persistence
β”œβ”€β”€ container-media/            # Docker container media persistence
β”œβ”€β”€ staticfiles/               # Collected static files
β”œβ”€β”€ templates/                 # HTML templates
β”‚   β”œβ”€β”€ base.html              # Base template
β”‚   └── lpr_app/               # App-specific templates
β”‚       β”œβ”€β”€ base.html
β”‚       β”œβ”€β”€ image_detail.html
β”‚       β”œβ”€β”€ image_list.html
β”‚       β”œβ”€β”€ results.html
β”‚       └── upload.html
β”œβ”€β”€ docs/                      # Documentation
β”‚   β”œβ”€β”€ LLAMACPP_RESOURCES.md  # LlamaCpp and ROCm resources
β”‚   β”œβ”€β”€ open-lpr-index.png
β”‚   β”œβ”€β”€ open-lpr-detection-result.png
β”‚   β”œβ”€β”€ open-lpr-detection-details.png
β”‚   β”œβ”€β”€ open-lpr-processed-image.png
β”‚   └── RELEASE_NOTES_v1.0.1.md
β”œβ”€β”€ nginx/                     # Nginx configuration
β”‚   β”œβ”€β”€ nginx.conf             # Nginx reverse proxy configuration
β”‚   └── ssl/                   # SSL certificates directory
β”œβ”€β”€ traefik/                   # Traefik reverse proxy configuration
β”‚   β”œβ”€β”€ traefik.yml            # Traefik static configuration
β”‚   β”œβ”€β”€ dynamic/               # Dynamic configuration directory
β”‚   β”‚   └── config.yml         # Dynamic routing configuration
β”‚   └── ssl/                   # SSL certificates directory
β”œβ”€β”€ prometheus/                # Prometheus monitoring configuration
β”‚   └── prometheus.yml         # Prometheus configuration
β”œβ”€β”€ grafana/                   # Grafana visualization configuration
β”‚   └── provisioning/          # Auto-provisioning configuration
β”‚       β”œβ”€β”€ datasources/       # Data source configuration
β”‚       β”‚   └── prometheus.yml
β”‚       └── dashboards/        # Dashboard definitions
β”‚           β”œβ”€β”€ dashboards.yml
β”‚           β”œβ”€β”€ canary/
β”‚           β”‚   └── lpr-canary-dashboard.json
β”‚           └── default/
β”‚               └── lpr-app-dashboard.json
β”œβ”€β”€ blackbox/                  # Blackbox exporter configuration
β”‚   β”œβ”€β”€ blackbox.yml           # Blackbox probing configuration
β”‚   └── jeep.jpg              # Test image for blackbox probing
β”œβ”€β”€ canary/                    # Canary service for monitoring
β”‚   β”œβ”€β”€ canary.py             # Canary service implementation
β”‚   β”œβ”€β”€ Dockerfile            # Canary service Dockerfile
β”‚   └── jeep.jpg             # Test image for canary service
β”œβ”€β”€ logs/                      # Application logs
β”œβ”€β”€ .github/                  # GitHub workflows
β”‚   └── workflows/             # CI/CD configurations
β”œβ”€β”€ plans/                     # Project planning documents
└── vllm-rocm/                # vLLM ROCm configuration

πŸ§ͺ Testing


Use the provided test script to verify API functionality:

# Test with default image locations
python test_api.py

# Test with specific image
python test_api.py /path/to/your/image.jpg

πŸ”§ Development


Running Tests

# Run Django tests
python manage.py test

# Run with coverage
pip install coverage
coverage run --source='.' manage.py test
coverage report

Database Migrations

# Create new migrations
python manage.py makemigrations lpr_app

# Apply migrations
python manage.py migrate

Static Files

# Collect static files for production
python manage.py collectstatic --noinput

πŸš€ Production Deployment


🚨 Critical Production Requirement: Always use tagged releases for production deployments. The mainline branch may contain experimental features and be unstable. Never use latest tags or main branch in production environments.

Version Selection for Production

Option 1: Use Specific Stable Release (Recommended)

# Find the latest stable release
curl -s "https://api.github.com/repos/faisalthaheem/open-lpr/releases" | grep -o '"tag_name": "v[^"]*"' | grep -v 'rc\|beta\|alpha' | head -1

# Clone a specific stable version
git clone --branch v1.0.0 https://github.com/faisalthaheem/open-lpr.git
cd open-lpr

# Or checkout an existing repository to a stable version
git fetch --tags
git checkout v1.0.0

Option 2: Use Latest Stable Release

# Automatically get the latest stable release (excluding pre-releases)
LATEST_STABLE=$(curl -s "https://api.github.com/repos/faisalthaheem/open-lpr/releases" | grep -o '"tag_name": "v[^"]*"' | grep -v 'rc\|beta\|alpha' | head -1 | sed 's/"tag_name": "\(.*\)"/\1/')
git clone --branch $LATEST_STABLE https://github.com/faisalthaheem/open-lpr.git
cd open-lpr

Option 3: Docker Production Deployment with Versioned Images

# Use a specific versioned Docker image (recommended)
VERSION=v1.0.0
docker pull ghcr.io/faisalthaheem/open-lpr:$VERSION

# Update your docker-compose.yml to use the versioned image
sed -i "s|ghcr.io/faisalthaheem/open-lpr:latest|ghcr.io/faisalthaheem/open-lpr:$VERSION|g" docker-compose.yml

# Or automatically use the latest stable release
LATEST_STABLE=$(curl -s "https://api.github.com/repos/faisalthaheem/open-lpr/releases" | grep -o '"tag_name": "v[^"]*"' | grep -v 'rc\|beta\|alpha' | head -1 | sed 's/"tag_name": "\(.*\)"/\1/')
docker pull ghcr.io/faisalthaheem/open-lpr:$LATEST_STABLE

Production Settings

  1. Set DEBUG=False in .env
  2. Configure ALLOWED_HOSTS with your domain
  3. Set up production database (PostgreSQL recommended)
  4. Configure static file serving (nginx/AWS S3)
  5. Set up media file serving (nginx/AWS S3)
  6. Use HTTPS with SSL certificate
  7. Pin to specific versions (see version selection above)

Version Management Strategy

Recommended Production Workflow

  1. Select a stable version (not latest or main branch)
  2. Pin both source code and Docker images to that version
  3. Test thoroughly in staging environment
  4. Deploy to production with pinned versions
  5. Monitor for issues before considering upgrades

Version Pinning Examples

For Source Code:

# In your deployment script
VERSION=v1.0.0
git clone --branch $VERSION https://github.com/faisalthaheem/open-lpr.git

For Docker:

# In docker-compose.yml (production)
services:
  openlpr:
    image: ghcr.io/faisalthaheem/open-lpr:v1.0.0  # Pinned version, not latest
    # ... other configuration

Environment-Specific Settings

  • Development: SQLite database, DEBUG=True, mainline branch acceptable
  • Staging: PostgreSQL, DEBUG=False, use same version as production
  • Production: PostgreSQL, DEBUG=False, HTTPS required, always use tagged releases

Upgrade Process

  1. Check for new stable releases:

    curl -s "https://api.github.com/repos/faisalthaheem/open-lpr/releases" | grep -o '"tag_name": "v[^"]*"' | grep -v 'rc\|beta\|alpha' | head -5
  2. Review release notes for breaking changes

  3. Test upgrade in staging with the new version

  4. Backup production data

  5. Deploy with pinned versions following the version selection steps above

  6. Monitor and rollback if needed

⚠️ Warning: Never upgrade production systems directly from latest tags or mainline branch. Always use specific version tags.

πŸ› Troubleshooting


Common Issues

  1. API Connection Failed (see the quick check after this list)

    • Check QWEN_API_KEY in .env
    • Verify QWEN_BASE_URL is accessible
    • Check network connectivity
  2. Image Upload Failed

    • Verify file format (JPEG/PNG/BMP only)
    • Check file size (< 10MB)
    • Ensure media directory permissions
  3. Processing Errors

    • Check Django logs: tail -f django.log
    • Verify API response format
    • Check image processing dependencies
  4. Static Files Not Loading

    • Run python manage.py collectstatic
    • Check STATIC_URL in settings
    • Verify web server static file configuration
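
For issue 1, a quick way to confirm the endpoint is reachable is to list its models, a route that OpenAI-compatible servers generally expose. A sketch, assuming the requests library and your .env values:

# Quick connectivity check against the configured OpenAI-compatible endpoint
import os
import requests

base_url = os.environ.get("QWEN_BASE_URL", "http://localhost:8000/v1")
headers = {"Authorization": f"Bearer {os.environ.get('QWEN_API_KEY', '')}"}

# /models is the standard OpenAI-compatible model listing route
response = requests.get(f"{base_url}/models", headers=headers, timeout=10)
print(response.status_code, response.text[:200])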

Logging

Application logs are written to:

  • Development: Console and django.log
  • Production: Configured logging destination

Log levels:

  • INFO: General application flow
  • ERROR: API failures and processing errors
  • DEBUG: Detailed debugging information
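
A Django LOGGING configuration consistent with the behavior above might look like the following sketch (illustrative; the project's settings.py may configure logging differently):

# Illustrative Django LOGGING dict: console plus django.log, INFO level
LOGGING = {
    "version": 1,
    "disable_existing_loggers": False,
    "handlers": {
        "console": {"class": "logging.StreamHandler"},
        "file": {"class": "logging.FileHandler", "filename": "django.log"},
    },
    "root": {"handlers": ["console", "file"], "level": "INFO"},
}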

🀝 Contributing


We welcome contributions! Please follow these guidelines:

  1. Fork the repository
  2. Create a feature branch (git checkout -b feature/amazing-feature)
  3. Make your changes
  4. Add tests if applicable
  5. Ensure all tests pass (python manage.py test)
  6. Commit your changes (git commit -m 'Add some amazing feature')
  7. Push to the branch (git push origin feature/amazing-feature)
  8. Open a Pull Request

Code Style

  • Follow PEP 8 for Python code
  • Use meaningful variable and function names
  • Add docstrings to functions and classes
  • Keep commits small and focused

Issue Reporting

When reporting issues, please include:

  • Detailed description of the problem
  • Steps to reproduce
  • Expected vs. actual behavior
  • Environment details (OS, Python version, etc.)
  • Relevant logs or error messages

πŸ“„ License


This project is licensed under the Apache License 2.0 - see the LICENSE file for details.

πŸ†˜ Support


For issues and questions:

  • Check the troubleshooting section
  • Review application logs
  • Create an issue with detailed information
  • Include error messages and steps to reproduce

πŸ™ Acknowledgments

  • Qwen3-VL for the powerful vision-language model
  • Django for the robust web framework
  • Bootstrap for the responsive UI components
  • All contributors who help improve this project

πŸ“š Additional Documentation


For specialized deployment scenarios and additional resources:

  • API_DOCUMENTATION.md - Detailed REST API documentation
  • README-DOCKER-PROFILES.md - Docker profiles guide
  • README-llamacpp.md - LlamaCpp deployment guide
  • DOCKER_DEPLOYMENT.md - Docker deployment guide
  • PROMETHEUS_METRICS.md - Prometheus metrics documentation
  • docs/LLAMACPP_RESOURCES.md - LlamaCpp and ROCm resources



Made with ❀️ by Open LPR Team