Production-ready ML framework for programmatic advertising bid optimization with unified deployment
OpenAuction IQ is a comprehensive ML framework for bid optimization in programmatic advertising. It provides a unified deployment framework supporting three distinct modes, advanced reinforcement learning models, and a complete MLOps pipeline.
Deploy the same codebase in three different modes:
- **CloudX Central** - Multi-tenant SaaS platform
  - Train on ALL customer data
  - Master model + per-customer fine-tuned models
  - API authentication, rate limiting
  - Best for: Platform providers offering AI-as-a-Service
- **Single User** - Self-hosted deployment
  - Train ONLY on your own data
  - Optional transfer learning from CloudX pretrained base
  - Full data privacy and control
  - Best for: Individual customers wanting privacy
- **Federated** - Privacy-preserving collaboration
  - Clients train locally (data never leaves premises)
  - Only gradients shared (never raw data)
  - Differential privacy guarantees
  - Best for: Collaborative learning without data sharing
Complete RL-based bid optimization system:
- PPO (Proximal Policy Optimization) for continuous action spaces
- Transformer-based state encoder for auction context
- Multi-objective reward function (6 components: revenue, win rate, budget, ROI, floor penalty, overbid penalty)
- Continuous learning (real-time model updates during live serving)
- Experience replay with in-memory and Redis backends
- A/B testing framework for gradual rollout
Additional model architectures beyond the RL optimizer:
- Transformers - Attention-based sequence modeling
- LSTM/GRU - Time series forecasting
- Ensemble methods - Weighted and stacking ensembles
- VAE - Anomaly detection
- Multi-task learning - Hard parameter sharing
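To make the weighted-ensemble idea concrete, here is a minimal weighted-average ensemble in plain Python. This is an illustrative sketch only; the class name and call convention are hypothetical, not the framework's `ensemble.py` API.

```python
class WeightedEnsemble:
    """Average member predictions with fixed weights (hypothetical sketch)."""

    def __init__(self, models, weights):
        assert len(models) == len(weights)
        total = sum(weights)
        self.models = models
        self.weights = [w / total for w in weights]  # normalize to sum to 1

    def predict(self, x):
        # Weighted average of each member model's scalar prediction
        return sum(w * m(x) for m, w in zip(self.models, self.weights))

# Usage: two toy "models" predicting a floor price from a feature dict
model_a = lambda x: 1.20 * x["base_cpm"]
model_b = lambda x: 0.80 * x["base_cpm"]
ensemble = WeightedEnsemble([model_a, model_b], weights=[3, 1])
print(ensemble.predict({"base_cpm": 2.0}))  # weighted average 0.75*2.4 + 0.25*1.6
```

A stacking ensemble would replace the fixed weights with a trained meta-model over the members' outputs.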
OpenAuction IQ serves as the AI/ML toolkit and training gym for OpenAuction, a Go library for TEE-based (Trusted Execution Environment) auction execution.
Architecture:
```
┌──────────────────────────────┐
│ OpenAuction Wrapper (Go)     │ ← Thin HTTP service wrapping OpenAuction library
│ - core.RunAuction()          │
│ - TEE attestation            │
│ - PostgreSQL logging         │
└──────────────┬───────────────┘
               │ ML Predictions
               ▼
┌──────────────────────────────┐
│ OpenAuction IQ (Python)      │ ← This framework
│ - Floor price prediction     │
│ - Adjustment factors         │
│ - Continuous learning        │
│ - Anomaly detection          │
└──────────────────────────────┘
```
Key Capabilities:
- Floor Price Prediction: ML-predicted single floor price per auction (not per-bidder)
- Adjustment Factors: Per-bidder multipliers (e.g., 1.10 for premium, 0.95 for risky bidders)
- Continuous Learning: Real-time feedback from auction results for model improvement
- Anomaly Detection: Fraud detection using excluded bids and rejection patterns
- TEE Attestation Validation: Cryptographic verification of auction integrity
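The adjustment-factor capability can be sketched in a few lines: a single ML-predicted floor is scaled by a per-bidder multiplier. The function name and factor table below are hypothetical illustrations of the multipliers described above, not the framework's actual API.

```python
def adjusted_floor(base_floor: float, bidder_id: str,
                   factors: dict[str, float]) -> float:
    """Apply a per-bidder multiplier to the ML-predicted auction floor.

    Unknown bidders fall back to a neutral 1.0 factor.
    """
    return base_floor * factors.get(bidder_id, 1.0)

# Example factors echoing the text: premium bidders face a higher
# effective floor (1.10), risky bidders a discounted one (0.95).
factors = {"premium-dsp": 1.10, "risky-dsp": 0.95}
print(round(adjusted_floor(2.00, "premium-dsp", factors), 2))  # 2.2
print(adjusted_floor(2.00, "unknown-dsp", factors))            # 2.0
```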
Database Requirements: OpenAuction IQ requires PostgreSQL to store auction data from the OpenAuction wrapper service. See:
- Schema definition: `schema/openauction_schema.sql`
- Migration: `schema/migrations/001_initial.sql`
- Validation tool: `integration/schema_validator.py`
Documentation:
- Architecture Guide - Complete integration architecture
- Database Schema - Schema documentation and ML training queries
- Integration Guide - End-to-end integration workflow
- Model Registry - Version tracking and lineage
- REST API - FastAPI with automatic OpenAPI docs
- CLI - Typer-based command-line interface
- Docker & Kubernetes - Production deployment
- Monitoring - Prometheus metrics and structured logging
- Testing - 73+ test cases with 70%+ coverage
```bash
# Clone repository
git clone https://github.com/openauction/openauction-iq.git
cd openauction-iq

# Install with development dependencies
pip install -e ".[dev]"
```

```python
from core.deployment import DeploymentManager, DeploymentMode

# Initialize CloudX deployment
manager = DeploymentManager(
    mode=DeploymentMode.CLOUDX_CENTRAL,
    model_type="rl_bid_optimizer",
    device="cuda",
)

# Train master model + per-customer fine-tuned models
manager.train()

# Start multi-tenant API with auth and rate limiting
manager.serve(host="0.0.0.0", port=8000)
```

API usage (customer perspective):
```bash
curl -X POST https://api.openauction-iq.com/predict \
  -H "Authorization: Bearer sk_live_customer_xyz" \
  -H "Content-Type: application/json" \
  -d '{"features": {...}}'
```

```python
from core.deployment import DeploymentManager, DeploymentMode

# Initialize single user deployment
manager = DeploymentManager(
    mode=DeploymentMode.SINGLE_USER,
    customer_id="my-company",
    model_type="rl_bid_optimizer",
    device="cpu",
    allow_pretrained_base=True,  # Optional: use CloudX pretrained base
)

# Train on own data only
manager.train()

# Start local API (no auth needed)
manager.serve(host="localhost", port=8000)
```

```python
from core.deployment import DeploymentManager, DeploymentMode

# Server
server = DeploymentManager(
    mode=DeploymentMode.FEDERATED,
    role="server",
    model_type="rl_bid_optimizer",
    aggregation_algorithm="FedAvg",
    enable_differential_privacy=True,
    privacy_epsilon=1.0,
)
server.train()

# Client
client = DeploymentManager(
    mode=DeploymentMode.FEDERATED,
    role="client",
    customer_id="my-company",
    server_url="https://federated.openauction-iq.com:8443",
    enable_differential_privacy=True,
)
client.train()  # Trains locally, shares only gradients
```

```python
from core.models.rl import (
    StateEncoder,
    BidOptimizer,
    PPOConfig,
    RLTrainer,
    RLTrainerConfig,
    MultiObjectiveReward,
)

# Initialize RL components
encoder = StateEncoder(num_bidders=10000, output_dim=256)
ppo_config = PPOConfig(learning_rate=3e-4)
model = BidOptimizer(encoder, ppo_config)

# Train with continuous learning
trainer = RLTrainer(
    model=model,
    config=RLTrainerConfig(continuous_mode=True),
    reward_function=MultiObjectiveReward(),
)
trainer.train(num_timesteps=1_000_000)

# Real-time bidding
bid_price = model.predict_bid(auction_state, deterministic=True)
```
```
openauction_iq/
├── core/
│   ├── data/                  # Data preprocessing
│   ├── models/
│   │   ├── rl/                # RL models (PPO, state encoder, reward, etc.)
│   │   ├── transformer.py     # Transformer models
│   │   ├── lstm.py            # LSTM models
│   │   ├── ensemble.py        # Ensemble methods
│   │   └── registry.py        # Model versioning
│   ├── training/              # Training pipelines
│   ├── deployment/            # Unified deployment framework
│   │   ├── modes.py           # Deployment mode definitions
│   │   ├── data_access.py     # Data access layer with privacy
│   │   ├── manager.py         # Deployment manager
│   │   ├── model_serving.py   # Multi-tenant & single-tenant APIs
│   │   └── federated.py       # Federated learning
│   └── monitoring/            # Metrics and logging
├── api/
│   └── routers/
│       ├── rl_bidding.py      # RL API endpoints
│       ├── models.py          # Model management
│       ├── training.py        # Training jobs
│       └── predictions.py     # Prediction endpoints
├── cli/                       # Command-line interface
├── integration/               # OpenAuction integration
├── tests/                     # Test suite (73+ tests)
├── examples/                  # Usage examples
├── docs/                      # Documentation
└── k8s/                       # Kubernetes manifests
```
| Feature | CloudX Central | Single User | Federated |
|---|---|---|---|
| Data Access | ALL customers | Own data only | Local only |
| Privacy Level | Medium | High | Highest |
| Training Data | Aggregated | Own data | Federated |
| Model Quality | Best (more data) | Good | Very good |
| Serving | Multi-tenant API | Single-tenant | No serving |
| Authentication | API keys | Optional | Mutual TLS |
| Rate Limiting | Per-tier | None | None |
| Usage Tracking | Usage-based | None | None |
| Complexity | High | Low | Medium |
| Best For | SaaS platform | Self-hosted | Collaboration |
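The FedAvg aggregation named in the federated examples can be sketched in pure Python: each client's parameter vector is folded into the global model, weighted by its local sample count. This is an illustrative outline, not the framework's `federated.py` implementation.

```python
def fedavg(client_updates, sample_counts):
    """FedAvg: average client weight vectors, weighted by local sample count.

    client_updates: list of equal-length parameter lists, one per client.
    sample_counts:  number of local training examples per client.
    """
    total = sum(sample_counts)
    dim = len(client_updates[0])
    aggregated = [0.0] * dim
    for params, n in zip(client_updates, sample_counts):
        for i, p in enumerate(params):
            aggregated[i] += p * (n / total)  # client's share of the average
    return aggregated

# Two clients: the one with more data pulls the average toward its weights.
print(fedavg([[1.0, 0.0], [0.0, 1.0]], sample_counts=[300, 100]))  # [0.75, 0.25]
```

Only these parameter updates cross the network; raw auction data stays on each client, which is what enables the "local only" data-access row above.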
- Transformer-based architecture for encoding auction state
- Handles: bidder features, bid history, auction context, budget state
- Multi-head attention for sequence modeling
- Actor-Critic architecture
- Gaussian policy for continuous bid prices
- Action space: [min_bid, max_bid] CPM
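A minimal sketch of how a Gaussian policy produces bids within the bounded action space (illustrative only; in the real model the mean and standard deviation come from the actor network, which is elided here):

```python
import random

def sample_bid(mean: float, std: float, min_bid: float, max_bid: float,
               deterministic: bool = False) -> float:
    """Sample a bid price from a Gaussian policy, clipped to the action space.

    In deterministic mode (typical at serving time) the policy mean is
    returned directly instead of sampling.
    """
    bid = mean if deterministic else random.gauss(mean, std)
    return min(max(bid, min_bid), max_bid)  # clip to [min_bid, max_bid] CPM

random.seed(0)
print(sample_bid(2.5, 0.4, min_bid=0.5, max_bid=5.0))   # stochastic draw
print(sample_bid(6.1, 0.4, 0.5, 5.0, deterministic=True))  # 5.0 (clipped)
```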
Balances 6 competing objectives:
- Revenue maximization (50% weight)
- Win rate alignment (15%)
- Budget utilization (10%)
- ROI optimization (15%)
- Floor rejection penalty (5%)
- Overbidding penalty (5%)
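The weighted combination above can be sketched as follows. The component names and the assumption that inputs arrive pre-normalized to comparable scales are illustrative; the actual `MultiObjectiveReward` signature may differ.

```python
# Weights mirror the breakdown above (they sum to 1.0).
REWARD_WEIGHTS = {
    "revenue": 0.50,
    "win_rate": 0.15,
    "budget": 0.10,
    "roi": 0.15,
    "floor_penalty": 0.05,
    "overbid_penalty": 0.05,
}

def multi_objective_reward(components: dict) -> float:
    """Weighted sum of the six reward components (penalties enter negatively)."""
    reward = 0.0
    for name, weight in REWARD_WEIGHTS.items():
        value = components.get(name, 0.0)  # missing components contribute 0
        if name.endswith("_penalty"):
            value = -value  # penalties reduce the total reward
        reward += weight * value
    return reward

# 0.5*1.0 + 0.15*0.8 + 0.15*0.5 - 0.05*1.0 ≈ 0.645
print(multi_objective_reward({
    "revenue": 1.0, "win_rate": 0.8, "roi": 0.5, "floor_penalty": 1.0,
}))
```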
- Real-time model updates during live serving
- Experience replay buffer (in-memory + Redis)
- A/B testing for gradual rollout
- Model versioning and checkpointing
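The in-memory experience replay backend can be sketched with a bounded deque (illustrative only; the Redis-backed variant and the framework's actual buffer API are not shown here):

```python
import random
from collections import deque

class ReplayBuffer:
    """In-memory experience replay: a bounded FIFO of transitions."""

    def __init__(self, capacity: int = 10000):
        self.buffer = deque(maxlen=capacity)  # oldest experiences evicted first

    def add(self, state, action, reward, next_state):
        self.buffer.append((state, action, reward, next_state))

    def sample(self, batch_size: int):
        # Uniform random minibatch for a training update
        return random.sample(self.buffer, min(batch_size, len(self.buffer)))

buf = ReplayBuffer(capacity=2)
buf.add("s0", 1.2, 0.5, "s1")
buf.add("s1", 1.4, 0.7, "s2")
buf.add("s2", 1.1, 0.2, "s3")  # evicts the oldest ("s0") transition
print(len(buf.buffer))  # 2
```

Sampling uniformly from a bounded window of recent experience is what lets the policy keep learning from live auction feedback without unbounded memory growth.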
```bash
# Import data
oiq data import --source database --output data/auctions.parquet

# Validate data
oiq data validate data/auctions.parquet

# Export data
oiq data export --format json --output data/auctions.json
```

```bash
# Train RL model
oiq train start rl data/auctions.parquet --epochs 100

# Train Transformer
oiq train start transformer data/auctions.parquet --epochs 50

# Resume training
oiq train resume checkpoint_001
```

```bash
# List models
oiq model list

# Model info
oiq model info rl_bid_optimizer --version 1.0.0

# Promote to production
oiq model promote rl_bid_optimizer 1.0.0

# Compare models
oiq model compare rl_v1 rl_v2
```

```bash
# Start API server
oiq serve --port 8000 --workers 4

# Development mode (auto-reload)
oiq serve --port 8000 --reload
```

RL Bidding:
```
# Real-time bid
POST /rl/bid
{
  "bidder_id": "bidder-123",
  "auction_context": {...},
  "budget_state": {...}
}

# Batch bids
POST /rl/batch-bids

# Collect experience (continuous learning)
POST /rl/collect-experience

# Configure A/B test
POST /rl/models/ab-test
```

Model Management:

```
# List models
GET /api/v1/models/list

# Register model
POST /api/v1/models/register

# Load model
POST /api/v1/models/load

# Delete model
DELETE /api/v1/models/delete/{model_name}/{version}
```

Predictions:

```
# Predict
POST /api/v1/predictions/predict
{
  "model_name": "rl_bid_optimizer",
  "features": {...}
}

# Batch predict
POST /api/v1/predictions/batch
```

API Documentation:
- Swagger UI: http://localhost:8000/docs
- ReDoc: http://localhost:8000/redoc
```bash
# Start all services (PostgreSQL, Redis, MongoDB, Prometheus, Grafana)
docker compose up

# Start specific service
docker compose up api

# View logs
docker compose logs -f api

# Stop all
docker compose down
```

```bash
# Build image
docker compose build

# Run training job
docker compose run --rm worker python examples/rl_training_example.py

# Run tests
docker compose run --rm worker pytest
```

```bash
# Deploy to dev (1 replica, 256Mi, 100m CPU)
kubectl apply -k k8s/overlays/dev

# Check status
kubectl get pods -n openauction-iq

# View logs
kubectl logs -f deployment/openauction-iq-api -n openauction-iq
```

```bash
# Deploy to prod (5 replicas, 1Gi, 500m CPU)
kubectl apply -k k8s/overlays/prod

# Scale
kubectl scale deployment openauction-iq-api --replicas=10 -n openauction-iq

# Rolling update
kubectl set image deployment/openauction-iq-api \
  api=openauction-iq:v2.0.0 -n openauction-iq
```

Complete Documentation Index - Comprehensive documentation organized by audience, topic, and learning paths
New to OpenAuction IQ? Start here:
- Installation Guide - Prerequisites and setup
- 5-Minute Quickstart - Get up and running fast
- First Prediction Tutorial - End-to-end walkthrough
- Deployment Modes - Choose CloudX Central, Single User, or Federated
- OpenAuction Integration - Integrate with OpenAuction platform
- RL Models - Reinforcement learning training and deployment
- Continuous Learning - Real-time model updates
- Federated Learning - Privacy-preserving collaboration
- Performance Tuning - Optimization and capacity planning
- Migration Guide - Migrate from other ML platforms
- API Reference - Complete REST API documentation
- CLI Reference - Command-line interface
- Model Catalog - Available models and capabilities
- Metrics Reference - Prometheus metrics
- Monitoring - Prometheus + Grafana setup
- Security - Authentication, SSL/TLS, secrets
- Kubernetes - Advanced K8s deployment
- Troubleshooting - Common issues and solutions
- System Overview - High-level architecture
- RL Architecture - Reinforcement learning design
- Federated Architecture - Federated learning design
- Architecture Decisions - ADRs (Why PPO? Why 3 modes?)
- Code Style - Python standards and tools
- Testing Guide - Test architecture and best practices
- Model Development - Creating custom models
- Business Overview - What is OpenAuction IQ?
- Use Cases - Real-world applications
- ROI Analysis - Business value and returns
- Deployment Comparison - Cost/benefit analysis
- `examples/deployment_framework_demo.py` - Deployment modes demonstration
- `examples/deployment_modes_example.py` - Complete working example
- `examples/rl_training_example.py` - RL model training
- `examples/end_to_end_example_refactored.py` - Complete ML pipeline
```bash
# All tests
pytest

# With coverage
pytest --cov=core --cov=api --cov=cli --cov-report=html

# Specific test file
pytest tests/unit/test_preprocessor.py -v

# Integration tests only
pytest tests/integration/ -v

# Stop on first failure
pytest -x
```

- Unit tests: 40+ tests covering core components
- Integration tests: 33+ tests for API and CLI
- Target coverage: 70%+
- Coverage reports: `htmlcov/index.html`
- ✅ Audit logging for compliance
- ✅ Customer data isolation option
- ✅ API authentication and authorization
- ✅ Rate limiting per customer tier
- ⚠️ CloudX has access to all data (requires trust)
- ✅ Full data privacy (customer controls everything)
- ✅ No data sharing
- ✅ On-premises deployment
- ✅ Optional telemetry (opt-in)
- ✅ Highest privacy: raw data NEVER shared
- ✅ Differential privacy: (ε, δ)-DP with formal guarantees
- ✅ Secure aggregation (server can't see individual updates)
- ✅ Gradient clipping for privacy
- ✅ Server never sees raw data (raises `PermissionError` if attempted)
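The gradient clipping and noise steps behind these guarantees follow the standard DP-SGD recipe, sketched below in pure Python. Parameter names are illustrative, and calibrating the noise to a specific (ε, δ) budget is beyond this sketch.

```python
import math
import random

def privatize_gradient(grad, clip_norm=1.0, noise_std=0.5):
    """Clip a gradient to a maximum L2 norm, then add Gaussian noise.

    Clipping bounds any single client's influence on the aggregate;
    the noise masks individual contributions.
    """
    norm = math.sqrt(sum(g * g for g in grad))
    scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
    clipped = [g * scale for g in grad]  # L2 norm now at most clip_norm
    return [g + random.gauss(0.0, noise_std) for g in clipped]

random.seed(42)
noisy = privatize_gradient([3.0, 4.0], clip_norm=1.0)  # original norm is 5.0
print(noisy)  # gradient clipped to [0.6, 0.8] before noise is added
```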
```
# Model predictions
oiq_predictions_total{model="rl_bid_optimizer",version="1.0.0"}
oiq_prediction_latency_seconds{model="rl_bid_optimizer"}

# Training metrics
oiq_training_loss{model="rl_bid_optimizer",epoch="10"}
oiq_training_reward{model="rl_bid_optimizer"}

# RL-specific metrics
oiq_rl_win_rate{model="rl_bid_optimizer"}
oiq_rl_avg_bid{model="rl_bid_optimizer"}
oiq_rl_roi{model="rl_bid_optimizer"}

# Deployment metrics
oiq_api_requests_total{mode="cloudx_central",customer_id="xyz"}
oiq_rate_limit_exceeded_total{customer_id="xyz"}
oiq_usage_predictions_total{customer_id="xyz"}
```
- Model performance metrics
- RL training progress
- API usage tracking
- System health and resources
```bash
# Install development dependencies
pip install -e ".[dev]"

# Install pre-commit hooks
pre-commit install

# Run pre-commit checks
pre-commit run --all-files
```

```bash
# Format code
black .

# Lint code
ruff check .

# Type checking
mypy core/ api/ cli/

# Security scanning
bandit -r core/ api/ cli/
```

```bash
# Create feature branch
git checkout -b feature/my-new-feature

# Make changes and commit
git add .
git commit -m "feat: add new deployment mode"

# Run tests before push
pytest
black .
ruff check .

# Push and create PR
git push origin feature/my-new-feature
```

- ✅ Complete deployment framework with 3 modes (CloudX, Single User, Federated)
- ✅ Privacy-by-design data access layer
- ✅ Multi-tenant API with authentication and rate limiting
- ✅ Federated learning with differential privacy
- ✅ Comprehensive documentation (600+ lines)
- ✅ PPO-based bid optimization
- ✅ Continuous learning pipeline
- ✅ Multi-objective reward function
- ✅ A/B testing framework
- ✅ Base ML framework (10 phases completed)
- ✅ Docker and Kubernetes deployment
- ✅ 73+ test cases
- PyTorch - Deep learning framework
- FastAPI - Modern web framework
- Differential Privacy - Privacy-preserving ML research
- Documentation: See `docs/` directory
- Examples: See `examples/` directory
- Issues: GitHub Issues
- Discussions: GitHub Discussions
- Enhanced federated learning (FedYogi, FedOpt)
- Advanced privacy mechanisms (secure MPC)
- Multi-cloud deployment support
- Auto-scaling and HPA
- Real-time model monitoring dashboard
- Automated model retraining
- Advanced A/B testing framework
- Model explainability (SHAP, LIME)
- Multi-model ensemble serving
- GraphQL API
- WebSocket for real-time updates
- Model compression and quantization