A Self-Regulating Coherence-Aware Ensemble Architecture for Sequential Decision Making
Author: Mike Amega
Affiliation: Independent Researcher
Contact: [email protected]
Private: [email protected]
LinkedIn: https://www.linkedin.com/in/mike-amega-486329184/
Disclosure Date: November 13, 2025
EARCP is a novel ensemble learning architecture that dynamically weights heterogeneous expert models based on both their individual performance and inter-model coherence. Unlike traditional ensemble methods with static or offline-learned combinations, EARCP continuously adapts through principled online learning with provable regret bounds.
Key Innovation: Dual-signal weighting mechanism combining exploitation (performance) and exploration (coherence) for robust sequential prediction.
- ✅ Adaptive: Continuously adjusts to changing model reliability
- ✅ Robust: Maintains diversity through coherence-aware weighting
- ✅ Theoretically Grounded: Provable O(√(T log M)) regret bounds
- ✅ Practical: Stable implementation with multiple safeguards
- ✅ General-Purpose: Applicable to any sequential prediction task
This repository has two branches:
- `main` (earcp): Documentation, academic papers, research materials, and IP protection documents
- `earcp-lib`: Python library implementation for installation and use in your projects
Install directly from the earcp-lib branch:
```bash
pip install git+https://github.com/Volgat/earcp.git@earcp-lib
```

Clone and install locally:

```bash
# Clone the library branch
git clone -b earcp-lib https://github.com/Volgat/earcp.git
cd earcp
pip install -e .
```

Or, once the package is published on PyPI (see the roadmap below):

```bash
pip install earcp
```

This repository contains complete documentation for academic recognition and IP protection:
- Academic Paper - Full peer-review ready paper with theoretical analysis
- Technical Whitepaper - Complete implementation specification
- Implementation Guide - Step-by-step integration guide
- API Reference - Complete API documentation
- Proofs: Mathematical derivations and regret bound proofs
- Experiments: Reproducible experimental protocols and results
- Benchmarks: Performance comparisons against baselines
```python
from earcp import EARCP

# Create expert models (any models with a .predict() method)
experts = [cnn_model, lstm_model, transformer_model, dqn_model]

# Initialize EARCP ensemble
ensemble = EARCP(
    experts=experts,
    alpha_P=0.9,   # Performance smoothing
    alpha_C=0.85,  # Coherence smoothing
    beta=0.7,      # Performance-coherence balance
    eta_s=5.0,     # Sensitivity
    w_min=0.05     # Weight floor
)

# Online learning loop
for t in range(T):
    # Get ensemble prediction and per-expert predictions
    prediction, expert_preds = ensemble.predict(state)

    # Execute action and observe target
    target = execute_and_observe(prediction)

    # Update weights
    metrics = ensemble.update(expert_preds, target)

    # Monitor (optional)
    diagnostics = ensemble.get_diagnostics()
    print(f"Weights: {diagnostics['weights']}")
```

At each time step t, EARCP performs the following steps (sketched in code after this list):
- Collects predictions from the M expert models: p_{1,t}, ..., p_{M,t}
- Computes performance scores: P_{i,t} = α_P · P_{i,t−1} + (1 − α_P) · (−ℓ_{i,t})
- Calculates coherence: C_{i,t} = (1/(M−1)) · Σ_{j≠i} Agreement(i, j)
- Combines the signals: s_{i,t} = β · P_{i,t} + (1 − β) · C_{i,t}
- Updates weights: w_{i,t} ∝ exp(η_s · s_{i,t}), subject to the floor w_{i,t} ≥ w_min
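To make the five steps concrete, here is a minimal NumPy sketch of a single update for scalar predictions. It is an illustration under stated assumptions, not the library's internals: the clipped squared-error loss, the agreement function 1 − |pᵢ − pⱼ|, and the post-floor renormalization are choices made for this example.

```python
import numpy as np

def earcp_step(expert_preds, target, P_prev, C_prev,
               alpha_P=0.9, alpha_C=0.85, beta=0.7,
               eta_s=5.0, w_min=0.05):
    """One EARCP weight update for scalar predictions (illustrative only).

    expert_preds   : (M,) array of expert predictions p_{i,t}
    target         : observed target y_t
    P_prev, C_prev : (M,) arrays of smoothed scores from step t-1
    """
    M = expert_preds.shape[0]

    # Bounded losses l_{i,t} in [0, 1] (squared error, clipped; an assumption)
    losses = np.clip((expert_preds - target) ** 2, 0.0, 1.0)

    # Step 2: performance = exponentially smoothed negative loss
    P = alpha_P * P_prev + (1.0 - alpha_P) * (-losses)

    # Step 3: coherence = mean pairwise agreement, smoothed with alpha_C.
    # Agreement(i, j) = 1 - |p_i - p_j| is one possible choice, assumed here.
    diffs = np.abs(expert_preds[:, None] - expert_preds[None, :])
    agreement = 1.0 - np.clip(diffs, 0.0, 1.0)
    C_inst = (agreement.sum(axis=1) - 1.0) / (M - 1)  # drop self-agreement
    C = alpha_C * C_prev + (1.0 - alpha_C) * C_inst

    # Steps 4-5: combine signals, exponentiate, apply the weight floor
    s = beta * P + (1.0 - beta) * C
    w = np.exp(eta_s * s)
    w /= w.sum()
    w = np.maximum(w, w_min)
    w /= w.sum()  # renormalize after flooring

    return w, P, C
```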
Theorem: Under standard assumptions (bounded losses, convexity), EARCP achieves:
Regret_T ≤ √(2T log M)
for pure performance (β=1), and:
Regret_T ≤ (1/β)·√(2T log M)
with coherence incorporation (β<1).
Proof: See Section 4 of the academic paper.
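The 1/β factor has a simple reading: only the β-weighted share of the combined signal tracks performance, so the effective learning rate of the exponential update shrinks from η_s to β·η_s. The following is a Hedge-style sketch of that step under the theorem's bounded-loss assumption, not a substitute for the full proof.

```latex
% Hedge-style sketch of the 1/beta scaling (losses in [0,1] assumed).
% With s_{i,t} = beta * P_{i,t} + (1 - beta) * C_{i,t}, the performance
% signal enters the exponential update at effective rate beta * eta_s:
\[
  \mathrm{Regret}_T
    \le \frac{\log M}{\beta\,\eta_s} + \frac{\beta\,\eta_s\,T}{2}
    \le \frac{1}{\beta}\left(\frac{\log M}{\eta_s} + \frac{\eta_s\,T}{2}\right)
    = \frac{1}{\beta}\,\sqrt{2\,T \log M},
\]
% with the usual tuning eta_s = sqrt(2 log M / T); beta = 1 recovers
% the pure-performance bound sqrt(2 T log M).
```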
| Method | Electricity (RMSE ↓) | HAR (Accuracy % ↑) | Financial (Sharpe ↑) |
|---|---|---|---|
| Best Single | 0.124 ± 0.008 | 91.2 ± 1.1 | 1.42 ± 0.18 |
| Equal Weight | 0.118 ± 0.006 | 92.8 ± 0.9 | 1.58 ± 0.15 |
| Stacking | 0.112 ± 0.007 | 93.1 ± 1.0 | 1.61 ± 0.14 |
| Offline MoE | 0.109 ± 0.006 | 93.5 ± 0.8 | 1.65 ± 0.16 |
| Hedge | 0.107 ± 0.005 | 93.9 ± 0.7 | 1.71 ± 0.12 |
| EARCP | 0.098 ± 0.004 | 94.8 ± 0.6 | 1.89 ± 0.11 |
Key Findings:
- 8.4% improvement over Hedge on RMSE
- 10.5% improvement over Hedge on Sharpe ratio
- Consistent gains across diverse tasks
- Superior robustness during distribution shifts
Any model implementing the following interface can serve as an expert (a wrapper sketch follows the list below):

```python
class ExpertModel:
    def predict(self, x):
        """Return prediction for input x."""
        return prediction  # array-like
```

- Number of experts: 2 to 100+ (tested up to M = 50)
- Prediction types: Classification, regression, reinforcement learning
- Update frequency: Real-time to batch updates
- Loss functions: Any L: Y×Y → [0,1]
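As an illustration of the interface, here is a hedged sketch that wraps a fitted scikit-learn estimator as an expert and defines a loss bounded in [0, 1]. The `SklearnExpert` class, the `bounded_squared_loss` helper, and its `scale` parameter are constructions for this example, not part of the EARCP API.

```python
import numpy as np
from sklearn.linear_model import Ridge

class SklearnExpert:
    """Adapts any fitted scikit-learn estimator to the .predict() interface."""

    def __init__(self, estimator):
        self.estimator = estimator

    def predict(self, x):
        # EARCP only needs an array-like prediction for input x
        return self.estimator.predict(np.atleast_2d(x))

def bounded_squared_loss(y_pred, y_true, scale=1.0):
    """Example loss L: Y x Y -> [0, 1]; `scale` is a problem-specific choice."""
    return float(np.clip(((y_pred - y_true) / scale) ** 2, 0.0, 1.0))

# Minimal usage with synthetic data
rng = np.random.default_rng(0)
X_train = rng.normal(size=(100, 4))
y_train = X_train @ rng.normal(size=4)
expert = SklearnExpert(Ridge().fit(X_train, y_train))
print(expert.predict(X_train[0]))  # array-like, as required
```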
EARCP is released under the Business Source License 1.1.
EARCP is free to use for:
- 🎓 Academic research and education
- 💻 Personal and open-source projects
- 🏢 Internal business use where your organization's total revenue is less than USD $100,000 per year
Organizations with revenue exceeding $100,000/year or those wishing to:
- Embed EARCP in commercial products
- Offer EARCP as a hosted service (SaaS)
- Redistribute EARCP commercially
...must obtain a commercial license.
📧 Contact for Commercial Licensing:
- Email: [email protected]
- Subject: "EARCP Commercial License Inquiry"
After November 13, 2029 (four years from publication), EARCP will automatically be released under the Apache 2.0 license, making it freely available for all uses.
For complete license terms, see LICENSE.md
If you use EARCP in academic work, please cite:
```bibtex
@article{amega2025earcp,
  title={EARCP: Ensemble Auto-Régulé par Cohérence et Performance},
  author={Amega, Mike},
  journal={arXiv preprint},
  year={2025},
  url={https://github.com/Volgat/earcp},
  note={Prior art established November 13, 2025}
}
```

For technical implementations:

```bibtex
@techreport{amega2025earcp_tech,
  title={EARCP: Technical Whitepaper and Implementation Specification},
  author={Amega, Mike},
  institution={Independent Research},
  year={2025},
  url={https://github.com/Volgat/earcp},
  note={Business Source License 1.1}
}
```

Copyright © 2025 Mike Amega. All rights reserved.
This software and associated documentation are protected by copyright law. The architecture, algorithms, and implementation details are original works by Mike Amega.
Prior Art Established: November 13, 2025
This repository constitutes a defensive publication establishing prior art for:
- Core EARCP algorithm and mathematical formulation
- Dual-signal weighting mechanism (performance + coherence)
- Specific implementation details and optimizations
- Extension mechanisms and variations
Legal Effect: This public disclosure prevents third-party patent claims on disclosed inventions while preserving the author's rights to commercialize and license this technology.
All uses must include the following attribution:

> This work uses EARCP (Ensemble Auto-Régulé par Cohérence et Performance)
> developed by Mike Amega (2025). See: https://github.com/Volgat/earcp
Completed:
- Core algorithm implemented and tested
- Theoretical guarantees proven
- Comprehensive benchmarking completed
- Production-grade code with safeguards
- Business Source License 1.1 applied

In progress:
- PyPI package publication
- Academic paper submission to a conference
- Extended documentation and tutorials
- Community extensions and contributions
Planned enhancements:
- Learned coherence functions
- Hierarchical EARCP for large-scale ensembles
- Multi-objective optimization extensions
- Integration with popular ML frameworks (scikit-learn, PyTorch, TensorFlow)
- Distributed/parallel implementations
Contributions are welcome! Please read CONTRIBUTING.md for guidelines.
- Implementations: Integration with specific ML frameworks
- Experiments: Testing on new domains and benchmarks
- Theory: Tightening regret bounds, new guarantees
- Documentation: Tutorials, examples, case studies
- Optimizations: Performance improvements, GPU acceleration
Contributors will be acknowledged in:
- README contributors section
- Academic papers citing this work
- Release notes and documentation
Mike Amega
Email: [email protected]
Location: Ontario, Canada
GitHub: @Volgat
Email: [email protected]
Subject: "EARCP Commercial License Inquiry"
Open to collaborations on:
- Theoretical extensions
- Large-scale applications
- Domain-specific adaptations
- Academic publications
- Initial public release
- Complete implementation with theoretical guarantees
- Comprehensive documentation
- Benchmark results on three domains
- Defensive publication for IP protection
- Business Source License 1.1 applied
Thanks to the open-source machine learning community for tools and datasets that enabled this research.
Core Dependencies:
- NumPy (numerical computations)
- PyTorch (neural network experts)
- scikit-learn (baseline comparisons)
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
For full legal terms, see LICENSE.md file.
This repository includes the following for complete IP protection:
- Academic paper with full mathematical derivation
- Technical whitepaper with implementation details
- Complete working code with documentation
- Timestamp through GitHub commit history
- Copyright notices in all files
- Business Source License 1.1 applied
- Citation guidelines
- DOI from Zenodo/figshare (recommended)
- arXiv submission (recommended within 30 days)
🌟 Star this repository if you find EARCP useful!
🔔 Watch for updates and new features
🍴 Fork to create your own variations
Last Updated: December 3, 2025
Repository: https://github.com/Volgat/earcp
Prior Art Date: November 13, 2025
License: Business Source License 1.1