Currently, we support security updates for the following versions of CABiNet:
| Version | Supported |
|---|---|
| Latest (main branch) | ✅ |
| < Latest | ❌ |
As this is a research project, we recommend always using the latest version from the main branch.
We take the security of CABiNet seriously. If you discover a security vulnerability, please follow these steps:
DO NOT create a public GitHub issue for security vulnerabilities.
Instead, please report security vulnerabilities privately by email, using the security contact address listed at the end of this document.
Include the following information in your report:
- Type of vulnerability (e.g., code injection, privilege escalation, etc.)
- Full paths of source file(s) related to the vulnerability
- Location of the affected source code (tag/branch/commit or direct URL)
- Step-by-step instructions to reproduce the issue
- Proof-of-concept or exploit code (if possible)
- Impact of the issue, including how an attacker might exploit it
After you submit a vulnerability report:
- Acknowledgment: We will acknowledge receipt of your report within 3 business days
- Investigation: We will investigate and validate the vulnerability
- Updates: We will keep you informed about our progress
- Resolution: We will work on a fix and coordinate disclosure timing with you
- Credit: We will credit you in the security advisory (unless you prefer to remain anonymous)
When a security issue is confirmed:
- A security advisory will be created
- A fix will be developed in a private repository
- The fix will be tested thoroughly
- A security release will be published
- The vulnerability will be publicly disclosed with appropriate credit
When using CABiNet in production environments:
- Validate all input data before passing to the model
- Sanitize file paths when loading datasets or models
- Use secure data storage for training datasets
- Implement access controls for model weights and configs
- Verify model checksums before loading pre-trained weights (see the sketch after this list)
- Use trusted sources for downloading pre-trained models
- Implement input validation to prevent adversarial attacks
- Monitor model predictions for anomalies
- Run inference in isolated environments (containers, sandboxes)
- Limit resource access (CPU, GPU, memory) to prevent DoS
- Implement rate limiting for API endpoints
- Use HTTPS for all network communications
- Keep dependencies updated to patch known vulnerabilities
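The checksum step mentioned in the list above can be automated. Below is a minimal sketch, assuming a SHA-256 digest is published alongside the weights; the file name and hash in the usage comment are placeholders, not values shipped with CABiNet:

```python
import hashlib
from pathlib import Path

import torch


def sha256sum(path: Path) -> str:
    """Compute the SHA-256 digest of a file without reading it all into memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()


def load_verified_weights(path: str, expected_sha256: str):
    """Refuse to load a checkpoint whose digest does not match the published value."""
    checkpoint = Path(path)
    actual = sha256sum(checkpoint)
    if actual != expected_sha256:
        raise ValueError(
            f"Checksum mismatch for {checkpoint}: expected {expected_sha256}, got {actual}"
        )
    # weights_only=True (PyTorch >= 1.13) avoids unpickling arbitrary objects.
    return torch.load(checkpoint, map_location="cpu", weights_only=True)


# Usage (placeholder file name and hash):
# state_dict = load_verified_weights("cabinet_cityscapes.pth", "0123abcd...")
```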
This project relies on external packages. Keep them updated:
```bash
# Regularly update dependencies
pip install --upgrade torch torchvision
pip list --outdated
```

Monitor security advisories for:
- PyTorch
- NumPy
- Pillow
- OpenCV
- Other dependencies listed in `cabinet_environment.yml`
Deep learning models can potentially leak training data through model inversion attacks. If using CABiNet with sensitive data:
- Implement differential privacy during training (see the sketch after this list)
- Use techniques like knowledge distillation for deployment
- Limit access to model weights
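The differential-privacy recommendation above is commonly implemented with a third-party library such as Opacus; it is not part of the CABiNet code base. A minimal sketch of wrapping a PyTorch training setup, with placeholder model, optimizer, and data loader:

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
from opacus import PrivacyEngine  # third-party library, assumed installed separately

# Placeholder model and data; substitute the actual segmentation model and dataset.
model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(), nn.Conv2d(8, 19, 1))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loader = DataLoader(
    TensorDataset(torch.randn(16, 3, 64, 64), torch.randint(0, 19, (16, 64, 64))),
    batch_size=4,
)

# Wrap the training objects so per-sample gradients are clipped and noised.
privacy_engine = PrivacyEngine()
model, optimizer, loader = privacy_engine.make_private(
    module=model,
    optimizer=optimizer,
    data_loader=loader,
    noise_multiplier=1.1,  # more noise -> stronger privacy, lower utility
    max_grad_norm=1.0,     # per-sample gradient clipping bound
)
```

The noise multiplier and clipping bound trade privacy strength against segmentation accuracy and need to be tuned per dataset.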
Semantic segmentation models can be fooled by adversarial perturbations:
- Implement input validation and sanitization (see the example after this list)
- Consider adversarial training if deploying in security-critical applications
- Monitor for unusual prediction patterns
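A basic validation layer, as suggested above, rejects obviously malformed inputs before they reach the model; it does not by itself stop carefully crafted perturbations. The dtype, channel, and size constraints below are illustrative defaults, not values taken from the CABiNet configs:

```python
import numpy as np


def validate_image(img: np.ndarray, expected_channels: int = 3, max_side: int = 4096) -> np.ndarray:
    """Reject inputs that do not look like a plausible RGB image."""
    if not isinstance(img, np.ndarray):
        raise TypeError(f"Expected a numpy array, got {type(img).__name__}")
    if img.ndim != 3 or img.shape[2] != expected_channels:
        raise ValueError(f"Expected an HxWx{expected_channels} image, got shape {img.shape}")
    if max(img.shape[0], img.shape[1]) > max_side:
        raise ValueError(f"Image dimensions {img.shape[:2]} exceed the {max_side}px limit")
    if img.dtype != np.uint8:
        raise ValueError(f"Expected uint8 pixel values, got {img.dtype}")
    return img
```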
Large models can consume significant computational resources:
- Implement timeouts for inference (see the sketch after this list)
- Set memory limits
- Use batch size limits
- Monitor GPU/CPU usage
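A rough sketch of the timeout, batch-size, and memory limits above, using only PyTorch and the Python standard library; the numbers are placeholders and `run_model` stands in for the actual inference call:

```python
import concurrent.futures

import torch

MAX_BATCH_SIZE = 8          # placeholder cap on images per forward pass
INFERENCE_TIMEOUT_S = 10.0  # placeholder wall-clock budget per batch

# Optionally cap the fraction of GPU memory this process may allocate.
if torch.cuda.is_available():
    torch.cuda.set_per_process_memory_fraction(0.5, device=0)

_POOL = concurrent.futures.ThreadPoolExecutor(max_workers=1)


def run_model(batch: torch.Tensor) -> torch.Tensor:
    """Placeholder for the actual CABiNet forward pass."""
    return batch


def bounded_inference(batch: torch.Tensor) -> torch.Tensor:
    """Enforce a batch-size cap and a hard timeout around inference."""
    if batch.shape[0] > MAX_BATCH_SIZE:
        raise ValueError(f"Batch size {batch.shape[0]} exceeds limit {MAX_BATCH_SIZE}")
    future = _POOL.submit(run_model, batch)
    # Raises concurrent.futures.TimeoutError when the budget is exceeded; the worker
    # thread is not killed, so process- or container-level isolation is still advisable.
    return future.result(timeout=INFERENCE_TIMEOUT_S)
```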
- Security vulnerabilities will be disclosed publicly after a fix is available
- We aim for a 90-day timeline from initial report to public disclosure
- Critical vulnerabilities may be disclosed sooner if actively exploited
- We will coordinate disclosure timing with the reporter
For security-related inquiries:
- Email: [email protected]
- Response Time: Within 3 business days
For general questions (non-security), please use GitHub Issues.
- OWASP Machine Learning Security Top 10
- PyTorch Security Guidelines
- NIST AI Risk Management Framework
Last Updated: November 2025
Thank you for helping keep CABiNet and its users safe!