Prevents enterprise AI applications from leaking sensitive data to external LLM providers — without disrupting user workflows.
The one-stop DLP browser extension to stop users from sharing sensitive information with ChatGPT.
A GUI-based tool developed in Rust & Electron for preventing sensitive information leaks.
The official Python library for the OpenGuardrails API
A straightforward Data Leakage Protection solution for Apache/NGINX that intelligently blocks access to the most sensitive data types, offering zero-day protection for the top 10 data-hacking targets across your Apache, AWS, RDS, MySQL, and web-based services.
Vigilante Vixen learned that this case involved many security vulnerabilities across its technical, behavioral, legal, and human-resources aspects. Although we are not directly involved in offshore financial services or the legal profession, technology roles offer considerable opportunity to review this case and implement security regulations.
An end-to-end machine learning pipeline for detecting fraudulent credit card transactions using advanced resampling techniques and ensemble modeling strategies.
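The description above mentions resampling techniques for imbalanced fraud data. A minimal pure-Python sketch of the simplest such technique, random oversampling (the toy dataset and helper name are illustrative, not from this project; SMOTE and ensemble strategies are refinements of the same idea):

```python
import random

random.seed(1)

# Toy imbalanced dataset: 95 legitimate (0) vs 5 fraudulent (1) rows.
rows = [([random.random()], 0) for _ in range(95)] + \
       [([random.random()], 1) for _ in range(5)]

def random_oversample(rows):
    """Duplicate minority-class rows until every class matches the
    largest class's size (the simplest resampling strategy)."""
    by_class = {}
    for x, y in rows:
        by_class.setdefault(y, []).append((x, y))
    target = max(len(v) for v in by_class.values())
    balanced = []
    for members in by_class.values():
        balanced.extend(members)
        balanced.extend(random.choices(members, k=target - len(members)))
    return balanced

balanced = random_oversample(rows)
counts = {y: sum(1 for _, yy in balanced if yy == y) for y in (0, 1)}
print(counts)  # {0: 95, 1: 95}
```

Crucially, resampling must be applied only to the training split, never before the train/test split, or the evaluation itself leaks.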
School Project for DLP class
This project highlights the dangers of data leakage in machine learning, showcasing how it can lead to misleadingly high model accuracy, and emphasizes the importance of rigorous validation and awareness to ensure realistic predictions.
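The failure mode this project describes is easy to reproduce. A self-contained sketch (synthetic data and a hand-rolled k-NN; names are illustrative, not from this repo) where a "leaky" feature derived from the target inflates held-out accuracy to near-perfect, while the honest feature set performs at chance:

```python
import random

random.seed(0)

# Synthetic binary classification data: one pure-noise feature plus a
# "leaky" feature that simply copies the label (a stand-in for any
# column accidentally derived from the target).
data = []
for _ in range(200):
    y = random.randint(0, 1)
    noisy = random.gauss(0, 1)   # carries no signal about y
    leak = float(y)              # perfectly encodes y
    data.append(((noisy, leak), y))

train, test = data[:150], data[50:]
train, test = data[:150], data[150:]

def knn_predict(train, x, k=3, feats=(0, 1)):
    """Plain L1-distance k-NN vote over the chosen feature indices."""
    dists = sorted(
        (sum(abs(xt[i] - x[i]) for i in feats), yt) for xt, yt in train
    )
    votes = [y for _, y in dists[:k]]
    return max(set(votes), key=votes.count)

def accuracy(feats):
    hits = sum(knn_predict(train, x, feats=feats) == y for x, y in test)
    return hits / len(test)

print(f"with leaky feature:    {accuracy((0, 1)):.2f}")   # near 1.00
print(f"without leaky feature: {accuracy((0,)):.2f}")     # near chance
```

The "accurate" model is useless in production, where the leaky column does not exist at prediction time — which is exactly why rigorous validation matters.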
SecureLLM is a policy-driven guardrail layer for LLMs that combines input risk detection, output validation, and red-team evaluation to enforce deterministic AI safety guarantees. It demonstrates how LLM security can be treated as an engineering problem with measurable outcomes rather than prompt-level heuristics.
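Input risk detection of the kind described above is often built on pattern rules. A minimal sketch of an input-side guardrail that scans a prompt for common sensitive-data patterns and redacts them before the text leaves the network (the rule names and regexes are illustrative, not SecureLLM's actual policy set):

```python
import re

# Illustrative detection rules; a production guardrail would use
# validated patterns, checksums (e.g. Luhn), and context scoring.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CREDIT_CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(prompt: str) -> tuple[str, list[str]]:
    """Replace each match with a placeholder; return the redacted
    prompt and the list of rule names that fired."""
    hits = []
    for name, pattern in PATTERNS.items():
        if pattern.search(prompt):
            hits.append(name)
            prompt = pattern.sub(f"[{name}]", prompt)
    return prompt, hits

clean, hits = redact("Contact jane.doe@corp.com, SSN 123-45-6789.")
print(clean)  # Contact [EMAIL], SSN [SSN].
print(hits)   # ['EMAIL', 'SSN']
```

Returning the fired rule names alongside the redacted text is what makes the outcome measurable: each request can be logged, blocked, or allowed by policy rather than by prompt-level heuristics.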
KNN in Forensic Science: Classifying Glass Evidence for Criminal Investigations
An automation tool designed to help users report data broker websites that display and sell personal information. By collectively reporting these sites to Google Safe Browsing, we can help protect our privacy without relying on costly services that can still be prone to leaks.
Regression | Analysis | Modelling | Bayesian Search CV | Data Leakage | Overfit/Underfit | End-to-End Project