AI Memory Contamination Detection System

The emergence of powerful large language models (LLMs) has introduced a new challenge in AI safety and security: memory contamination. This occurs when an AI model's responses are influenced by previously processed data that may contain sensitive, proprietary, or confidential information.

This report analyzes a machine learning system designed to detect and mitigate potential contamination in AI model responses. The system uses a BERT-based neural network to identify when input text contains sensitive or proprietary information that could lead to data leakage or privacy violations. When such contamination is detected, the system implements various mitigation strategies to sanitize responses.
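As a rough illustration of this pipeline, the sketch below shows how a BERT-based binary classifier could score input text for contamination and gate responses behind a threshold. The model name, label scheme, threshold, and the `contamination_score`/`sanitize` helpers are illustrative assumptions, not the repository's actual implementation.

```python
# Minimal sketch of a BERT-based contamination detector with a simple
# redaction-style mitigation. All names and parameters here are assumptions.
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased",
    num_labels=2,  # assumed label scheme: 0 = clean, 1 = contaminated
)
model.eval()


def contamination_score(text: str) -> float:
    """Return the model's probability that `text` contains sensitive content."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        logits = model(**inputs).logits
    # Probability assigned to the "contaminated" class.
    return torch.softmax(logits, dim=-1)[0, 1].item()


def sanitize(text: str, threshold: float = 0.5) -> str:
    """One possible mitigation strategy: replace the response entirely when
    the contamination score exceeds a threshold. The repository may use
    different or additional strategies (e.g., span-level redaction)."""
    if contamination_score(text) >= threshold:
        return "[REDACTED: potential sensitive-content contamination detected]"
    return text
```

In practice such a classifier would first be fine-tuned on labeled examples of sensitive versus clean text; the pretrained checkpoint above would not detect contamination out of the box.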
