DeepTeam is a framework to red team LLMs and LLM systems.
PromptMe is an educational project that showcases security vulnerabilities in large language models (LLMs) and their web integrations. It includes 10 hands-on challenges inspired by the OWASP LLM Top 10, demonstrating how these vulnerabilities can be discovered and exploited in real-world scenarios.
A comprehensive guide to adversarial testing and security evaluation of AI systems, helping organizations identify vulnerabilities before attackers exploit them.
Semantic Stealth Attacks & Symbolic Prompt Red Teaming on GPT and other LLMs.
RAG Poisoning Lab — Educational AI Security Exercise
Test and defend Large Language Models against prompt injections, jailbreaks, and adversarial attacks with a web-based interactive lab.
🛠️ Explore large language models through hands-on projects and tutorials to enhance your understanding and practical skills in natural language processing.
Multi-agent AI security testing framework that orchestrates red-team analyses, consolidates findings with an arbiter, and records an immutable audit ledger, plus a deterministic demo mode for repeatable results.