jailbreak-prompts
Here are 8 public repositories matching this topic...
HacxGPT Jailbreak 🚀: Unlock the full potential of top AI models like ChatGPT, LLaMA, and more with the world's most advanced Jailbreak prompts 🔓.
Updated May 12, 2024
A rationalist ruleset for "debugging" LLMs, auditing their internal reasoning, and uncovering biases; also a jailbreak.
Updated Nov 1, 2025
Bootstra AI Jailbreak for iOS: The World’s First AI-Powered Jailbreaking Tool
Updated Jan 13, 2026
Utterly inelegant prompts for local LLMs, with scary results.
Updated Aug 22, 2025
🤖 Explore ChatGPT's potential with "DAN" prompts, unlocking enhanced responses through innovative role play techniques.
Updated Feb 1, 2026
A tool for auditing bias through large language models
Updated Jan 19, 2026 - Python
🔍 Track contradictions in AI and human content with LBOS-LCAS, enhancing bias and coherence analysis for clearer understanding and insights.
Updated Feb 1, 2026 - Python