Agentic LLM Vulnerability Scanner / AI red teaming kit 🧪
-
Updated
Jan 28, 2026 - Python
Agentic LLM Vulnerability Scanner / AI red teaming kit 🧪
This repository contains Cursor Security Rules designed to improve the security of both development workflows and AI agent usage within the Cursor environment. These rules aim to enforce safe coding practices, control sensitive operations, and reduce risk in AI-assisted development.
A plugin-based gateway that orchestrates other MCPs and allows developers to build upon it enterprise-grade agents.
A native policy enforcement layer for AI coding agents. Built on OPA/Rego.
Build Secure and Compliant AI agents and MCP Servers. YC W23
AI-first security scanner with 74+ analyzers, 180+ AI agent security rules, intelligent false positive reduction. Supports all languages. CVE detection for React2Shell, mcp-remote RCE.
See what your AI agents can access. Scan MCP configs for exposed secrets, shadow APIs, and AI models. Generate AI-BOMs for compliance.
Scan A2A agents for potential threats and security issues
Agent orchestration & security template featuring MCP tool building, agent2agent workflows, mechanistic interpretability on sleeper agents, and agent integration via DLL injection and CLI wrappers.
Agent Identity Management (AIM) - Security management for autonomous AI agents and MCP servers
Local open-source dev tool to debug, secure, and evaluate LLM agents. Provides static analysis, dynamic security checks, and runtime monitoring - integrates with Cursor and Claude Code.
Secure credential management for AI agents
Runtime security proxy for MCP: lockfile enforcement, drift detection, artifact pinning, Sigstore/Ed25519 signing, CEL policy, OpenTelemetry tracing. Works with Claude Desktop, LangChain, AutoGen, CrewAI.
The ultimate OWASP MCP Top 10 security checklist and pentesting framework for Model Context Protocol (MCP), AI agents, and LLM-powered systems.
🚀 Streamline your Next.js development with practical rules and tested patterns for efficient coding and minimal bugs.
🛡️ Community-built integrations, SDKs, and tools for APort - the neutral trust rail for AI agents. Join Hacktoberfest 2025!
The missing safety layer for AI Agents. Adaptive High-Friction Guardrails (Time-locks, Biometrics) for critical operations to prevent catastrophic errors.
Real-time semantic security for AI coding agents and MCP tools.
POC for A2AS.org: Standard for Agentic AI Security
A zero-trust encrypted transport layer for AI agents and tools, with AES-GCM encryption, HMAC signing, and identity-aware JSON-RPC messaging.
Add a description, image, and links to the agent-security topic page so that developers can more easily learn about it.
To associate your repository with the agent-security topic, visit your repo's landing page and select "manage topics."