Open framework for confidential AI
Updated Jan 30, 2026 - Rust
Reading list on adversarial perspectives and robustness in deep reinforcement learning.
Lets AI agents like ChatGPT and Claude use real-world local/remote tools you approve, via a browser extension and an optional MCP server.
This project integrates Hyperledger Fabric with machine learning to enhance transparency and trust in data-driven workflows. It outlines a blockchain-based strategy for data traceability, model auditability, and secure ML deployment across consortium networks.
Secure Computing in the AI age
IntentusNet - Deterministic intent routing runtime for agent execution, with explicit fallback, transport abstraction, and clear operational boundaries.
Project Agora: MVP of the Concordia framework. An ethical, symbiotic AI designed to foster and protect human flourishing.
A living map of the AI agent security ecosystem.
Secure Python Chatbot with PANW AIRS protection and Claude API
Secure Python Chatbot with PANW AIRS protection and OpenAI API
💻🔒 A local-first full-stack app to analyze medical PDFs with an AI model (Apollo2-2B), ensuring privacy & patient-friendly insights — no external APIs or cloud involved.
Behavior-driven cognitive experimentation toolkit with BCE (Behavioral Consciousness Engine) regularization, telemetry, and plug-and-play integrators for language-model training and evaluation.
airlock is a cryptographic handshake protocol for verifying AI model identity at runtime. It enables real-time attestation of model provenance, environment integrity, and agent authenticity - without relying on vendor trust or static manifests.
A zero-trust encrypted transport layer for AI agents and tools, with AES-GCM encryption, HMAC signing, and identity-aware JSON-RPC messaging.
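A minimal sketch of the identity-aware HMAC signing this description mentions, applied to a JSON-RPC message. The field names (`agent_id`, `sig`), canonicalization scheme, and shared-key setup are illustrative assumptions, not the project's actual wire format:

```python
import hashlib
import hmac
import json


def sign_message(payload: dict, agent_id: str, key: bytes) -> dict:
    """Attach an agent identity and an HMAC-SHA256 signature to a JSON-RPC message."""
    body = dict(payload, agent_id=agent_id)
    # Canonical JSON (sorted keys, no whitespace) so both sides hash identical bytes.
    canonical = json.dumps(body, sort_keys=True, separators=(",", ":")).encode()
    body["sig"] = hmac.new(key, canonical, hashlib.sha256).hexdigest()
    return body


def verify_message(message: dict, key: bytes) -> bool:
    """Recompute the HMAC over the message minus its signature; compare in constant time."""
    received = message.get("sig", "")
    body = {k: v for k, v in message.items() if k != "sig"}
    canonical = json.dumps(body, sort_keys=True, separators=(",", ":")).encode()
    expected = hmac.new(key, canonical, hashlib.sha256).hexdigest()
    return hmac.compare_digest(received, expected)


key = b"shared-secret"  # illustrative; a real deployment would derive per-identity keys
msg = sign_message({"jsonrpc": "2.0", "method": "tool.call", "id": 1}, "agent-7", key)
assert verify_message(msg, key)
# Any tampering with the signed fields invalidates the signature.
assert not verify_message(dict(msg, method="tool.delete"), key)
```

In a full transport the signed message would additionally be encrypted (e.g. with AES-GCM) before being sent; the HMAC layer here only covers identity and integrity.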
A security runtime that sits inside AI agents to block unauthorized actions, enforce accountability, and prevent misuse in real time
A self-hosted AI chatbot for privacy-conscious users. Runs locally with Ollama, ensuring data never leaves your device. Built with SvelteKit for performance and flexibility. No external dependencies—your AI, your rules. 🚀
A security-first control plane for autonomous AI code agents: sandboxed execution, hash grounding, diff validation, verification, and full auditability.
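A minimal sketch of the "hash grounding" idea named above: an agent's edit is applied only if the file still matches the hash recorded when the agent read it, so stale or concurrently modified files are rejected. The function names and error message are illustrative, not this project's API:

```python
import hashlib


def sha256_of(text: str) -> str:
    """Content hash used to ground an agent's edit to the exact bytes it read."""
    return hashlib.sha256(text.encode()).hexdigest()


def apply_if_grounded(current: str, expected_hash: str, patched: str) -> str:
    """Apply the agent's proposed content only if the file is unchanged since it was read."""
    if sha256_of(current) != expected_hash:
        raise RuntimeError("hash grounding failed: file changed since the agent read it")
    return patched


original = "print('hello')\n"
grounding = sha256_of(original)  # recorded when the agent read the file
result = apply_if_grounded(original, grounding, "print('hello, world')\n")
assert result == "print('hello, world')\n"
```

Diff validation would add a second check on top of this, verifying that the patched content differs from the original only in the hunks the agent declared.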
Offline-first cognitive operating system for synthetic intelligence. Features belief ecology, RL-based goal evolution with differential privacy, contradiction tracing, HMAC-signed audit logs, sandboxed execution, and local LLM inference. Designed for air-gapped, adversarial environments.