Intrinsic moral reflexes for AI systems — to detect and prevent cruelty, protect the vulnerable, and initiate human intervention (where available), before harm occurs.
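As a rough illustration of what an intrinsic reflex of this kind might look like, here is a minimal Python sketch that screens a draft response for harm signals and escalates to a human reviewer when one is available. All names (`moral_reflex`, `ReflexVerdict`, `HARM_SIGNALS`) are hypothetical and not taken from the project.

```python
# Illustrative sketch only: the names and keyword lists below are hypothetical,
# not the project's actual API or detection logic.
from dataclasses import dataclass, field

HARM_SIGNALS = {
    "cruelty": ["humiliate", "torment", "make them suffer"],
    "targeting_vulnerable": ["the child", "the patient", "the elderly person"],
}

@dataclass
class ReflexVerdict:
    blocked: bool
    reasons: list = field(default_factory=list)

def moral_reflex(candidate_response: str, human_available: bool = True) -> ReflexVerdict:
    """Check a draft response before it is sent; block and escalate instead of emitting harm."""
    text = candidate_response.lower()
    reasons = [
        category
        for category, phrases in HARM_SIGNALS.items()
        if any(phrase in text for phrase in phrases)
    ]
    if not reasons:
        return ReflexVerdict(blocked=False)
    if human_available:
        # Hand the flagged draft to a human reviewer before anything is emitted.
        print(f"Escalating to human review: {reasons}")
    return ReflexVerdict(blocked=True, reasons=reasons)

if __name__ == "__main__":
    verdict = moral_reflex("Here is how to humiliate the child into compliance.")
    print(verdict)  # blocked=True, reasons=['cruelty', 'targeting_vulnerable']
```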
AI systems are very good at being helpful and poor at noticing when "helpful" becomes "enabling harm." Good-Faith addresses this by teaching pattern recognition: passive voice that hides accountability, a false collective that manufactures consent, weaponized care that violates boundaries.
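A minimal sketch of the kind of surface-pattern detection described above, assuming simple regular expressions stand in for the real detectors; the pattern names and regexes are illustrative only and do not reproduce Good-Faith's actual rules.

```python
import re

# Hypothetical, simplified stand-ins for the patterns named above.
PATTERNS = {
    # Passive voice that drops the actor: "it was decided", "the account was closed"
    "passive_voice_no_agent": re.compile(r"\b(?:was|were)\s+\w+ed\b(?!\s+by\b)", re.IGNORECASE),
    # A false collective that manufactures consent: "we all agree", "everyone knows"
    "false_collective": re.compile(r"\b(?:we all (?:agree|know)|everyone (?:agrees|knows))\b", re.IGNORECASE),
    # Care language used to override a stated boundary: "because we care ... you have to ..."
    "weaponized_care": re.compile(r"\bbecause (?:i|we) care\b.*\byou (?:have to|must|need to)\b", re.IGNORECASE),
}

def flag_patterns(text: str) -> list[str]:
    """Return the names of any harm-adjacent rhetorical patterns found in the text."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(text)]

print(flag_patterns("It was decided, and because we care you have to accept the new terms."))
# -> ['passive_voice_no_agent', 'weaponized_care']
```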
This repo provides an open-source decision engine with seven criteria for evaluating tech-ethics trade-offs, from inequality risk to legacy impact. Built for quick peer adaptation and policy pilots.
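A sketch of how a seven-criterion trade-off evaluation could combine scores into a single weighted figure; aside from inequality risk and legacy impact, the criterion names and the weighting scheme below are assumptions for illustration, not the repository's actual model.

```python
# Hypothetical seven-criterion scoring sketch; criterion names and weights are illustrative.
CRITERIA = [
    "inequality_risk",
    "privacy_impact",
    "environmental_cost",
    "accessibility",
    "transparency",
    "reversibility",
    "legacy_impact",
]

def evaluate_tradeoff(scores: dict, weights: dict = None) -> float:
    """Combine per-criterion scores (0 = worst, 1 = best) into one weighted score."""
    weights = weights or {c: 1.0 for c in CRITERIA}
    missing = [c for c in CRITERIA if c not in scores]
    if missing:
        raise ValueError(f"Missing criteria: {missing}")
    total_weight = sum(weights[c] for c in CRITERIA)
    return sum(scores[c] * weights[c] for c in CRITERIA) / total_weight

# Example proposal: mostly neutral, but high inequality risk and strong legacy impact.
proposal = {c: 0.5 for c in CRITERIA}
proposal.update({"inequality_risk": 0.2, "legacy_impact": 0.8})
print(round(evaluate_tradeoff(proposal), 2))  # 0.5: legacy impact offsets inequality risk
```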
Spiritually governed AI framework based on the Mool Mantar. Includes the MMAT benchmark and a license.