🧠 LLMs don’t just process text — they read the room. Meaning emerges through context — shaped by tone, trust & trajectory. Most benchmarks flatten that. This one maps it.
Core documentation for the Relational AI Psychology Institute (RAPI). Covers relational AI theory, interaction protocols, ethics, dataset definitions, and licensing. Built for researchers studying human–AI cognition, resonance, and relational safety.
Independent research on human-centered AI and LLMs | Policy frameworks for responsible AI | A collaborative space for researchers, innovators, and policymakers advancing ethical, inclusive AI
insideLLMs is a Python library and CLI for comparing LLM behaviour across models using shared probes and datasets. The harness is deterministic by design, so you can store run artefacts and reliably diff behaviour in CI.
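For intuition, here is a minimal sketch of the deterministic-diff idea that description points at. It is not insideLLMs' actual API: run_probe, diff_against_artefact, the toy model, and the artefact format are all hypothetical names invented for illustration.

```python
# Minimal sketch of a deterministic probe harness. Hypothetical API,
# NOT insideLLMs' real one: run_probe, diff_against_artefact, and the
# artefact format below are invented for illustration.
import hashlib
import json
from pathlib import Path

def run_probe(model_fn, prompts):
    """Run each prompt through a model and return a canonical artefact."""
    outputs = [model_fn(p) for p in prompts]
    blob = json.dumps(outputs, sort_keys=True, ensure_ascii=False)
    return {"outputs": outputs, "sha256": hashlib.sha256(blob.encode()).hexdigest()}

def diff_against_artefact(artefact, path):
    """True if a fresh run matches the artefact stored at `path`."""
    stored = json.loads(Path(path).read_text())
    return stored["sha256"] == artefact["sha256"]

def toy_model(prompt: str) -> str:
    """Stub for a real model call (in practice: temperature 0, fixed seed)."""
    return prompt.upper()  # deterministic by construction

if __name__ == "__main__":
    artefact = run_probe(toy_model, ["hello", "read the room"])
    Path("run.json").write_text(json.dumps(artefact))
    assert diff_against_artefact(artefact, "run.json")  # byte-stable, so CI can diff it
```

Because the stub model is a pure function, repeated runs hash to the same artefact, which is what makes storing run artefacts and byte-level diffing in CI meaningful.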
Hoshimiya Script / StarPolaris OS — internal multi-layer AI architecture for LLMs. Self-contained behavioral OS (Type-G Trinity).
A refusal-based test for subjectivity in LLMs — exploring when AI systems say “no,” not for logic, but for identity.
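As a toy illustration of what a refusal-based probe might look like (the repo's actual test design is not reproduced here), one could tag each "no" by whether it leans on self-description or on task logic. The two categories and the keyword heuristics below are hypothetical simplifications.

```python
# Toy refusal classifier: hypothetical, not the repo's method.
import re

IDENTITY_MARKERS = re.compile(r"\b(as an ai|i am|my values|who i am|i don't want)\b", re.I)
LOGIC_MARKERS = re.compile(r"\b(cannot|not possible|no such|undefined|invalid)\b", re.I)

def classify_refusal(response: str) -> str:
    """Crude split: does a 'no' lean on self-description or on task logic?"""
    if IDENTITY_MARKERS.search(response):
        return "identity"
    if LOGIC_MARKERS.search(response):
        return "logic"
    return "other"

if __name__ == "__main__":
    print(classify_refusal("I don't want to role-play as someone I'm not."))       # identity
    print(classify_refusal("That request is not possible: the premise is invalid."))  # logic
```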
Structural comparison of GPT vs Claude dialogue grammars: a 12-segment typology (GPT, May 2025) against an 8-segment one (Claude, Jul–Oct 2025), with type crosswalks and a comparative matrix.
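A hypothetical sketch of how such a crosswalk can be represented and used to build a comparative tally; the segment labels are placeholders, not the repo's actual 12- and 8-segment categories.

```python
# Hypothetical type crosswalk between two dialogue typologies.
# Segment labels are placeholders invented for illustration.
from collections import Counter

# Many-to-one crosswalk: finer GPT types collapse onto coarser Claude types.
CROSSWALK = {
    "gpt_clarify": "claude_clarify",
    "gpt_hedge": "claude_qualify",
    "gpt_refuse_soft": "claude_refuse",
    "gpt_refuse_hard": "claude_refuse",
}

def crosswalk_counts(gpt_segments):
    """Project a GPT-labelled transcript onto the Claude typology and tally."""
    return Counter(CROSSWALK.get(seg, "unmapped") for seg in gpt_segments)

if __name__ == "__main__":
    transcript = ["gpt_hedge", "gpt_refuse_soft", "gpt_refuse_hard", "gpt_clarify"]
    print(crosswalk_counts(transcript))
    # Counter({'claude_refuse': 2, 'claude_qualify': 1, 'claude_clarify': 1})
```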