Thank you for your interest in improving Diffract!
If you applied Diffract and found something the lenses didn't catch, open an issue describing:
- What you were reviewing (language, architecture style)
- What the lenses missed (the specific finding)
- Which lens should have caught it (or whether a new lens is needed)
- Evidence that none of the existing 9 lenses covers it
To propose adding, removing, or modifying a lens, your proposal should include:
- Root principle — what first principle outside software is it grounded in?
- Uniqueness proof — what does it catch that no other lens catches?
- The question — express it as a single yes/no question
- Evidence format — what does the output look like?
If you've completed a full Diffract review and want to share it:
- Anonymize all project-specific details
- Include the PLAN (governors), DO (findings), CHECK (vetting), and LEARN (retrospective)
- Add it to `examples/` as a pull request
Clarity improvements, additional examples, and translations are welcome.
If you've run Diffract with two independent reviewers (human or AI):
- Record results using the template in docs/calibration.md
- Note which lenses produced the same findings and which diverged
- Submit as a PR or issue
Diffract follows its own framework. Changes to the framework should be validated by running Diffract on itself:
- Does the change pass the 🗑️ Subtract lens? (Is it necessary?)
- Does it pass the 📌 Truth lens? (Is it in one place?)
- Does it pass the 🏷️ Name lens? (Is it well-named?)
- Is the finding that motivated the change falsifiable?
All contributions are expected to follow CODE_OF_CONDUCT.md.