Proof-of-concept CLI that wires a LangChain prompt around Rust smart contract
files. The tool reads a scout.json configuration, enriches it with a
per-contract vulnerability catalog, merges any extra prompt snippets, and
prepares a final request for an LLM.
```bash
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
```

For LLM API access, expose a single `API_KEY` (either via `source .envrc` or by
exporting it yourself):
```bash
source .envrc                      # loads values from .env automatically
# Or manually:
export API_KEY="your-model-key"
export SCOUT_LOG_LEVEL="INFO"      # optional; controls logging verbosity
```

The CLI reads `.env` automatically through python-dotenv, so defining
`API_KEY=...` in that file is usually the easiest approach.
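For reference, this is the standard python-dotenv pattern the CLI leans on; a minimal sketch:

```python
# Minimal sketch of the python-dotenv behaviour described above: values from a
# nearby .env file are loaded into the process environment if not already set.
import os

from dotenv import load_dotenv

load_dotenv()  # reads a .env file, if present, into os.environ
api_key = os.environ.get("API_KEY")
if api_key is None:
    print("API_KEY not set; requests to the provider cannot be made")
```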
Usage mirrors the requested interface:
```bash
./scout-ai-poc examples \
  --extra-prompt ./prompts/input_validation.txt \
  --dry-run
```

The CLI automatically looks for a file named `scout.json` inside the target
directory. Pass `--config <path>` only when you need to point discovery at a
different directory (or at an explicit `scout.json` file).
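Sketched as a hypothetical helper (the actual discovery logic lives in the CLI/data loader and may differ), the rule amounts to:

```python
# Illustrative sketch of scout.json discovery; resolve_config is a made-up name,
# not necessarily the function used inside scout_ai_poc.
from pathlib import Path


def resolve_config(target: Path, config: Path | None = None) -> Path:
    """Return the scout.json to load for a run."""
    base = config if config is not None else target
    candidate = base / "scout.json" if base.is_dir() else base
    if candidate.name != "scout.json" or not candidate.is_file():
        raise FileNotFoundError(f"No scout.json found at {candidate}")
    return candidate
```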
Pass `--include-deps` to inspect each listed Rust file for local `mod foo;`
declarations and any `use` paths (e.g., `use crate::foo::bar`) so that
referenced modules are automatically added to the prompt. Control recursion with
`--dependency-depth` (default 1); it is ignored unless `--include-deps` is set.
The `tree_sitter` and `tree_sitter_rust` packages must be installed for this flag.
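Purely as an illustration of the kind of scan involved (not the project's actual implementation), a tree-sitter pass over a single file might look roughly like this, assuming `tree_sitter` >= 0.22 and the `tree_sitter_rust` binding:

```python
# Illustrative sketch, not scout_ai_poc's scanner. Assumes the tree_sitter >= 0.22
# API, where Parser accepts a Language directly.
from pathlib import Path

import tree_sitter_rust
from tree_sitter import Language, Parser

RUST = Language(tree_sitter_rust.language())


def local_module_references(rust_file: Path) -> set[str]:
    """Collect top-level `mod foo;` names and `use` paths from one Rust file."""
    tree = Parser(RUST).parse(rust_file.read_bytes())
    refs: set[str] = set()
    for node in tree.root_node.children:
        if node.type == "mod_item":
            name = node.child_by_field_name("name")
            if name is not None:
                refs.add(name.text.decode())
        elif node.type == "use_declaration":
            arg = node.child_by_field_name("argument")
            if arg is not None:
                refs.add(arg.text.decode())
    return refs
```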
Remove `--dry-run` and set `API_KEY` once you are ready to hit your provider.
The CLI automatically infers which backend to call from the `model` string
defined in `scout.json` (override it per run via `--model`). Supported models
are enumerated inside `scout_ai_poc/llm_config.py`; if you pass an unknown
model, the CLI will list the options available per provider.
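Sketched loosely, that inference could look like the snippet below; the model names and providers are placeholders, and the real table in `scout_ai_poc/llm_config.py` is authoritative:

```python
# Illustrative only: model names and providers here are placeholders, not the
# actual contents of scout_ai_poc/llm_config.py.
SUPPORTED_MODELS: dict[str, set[str]] = {
    "openai": {"gpt-5.1"},
    "google": {"gemini-2.0-flash"},
}


def infer_provider(model: str) -> str:
    """Map a model string from scout.json (or --model) to its provider."""
    for provider, models in SUPPORTED_MODELS.items():
        if model in models:
            return provider
    catalog = {p: sorted(m) for p, m in SUPPORTED_MODELS.items()}
    raise SystemExit(f"Unknown model {model!r}. Available per provider: {catalog}")
```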
For a richer dependency graph demo, run the complex example:
```bash
./scout-ai-poc examples/complex \
  --dry-run \
  --include-deps \
  --dependency-depth 2
```

Project layout:

- `scout_ai_poc/main.py` – tiny entry point delegating to the CLI + runner.
- `scout_ai_poc/cli.py` – argument parsing and default selection.
- `scout_ai_poc/data_loader.py` – config parsing plus file/prompt ingestion helpers.
- `scout_ai_poc/runner.py` – orchestrates vulnerability catalog lookups and LangChain execution.
- `scout_ai_poc/vulnerability_catalog.py` – curated vulnerabilities per contract type.
- `prompts/base_prompt.txt` – primary template; edit this file to adjust model instructions.
- `prompts/input_validation.json` – example extra prompt payload wired via `--extra-prompt`.
- `examples/scout.json` – demo configuration pointing at `contracts/swap.rs`.
- `examples/complex/scout.json` – dependency-heavy sample centered on `contracts/gateway.rs`.
- `scout-ai-poc` – thin wrapper so you can run `scout-ai-poc …` locally.
Each project keeps a single file literally named `scout.json`, located at the
directory passed as target (or the directory supplied via `--config`). The
file is a plain JSON document with the following minimal schema:
```json
{
  "contract_type": "dex",
  "model": "gpt-5.1",
  "mode": "consistent",
  "files": ["relative/or/absolute/path/to/file.rs"]
}
```

`contract_type` selects the vulnerability catalog. `model` controls which LLM to
use (and implicitly which provider is invoked). `mode` selects how model
parameters are handled (`consistent` sends the preset configuration from
`scout_ai_poc/llm_config.py`, while `creative` sends no overrides so the
provider defaults apply). `files` entries are resolved relative to the target
and inlined into the prompt in the order provided.
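A minimal sketch of that resolution rule, with an illustrative helper name (the real parsing lives in `scout_ai_poc/data_loader.py` and may differ):

```python
# Illustrative sketch only: parse scout.json and resolve its file entries
# relative to the target directory, preserving their order.
import json
from pathlib import Path


def load_scout_config(target: Path) -> dict:
    """Parse scout.json and resolve its `files` entries against the target."""
    config = json.loads((target / "scout.json").read_text())
    resolved = []
    for entry in config["files"]:
        path = Path(entry)
        resolved.append(path if path.is_absolute() else target / path)
    config["files"] = resolved
    return config
```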
Each supported model has an explicit `consistent` preset in
`scout_ai_poc/llm_config.py`. We start from a shared base (temperature 0.0,
fixed seed, zero penalties) and then specialize by provider/model; for example,
Gemini forces `top_p=0`/`top_k=1`, while `gpt-5.1` drops temperature entirely
because that endpoint rejects it and instead receives only the
`reasoning_effort` hint. The adapter only forwards knobs the backend accepts,
so errors from unsupported parameters are avoided. Set `mode` to `creative` in
`scout.json` or pass `--llm-mode creative` to skip all overrides and use
provider defaults.
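As an illustration of the idea (structure and values below are placeholders; the authoritative presets live in `scout_ai_poc/llm_config.py`):

```python
# Illustrative structure only; concrete presets live in scout_ai_poc/llm_config.py.
BASE_CONSISTENT = {
    "temperature": 0.0,
    "seed": 42,              # placeholder value for the fixed seed
    "frequency_penalty": 0.0,
    "presence_penalty": 0.0,
}

CONSISTENT_PRESETS = {
    # Gemini: pin sampling as tightly as the API allows.
    "gemini-2.0-flash": {**BASE_CONSISTENT, "top_p": 0.0, "top_k": 1},
    # gpt-5.1: the endpoint rejects temperature, so only a reasoning_effort hint
    # is sent (the value here is a placeholder).
    "gpt-5.1": {"reasoning_effort": "high"},
}


def params_for(model: str, mode: str) -> dict:
    """Return the parameter overrides to forward; 'creative' sends none."""
    if mode == "creative":
        return {}
    return CONSISTENT_PRESETS.get(model, dict(BASE_CONSISTENT))
```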
`--extra-prompt` accepts a plain `.txt` file only (no JSON or inline strings).
Drop reusable notes in a file such as `prompts/input_validation.txt` and pass
that path via `--extra-prompt`.
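Conceptually, the merge can be thought of as appending the snippet to the rendered base prompt; the helper below is hypothetical and the real placement may differ:

```python
# Hypothetical sketch; the real merge lives in scout_ai_poc's loader/runner and
# may position the snippet differently within the prompt.
from pathlib import Path


def merge_extra_prompt(base_prompt: str, extra_prompt: Path | None) -> str:
    """Append the optional --extra-prompt snippet to the base prompt text."""
    if extra_prompt is None:
        return base_prompt
    if extra_prompt.suffix != ".txt":
        raise ValueError("--extra-prompt accepts a plain .txt file only")
    return f"{base_prompt}\n\n{extra_prompt.read_text().strip()}"
```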
- Define `API_KEY` (via `.env` or `export API_KEY=...`).
- Drop the `--dry-run` flag.
- Ensure your `scout.json` file specifies the desired `model` (or pass `--model` to override it).
- Execute the CLI; LangChain's runnable pipeline (`prompt | llm | parser`) will render the template and send it to the inferred provider (sketched below).
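A minimal sketch of that pipeline, using the OpenAI backend purely for illustration (the runner selects the real backend from `scout.json`, and the template text actually comes from `prompts/base_prompt.txt`):

```python
# Minimal sketch of the `prompt | llm | parser` chain; scout_ai_poc/runner.py
# builds the real one and picks the backend from scout.json.
import os

from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI  # assumption: OpenAI backend for this example

prompt = ChatPromptTemplate.from_template(
    "{base_prompt}\n\nExtra notes:\n{extra_prompt}\n\nContracts:\n{contract_sources}"
)
llm = ChatOpenAI(model="gpt-5.1", api_key=os.environ["API_KEY"])
parser = StrOutputParser()

chain = prompt | llm | parser
report = chain.invoke(
    {"base_prompt": "...", "extra_prompt": "...", "contract_sources": "..."}
)
print(report)
```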
When `API_KEY` is missing, the CLI prints the composed prompt so you can verify
everything before burning tokens.