This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
Graph Node is a Rust-based decentralized blockchain indexing protocol that enables efficient querying of blockchain data through GraphQL. It is the core component of The Graph protocol, implemented as a Cargo workspace with multiple crates organized by functionality.
Use unit tests for regular development and only run integration tests when:
- Explicitly asked to do so
- Making changes to integration/end-to-end functionality
- Debugging issues that require full system testing
- Preparing releases or major changes
Unit tests are inlined with source code.
Prerequisites:
- PostgreSQL running on localhost:5432 (with initialised `graph-test` database)
- IPFS running on localhost:5001
- PNPM
- Foundry (for smart contract compilation)
- Environment variable `THEGRAPH_STORE_POSTGRES_DIESEL_URL` set to `postgresql://graph:graph@127.0.0.1:5432/graph-test`
Environment dependencies and setup are handled by the human.
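For reference, the human-side export of the variable from the prerequisites above can be sketched like this (the value is taken verbatim from the list; adjust credentials to your environment):

```shell
# Human-side setup sketch: export the Diesel URL the unit tests expect
# (value from the prerequisites above).
export THEGRAPH_STORE_POSTGRES_DIESEL_URL="postgresql://graph:graph@127.0.0.1:5432/graph-test"
echo "$THEGRAPH_STORE_POSTGRES_DIESEL_URL"
```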
Running Unit Tests:

```bash
# Run unit tests
just test-unit

# Run specific tests (e.g. `data_source::common::tests`)
just test-unit data_source::common::tests
```

Prerequisites:
- PostgreSQL running on localhost:5432 (with initialised `graph-test` database)
- IPFS running on localhost:5001
- PNPM
- Foundry (for smart contract compilation)
- Environment variable `THEGRAPH_STORE_POSTGRES_DIESEL_URL` set to `postgresql://graph:graph@127.0.0.1:5432/graph-test`
Running Runner Tests:

```bash
# Run runner tests
just test-runner

# Run specific tests (e.g. `block_handlers`)
just test-runner block_handlers
```

Important Notes:
- Runner tests take moderate time (10-20 seconds)
- Tests automatically reset the database between runs
- Some tests can pass without IPFS, but tests involving file data sources require it
Prerequisites:
- PostgreSQL running on localhost:3011 (with initialised `graph-node` database)
- IPFS running on localhost:3001
- Anvil running on localhost:3021
- PNPM
- Foundry (for smart contract compilation)
Environment dependencies and setup are handled by the human.
Running Integration Tests:

```bash
# Run all integration tests (automatically builds graph-node and gnd)
just test-integration

# Run a specific integration test case (e.g., "grafted" test case)
TEST_CASE=grafted just test-integration

# (Optional) Use graph-cli instead of gnd for compatibility testing
GRAPH_CLI=node_modules/.bin/graph just test-integration
```

- ALWAYS verify tests actually ran - check the output for "test result: ok. X passed" where X > 0
- If output shows "0 passed" or "0 tests run", the TEST_CASE variable or filter was wrong - fix and re-run
- Never trust exit code 0 alone - Cargo can exit successfully even when no tests matched your filter
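The verification rule above can be sketched as a small shell check. This is illustrative only: the simulated output line stands in for real cargo output, and in practice you would pipe the `just test-integration` output through `tee` and run the same grep on the saved log.

```shell
# Sketch: detect the "0 passed" trap described above against a simulated
# cargo test summary line. [1-9][0-9]* requires a nonzero pass count.
line="test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out"
if printf '%s\n' "$line" | grep -qE 'test result: ok\. [1-9][0-9]* passed'; then
  echo "tests ran"
else
  echo "no tests matched the filter - check TEST_CASE"
fi
```

Note that the exit code of the test command itself is never consulted, which is exactly why this grep is needed.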
Important Notes:
- Integration tests take significant time (several minutes)
- Tests automatically reset the database between runs
- Logs are written to `tests/integration-tests/graph-node.log`
```bash
# 🚨 MANDATORY: Format all code IMMEDIATELY after any .rs file edit
just format

# 🚨 MANDATORY: Check code for warnings and errors - MUST have zero warnings
just lint

# 🚨 MANDATORY: Check in release mode to catch linking/optimization issues that cargo check misses
just check --release
```

🚨 CRITICAL REQUIREMENTS for ANY implementation:
- 🚨 MANDATORY: `cargo fmt --all` MUST be run before any commit
- 🚨 MANDATORY: `just lint` MUST show zero warnings before any commit
- 🚨 MANDATORY: `cargo check --release` MUST complete successfully before any commit
- 🚨 MANDATORY: The unit test suite MUST pass before any commit
Forgetting any of these means you failed to follow instructions. Before any commit or PR, ALL of the above MUST be satisfied! No exceptions!
- `graph/`: Core abstractions, traits, and shared types
- `node/`: Main executable and CLI (graphman)
- `chain/`: Blockchain-specific adapters (ethereum, near, substreams)
- `runtime/`: WebAssembly runtime for subgraph execution
- `store/`: PostgreSQL-based storage layer
- `graphql/`: GraphQL query execution engine
- `server/`: HTTP/WebSocket APIs
Blockchain → Chain Adapter → Block Stream → Trigger Processing → Runtime → Store → GraphQL API
- Chain Adapters connect to blockchain nodes and convert data to standardized formats
- Block Streams provide event-driven streaming of blockchain blocks
- Trigger Processing matches blockchain events to subgraph handlers
- Runtime executes subgraph code in WebAssembly sandbox
- Store persists entities with block-level granularity
- GraphQL processes queries and returns results
- `Blockchain` trait: Core blockchain interface
- `Store` trait: Storage abstraction with read/write variants
- `RuntimeHost`: WASM execution environment
- `TriggerData`: Standardized blockchain events
- `EventConsumer`/`EventProducer`: Component communication
- Event-driven: Components communicate through async streams and channels
- Trait-based: Extensive use of traits for abstraction and modularity
- Async/await: Tokio-based async runtime throughout
- Multi-shard: Database sharding for scalability
- Sandboxed execution: WASM runtime with gas metering
Use format: `{crate-name}: {description}`
- Single crate: `store: Support 'Or' filters`
- Multiple crates: `core, graphql: Add event source to store`
- All crates: `all: {description}`
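The convention above can be checked mechanically. The helper below is a hypothetical sketch (not part of the repo's tooling); it only verifies that a commit subject contains the `{crate-name}: ` prefix, not that the crate name is valid.

```shell
# Hypothetical helper: check a commit subject against the
# `{crate-name}: {description}` convention described above.
check_subject() {
  case "$1" in
    *": "*) echo "ok" ;;
    *) echo "bad: expected '{crate-name}: {description}'" ;;
  esac
}

check_subject "store: Support 'Or' filters"
check_subject "fix stuff"
```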
- Rebase on master (don't merge master into feature branch)
- Keep commits logical and atomic
- Squash commits to clean up history before merging
- `graph`: Shared types, traits, and utilities
- `node`: Main binary and component wiring
- `core`: Business logic and subgraph management
- `chain/ethereum`: Ethereum chain support
- `chain/near`: NEAR protocol support
- `chain/substreams`: Substreams data source support
- `store/postgres`: PostgreSQL storage implementation
- `runtime/wasm`: WebAssembly runtime and host functions
- `graphql`: Query processing and execution
- `server/`: HTTP/WebSocket servers
- `diesel`: PostgreSQL ORM
- `tokio`: Async runtime
- `tonic`: gRPC framework
- `wasmtime`: WebAssembly runtime
- `web3`: Ethereum interaction
The repository includes a process-compose-flake setup that provides native, declarative service management.
Currently, the human is required to operate the service dependencies as illustrated below.
Unit Tests:

```bash
# Human: Start PostgreSQL + IPFS for unit tests in a separate terminal
# PostgreSQL: localhost:5432, IPFS: localhost:5001
nix run .#unit

# Claude: Run unit tests
just test-unit
```

Runner Tests:
```bash
# Human: Start PostgreSQL + IPFS for runner tests in a separate terminal
# PostgreSQL: localhost:5432, IPFS: localhost:5001
nix run .#unit # NOTE: Runner tests use the same nix services stack as the unit tests

# Claude: Run runner tests
just test-runner
```

Integration Tests:
```bash
# Human: Start all services for integration tests in a separate terminal
# PostgreSQL: localhost:3011, IPFS: localhost:3001, Anvil: localhost:3021
nix run .#integration

# Claude: Run integration tests (automatically builds graph-node and gnd)
just test-integration
```

Services Configuration: The services are configured to use the test suite's default ports for unit and integration tests respectively.
| Service | Unit Tests Port | Integration Tests Port | Database/Config |
|---|---|---|---|
| PostgreSQL | 5432 | 3011 | graph-test / graph-node |
| IPFS | 5001 | 3001 | Data in ./.data/unit or ./.data/integration |
| Anvil (Ethereum) | - | 3021 | Deterministic test chain |
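Before running a test suite, the ports in the table can be probed to confirm the right services stack is up. This is a sketch: the `check_port` helper is made up for illustration and relies on bash's `/dev/tcp` redirection.

```shell
# Sketch: probe whether a local TCP port is listening (helper name is
# illustrative; requires bash for the /dev/tcp redirection).
check_port() {
  (exec 3<>"/dev/tcp/127.0.0.1/$1") 2>/dev/null \
    && echo "port $1: up" \
    || echo "port $1: down"
}

check_port 5432  # PostgreSQL (unit tests)
check_port 5001  # IPFS (unit tests)
```

For integration tests, probe 3011, 3001, and 3021 instead, per the table above.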
Service Configuration: The setup combines built-in services-flake services with custom multiService modules:
Built-in Services:
- PostgreSQL: Uses services-flake's postgres service with a helper function (`mkPostgresConfig`) that provides graph-specific defaults including required extensions.
Custom Services (located in ./nix):
- `ipfs.nix`: IPFS (kubo) with automatic initialization and configurable ports
- `anvil.nix`: Ethereum test chain with deterministic configuration