A modern, modular Zcash software stack combining Zebra, Zaino, and Zallet to replace the legacy zcashd.
- Quick Start
- Understanding the Architecture
- Docker Images
- Prerequisites
- System Requirements
- Setup
- Running the Stack
- Stopping the Stack
- Data Storage & Volumes
- Interacting with Services
- Configuration Guide
- Health and Readiness Checks
Important
First time running Z3? You must sync Zebra before starting the other services. This takes 24-72 hours for mainnet or 2-12 hours for testnet. There is no way around this initial sync.
Already have synced Zebra data? You can start all services immediately.
```shell
# 1. Clone and generate required files
git clone https://github.com/ZcashFoundation/z3 && cd z3
git submodule update --init --recursive

openssl req -x509 -newkey rsa:4096 -keyout config/tls/zaino.key -out config/tls/zaino.crt \
  -sha256 -days 365 -nodes -subj "/CN=localhost" \
  -addext "subjectAltName=DNS:localhost,DNS:zaino,IP:127.0.0.1"

rage-keygen -o config/zallet_identity.txt

# 2. Build Zaino and Zallet (required - no pre-built images available)
docker compose build zaino zallet

# 3. Review configuration
# - config/zallet.toml: set network = "main" or "test"
# - .env: review defaults (usually no changes needed)

# 4. Start ONLY Zebra first
docker compose up -d zebra

# 5. Wait for Zebra to sync (this takes hours/days)
./check-zebra-readiness.sh
# Or manually: curl http://localhost:8080/ready (returns "ok" when synced)

# 6. Once Zebra is synced, start the remaining services
docker compose up -d
```

Note
The check-zebra-readiness.sh script polls Zebra's readiness endpoint and notifies you when sync is complete. You can safely close your terminal during sync and check back later.
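The idea behind that script can be sketched as a small polling loop. This is a hypothetical re-implementation for illustration only; the variable names (`READY_URL`, `INTERVAL`) and function (`poll_ready`) are invented here, and the real script's internals may differ:

```shell
#!/usr/bin/env bash
# Illustrative sketch of a readiness poller like check-zebra-readiness.sh:
# poll Zebra's /ready endpoint until it returns "ok". READY_URL and INTERVAL
# are names invented for this sketch, not the script's actual variables.
READY_URL="${READY_URL:-http://localhost:8080/ready}"
INTERVAL="${INTERVAL:-60}"

poll_ready() {
  while true; do
    # /ready returns the body "ok" with HTTP 200 once Zebra is near the tip
    if [ "$(curl -fsS "$READY_URL" 2>/dev/null)" = "ok" ]; then
      echo "Zebra is synced."
      return 0
    fi
    sleep "$INTERVAL"
  done
}
# Call poll_ready to block until Zebra reports readiness.
```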
If you have previously synced Zebra data (or are mounting existing blockchain state):
```shell
# Build if not already built
docker compose build zaino zallet

# All services can start immediately
docker compose up -d

# Verify all services are healthy
docker compose ps
```

Tip
ARM64 Users (Apple Silicon): Set DOCKER_PLATFORM=linux/arm64 in .env for native builds. This reduces build time from ~50 minutes to ~3 minutes.
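To check which architecture your host actually reports before editing `.env` (a result of `arm64` or `aarch64` means the native setting applies):

```shell
# Print the hardware architecture: Apple Silicon reports arm64, most ARM64
# Linux hosts report aarch64, and Intel/AMD hosts report x86_64.
uname -m
```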
```text
┌───────────────────────────────────────────────────────────┐
│                         Z3 Stack                          │
├───────────────────────────────────────────────────────────┤
│                                                           │
│  ┌─────────┐       ┌─────────┐       ┌─────────┐          │
│  │  Zebra  │◄──────│  Zaino  │       │ Zallet  │          │
│  │ (node)  │       │ (index) │       │(wallet) │          │
│  └────┬────┘       └─────────┘       └────┬────┘          │
│       │                                   │               │
│       │        ┌─────────────┐            │               │
│       └────────│  Embedded   │◄───────────┘               │
│                │ Zaino libs  │                            │
│                └─────────────┘                            │
│                                                           │
└───────────────────────────────────────────────────────────┘
```
Note
Zallet embeds Zaino libraries internally. It connects directly to Zebra's JSON-RPC, not to the standalone Zaino service. The Zaino container in this stack is for external gRPC clients (like Zingo wallet) and for testing the indexer independently.
Service Roles:
- Zebra - Full node that syncs and validates the Zcash blockchain
- Zaino - Standalone indexer providing gRPC interface for light wallets
- Zallet - Wallet service with embedded indexer that talks directly to Zebra
Important
Current Status: Zaino and Zallet require local builds. Pre-built images are available for Zebra only.
| Service | Image | Source |
|---|---|---|
| Zebra | `zfnd/zebra:3.1.0` | Pre-built from ZcashFoundation/zebra |
| Zaino | `z3-zaino:local` | Must build locally from submodule |
| Zallet | `z3-zallet:local` | Must build locally from submodule |
```shell
# Initialize submodules
git submodule update --init --recursive

# Build zaino and zallet
docker compose build zaino zallet
```

Note
Local builds are required because Zaino and Zallet are under active development and require specific version pinning for compatibility.
Zallet embeds Zaino libraries internally. Both must use compatible versions of the Zaino codebase. The submodules in this repository are pinned to tested, compatible commits.
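To inspect which commits the submodules are currently pinned to, a small helper can wrap the standard git command (`show_pins` is a name invented for this guide):

```shell
# Illustrative helper: print each submodule's pinned commit and path.
# Run from the repository root after `git submodule update --init --recursive`.
show_pins() {
  git submodule status --recursive
}
```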
For production deployments, use official release images when available:
- Zebra: zfnd/zebra (stable releases)
- Zaino/Zallet: Official releases when published
Before you begin, ensure you have the following installed:
- Docker Engine: Install Docker
- Docker Compose: (Usually included with Docker Desktop, or install separately)
- Docker Permissions (Linux): You may need to run Docker commands with `sudo`, or add your user to the `docker` group. See Docker's post-installation steps for details. Note that the `docker` group grants root-level privileges.
- rage: For generating the Zallet identity file. Install from str4d/rage releases or build from source.
- Git: For cloning the repositories and submodules.
Running the full Z3 stack (Zebra + Zaino + Zallet) requires substantial hardware resources due to blockchain synchronization and indexing.
- CPU: 2 cores (4+ cores strongly recommended)
- RAM: 4 GB for Zebra; 8+ GB recommended for full stack
- Disk Space:
- Mainnet: 300 GB (blockchain state)
- Testnet: 30 GB (blockchain state)
- Additional space for Zaino indexer database (requirements under determination)
- SSD strongly recommended for sync performance
- Network: Reliable internet connection
- Initial sync download: ~300 GB for mainnet
- Ongoing bandwidth: 10 MB - 10 GB per day
- CPU: 4+ cores
- RAM: 16+ GB
- Disk Space: 500+ GB with room for blockchain growth
- Network: 100+ Mbps connection with ~300 GB/month bandwidth
- Mainnet: 24-72 hours on recommended hardware
- Testnet: 2-12 hours (currently ~3.1M blocks)
- Cached/Resumed: Minutes (if using existing Zebra state)
Sync time varies based on CPU speed, disk I/O (SSD vs HDD), and network bandwidth.
Note: These specifications are based on Zebra's official requirements. Zaino indexer adds additional resource overhead; specific requirements are under determination. Running all three services together requires resources beyond Zebra alone.
1. Clone the Repository:

   Clone the `z3` repository:

   ```shell
   git clone https://github.com/ZcashFoundation/z3
   cd z3
   ```

   Using Pre-Built Images (Recommended): Submodules are not required. Skip to step 2.

   Building Locally (Optional): Initialize submodules to build from source:

   ```shell
   git submodule update --init --recursive
   ```
2. Platform Configuration (Apple Silicon / ARM64):

   ARM64 users: Enable native builds for dramatically faster performance.

   Z3 defaults to AMD64 (x86_64) for development consistency. On ARM64 systems (Apple Silicon M1/M2/M3 or ARM64 Linux), this uses emulation, which is very slow:

   - AMD64 emulation: ~50 minutes to build Zebra
   - Native ARM64: ~2-3 minutes to build Zebra

   To enable native ARM64 builds, edit `.env` and uncomment the `DOCKER_PLATFORM` line:

   ```shell
   # In .env file, change this:
   # DOCKER_PLATFORM=linux/arm64
   # To this:
   DOCKER_PLATFORM=linux/arm64
   ```

   Or append it directly from your shell:

   ```shell
   echo "DOCKER_PLATFORM=linux/arm64" >> .env
   ```

   Intel/AMD users: No action needed. Default AMD64 settings work optimally.
3. Required Files:

   You'll need to generate these files in the `config/` directory:

   - `config/tls/zaino.crt` and `config/tls/zaino.key` - Zaino TLS certificates
   - `config/zallet_identity.txt` - Zallet encryption key
   - `config/zallet.toml` - Zallet configuration (provided, review and customize)
4. Generate Zaino TLS Certificates:

   ```shell
   openssl req -x509 -newkey rsa:4096 -keyout config/tls/zaino.key -out config/tls/zaino.crt \
     -sha256 -days 365 -nodes -subj "/CN=localhost" \
     -addext "subjectAltName=DNS:localhost,DNS:zaino,IP:127.0.0.1"
   ```

   This creates a self-signed certificate valid for 365 days. Including `DNS:zaino` in the SAN list (as in the Quick Start) lets other containers verify the certificate when reaching Zaino by its service name. For production, use certificates from a trusted CA.
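To confirm the generated key and certificate actually belong together, their public keys can be compared. The `check_tls_pair` helper below was written for this guide (it is not part of the repository) and assumes `openssl` is on your PATH:

```shell
# Illustrative helper (not shipped with the repo): verify that a TLS private
# key and certificate form a matching pair by comparing their public keys.
check_tls_pair() {
  local cert="$1" key="$2"
  local cert_pub key_pub
  cert_pub=$(openssl x509 -in "$cert" -noout -pubkey) || return 1
  key_pub=$(openssl pkey -in "$key" -pubout) || return 1
  [ "$cert_pub" = "$key_pub" ]
}

# Usage:
#   check_tls_pair config/tls/zaino.crt config/tls/zaino.key && echo "pair ok"
```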
5. Generate Zallet Identity File:

   ```shell
   rage-keygen -o config/zallet_identity.txt
   ```
Securely back up this file and the public key (printed to terminal).
6. Review Zallet Configuration:

   Review `config/zallet.toml` and update the network setting:

   - For mainnet: `network = "main"` in the `[consensus]` section
   - For testnet: `network = "test"` in the `[consensus]` section

   See Configuration Guide for details on Zallet's architecture and config requirements.
7. Review Environment Variables:

   A comprehensive `.env` file is provided with sensible defaults. Review and customize as needed:

   - `NETWORK_NAME` - Set to `Mainnet` or `Testnet`
   - Log levels for each service (defaults to `info` with warning filters)
   - Port mappings (defaults work for most setups)
See Configuration Guide for the complete variable hierarchy and customization options.
Warning
Why can't I just run `docker compose up`?
Docker Compose healthchecks have timeout limits that cannot accommodate blockchain sync times (hours to days). If you run `docker compose up` on a fresh install, Zaino and Zallet will repeatedly fail waiting for Zebra to sync.
Solution: Start Zebra alone first, wait for sync, then start everything else.
```shell
# Step 1: Start only Zebra
docker compose up -d zebra

# Step 2: Monitor sync progress (choose one method)
./check-zebra-readiness.sh        # Recommended: script waits and notifies
docker compose logs -f zebra      # Watch logs
curl http://localhost:8080/ready  # Manual check (returns "ok" when synced)

# Step 3: Once synced, start all services
docker compose up -d
```

Note
Sync times:
- Mainnet: 24-72 hours (depends on hardware/network)
- Testnet: 2-12 hours
You can close your terminal during sync. Zebra runs in the background.
If Zebra has previously synced (data persists in Docker volumes):
```shell
docker compose up -d
docker compose ps  # Verify all healthy
```

For local development when you need services running during sync:

```shell
cp docker-compose.override.yml.example docker-compose.override.yml
docker compose up -d
```

Caution
Development mode uses `/healthy` instead of `/ready`. Services will start but may error until Zebra catches up. Not for production use.
To stop the services and remove the containers:

```shell
docker compose down
```

To also remove the data volumes (this deletes all synced blockchain state):

```shell
docker compose down -v
```

The Z3 stack stores blockchain data, indexer state, and wallet data in Docker volumes. You can choose between Docker-managed volumes (default) or local directories.
By default, the stack uses Docker named volumes which are managed by Docker:
- `zebra_data`: Zebra blockchain state (~300 GB+ for mainnet, ~30 GB for testnet)
- `zaino_data`: Zaino indexer database
- `zallet_data`: Zallet wallet data
- `shared_cookie_volume`: RPC authentication cookies
Advantages:
- No permission issues
- Automatic management by Docker
- Better performance on macOS/Windows
For advanced use cases (backups, external SSDs, shared storage), you can bind local directories instead of using Docker-managed volumes.
Important: Choose directory locations appropriate for your operating system and requirements:
- Linux: `/mnt/data/z3`, `/var/lib/z3`, or user home directories
- macOS: `/Volumes/ExternalDrive/z3`, `~/Library/Application Support/z3`, or user Documents
- Windows (WSL): `/mnt/c/Z3Data` or native Windows paths if using Docker Desktop
1. Create your directories in your chosen location:

   ```shell
   mkdir -p /your/chosen/path/zebra-state
   mkdir -p /your/chosen/path/zaino-data
   mkdir -p /your/chosen/path/zallet-data
   ```

2. Fix permissions using the provided utility:

   ```shell
   ./fix-permissions.sh zebra /your/chosen/path/zebra-state
   ./fix-permissions.sh zaino /your/chosen/path/zaino-data
   ./fix-permissions.sh zallet /your/chosen/path/zallet-data
   ```

   Note: Keep the cookie directory as a Docker volume (recommended) to avoid cross-user permission issues.

3. Update the `.env` file with your paths:

   ```shell
   Z3_ZEBRA_DATA_PATH=/your/chosen/path/zebra-state
   Z3_ZAINO_DATA_PATH=/your/chosen/path/zaino-data
   Z3_ZALLET_DATA_PATH=/your/chosen/path/zallet-data
   # Z3_COOKIE_PATH=shared_cookie_volume  # Keep as Docker volume
   ```

4. Restart the stack:

   ```shell
   docker compose down
   docker compose up -d
   ```
Each service runs as a specific non-root user with distinct UIDs/GIDs:
- Zebra: UID=10001, GID=10001, permissions 700
- Zaino: UID=1000, GID=1000, permissions 700
- Zallet: UID=65532, GID=65532, permissions 700
Critical: Local directories must have correct ownership and secure permissions:
- Use `fix-permissions.sh` to set ownership automatically
- Permissions must be 700 (owner only) or 750 (owner + group read)
- Never use 755 or 777 - these expose your blockchain data and wallet to other users
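The helper's core operation can be sketched as follows. This is an illustrative approximation based on the UID/GID table above, not the actual contents of `fix-permissions.sh`; use the provided script in practice, and prepend `sudo` when the target directory is not owned by you:

```shell
# Illustrative sketch (not the real fix-permissions.sh): set ownership to a
# service's UID/GID and lock permissions to owner-only, per the table above.
fix_perms() {
  local uid="$1" gid="$2" dir="$3"
  chown -R "$uid:$gid" "$dir"   # changing to a foreign UID requires root
  chmod 700 "$dir"              # owner-only, as required above
}

# Example: prepare a Zebra state directory (UID/GID 10001 per the table):
#   sudo bash -c '. ./this-sketch.sh; fix_perms 10001 10001 /your/chosen/path/zebra-state'
```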
This section explains how the Z3 stack is configured and how to customize it for your needs.
The Z3 stack uses a layered configuration approach:
- Service Defaults - Built-in defaults for each service
- Environment Variables (`.env`) - Runtime configuration and customization
- Configuration Files - Required for specific services (Zallet, Zaino TLS)
- Docker Compose Remapping - Transforms variables for service-specific formats
The Z3 stack uses a three-tier variable naming system to avoid collisions:
1. `Z3_*` Variables (Infrastructure)
- Purpose: Docker-level configuration (volume paths, port mappings, service discovery)
- Scope: Used only in `docker-compose.yml`, never passed directly to containers
- Examples: `Z3_ZEBRA_DATA_PATH`, `Z3_ZEBRA_RPC_PORT`, `Z3_ZEBRA_RUST_LOG`
- Why: Prevents collision with service configuration variables
2. Shared Variables (Common Configuration)
- Purpose: Settings used by multiple services
- Scope: Remapped in `docker-compose.yml` to service-specific names
- Examples:
  - `NETWORK_NAME` → `ZEBRA_NETWORK__NETWORK`, `ZAINO_NETWORK`, `ZALLET_NETWORK`
  - `ENABLE_COOKIE_AUTH` → `ZEBRA_RPC__ENABLE_COOKIE_AUTH`, `ZAINO_VALIDATOR_COOKIE_AUTH`
  - `COOKIE_AUTH_FILE_DIR` → Mapped to cookie paths for each service
3. Service Configuration Variables (Application Config)
- Purpose: Service-specific configuration passed to applications
- Scope: Passed via `env_file` in `docker-compose.yml`
- Formats:
  - Zebra: `ZEBRA_*` (config-rs format: `ZEBRA_SECTION__KEY` with `__` separator)
  - Zaino: `ZAINO_*`
  - Zallet: `ZALLET_*`
Zebra:
- Method: Pure environment variables
- Format: `ZEBRA_SECTION__KEY` (e.g., `ZEBRA_RPC__LISTEN_ADDR`)
- Files: None required (uses environment variables only)
Zaino:
- Method: Pure environment variables
- Format: `ZAINO_*` (e.g., `ZAINO_GRPC_PORT`)
- Files: TLS certificates (`config/tls/zaino.crt`, `config/tls/zaino.key`)
Zallet:
- Method: Hybrid (TOML file + environment variables)
- Format: `ZALLET_*` for runtime parameters (e.g., `ZALLET_RUST_LOG`)
- Files:
  - `config/zallet.toml` - Core configuration (required)
  - `config/zallet_identity.txt` - Encryption key (required)
Zallet differs from Zebra and Zaino in key ways:
Embedded Zaino Indexer:
- Zallet embeds Zaino's indexer libraries (`zaino-fetch`, `zaino-state`, `zaino-proto`) as dependencies
zaino-fetch,zaino-state,zaino-proto) as dependencies - This embedded indexer connects directly to Zebra's JSON-RPC endpoint to fetch blockchain data
- Zallet does NOT connect to the standalone Zaino gRPC/JSON-RPC service (which is for other light clients)
Service Connectivity:
```text
Zebra (JSON-RPC :18232)
 ├─→ Zaino Service (standalone indexer for gRPC clients like Zingo)
 └─→ Zallet (uses embedded Zaino indexer libraries)
```
Critical Configuration Requirements:
- `config/zallet.toml` must exist with all required sections (even if empty)
- `validator_address` must point to `zebra:18232` (Zebra's JSON-RPC), NOT `zaino:8137`
- All TOML sections must be present: `[builder]`, `[consensus]`, `[database]`, `[external]`, `[features]`, `[indexer]`, `[keystore]`, `[note_management]`, `[rpc]`
- Cookie authentication must be configured in both TOML and mounted as a volume
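Put together, a minimal skeleton satisfying these requirements might look like the sketch below. Treat it as illustrative only: in particular, the section holding `validator_address` is an assumption made here, so consult the `config/zallet.toml` shipped with the repository for the authoritative key placement.

```toml
# Sketch of a minimal zallet.toml per the requirements above. The placement of
# validator_address under [indexer] is an assumption, not confirmed by this repo.
[builder]

[consensus]
network = "main"                     # or "test"

[database]

[external]

[features]

[indexer]
validator_address = "zebra:18232"    # Zebra's JSON-RPC, NOT zaino:8137

[keystore]

[note_management]

[rpc]
```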
Change Network (Mainnet/Testnet):

```shell
# In .env:
NETWORK_NAME=Mainnet  # or Testnet
```

```toml
# In config/zallet.toml:
[consensus]
network = "main"  # or "test"
```

Adjust Log Levels:

```shell
# In .env:
Z3_ZEBRA_RUST_LOG=info
ZAINO_RUST_LOG=info,reqwest=warn,hyper_util=warn
ZALLET_RUST_LOG=info,hyper_util=warn,reqwest=warn

# For debugging, use:
ZAINO_RUST_LOG=debug
```

Change Ports:

```shell
# In .env:
Z3_ZEBRA_HOST_RPC_PORT=18232
ZAINO_HOST_GRPC_PORT=8137
ZALLET_HOST_RPC_PORT=28232
```

Environment Variable Precedence:
Docker Compose applies variables in this order (later overrides earlier):
1. Dockerfile defaults
2. `.env` file substitution (e.g., `${VARIABLE}`)
3. `env_file` section
4. `environment` section
5. Shell environment variables (if exported)
Important: If you export a variable in your shell, it will override the `.env` file. Use `unset VARIABLE` to remove shell variables.
Zebra provides two HTTP endpoints for monitoring service health:
- Returns 200: Zebra is running and has minimum connected peers (configurable, default: 1)
- Returns 503: Not enough peer connections
- Use for: Docker healthchecks, liveness monitoring, restart decisions
- Works during: Initial sync, normal operation
- Endpoint: `http://localhost:${Z3_ZEBRA_HOST_HEALTH_PORT:-8080}/healthy`
- Returns 200: Zebra is synced near the network tip (within configured blocks, default: 2)
- Returns 503: Still syncing or lagging behind network tip
- Use for: Production traffic routing, manual verification before use
- Fails during: Fresh sync (can take 24+ hours for mainnet)
- Endpoint: `http://localhost:${Z3_ZEBRA_HOST_HEALTH_PORT:-8080}/ready`
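A quick way to compare the two endpoints side by side is a small helper like the one below, written for this guide (`health_status` is not part of the repository) and assuming the default health port mapping:

```shell
# Illustrative helper: report the HTTP status of both health endpoints at once.
# Assumes the default Z3_ZEBRA_HOST_HEALTH_PORT mapping (8080).
health_status() {
  local base="http://localhost:${Z3_ZEBRA_HOST_HEALTH_PORT:-8080}"
  local live ready
  live=$(curl -s -o /dev/null -w '%{http_code}' "$base/healthy")
  ready=$(curl -s -o /dev/null -w '%{http_code}' "$base/ready")
  echo "liveness=$live readiness=$ready"
}
# During a fresh sync you would expect liveness 200 but readiness 503;
# both report 200 once Zebra is near the network tip.
```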
The Z3 stack uses readiness-based dependencies to prevent service hangs:
```text
Zebra (/ready - synced near tip)
  → Zaino (gRPC responding)
    → Zallet (RPC responding)
```
Why this approach:
- Zaino requires Zebra to be near the network tip - if Zebra is still syncing, Zaino will hang internally waiting
- Two-phase deployment separates initial sync from normal operation
- Docker Compose healthcheck verifies Zebra is synced before starting dependent services
What each healthcheck tests:
- `zebra`: `/ready` - Synced near network tip (within 2 blocks, configurable)
- `zaino`: gRPC server responding - Ready to index blocks
- `zallet`: RPC server responding - Ready for wallet operations
Deployment modes:
| Mode | When to use | Zebra healthcheck | Behavior |
|---|---|---|---|
| Production (default) | Mainnet, production testnet | `/ready` | Two-phase: sync Zebra first, then start stack |
| Development (override) | Local dev, quick testing | `/healthy` | Start all services immediately (may have delays) |
During Phase 1 (Zebra sync), monitor progress:
```shell
# Check readiness (returns "ok" when synced near tip)
curl http://localhost:8080/ready

# Monitor sync progress via logs
docker compose logs -f zebra

# Check current status
docker compose ps zebra
```

What to expect:
- Zebra shows `healthy (starting)` while syncing (during the 90-second grace period)
- Once synced, `/ready` returns `ok` and Zebra shows `healthy`
- Zaino and Zallet remain in `waiting` state until dependencies are healthy
Skip sync wait for development (`.env`):

```shell
# Make /ready always return 200 on testnet (even during sync)
ZEBRA_HEALTH__ENFORCE_ON_TEST_NETWORKS=false  # Default: false
# When set to true, testnet behaves like mainnet (strict readiness check)
```

Adjust readiness threshold (`.env`):

```shell
# How many blocks behind network tip is acceptable (default: 2)
ZEBRA_HEALTH__READY_MAX_BLOCKS_BEHIND=2

# Minimum peer connections for /healthy (default: 1)
ZEBRA_HEALTH__MIN_CONNECTED_PEERS=1
```

Once the stack is running, services can be accessed via their exposed ports:
- Zebra RPC: `http://localhost:${Z3_ZEBRA_HOST_RPC_PORT:-18232}` (default: `http://localhost:18232` on Testnet)
- Zebra Health: `http://localhost:${Z3_ZEBRA_HOST_HEALTH_PORT:-8080}/healthy` and `/ready`
- Zaino gRPC: `localhost:${ZAINO_HOST_GRPC_PORT:-8137}` (default: `localhost:8137`)
- Zaino JSON-RPC: `http://localhost:${ZAINO_HOST_JSONRPC_PORT:-8237}` (default: `http://localhost:8237`, if enabled)
- Zallet RPC: `http://localhost:${ZALLET_HOST_RPC_PORT:-28232}` (default: `http://localhost:28232`)
Refer to the individual component documentation for RPC API details.
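As a starting point for RPC experiments, a JSON-RPC 2.0 request body can be built with a small helper (`rpc_payload` is a convenience function written for this guide; which methods are available depends on each service's API, so check the component docs above):

```shell
# Illustrative helper: build a JSON-RPC 2.0 request body for use with curl.
rpc_payload() {
  local method="$1"
  printf '{"jsonrpc":"2.0","id":1,"method":"%s","params":[]}' "$method"
}

# Example against Zebra's JSON-RPC port (assumes cookie auth is disabled or
# handled separately; consult Zebra's RPC docs for supported methods):
#   curl -s http://localhost:18232 -H 'Content-Type: application/json' \
#     -d "$(rpc_payload getblockchaininfo)"
```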