Open-source zero-trust microsegmentation with eBPF enforcement, policy-as-code, and hybrid cloud support
### Build and install (Linux/macOS/Windows)

```shell
go build -o ztap
sudo mv ztap /usr/local/bin/
# Note for Linux: the binary includes pre-compiled eBPF bytecode.
# No clang/llvm dependency is required at runtime.
```

### Quick start

```shell
# 1. Authenticate
echo "ztap-admin-change-me" | ztap user login admin
ztap user change-password admin

# 2. Register services
ztap discovery register web-1 10.0.1.1 --labels app=web,tier=frontend
ztap discovery register db-1 10.0.2.1 --labels app=database,tier=backend

# 3. Enforce a policy
# Validate the policy first (CI/CD friendly)
ztap policy validate -f examples/web-to-db.yaml

# macOS (pf)
ztap enforce -f examples/web-to-db.yaml

# Windows (WFP)
# Note: run in an elevated terminal (Administrator).
# Supports IPv4/IPv6 `ipBlock.cidr` (arbitrary CIDRs) and TCP/UDP/ICMP.
# For `protocol: ICMP`, the policy `port` is accepted by validation but ignored during enforcement.
# Optional strict default-deny can be enabled with: ZTAP_WFP_STRICT=1
ztap enforce -f policy.yaml

# Linux (eBPF)
# Note: `ztap enforce` keeps running while enforcement is active.
# Supports IPv4/IPv6 `ipBlock.cidr` (arbitrary CIDRs) and TCP/UDP/ICMP (ICMP ignores `port`).
# Policies that use selector targets (`podSelector` with optional `namespaceSelector`) are
# enforced by resolving selectors into concrete `ipBlock` rules via discovery:
#   - In-cluster: run `ztap agent`
#   - Local/CLI: run `ztap enforce` with `discovery.backend: k8s` configured
#     (auto-resolves and refreshes while running)
#   - Control refresh with `--resolve-labels-interval` (default: `5s`; set to `0` to resolve once)
#   - If a selector currently resolves to zero targets, enforcement still starts; the rule
#     becomes active when targets appear and resolution refreshes
# In multi-namespace Kubernetes deployments:
#   - `ztap agent --namespaces ns-a,ns-b` or `ztap agent --all-namespaces`
#   - Tenant isolation requires Linux eBPF (the iptables fallback cannot guarantee isolation)
sudo ztap enforce -f policy.yaml

# Dry-run mode (all platforms)
# Simulate enforcement without making system changes
ztap enforce -f policy.yaml --dry-run
ztap agent --dry-run
```
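The selector-to-`ipBlock` resolution described above can be pictured as a label match over discovered services. The sketch below is an illustrative model only, not ZTAP's actual code; names such as `Service` and `resolve_selector` are hypothetical:

```python
# Illustrative sketch: resolve a podSelector's matchLabels into concrete
# per-host CIDR targets using a discovery inventory. Names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Service:
    name: str
    ip: str
    labels: dict = field(default_factory=dict)

def resolve_selector(match_labels: dict, inventory: list) -> list:
    """Return /32 ipBlock CIDRs for services whose labels match every selector label."""
    return [
        f"{svc.ip}/32"
        for svc in inventory
        if all(svc.labels.get(k) == v for k, v in match_labels.items())
    ]

inventory = [
    Service("web-1", "10.0.1.1", {"app": "web", "tier": "frontend"}),
    Service("db-1", "10.0.2.1", {"app": "database", "tier": "backend"}),
]

print(resolve_selector({"app": "web"}, inventory))  # ['10.0.1.1/32']
```

A periodic refresh (the `--resolve-labels-interval` behavior) would simply re-run this resolution against current discovery data and swap in the new rule set, which is also why a selector that matches zero targets can start empty and become active later.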
```shell
# 4. Check status
ztap status
```

Cluster backend:

- Default: in-memory (single-process)
- Production: configure etcd via `cluster.*` in `config.yaml` (see `config.yaml.example`) or env vars like `ZTAP_ETCD_ENDPOINTS`
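A sketch of what the etcd cluster configuration might look like in `config.yaml`. Key names other than `cluster.node_id` and the documented environment variables are assumptions; consult `config.yaml.example` for the authoritative schema:

```yaml
# Hypothetical config.yaml fragment; see config.yaml.example for actual keys.
cluster:
  backend: etcd            # assumption: backend selector key
  node_id: ztap-node-1     # or set ZTAP_NODE_ID per process
  etcd:
    endpoints:             # or set ZTAP_ETCD_ENDPOINTS
      - http://127.0.0.1:2379
```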
Full Setup Guide | Architecture | eBPF Setup
| Guide | Description |
|---|---|
| Setup Guide | Installation and configuration |
| Architecture | System design and components |
| eBPF Enforcement | Linux kernel-level enforcement |
| Cluster Coordination | Multi-node clustering and leader election |
| Audit Logging | Tamper-evident audit log system |
| Compliance Reporting | Compliance mapping exports and reports |
| Testing Guide | Comprehensive testing documentation |
| Roadmap | Delivered and planned features |
| Windows Flow Runbook | Manual validation for WFP flows |
| Anomaly Detection | ML service setup |
**Web to Database (Label-based)**

```yaml
apiVersion: ztap/v1
kind: NetworkPolicy
metadata:
  name: web-to-db
spec:
  podSelector:
    matchLabels:
      app: web
  egress:
    - to:
        podSelector:
          matchLabels:
            app: db
      ports:
        - protocol: TCP
          port: 5432
```

**PCI Compliant (IP-based)**
```yaml
apiVersion: ztap/v1
kind: NetworkPolicy
metadata:
  name: pci-compliant
  annotations:
    ztap.io/compliance.pci-dss: "10.2.1"
spec:
  podSelector:
    matchLabels:
      app: payment-processor
  egress:
    - to:
        ipBlock:
          cidr: 10.0.0.0/8
      ports:
        - protocol: TCP
          port: 443
```

**Bidirectional (Ingress + Egress)**
```yaml
apiVersion: ztap/v1
kind: NetworkPolicy
metadata:
  name: web-tier
spec:
  podSelector:
    matchLabels:
      tier: web
  egress:
    - to:
        podSelector:
          matchLabels:
            tier: database
      ports:
        - protocol: TCP
          port: 5432
  ingress:
    - from:
        ipBlock:
          cidr: 10.0.0.0/24
      ports:
        - protocol: TCP
          port: 443
```

### Compliance Exports
```shell
# JSON export (canonical)
ztap compliance export -f examples/pci-compliant.yaml --format json

# CSV export (spreadsheets)
ztap compliance export -f examples/pci-compliant.yaml --format csv --out compliance.csv

# Human-readable report
ztap compliance report -f examples/pci-compliant.yaml --format md
```

See docs/compliance.md for policy annotations and mapping files. More examples in `examples/`.
```text
ztap [command]

Commands:
  api         Run REST API server (serve)
  grpc        Run gRPC API server (serve)
  aws         AWS Security Group synchronization (sg-sync, inventory)
  azure       Azure NSG synchronization (nsg-sync)
  gcp         GCP firewall rule synchronization (firewall-sync)
  agent       Run node agent (Kubernetes / in-cluster)
  compliance  Compliance mapping exports and reports
  enforce     Enforce zero-trust network policies
  version     Print ZTAP version
  status      Show on-premises and cloud resource status
  cluster     Manage cluster coordination (status, join, leave, list)
  policy      Distributed policy management (sync, list, watch, show, history, rollback)
  flows       Real-time flow event monitoring (--follow, --action, --protocol)
  logs        View ZTAP logs (with --follow, --level, --policy filters)
  metrics     Start Prometheus metrics server
  user        Manage users (create, login, list, change-password)
  discovery   Service discovery (register, resolve, list)
  audit       Audit log management (view, verify, stats, keygen)
```

### API Server
```shell
# Start REST API server (reads config.yaml or file set via ZTAP_CONFIG)
ztap api serve

# Start gRPC API server (default 127.0.0.1:9092)
ztap grpc serve

# If you run both REST and gRPC servers on one host in etcd mode, set a unique
# node id per process (or use `cluster.node_id` in config.yaml), e.g.:
# export ZTAP_NODE_ID=ztap-grpc-1

# Liveness
curl -s http://127.0.0.1:8080/healthz

# Readiness
curl -s http://127.0.0.1:8080/readyz

# Login (default users DB: ~/.ztap/users.json)
# Sessions persist by default in ~/.ztap/sessions.db (SQLite)
token=$(curl -sS http://127.0.0.1:8080/v1/auth/login \
  -H 'Content-Type: application/json' \
  -d '{"username":"admin","password":"ztap-admin-change-me"}' | jq -r .token)

# Who am I
curl -sS http://127.0.0.1:8080/v1/auth/whoami -H "Authorization: Bearer $token"
```

Core endpoints:
- `POST /v1/auth/login`, `GET /v1/auth/whoami`
- `POST /v1/config/backup` (download bundle; requires `backup_restore`)
- `POST /v1/config/restore?dry_run=1&force=1` (upload bundle; requires `backup_restore`)
- `POST /v1/compliance/report` (requires `view_compliance`)
- `POST /v1/compliance/export` (requires `view_compliance`)
- `GET /v1/status`
- `GET /v1/enforcement/status`, `POST /v1/enforcement/start`, `POST /v1/enforcement/stop` (Linux only)
- `GET /v1/flows/stream` (SSE)
- `GET /metrics`
- `GET /v1/policies`, `GET/PUT/DELETE /v1/policies/{tenant}/{name}`
- `GET /v1/policies/{tenant}/{name}/revisions`, `GET /v1/policies/{tenant}/{name}/revisions/{version}`, `POST /v1/policies/{tenant}/{name}/rollback`
- `GET/POST /v1/users`, `GET/PATCH/DELETE /v1/users/{username}`, `POST /v1/users/{username}/password`
- `GET /v1/cluster/status`, `GET/POST/DELETE /v1/cluster/nodes...`

Rate limiting:

- Config (default: disabled): `api.rate_limit`, `grpc.rate_limit` in `config.yaml`
- CLI flags: `ztap api serve --rate-limit ...`, `ztap grpc serve --rate-limit ...`
- REST: invalid/expired `Authorization` tokens fall back to the unauthenticated bucket
- REST: probe endpoints (`/healthz`, `/readyz`) are always exempt from rate limiting
- gRPC: rate-limited calls return `RESOURCE_EXHAUSTED` and include `RetryInfo` (retry delay)
- gRPC: health RPCs (`/grpc.health.v1.Health/Check`, `/grpc.health.v1.Health/Watch`) are always exempt from rate limiting
Config backup/restore notes:

- Backup request body is optional JSON; implemented flags: `include_users`, `include_sessions`, `include_config` (defaults: true). Optional: `include_policy_current` (defaults: false).
- Export is best-effort: unavailable items are skipped and recorded in `manifest.warnings` inside the bundle.
- Restore supports `dry_run=1` to preview changes and `force=1` to apply; without `force=1`, destructive restores return `409`.
- Restore request body is capped (default: 100 MiB); oversized uploads return `413`.
- Restore writes files atomically but requires a process restart to take effect.
gRPC services (v1):

- `ztap.api.v1.AuthService` (`Login`, `WhoAmI`)
- `ztap.api.v1.StatusService` (`GetStatus`)
- `ztap.api.v1.EnforcementService` (`GetStatus`, `Start`, `Stop`)
- `ztap.api.v1.FlowsService` (`Stream`, server-streaming)
- `ztap.api.v1.PolicyService` (`ListPolicies`, `GetPolicy`, `PutPolicy`, `DeletePolicy`, `ListPolicyRevisions`, `GetPolicyRevision`, `RollbackPolicy`)
- `ztap.api.v1.UsersService` (`ListUsers`, `GetUser`, `CreateUser`, `UpdateUser`, `SetUserPassword`, `DeleteUser`)
- `ztap.api.v1.ClusterService` (`GetClusterStatus`, `ListNodes`, `RegisterNode`, `DeregisterNode`)
Auth: send `authorization: Bearer <token>` as gRPC metadata.
### User Management

```shell
# Create users with roles (admin, operator, viewer)
echo "password" | ztap user create alice --role operator
ztap user list
ztap user change-password alice
```

### Service Discovery
```shell
# Register and resolve services by labels
ztap discovery register web-1 10.0.1.1 --labels app=web,tier=frontend
ztap discovery resolve --labels app=web
ztap discovery list
```

Configuration (optional):

```yaml
# config.yaml (or file set via ZTAP_CONFIG)
discovery:
  backend: dns          # inmemory (default) or dns
  dns:
    domain: example.com
  cache:
    ttl: 30s            # optional cache layer for the selected backend
```

### Cluster & Policy Management
```shell
# Cluster operations
ztap cluster status                        # View cluster state
ztap cluster join node-2 192.168.1.2:9090  # Join a node
ztap cluster list                          # List all nodes

# Policy synchronization (leader-initiated)
ztap policy sync examples/web-to-db.yaml   # Sync policy to all nodes
ztap policy list                           # List all policies
ztap policy watch                          # Watch real-time updates
ztap policy show web-to-db                 # Show policy details
ztap policy history web-to-db              # Show revision history
ztap policy rollback web-to-db --to 3      # Roll back by creating a new latest version
```

### Flow Monitoring
```shell
# View recent flow events
ztap flows

# Stream flow events in real time
ztap flows --follow

# Filter by action/protocol/direction
ztap flows --action blocked --protocol TCP
ztap flows --direction egress --limit 100

# Output formats
ztap flows --output table   # Default
ztap flows --output json
```

On Linux, if `ztap enforce` is active, `ztap flows --follow` streams real events from the pinned eBPF ring buffer map (`/sys/fs/bpf/ztap/flow_events`).

On Windows, `ztap flows --follow` streams WFP NetEvents (requires an elevated terminal). By default it emits only ZTAP-attributable decisions (ztap-only), so run `ztap enforce` first.

On macOS, flow output remains simulated.
### Audit Logging

```shell
# View audit log with tamper-evident cryptographic verification
ztap audit view                         # View recent entries
ztap audit view --actor admin           # Filter by actor
ztap audit view --type policy.created   # Filter by event type
ztap audit view --resource web-policy   # Filter by resource
ztap audit view --limit 100             # Limit results

# Verify cryptographic integrity
ztap audit verify                       # Detect tampering
ztap audit keygen --output-dir ~/.ztap  # Generate Ed25519 keypair
```
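Conceptually, a tamper-evident log of this kind is a hash chain: each entry commits to the previous entry's hash, so modifying any entry breaks every later link. The sketch below illustrates the idea only; it is not ZTAP's actual on-disk format or key scheme:

```python
# Minimal hash-chain sketch (illustrative; not ZTAP's real audit format).
import hashlib
import json

GENESIS = "0" * 64  # sentinel "previous hash" for the first entry

def entry_hash(prev_hash: str, payload: dict) -> str:
    # Hash the previous link together with a canonical payload encoding.
    data = prev_hash + json.dumps(payload, sort_keys=True)
    return hashlib.sha256(data.encode()).hexdigest()

def append(log: list, payload: dict) -> None:
    prev = log[-1]["hash"] if log else GENESIS
    log.append({"payload": payload, "hash": entry_hash(prev, payload)})

def verify(log: list) -> bool:
    prev = GENESIS
    for entry in log:
        if entry["hash"] != entry_hash(prev, entry["payload"]):
            return False  # chain broken: this or an earlier entry was altered
        prev = entry["hash"]
    return True

log = []
append(log, {"actor": "admin", "type": "policy.created", "resource": "web-policy"})
append(log, {"actor": "admin", "type": "policy.deleted", "resource": "web-policy"})
assert verify(log)

log[0]["payload"]["actor"] = "mallory"   # tamper with an early entry
assert not verify(log)
```

A plain chain cannot detect truncation of the tail; signing the chain head with a private key (ZTAP generates an Ed25519 keypair via `ztap audit keygen`) is the usual way to close that gap.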
```shell
# Display statistics
ztap audit stats   # Show log stats
```

### Metrics

| Metric | Description |
|---|---|
| `ztap_policies_enforced_total` | Number of policies enforced |
| `ztap_flows_allowed_total` | Allowed flows counter |
| `ztap_flows_blocked_total` | Blocked flows counter |
| `ztap_anomaly_score` | Current anomaly score (0-100) |
| `ztap_policy_load_duration_seconds` | Policy load time histogram |
| `ztap_policies_synced_total` | Total policy sync operations |
| `ztap_policy_sync_duration_seconds` | Policy sync duration histogram |
| `ztap_policy_version_current` | Current version of each policy |
| `ztap_policy_enforcement_duration_seconds` | Policy enforcement duration histogram |
| `ztap_policy_subscribers_active` | Active policy subscribers count |
| `ztap_flows_total` | Flow events by action/protocol/direction |
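To scrape these metrics, a Prometheus job pointing at the `ztap metrics` server might look like the following. The target address and port are assumptions; check the `ztap metrics` command for the actual listen address:

```yaml
# prometheus.yml fragment (hypothetical target address)
scrape_configs:
  - job_name: ztap
    static_configs:
      - targets: ["127.0.0.1:2112"]   # assumed ztap metrics listen address
```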
```shell
docker compose up -d   # Access at http://localhost:3000 (admin/ztap)
```

The Grafana dashboard is auto-provisioned from `deployments/grafana/dashboards/ztap-dashboard.json`.
| Component | Requirement | Notes |
|---|---|---|
| OS | Linux (kernel ≥5.7), macOS 12+, or Windows (WFP) | Linux for production, macOS for dev |
| Go | 1.24+ | Build requirement |
| eBPF Tools | clang, llvm, make, linux-headers | Linux production only |
| Privileges | Root or CAP_BPF + CAP_NET_ADMIN | Linux eBPF enforcement |
| AWS | EC2/VPC access (optional) | For cloud integration |
| Docker | Latest (optional) | For Prometheus/Grafana stack |
| Python | 3.8+ (optional) | For anomaly detection service |
```shell
# Build
go build

# Run tests
go test ./...

# eBPF integration test (Linux + root required)
sudo go test -tags=integration ./pkg/enforcer -run TestEBPFIntegration -v

# Coverage
go test ./... -cover

# Lint
go fmt ./... && go vet ./...

# Interactive demo with RBAC, service discovery, and policy enforcement
./demo.sh
```

MIT License - See LICENSE
- Security policy: SECURITY.md
- Contributing guide: CONTRIBUTING.md
- Code of Conduct: CODE_OF_CONDUCT.md
- Changelog: CHANGELOG.md
- NIST SP 800-207 Zero Trust Architecture
- Kubernetes NetworkPolicy specification
- Cilium and Tetragon for eBPF inspiration
- MITRE ATT&CK framework
Note: macOS enforcement (pf) is for development only. Use Linux + eBPF for production.