ZTAP: Zero Trust Access Platform

Open-source zero-trust microsegmentation with eBPF enforcement, policy-as-code, and hybrid cloud support



Quick Start

Installation

# Build and install
go build -o ztap
sudo mv ztap /usr/local/bin/  # Linux/macOS; on Windows, place ztap.exe on PATH

# Note for Linux: The binary includes pre-compiled eBPF bytecode.
# No clang/llvm dependency is required at runtime.

First Steps

# 1. Authenticate
echo "ztap-admin-change-me" | ztap user login admin
ztap user change-password admin

# 2. Register services
ztap discovery register web-1 10.0.1.1 --labels app=web,tier=frontend
ztap discovery register db-1 10.0.2.1 --labels app=database,tier=backend

# 3. Enforce a policy
# Validate the policy first (CI/CD friendly)
ztap policy validate -f examples/web-to-db.yaml

# macOS (pf)
ztap enforce -f examples/web-to-db.yaml

# Windows (WFP)
# Note: run in an elevated terminal (Administrator).
# Supports IPv4/IPv6 `ipBlock.cidr` (arbitrary CIDRs) and TCP/UDP/ICMP.
# For `protocol: ICMP`, the policy `port` is accepted by validation but ignored during enforcement.
# Optional strict default-deny can be enabled with: ZTAP_WFP_STRICT=1
ztap enforce -f policy.yaml

# Linux (eBPF)
# Note: `ztap enforce` keeps running while enforcement is active.
# Supports IPv4/IPv6 `ipBlock.cidr` (arbitrary CIDRs) and TCP/UDP/ICMP (ICMP ignores `port`).
# Policies that use selector targets (`podSelector` with optional `namespaceSelector`) are enforced by resolving selectors into concrete `ipBlock` rules via discovery:
# - In-cluster: run `ztap agent`
# - Local/CLI: run `ztap enforce` with `discovery.backend: k8s` configured (auto-resolves and refreshes while running)
#   - Control refresh with `--resolve-labels-interval` (default: `5s`; set to `0` to resolve once)
#   - If a selector currently resolves to zero targets, enforcement still starts; the rule becomes active when targets appear and resolution refreshes
# In multi-namespace Kubernetes deployments:
# - `ztap agent --namespaces ns-a,ns-b` or `ztap agent --all-namespaces`
# - Tenant isolation requires Linux eBPF (iptables fallback can't guarantee isolation)
sudo ztap enforce -f policy.yaml

# Dry-run mode (all platforms)
# Simulate enforcement without making system changes
ztap enforce -f policy.yaml --dry-run
ztap agent --dry-run

# 4. Check status
ztap status

Cluster backend:

  • Default: in-memory (single-process)
  • Production: configure etcd via cluster.* in config.yaml (see config.yaml.example) or env vars like ZTAP_ETCD_ENDPOINTS
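For production, point the cluster at etcd in config.yaml. A sketch with assumed field names; see config.yaml.example for the authoritative schema:

```yaml
# Hypothetical sketch; field names are assumptions, verify against config.yaml.example
cluster:
  backend: etcd
  node_id: ztap-node-1          # or set ZTAP_NODE_ID per process
  etcd:
    endpoints:                  # or set ZTAP_ETCD_ENDPOINTS
      - http://127.0.0.1:2379
```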

Full Setup Guide | Architecture | eBPF Setup


Features

Security & Enforcement

  • Kernel-Level Filtering – Real eBPF on Linux
  • Zero-Downtime Updates – Graceful, atomic policy reloads using eBPF bpf_link
  • Older Kernel Support – iptables fallback for pre-5.7 kernels or non-BPF environments
  • Bidirectional Enforcement – Ingress and egress policies
  • Secure Communication – HTTPS/TLS support for API and gRPC endpoints
  • RBAC – Admin, Operator, Viewer roles
  • Session Management – Configurable TTL with persistent sessions (SQLite default)
  • Tamper-Evident Audit Logging – Cryptographic hash chaining (optional signing + checkpoints)
  • NIST SP 800-207 – Aligned with zero-trust architecture guidance
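The hash-chaining idea behind tamper-evident logging can be illustrated in a few lines of shell. This is a sketch of the general technique, not ZTAP's actual log format: each entry's hash covers the previous hash plus the event payload, so modifying any earlier entry invalidates every later hash.

```shell
# Sketch of tamper-evident hash chaining (illustrative, not ZTAP's on-disk format).
log=$(mktemp)
prev="genesis"
for event in "policy.created web-to-db" "user.login admin"; do
  # Each hash commits to the previous hash and the event payload.
  hash=$(printf '%s|%s' "$prev" "$event" | sha256sum | cut -d' ' -f1)
  printf '%s %s %s\n' "$hash" "$prev" "$event" >> "$log"
  prev="$hash"
done

# Verify: recompute every hash from its recorded predecessor and payload.
ok=1
while read -r hash p event; do
  calc=$(printf '%s|%s' "$p" "$event" | sha256sum | cut -d' ' -f1)
  [ "$calc" = "$hash" ] || ok=0
done < "$log"
echo "chain-ok=$ok"
```

Editing any stored entry (or reordering entries) changes a recomputed hash and breaks verification from that point forward, which is what `ztap audit verify` detects.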

Distributed Architecture

  • Leader Election – Automatic cluster coordination
  • Policy Synchronization – Real-time policy distribution with auto-enforcement
  • Multi-Node Support – High-availability deployments
  • Version Tracking & Rollback – Revision history with rollback to prior versions
  • Prometheus Metrics – Sync and enforcement monitoring metrics

Cloud Integration

  • AWS Security Groups – Auto-sync policies (supports inventory export + offline selector/IP resolution)
  • Azure NSGs – Reconcile policies into NSG security rules
  • GCP Firewall Rules – Reconcile policies into VPC firewall rules
  • EC2 Auto-Discovery – Tag-based labeling
  • Hybrid View – Unified on-prem + cloud status

Observability

  • Flow Monitoring – Real-time on Linux with eBPF enforcement active; Windows via WFP NetEvents (Admin; ztap-only); simulated on macOS
  • Alerting (Webhooks) – Slack and PagerDuty notifications
  • Prometheus Metrics – Pre-built exporters
  • Grafana Dashboards – Auto-provisioned
  • ML Anomaly Detection – Isolation Forest
  • Structured Logs – Filter & follow

Developer Experience

  • Kubernetes-Style YAML – Familiar syntax
  • Label-Based Discovery – Kubernetes API, DNS, and caching
  • Compliance Reporting – PCI-DSS, SOC2, HIPAA policy mapping exports and reports
  • REST API Server – Minimal v1 endpoints via ztap api serve
  • gRPC API Server – Minimal v1 RPCs via ztap grpc serve
  • 43.7% Test Coverage – Unit and integration test suites
  • Multi-Platform – Linux (eBPF) + macOS (pf) + Windows (WFP)

Documentation

Guide – Description
Setup Guide – Installation and configuration
Architecture – System design and components
eBPF Enforcement – Linux kernel-level enforcement
Cluster Coordination – Multi-node clustering and leader election
Audit Logging – Tamper-evident audit log system
Compliance Reporting – Compliance mapping exports and reports
Testing Guide – Comprehensive testing documentation
Roadmap – Delivered and planned features
Windows Flow Runbook – Manual validation for WFP flows
Anomaly Detection – ML service setup

Example Policies

Web to Database (Label-based)
apiVersion: ztap/v1
kind: NetworkPolicy
metadata:
  name: web-to-db
spec:
  podSelector:
    matchLabels:
      app: web
  egress:
    - to:
        podSelector:
          matchLabels:
            app: db
      ports:
        - protocol: TCP
          port: 5432
PCI Compliant (IP-based)
apiVersion: ztap/v1
kind: NetworkPolicy
metadata:
  name: pci-compliant
  annotations:
    ztap.io/compliance.pci-dss: "10.2.1"
spec:
  podSelector:
    matchLabels:
      app: payment-processor
  egress:
    - to:
        ipBlock:
          cidr: 10.0.0.0/8
      ports:
        - protocol: TCP
          port: 443
Bidirectional (Ingress + Egress)
apiVersion: ztap/v1
kind: NetworkPolicy
metadata:
  name: web-tier
spec:
  podSelector:
    matchLabels:
      tier: web
  egress:
    - to:
        podSelector:
          matchLabels:
            tier: database
      ports:
        - protocol: TCP
          port: 5432
  ingress:
    - from:
        ipBlock:
          cidr: 10.0.0.0/24
      ports:
        - protocol: TCP
          port: 443
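The validate subcommand makes a natural CI gate: run it over every example policy and fail the build on the first invalid file. A sketch of the loop, with a stub standing in for `ztap policy validate -f`:

```shell
# CI-style gate: fail the pipeline if any policy file is invalid (sketch).
# `validate` is a stub standing in for `ztap policy validate -f`.
dir=$(mktemp -d)
cat > "$dir/web-to-db.yaml" <<'EOF'
apiVersion: ztap/v1
kind: NetworkPolicy
EOF
validate() { grep -q 'kind: NetworkPolicy' "$1"; }

fail=0
for f in "$dir"/*.yaml; do
  validate "$f" || { echo "invalid: $f"; fail=1; }
done
echo "fail=$fail"
```

In a real pipeline, replace the stub with the actual CLI call and `exit "$fail"` so the job status reflects validation.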
Compliance Exports
# JSON export (canonical)
ztap compliance export -f examples/pci-compliant.yaml --format json

# CSV export (spreadsheets)
ztap compliance export -f examples/pci-compliant.yaml --format csv --out compliance.csv

# Human-readable report
ztap compliance report -f examples/pci-compliant.yaml --format md

See docs/compliance.md for policy annotations and mapping files.

More examples in examples/


CLI Commands

ztap [command]

Commands:
  api         Run REST API server (serve)
  grpc        Run gRPC API server (serve)
  aws         AWS Security Group synchronization (sg-sync, inventory)
  azure       Azure NSG synchronization (nsg-sync)
  gcp         GCP firewall rule synchronization (firewall-sync)
  agent       Run node agent (Kubernetes / in-cluster)
  compliance  Compliance mapping exports and reports
  enforce     Enforce zero-trust network policies
  version     Print ZTAP version
  status      Show on-premises and cloud resource status
  cluster     Manage cluster coordination (status, join, leave, list)
  policy      Distributed policy management (sync, list, watch, show, history, rollback)
  flows       Real-time flow event monitoring (--follow, --action, --protocol)
  logs        View ZTAP logs (with --follow, --level, --policy filters)
  metrics     Start Prometheus metrics server
  user        Manage users (create, login, list, change-password)
  discovery   Service discovery (register, resolve, list)
  audit       Audit log management (view, verify, stats, keygen)
API Server
# Start REST API server (reads config.yaml or file set via ZTAP_CONFIG)
ztap api serve

# Start gRPC API server (default 127.0.0.1:9092)
ztap grpc serve

# If you run both REST and gRPC servers on one host in etcd mode, set a unique
# node id per process (or use `cluster.node_id` in config.yaml)
# export ZTAP_NODE_ID=ztap-grpc-1

# Liveness
curl -s http://127.0.0.1:8080/healthz

# Readiness
curl -s http://127.0.0.1:8080/readyz

# Login (default users DB: ~/.ztap/users.json)
# Sessions persist by default in ~/.ztap/sessions.db (SQLite)
token=$(curl -sS http://127.0.0.1:8080/v1/auth/login \
  -H 'Content-Type: application/json' \
  -d '{"username":"admin","password":"ztap-admin-change-me"}' | jq -r .token)

# Who am I
curl -sS http://127.0.0.1:8080/v1/auth/whoami -H "Authorization: Bearer $token"

Core endpoints:

  • POST /v1/auth/login, GET /v1/auth/whoami
  • POST /v1/config/backup (download bundle; requires backup_restore)
  • POST /v1/config/restore?dry_run=1&force=1 (upload bundle; requires backup_restore)
  • POST /v1/compliance/report (requires view_compliance)
  • POST /v1/compliance/export (requires view_compliance)
  • GET /v1/status
  • GET /v1/enforcement/status, POST /v1/enforcement/start, POST /v1/enforcement/stop (Linux only)
  • GET /v1/flows/stream (SSE)
  • GET /metrics
  • GET /v1/policies, GET/PUT/DELETE /v1/policies/{tenant}/{name}
  • GET /v1/policies/{tenant}/{name}/revisions, GET /v1/policies/{tenant}/{name}/revisions/{version}, POST /v1/policies/{tenant}/{name}/rollback
  • GET/POST /v1/users, GET/PATCH/DELETE /v1/users/{username}, POST /v1/users/{username}/password
  • GET /v1/cluster/status, GET/POST/DELETE /v1/cluster/nodes...

Rate limiting:

  • Config (default: disabled): api.rate_limit, grpc.rate_limit in config.yaml
  • CLI flags: ztap api serve --rate-limit ..., ztap grpc serve --rate-limit ...
  • REST: invalid/expired Authorization tokens fall back to the unauthenticated bucket
  • REST: probe endpoints (/healthz, /readyz) are always exempt from rate limiting
  • gRPC: rate-limited calls return RESOURCE_EXHAUSTED and include RetryInfo (retry delay)
  • gRPC: health RPCs (/grpc.health.v1.Health/Check, /grpc.health.v1.Health/Watch) are always exempt from rate limiting
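A client that hits the rate limit (HTTP 429 on REST, RESOURCE_EXHAUSTED on gRPC) can retry with exponential backoff. A minimal sketch, with a stub standing in for the real HTTP call:

```shell
# Retry-with-backoff sketch for rate-limited API calls.
# The `if` below is a stub standing in for something like:
#   status=$(curl -s -o /dev/null -w '%{http_code}' http://127.0.0.1:8080/v1/status ...)
attempt=0
max=5
delay=1
status=429
while [ "$status" = "429" ] && [ "$attempt" -lt "$max" ]; do
  attempt=$((attempt + 1))
  if [ "$attempt" -ge 3 ]; then status=200; else status=429; fi  # stubbed response
  if [ "$status" = "429" ]; then
    :                       # in practice: sleep "$delay" (honor gRPC RetryInfo if present)
    delay=$((delay * 2))    # exponential backoff
  fi
done
echo "final-status=$status attempts=$attempt"
```

For gRPC, prefer the retry delay carried in RetryInfo over a fixed backoff schedule when the server provides one.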

Config backup/restore notes:

  • Backup request body is optional JSON; supported flags: include_users, include_sessions, include_config (default: true each). Optional: include_policy_current (default: false).
  • Export is best-effort: unavailable items are skipped and recorded in manifest.warnings inside the bundle.
  • Restore supports dry_run=1 to preview changes and force=1 to apply; without force=1, destructive restores return 409.
  • Restore request body is capped (default: 100 MiB); oversized uploads return 413.
  • Restore writes files atomically but requires a process restart to take effect.

gRPC services (v1):

  • ztap.api.v1.AuthService (Login, WhoAmI)
  • ztap.api.v1.StatusService (GetStatus)
  • ztap.api.v1.EnforcementService (GetStatus, Start, Stop)
  • ztap.api.v1.FlowsService (Stream server-streaming)
  • ztap.api.v1.PolicyService (ListPolicies, GetPolicy, PutPolicy, DeletePolicy, ListPolicyRevisions, GetPolicyRevision, RollbackPolicy)
  • ztap.api.v1.UsersService (ListUsers, GetUser, CreateUser, UpdateUser, SetUserPassword, DeleteUser)
  • ztap.api.v1.ClusterService (GetClusterStatus, ListNodes, RegisterNode, DeregisterNode)

Auth: send authorization: Bearer <token> as gRPC metadata.

User Management
# Create users with roles (admin, operator, viewer)
echo "password" | ztap user create alice --role operator
ztap user list
ztap user change-password alice
Service Discovery
# Register and resolve services by labels
ztap discovery register web-1 10.0.1.1 --labels app=web,tier=frontend
ztap discovery resolve --labels app=web
ztap discovery list

Configuration (optional):

# config.yaml (or file set via ZTAP_CONFIG)
discovery:
  backend: dns # inmemory (default), dns, or k8s
  dns:
    domain: example.com
  cache:
    ttl: 30s # optional cache layer for the selected backend
Cluster & Policy Management
# Cluster operations
ztap cluster status                          # View cluster state
ztap cluster join node-2 192.168.1.2:9090   # Join a node
ztap cluster list                            # List all nodes

# Policy synchronization (leader-initiated)
ztap policy sync examples/web-to-db.yaml    # Sync policy to all nodes
ztap policy list                             # List all policies
ztap policy watch                            # Watch real-time updates
ztap policy show web-to-db                   # Show policy details
ztap policy history web-to-db                # Show revision history
ztap policy rollback web-to-db --to 3        # Roll back by creating a new latest version
Flow Monitoring
# View recent flow events
ztap flows

# Stream flow events in real-time
ztap flows --follow

# Filter by action/protocol/direction
ztap flows --action blocked --protocol TCP
ztap flows --direction egress --limit 100

# Output formats
ztap flows --output table   # Default
ztap flows --output json

On Linux, if ztap enforce is active, ztap flows --follow streams real events from the pinned eBPF ring buffer map (/sys/fs/bpf/ztap/flow_events).

On Windows, ztap flows --follow streams WFP NetEvents (requires an elevated terminal). By default it emits only ZTAP-attributable decisions (ztap-only), so run ztap enforce first.

On macOS, flow output remains simulated.
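JSON output lends itself to quick post-processing. A sketch over inline sample events; the field names (`action`, `protocol`) are assumptions about the actual flow schema:

```shell
# Count blocked flow events from JSON-formatted flow output (sketch).
# Sample data stands in for `ztap flows --output json`; field names are assumed.
sample='{"action":"blocked","protocol":"TCP"}
{"action":"allowed","protocol":"UDP"}
{"action":"blocked","protocol":"TCP"}'
blocked=$(printf '%s\n' "$sample" | grep -c '"action":"blocked"')
echo "blocked=$blocked"
```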

Audit Logging
# View audit log with tamper-evident cryptographic verification
ztap audit view                                   # View recent entries
ztap audit view --actor admin                     # Filter by actor
ztap audit view --type policy.created             # Filter by event type
ztap audit view --resource web-policy             # Filter by resource
ztap audit view --limit 100                       # Limit results

# Verify cryptographic integrity
ztap audit verify                                 # Detect tampering
ztap audit keygen --output-dir ~/.ztap             # Generate Ed25519 keypair

# Display statistics
ztap audit stats                                  # Show log stats

Observability

Prometheus Metrics

Metric – Description
ztap_policies_enforced_total – Number of policies enforced
ztap_flows_allowed_total – Allowed flows counter
ztap_flows_blocked_total – Blocked flows counter
ztap_anomaly_score – Current anomaly score (0-100)
ztap_policy_load_duration_seconds – Policy load time histogram
ztap_policies_synced_total – Total policy sync operations
ztap_policy_sync_duration_seconds – Policy sync duration histogram
ztap_policy_version_current – Current version of each policy
ztap_policy_enforcement_duration_seconds – Policy enforcement duration histogram
ztap_policy_subscribers_active – Active policy subscribers count
ztap_flows_total – Flow events by action/protocol/direction
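The exporter can be scraped with a standard Prometheus job. A minimal sketch; the target port is an assumption, so match it to whatever `ztap metrics` actually listens on:

```yaml
# Prometheus scrape job sketch; the port is an assumption.
scrape_configs:
  - job_name: ztap
    static_configs:
      - targets: ["localhost:9090"]
```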

Grafana Dashboard

docker compose up -d  # Access at http://localhost:3000 (admin/ztap)

Dashboard auto-provisioned from deployments/grafana/dashboards/ztap-dashboard.json


Requirements

Component – Requirement – Notes
OS – Linux (kernel ≥5.7), macOS 12+, or Windows – Linux for production; macOS/Windows for development
Go – 1.24+ – Build requirement
eBPF Tools – clang, llvm, make, linux-headers – Linux production only
Privileges – Root or CAP_BPF + CAP_NET_ADMIN – Linux eBPF enforcement
AWS – EC2/VPC access (optional) – For cloud integration
Docker – Latest (optional) – For Prometheus/Grafana stack
Python – 3.8+ (optional) – For anomaly detection service

Full eBPF Setup Guide


Development

# Build
go build

# Run tests
go test ./...

# eBPF integration test (Linux + root required)
sudo go test -tags=integration ./pkg/enforcer -run TestEBPFIntegration -v

# Coverage
go test ./... -cover

# Lint
go fmt ./... && go vet ./...

Demo

./demo.sh  # Interactive demo with RBAC, service discovery, and policy enforcement

License

MIT License - See LICENSE

Project Hygiene

  • Security policy: SECURITY.md
  • Contributing guide: CONTRIBUTING.md
  • Code of Conduct: CODE_OF_CONDUCT.md
  • Changelog: CHANGELOG.md


Note: macOS enforcement (pf) is for development only. Use Linux + eBPF for production.

eBPF Setup Guide | Get Started | Open an Issue
