A general-purpose integration testing framework designed to perform fuzz testing between different frontends and backends.
Clone the repository:
git clone https://github.com/<your-org>/pto-testing-framework.git
cd pto-testing-framework

The framework provides an intelligent build script that automatically detects available dependencies and builds what it finds:
# Simple build - automatically detects and builds all available components
./build_and_install.sh
# The script will:
# 1. Check for PyPTO (environment variable, 3rdparty/, pip install, or clone from GitHub)
# 2. Check for Simpler (same detection priority)
# 3. Build whatever is available

Common Build Options:
# Clean build from scratch
./build_and_install.sh --clean
# Debug build
./build_and_install.sh --type Debug
# Install pto-test package in editable mode
./build_and_install.sh --install
# Parallel build with 8 jobs
./build_and_install.sh --jobs 8
# Combine options
./build_and_install.sh --clean --type Release --install

Build Script Options:
Build Configuration:
- `-t, --type TYPE` — Build type: Debug, Release, RelWithDebInfo (default: RelWithDebInfo)
- `-c, --clean` — Clean build directory before building
- `-i, --install` — Install pto-test package in editable mode
- `-j, --jobs N` — Number of parallel jobs (default: auto-detect)
Dependency Configuration:
- `--pypto-repo URL` — PyPTO repository URL (default: https://github.com/hw-native-sys/pypto)
- `--simpler-repo URL` — Simpler repository URL (default: https://github.com/ChaoWao/simpler)
- `--pypto-branch NAME` — PyPTO branch/tag (default: main)
- `--simpler-branch NAME` — Simpler branch/tag (default: main)
Other:
- `-h, --help` — Show help message
After building, the environment is automatically configured. The script generates and sources build/setup_env.sh, setting up all required environment variables.
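For reference, the generated script looks roughly like this; the paths below are illustrative, and the real values depend on where each dependency was detected:

```bash
# Illustrative contents of build/setup_env.sh (paths are examples only)
export FRAMEWORK_ROOT=/path/to/pto-testing-framework
export PYPTO_ROOT="$FRAMEWORK_ROOT/3rdparty/pypto"
export SIMPLER_ROOT="$FRAMEWORK_ROOT/3rdparty/simpler"
export PYTHONPATH="$PYPTO_ROOT/python:$SIMPLER_ROOT/python:$FRAMEWORK_ROOT/src:$PYTHONPATH"
```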
For new terminal sessions, source the environment:
source build/setup_env.sh

The build script automatically detects dependencies in the following priority order:
Detection Priority:
1. Environment variables (`PYPTO_ROOT`, `SIMPLER_ROOT`)
2. Existing `3rdparty/pypto` and `3rdparty/simpler` directories
3. pip editable install (automatically detected)
4. Auto-clone from GitHub (if not found)
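In pseudocode, the detection for PyPTO proceeds roughly as follows (Simpler is handled the same way; this is a simplified sketch, not the script's actual code):

```bash
# Simplified sketch of the dependency detection order (PyPTO shown)
if [ -n "$PYPTO_ROOT" ]; then
    echo "PyPTO found (environment variable)"
elif [ -d "3rdparty/pypto" ]; then
    echo "PyPTO found (3rdparty directory)"
elif pip show pypto > /dev/null 2>&1; then
    echo "PyPTO found (pip editable install)"
else
    # Fall back to cloning the default repository and branch
    git clone --branch main https://github.com/hw-native-sys/pypto 3rdparty/pypto
fi
```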
Using Existing Dependencies:
# Method 1: Set environment variables before building
export PYPTO_ROOT=/path/to/your/pypto
export SIMPLER_ROOT=/path/to/your/simpler
./build_and_install.sh
# Method 2: Use pip editable install (auto-detected)
cd /path/to/pypto && pip install -e .
cd /path/to/simpler && pip install -e .
cd /path/to/pto-testing-framework
./build_and_install.sh
# Method 3: Place dependencies in 3rdparty/ (auto-detected)
# Just put pypto and simpler in 3rdparty/ directory
./build_and_install.sh

What Gets Built:
The script will show you what it detected:
Detecting Dependencies
========================================
Checking for PyPTO...
✓ PyPTO found (environment variable)
→ PyPTO will be built
Checking for Simpler...
✓ Simpler found (3rdparty directory)
→ Simpler will be built
Build Plan:
• Framework: ✓ (always)
• PyPTO: ✓ (detected)
• Simpler: ✓ (detected)
The framework serves as an integration layer that decouples frontends and backends, enabling comprehensive end-to-end testing between PyPTO (compiler frontend) and Simpler (runtime backend).
Frontend:
- pypto — Python DSL for tensor programming
Backend:
- simpler — Runtime execution engine
Testing Framework (pto-test): The framework bridges frontend and backend through three core modules, a code-generation pipeline, and a standalone tool:
- test_case.py — Base classes for test definitions
  - `PTOTestCase`: Abstract base class for user-defined tests
  - `TensorSpec`: Tensor specification with flexible initialization patterns
  - `TestConfig`: Configuration options (platform, tolerances, save options)
  - `TestResult`: Test execution results and validation metrics
- test_runner.py — Test execution engine
  - `TestRunner`: Orchestrates the complete test pipeline
  - `TestSuite`: Batch execution and summary reporting
- validators.py — Result validation utilities
  - `ResultValidator`: Compares outputs with configurable tolerances
- kernel_generator.py — PyPTO IR → CCE C++ kernels
  - Runs PyPTO pass pipeline with optimization strategies
  - Generates per-function kernel files
- orch_generator.py — Auto-generates orchestration C++ skeleton
- config_generator.py — Generates `kernel_config.py` for the simpler runtime
- golden_generator.py — Generates `golden.py` with reference computations
- standalone_runner.py — Tool for testing manually completed orchestration code
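To show how these modules fit together, here is a minimal programmatic sketch. The `TestConfig` and `TestRunner` constructor signatures are assumptions; in normal use, the `test_runner` pytest fixture from tests/conftest.py constructs them for you:

```python
# Minimal sketch of the core-module flow; TestConfig(platform=...) and
# TestRunner(config) signatures are assumptions, not the confirmed API.
from pto_test.core.test_case import TestConfig
from pto_test.core.test_runner import TestRunner

config = TestConfig(platform="a2a3sim")   # simulation platform (the default)
runner = TestRunner(config)
result = runner.run(TestTileAdd())        # any PTOTestCase subclass; see the example below
assert result.passed, f"Test failed: {result.error}"
```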
pto-testing-framework/
├── src/pto_test/ # Testing framework source
│ ├── core/ # Core test infrastructure
│ │ ├── test_case.py # PTOTestCase, TensorSpec, TestConfig
│ │ ├── test_runner.py # TestRunner, TestSuite
│ │ └── validators.py # ResultValidator
│ ├── codegen/ # Code generation pipeline
│ │ ├── kernel_generator.py # PyPTO IR → CCE C++ kernels
│ │ ├── orch_generator.py # Auto-generate orchestration
│ │ ├── config_generator.py # Generate kernel_config.py
│ │ └── golden_generator.py # Generate golden.py
│ └── tools/ # Utilities
│ └── standalone_runner.py # Manual orchestration testing
├── tests/ # Test cases
│ ├── conftest.py # pytest configuration and fixtures
│ └── test_cases/ # Actual test implementations
│ └── test_elementwise.py
├── 3rdparty/ # Dependencies (auto-managed, in .gitignore)
│ ├── pypto/ # PyPTO frontend (if auto-cloned)
│ └── simpler/ # Simpler runtime backend (if auto-cloned)
├── build/ # Build artifacts (generated)
│ ├── pypto/ # PyPTO build output
│ ├── setup_env.sh # Auto-generated environment setup
│ └── outputs/ # Test artifacts (when --save-kernels)
│ └── output_{timestamp}/
│ └── {test_name}/
│ ├── kernels/
│ ├── pass_dump/ # (if --dump-passes)
│ └── metadata.json
├── build_and_install.sh # Intelligent build script (framework + dependencies)
└── pyproject.toml # Project configuration
# Run all tests (simulation platform by default)
pytest tests/ -v
# Run specific test file
pytest tests/test_cases/test_elementwise.py -v
# Run specific test class
pytest tests/test_cases/test_elementwise.py::TestTileAdd -v
# Run with verbose output
pytest tests/ -vv

# Save generated kernels and artifacts
pytest tests/ -v --save-kernels
# Save to custom directory
pytest tests/ -v --save-kernels --kernels-dir ./my_outputs
# Enable IR pass dumps for debugging compiler transformations
pytest tests/ -v --save-kernels --dump-passes
# Generate code only, skip runtime execution
pytest tests/ -v --codegen-only --save-kernels

# Run on real hardware (requires NPU device)
pytest tests/ -v --platform=a2a3 --device=0
# Run on hardware with saved kernels
pytest tests/ -v --platform=a2a3 --device=0 --save-kernels

# Use PTOAS optimization strategy
pytest tests/ -v --strategy=PTOAS
# Combine with other options
pytest tests/ -v --strategy=PTOAS --save-kernels --dump-passes

Define test cases by inheriting from `PTOTestCase` and implementing required methods:
from pto_test.core.test_case import PTOTestCase, TensorSpec, DataType
import pypto.lang as pl
class TestTileAdd(PTOTestCase):
def get_name(self) -> str:
"""Return a unique name for this test."""
return "tile_add_128x128"
def define_tensors(self) -> list[TensorSpec]:
"""Define input and output tensors with initialization."""
return [
TensorSpec("a", [128, 128], DataType.FP32, init_value=2.0),
TensorSpec("b", [128, 128], DataType.FP32, init_value=3.0),
TensorSpec("c", [128, 128], DataType.FP32, is_output=True),
]
def get_program(self):
"""Define the PyPTO program to test."""
@pl.program
class TileAddProgram:
@pl.function
def tile_add(
self,
a: pl.Tensor[[128, 128], pl.FP32],
b: pl.Tensor[[128, 128], pl.FP32],
c: pl.Tensor[[128, 128], pl.FP32],
):
tile_a = pl.op.block.load(a, 0, 0, 128, 128)
tile_b = pl.op.block.load(b, 0, 0, 128, 128)
tile_c = pl.op.block.add(tile_a, tile_b)
pl.op.block.store(tile_c, 0, 0, 128, 128, c)
return TileAddProgram
def compute_expected(self, tensors: dict, params=None):
"""Compute expected result using NumPy."""
tensors["c"][:] = tensors["a"] + tensors["b"]

`TensorSpec` supports multiple initialization patterns:
# Scalar initialization (broadcast to all elements)
TensorSpec("a", [128, 128], DataType.FP32, init_value=1.0)
# NumPy array initialization
TensorSpec("b", [4, 4], DataType.FP32, init_value=np.eye(4))
# Callable initialization (for random data)
TensorSpec("c", [256, 256], DataType.FP32, init_value=lambda: np.random.randn(256, 256))
# Zero initialization (default)
TensorSpec("output", [128, 128], DataType.FP32, is_output=True)

Use pytest fixtures for shape variations:
import pytest
class TestTileAddParameterized(PTOTestCase):
def __init__(self, rows: int, cols: int):
self.rows = rows
self.cols = cols
def get_name(self) -> str:
return f"tile_add_{self.rows}x{self.cols}"
def define_tensors(self) -> list[TensorSpec]:
return [
TensorSpec("a", [self.rows, self.cols], DataType.FP32, init_value=1.0),
TensorSpec("b", [self.rows, self.cols], DataType.FP32, init_value=2.0),
TensorSpec("c", [self.rows, self.cols], DataType.FP32, is_output=True),
]
# ... rest of implementation
@pytest.mark.parametrize("rows,cols", [(64, 64), (128, 128), (256, 256)])
def test_tile_add_shapes(test_runner, rows, cols):
test_case = TestTileAddParameterized(rows, cols)
result = test_runner.run(test_case)
assert result.passed, f"Test failed: {result.error}"

pytest command-line options for controlling test execution:
| Option | Default | Description |
|---|---|---|
| `--platform` | `a2a3sim` | Target platform (`a2a3sim` for simulation, `a2a3` for hardware) |
| `--device` | `0` | Device ID for hardware tests |
| `--strategy` | `Default` | PyPTO optimization strategy (`Default` or `PTOAS`) |
| `--save-kernels` | `False` | Save generated kernels and artifacts |
| `--kernels-dir` | `build/outputs/output_{timestamp}/` | Custom output directory for kernels |
| `--dump-passes` | `False` | Dump intermediate IR after each pass |
| `--codegen-only` | `False` | Only generate code, skip runtime execution |
| `--fuzz-count` | `10` | Number of fuzz test iterations (planned feature) |
| `--fuzz-seed` | random | Random seed for fuzz tests (planned feature) |
Note: Fuzz testing infrastructure is available but test cases are not yet implemented. The `--fuzz-count` and `--fuzz-seed` options are reserved for future fuzz testing functionality.
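These options are consumed by the fixtures in tests/conftest.py. As a rough illustration, a minimal `test_runner` fixture might look like the following (a hypothetical sketch; the real fixture also registers the options via `pytest_addoption` and handles the remaining ones):

```python
# Hypothetical minimal version of the test_runner fixture in tests/conftest.py.
# TestConfig/TestRunner signatures are assumptions.
import pytest
from pto_test.core.test_case import TestConfig
from pto_test.core.test_runner import TestRunner

@pytest.fixture
def test_runner(request):
    # Read a registered command-line option and build the runner from it.
    config = TestConfig(platform=request.config.getoption("--platform"))
    return TestRunner(config)
```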
By default, generated kernels and artifacts are written to temporary directories. Use `--save-kernels` to persist them:
# Save to default location (build/outputs/output_{timestamp}/)
pytest tests/ -v --save-kernels
# Save to custom directory
pytest tests/ -v --save-kernels --kernels-dir ./my_outputs
# Run specific test with saved kernels
pytest tests/test_cases/test_elementwise.py::TestTileAdd -v --save-kernels

Output structure:
build/outputs/output_{timestamp}/
└── {test_name}/
├── kernels/
│ ├── aiv/
│ │ └── {func_name}.cpp # Generated kernel code
│ ├── orchestration/
│ │ └── orch.cpp # Orchestration skeleton
│ ├── kernel_config.py # Config for simpler runtime
│ └── golden.py # Reference computation
└── metadata.json # Test metadata
Dump intermediate IR representations after each compiler pass:
pytest tests/ -v --save-kernels --dump-passes

This creates a pass_dump/ directory in each test output with IR snapshots at each optimization stage, useful for debugging compiler transformations.
Generate code without executing on the runtime (useful for validating codegen):
pytest tests/ -v --codegen-only --save-kernels

Override the default optimization strategy at runtime:
# Use PTOAS optimization strategy
pytest tests/ -v --strategy=PTOAS

Or override in your test case:
class MyTest(PTOTestCase):
def get_strategy(self):
from pypto.ir.pass_manager import OptimizationStrategy
return OptimizationStrategy.PTOAS

The `standalone_runner.py` tool allows you to manually complete and test orchestration code. This is useful when you need custom task creation logic beyond what the auto-generator provides.
Typical workflow:
1. Generate skeleton code using `--codegen-only`:

pytest tests/test_cases/test_elementwise.py::TestTileAdd -v --codegen-only --save-kernels

2. Locate the generated orchestration file:
build/outputs/output_{timestamp}/tile_add_128x128/kernels/orchestration/orch.cpp
3. Edit the orchestration file to replace the TODO section with your custom task creation code:
// TODO: Add your task creation code here
// Example:
// auto task = runtime->create_task(...);
// runtime->submit_task(task);

4. Run the completed test using the standalone runner:
# Run on simulator
python -m pto_test.tools.standalone_runner \
--run \
--test-dir build/outputs/output_20260202_012129/tile_add_128x128 \
--platform a2a3sim
# Run on hardware
python -m pto_test.tools.standalone_runner \
--run \
--test-dir ./my_test \
--platform a2a3 \
--device-id 0

Programmatic usage:
from pto_test.tools.standalone_runner import StandaloneRunner
runner = StandaloneRunner()
runner.run_completed_test(
test_dir='./my_test',
platform='a2a3sim',
device_id=0
)

Tool features:
- Validates presence of required files (kernel_config.py, golden.py, orch.cpp)
- Warns if orchestration still contains TODO markers
- Executes via simpler's CodeRunner
- Provides clear pass/fail feedback
Supported block operations:

- `add`, `sub`, `mul`, `div`
- `adds`, `subs`, `muls`, `divs`
- `exp`, `sqrt`, `neg`, `abs`
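Tests for any of these operations follow the same pattern as the TileAdd example above. Below is a hedged sketch for `exp`, assuming `pl.op.block.exp` takes a single tile argument in the same style as `pl.op.block.add`:

```python
# Hedged sketch of a unary-op test; the pl.op.block.exp calling
# convention is an assumption based on the TileAdd example.
import numpy as np
import pypto.lang as pl
from pto_test.core.test_case import PTOTestCase, TensorSpec, DataType

class TestTileExp(PTOTestCase):
    def get_name(self) -> str:
        return "tile_exp_128x128"

    def define_tensors(self) -> list[TensorSpec]:
        return [
            TensorSpec("x", [128, 128], DataType.FP32,
                       init_value=lambda: np.random.randn(128, 128)),
            TensorSpec("out", [128, 128], DataType.FP32, is_output=True),
        ]

    def get_program(self):
        @pl.program
        class TileExpProgram:
            @pl.function
            def tile_exp(
                self,
                x: pl.Tensor[[128, 128], pl.FP32],
                out: pl.Tensor[[128, 128], pl.FP32],
            ):
                tile_x = pl.op.block.load(x, 0, 0, 128, 128)
                tile_y = pl.op.block.exp(tile_x)  # assumed single-tile signature
                pl.op.block.store(tile_y, 0, 0, 128, 128, out)
        return TileExpProgram

    def compute_expected(self, tensors: dict, params=None):
        # NumPy reference computation for validation.
        tensors["out"][:] = np.exp(tensors["x"])
```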
The build_and_install.sh script automatically configures your environment. For new terminal sessions:
# Source the auto-generated environment script
source build/setup_env.sh

This sets up:
- `FRAMEWORK_ROOT` — Testing framework root directory
- `PYPTO_ROOT` — Path to PyPTO installation (if used)
- `SIMPLER_ROOT` — Path to Simpler installation (if used)
- `PYTHONPATH` — Includes all required Python packages
Verify your setup:
# Check environment variables
echo $FRAMEWORK_ROOT
echo $PYPTO_ROOT
echo $SIMPLER_ROOT
# Test imports
python -c "import pypto; print(f'PyPTO version: {pypto.__version__}')"
python -c "import pto_compiler; print('Simpler imported successfully')"
python -c "import pto_test; print('pto-test imported successfully')"

Manual Environment Setup (advanced):
If you need fine-grained control, set these environment variables before running tests:
export FRAMEWORK_ROOT=/path/to/pto-testing-framework
export PYPTO_ROOT=/path/to/pypto
export SIMPLER_ROOT=/path/to/simpler
export PYTHONPATH="$PYPTO_ROOT/python:$SIMPLER_ROOT/python:$FRAMEWORK_ROOT/src:$PYTHONPATH"

┌─────────────────────────────────────────────────────────────┐
│ Test Case Definition │
│ (TensorSpec + build_ir + compute_expected) │
└───────────────────────────┬─────────────────────────────────┘
│
┌──────────────────┼──────────────────┐
│ │ │
▼ ▼ ▼
┌─────────────────┐ ┌───────────────────┐ ┌─────────────────┐
│ 1. Build IR │ │ 2. Generate Kernel│ │ 3. Prepare Data │
│ (PyPTO) │ │ (KernelGenerator) │ │ (NumPy arrays) │
└────────┬────────┘ └─────────┬─────────┘ └────────┬────────┘
│ │ │
└────────────────────┼────────────────────┘
│
▼
┌────────────────────────┐
│ 4. Compile Components │
│ - Kernel → .o/.text │
│ - Orchestration → .so │
└───────────┬────────────┘
│
▼
┌────────────────────────┐
│ 5. Execute on Simpler │
│ - bind_host_binary() │
│ - register_kernel() │
│ - launch_runtime() │
└───────────┬────────────┘
│
▼
┌────────────────────────┐
│ 6. Validate Results │
│ - Compare with NumPy │
│ - Return TestResult │
└────────────────────────┘
Problem: ModuleNotFoundError for pypto or simpler.
Solution:
# Option 1: Rebuild (will auto-detect and clone dependencies if needed)
./build_and_install.sh --clean
# Option 2: Set environment variables to existing installations
export PYPTO_ROOT=/path/to/pypto
export SIMPLER_ROOT=/path/to/simpler
source build/setup_env.sh

Problem: ImportError: No module named 'pypto' or similar.
Solution:
source build/setup_env.sh

Problem: CMake configuration errors or compilation failures.
Solutions:
# Try a clean build
./build_and_install.sh --clean
# Check Python installation
python3 --version
python3 -c "import sys; print(sys.prefix)"
# Ensure nanobind is installed
pip install nanobind

Problem: CMake error about missing nanobind_DIR.
Solution:
pip install nanobind

The build script will automatically install nanobind if missing, but manual installation may be needed in some environments.
Problem: pytest doesn't discover tests or fixtures are missing.
Solutions:
# Ensure you're in the project root
cd /path/to/pto-testing-framework
# Check conftest.py is present
ls tests/conftest.py
# Run with verbose output
pytest tests/ -vv

Problem: Tests marked with @pytest.mark.hardware are skipped.
Solution:
# Use the hardware platform
pytest tests/ -v --platform=a2a3 --device=0

Simulation tests run by default (`--platform=a2a3sim`).
Apache-2.0