feat: Add comprehensive pipeline validation test infrastructure#271
feat: Add comprehensive pipeline validation test infrastructure#271debasishbsws merged 6 commits intochainguard-dev:mainfrom
Conversation
|
I like the idea of using real world packages to verify pipelines are working as expected. However, the proposed infrastructure results in a significant amount of code duplication (melange yamls for the test infra). I propose a slightly different approach: to autogenerate them at test time. Imagine this Imagine this other We could have a number of yaml files like these. Then, we will have a runner script (in python, go, whatever) that for each package, generates the same melange yamls as you envisioned, then copies the pipelines as defined in the test case, executes the tests, and verifies the exit code. What do you think? We could have several pipelines in a single test case: The behavior is the same, our runner will generate the melange yamls for each package and copy all the pipelines, test, and verify the exit_code. |
39ef0e8 to
40b4f05
Compare
aborrero
left a comment
There was a problem hiding this comment.
good work! some comments inline.
aborrero
left a comment
There was a problem hiding this comment.
some cleanups are still required.
| Tests the pipeline validators located in `pipelines/test/tw/` using test packages in `tests/`. | ||
|
|
||
| ```bash | ||
| make test-pipelines |
There was a problem hiding this comment.
maybe have a prompt here so is the same format as the other commands.
87ed540 to
d94016b
Compare
Claude code description: Introduce a new testing framework for validating pipeline checkers with both positive and negative test cases. This ensures pipeline validators work correctly before release and prevents regressions. - Add tests/docs-test.yaml with comprehensive docs pipeline validation - Positive test: giflib-doc (real Wolfi package with valid docs) - Negative tests: glibc (non-docs package), binaries, empty packages - All negative tests capture and display checker output for debugging - Add tests/README.md documenting test structure and best practices - Configure tests to use provider-priority: 0 for proper Wolfi precedence - Restructure test targets into three categories: - test-melange: Tests main tw package - test-projects: Tests individual project tools - test-pipelines: Tests pipeline validators (new) - Add test-all target to run complete test suite - Add granular targets: build-pipeline-tests, run-pipeline-tests - Fix MELANGE_TEST_OPTS to include proper repository configuration - Add TEST_DIR and TEST_OUT_DIR for test package isolation - Include both main and test package repositories - Add pipeline-dirs, keyring, and arch configuration - Rename pattern rule from test-% to test-project/% - Prevents conflicts with test-pipelines, test-melange targets - Uses slash separator for clearer intent (e.g., test-project/gosh) - Update clean target to remove test package output directory - Add informative echo statements for better CI/CD visibility - Update workflow to use test-projects (renamed from test) - Add test-pipelines step to validate pipeline checkers - Ensures all three test types run in CI - Expand testing section with comprehensive test type documentation - Document all make targets with usage examples - Explain positive vs negative test concepts - Add test files structure and purpose - Rename "Testing locally" to "Testing with Original Repositories" - Clarify workflow for testing changes in wolfi-os/enterprise-packages - Add explanation of --repository-append usage - Ignore .DS_Store files (macOS) - Ignore tests/packages/ directory (test build artifacts) Previously, pipeline validators were tested manually or not at all. This led to: 1. Regressions when modifying checkers 2. Inconsistent behavior across different package types 3. Difficulty validating edge cases This infrastructure provides: 1. Automated validation of pipeline checkers 2. Both positive (should pass) and negative (should fail) test coverage 3. Clear documentation for adding new pipeline tests 4. Separation of test artifacts from main package builds 5. Reproducible local testing matching CI environment The test-pipelines target runs a clean build to ensure tests use the latest checker implementations, preventing false positives from stale builds. Verified all test targets work correctly: - make test-melange: ✓ - make test-projects: ✓ - make test-pipelines: ✓ - make test-all: ✓ - make test-project/package-type-check: ✓ Signed-off-by: Debasish Biswas <debasishbsws.dev@gmail.com>
… staticpackage tests Complete rewrite of tests/README.md to provide comprehensive guidance on writing pipeline validation tests, emphasizing critical requirements and common pitfalls. Add staticpackage-test.yaml with both positive and negative test scenarios. **New Structure:** - Focus on "how to write tests" rather than "what tests exist" - Step-by-step guidance for creating new pipeline tests - Comprehensive examples with explanations **Critical Configuration Rules Section:** 1. Always use version `0.0.0` - explains precedence behavior 2. Set `provider-priority: 0` - explains Wolfi package testing 3. Don't test main package - explains organizational benefits 4. Use subpackages for scenarios - explains test isolation **Testing Real Wolfi Packages:** - Detailed explanation of how to test real packages (giflib-doc, glibc) - How version `0.0.0` + `provider-priority: 0` enables this - Benefits: validates against real-world package structures **Positive Test Guidelines:** - Simple, focused examples - No special test logic needed - Create realistic package content **Negative Test Requirements (Critical Section):** Four critical requirements with detailed explanations: 1. `set +e` - why it's needed to continue after checker fails 2. Capture output - debugging and documentation benefits 3. Validate failure - how to check exit codes correctly 4. Add package-type-check - why it must be in environment **Common Mistakes Section:** - 6 common errors with solutions - Based on real development experience - Helps prevent repetitive debugging **Test Checklist:** - Actionable 14-point checklist - Covers all critical requirements - Ensures consistency across test files **Best Practices:** - 8 practical guidelines - Emphasizes maintainability and clarity - Real-world testing strategies **Simplified Environment:** - Changed from `build-base` + `busybox` to just `busybox` - `busybox` provides `/bin/sh` and basic utilities (sufficient) - Reduces unnecessary dependencies Complete test suite for static package pipeline validation: **Positive Tests:** 1. `gdcm-static` - Real Wolfi static package (production validation) 2. `contains-only-static` - Synthetic package with only .a files **Negative Tests:** 3. `glibc` - Real Wolfi non-static package (should be rejected) 4. `contains-static-and-more` - Synthetic with .so files (should be rejected) All negative tests follow best practices: - Use `set +e` to handle expected failures - Capture and display checker output - Validate rejection with proper exit codes **Configuration:** - Version `0.0.0` for proper precedence - `provider-priority: 0` enables Wolfi package testing - Minimal environment (busybox only) - Main package has only log line **Version Fix:** - Changed from `0.0.1` → `0.0.0` for consistency - Ensures proper precedence with Wolfi packages **Environment Simplification:** - Removed `build-base` (not needed for simple tests) - Keep only `busybox` (provides /bin/sh) **Pipeline Clarification:** - Changed from empty echo to descriptive message - Clearly states this is a test package Added link to tests/README.md for detailed pipeline test documentation, making it discoverable from main README. The original tests/README.md was more of a catalog than a guide. It listed what tests existed but didn't explain how to create new ones or why certain patterns were important. Signed-off-by: Debasish Biswas <debasishbsws.dev@gmail.com>
The tests/ directory contains pipeline validation test packages, not project code with test targets. Exclude it from PROJECT_DIRS to prevent `make test-projects` from attempting to run `make -C tests test`. Fixes CI error: make[1]: *** No rule to make target 'test'. Stop. make: *** [test-project/tests] Error 2 Signed-off-by: Debasish Biswas <debasishbsws.dev@gmail.com>
Signed-off-by: Debasish Biswas <debasishbsws.dev@gmail.com>
Previously, header-check would exit 0 (success) when a package contained zero headers, reporting '0/0 passed'. This created a false positive where packages without any headers would incorrectly pass validation. This fix adds a HEADERS_TESTED counter to track whether any headers were actually tested. If zero headers are found, the script now exits with an error message indicating no headers were found in the specified packages or files. Why manual tests are required: The header-check pipeline validator cannot be properly tested through standard melange package builds because: 1. **Edge cases require synthetic packages**: Testing the validator's behavior with zero headers, headers in wrong paths, or partially invalid headers requires constructing artificial packages that intentionally violate normal packaging standards. Real packages would never be built this way. 2. **Testing the validator itself, not packages**: These tests verify the header-check tool behaves correctly when encountering malformed or edge-case packages. The goal is to ensure the validator catches problems, not to test legitimate package builds. 3. **Negative test cases**: Several test cases (no headers, wrong paths) are designed to fail. Standard package tests expect success, making them unsuitable for testing validator error handling. 4. **Controlled test environment**: Manual test packages allow precise control over package contents to trigger specific validator code paths that would be difficult or impossible to test otherwise. The header-check-test.yaml file provides three critical test scenarios: - Packages with zero headers (tests error detection) - Headers in non-standard paths like /opt/include (tests path validation) - Mix of valid and invalid headers (tests partial failure reporting) These manual tests ensure the header-check validator correctly identifies packaging issues before packages reach production. Signed-off-by: Debasish Biswas <debasishbsws.dev@gmail.com>
74d9812 to
ca52d2a
Compare
|
Why manual tests are required:
The header-check-test.yaml file provides three critical test scenarios:
I have found and fix(header-check): Fail when no headers found instead of false positive This fix adds a HEADERS_TESTED counter to track whether any headers were |
|
@aborrero as this Pr was created first for the manual tests as describe in the pr description. I am letting this as it was previously and creating a new Pr with the autogen test infra. |
Signed-off-by: Debasish Biswas <debasishbsws.dev@gmail.com>
Claude code description:
Introduce a new testing framework for validating pipeline checkers with both positive and negative test cases. This ensures pipeline validators work correctly before release and prevents regressions.
What's New:
1. Test Files
tests/docs-test.yaml: Validates docs pipelinetests/README.md: Documents test structure and best practices.gitignore: Ignore .DS_Store and tests/packages/ (build artifacts)2. Test Targets
Reorganized into three categories:
make test-melange: Tests main tw packagemake test-projects: Tests individual project tools (e.g.,make test-project/gosh)make test-pipelines: Tests pipeline validators (new)make test-all: Runs complete test suite3. Configuration Updates
provider-priority: 0for proper Wolfi precedenceTEST_DIRandTEST_OUT_DIRfor test package isolationMELANGE_TEST_OPTSwith proper repository and keyring configuration4. Pattern Rule Changes
test-%totest-project/%to prevent naming conflictstest-project/gosh)5. CI/CD Updates
test-pipelinesstep to validate checkersbuild-pipeline-tests,run-pipeline-testsVerified Working: