
feat: Add comprehensive pipeline validation test infrastructure#271

Merged
debasishbsws merged 6 commits into chainguard-dev:main from debasishbsws:add-test-for-tw
Jan 28, 2026

Conversation


@debasishbsws debasishbsws commented Jan 22, 2026

Claude code description:

Introduce a new testing framework for validating pipeline checkers with both positive and negative test cases. This ensures pipeline validators work correctly before release and prevents regressions.

What's New:

1. Test Files

  • tests/docs-test.yaml: Validates docs pipeline
    • Positive test: giflib-doc (valid Wolfi package with docs)
    • Negative tests: glibc (no docs), binaries, empty packages
    • All negative tests capture output for debugging
  • tests/README.md: Documents test structure and best practices
  • .gitignore: Ignore .DS_Store and tests/packages/ (build artifacts)

2. Test Targets
Reorganized into three categories:

  • make test-melange: Tests main tw package
  • make test-projects: Tests individual project tools (e.g., make test-project/gosh)
  • make test-pipelines: Tests pipeline validators (new)
  • make test-all: Runs complete test suite

3. Configuration Updates

  • Set provider-priority: 0 for proper Wolfi precedence
  • Added TEST_DIR and TEST_OUT_DIR for test package isolation
  • Fixed MELANGE_TEST_OPTS with proper repository and keyring configuration
  • Added architecture and pipeline-dirs settings

4. Pattern Rule Changes

  • Renamed test-% to test-project/% to prevent naming conflicts
  • Uses slash separator for clarity (e.g., test-project/gosh)
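A plausible shape for the renamed pattern rule (a sketch only; the actual recipe in the Makefile may pass additional variables):

```make
# Hypothetical sketch: delegate test-project/<name> to that project's
# own Makefile via make's pattern-rule stem ($*).
test-project/%:
	$(MAKE) -C $* test
```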

5. CI/CD Updates

  • Added test-pipelines step to validate checkers
  • Added granular targets: build-pipeline-tests, run-pipeline-tests
  • Added informative echo statements for visibility
  • Updated workflow to run all three test types

Verified Working:

  • ✓ make test-melange
  • ✓ make test-projects
  • ✓ make test-pipelines
  • ✓ make test-all
  • ✓ make test-project/package-type-check

@debasishbsws debasishbsws requested review from aborrero and smoser and removed request for smoser January 22, 2026 09:39
@aborrero (Contributor) commented:

I like the idea of using real world packages to verify pipelines are working as expected. However, the proposed infrastructure results in a significant amount of code duplication (melange yamls for the test infra).

I propose a slightly different approach: to autogenerate them at test time.

Imagine this testcase1.yaml file:

```yaml
name: my positive test for test/tw/docs
packages:
  - glibc-doc
  - someother-doc
  - yetanother-doc
pipelines:
  - uses: test/tw/docs
exit_code: 0
```

Imagine this other testcase2.yaml file:

```yaml
name: my negative test for test/tw/docs
packages:
  - somerandom-nodocs-package
  - somerandom-nodocs-package2
pipelines:
  - uses: test/tw/docs
exit_code: 1
```

We could have a number of yaml files like these.

Then we would have a runner script (in Python, Go, whatever) that, for each package, generates the same melange yamls as you envisioned, copies the pipelines defined in the test case, executes the tests, and verifies the exit code.

What do you think?

We could have several pipelines in a single test case:

```yaml
name: testing multiple pipelines on the same test case
packages:
  - somerandom-package
  - somerandom-package2
pipelines:
  - uses: test/tw/contains-files
    with:
      files: |
        /somefile1
        /somefile2
  - uses: test/tw/virtualpackage
    with:
      virtual-pkg-name: something
exit_code: 0
```

The behavior is the same: our runner generates the melange yamls for each package, copies all the pipelines, runs the test, and verifies the exit_code.
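The core of such a runner could be as small as this sketch (all names here are hypothetical; a real runner would also generate the melange yaml and copy the pipelines before running the test):

```shell
#!/bin/sh
# Sketch of the proposed runner: read the declared exit_code from a
# testcase file, run the test command, and compare exit statuses.
run_case() {
  tc="$1"; shift
  want=$(sed -n 's/^exit_code:[[:space:]]*//p' "$tc")
  got=0
  "$@" || got=$?                     # run the test command, record status
  if [ "$got" -eq "$want" ]; then
    echo "PASS: $tc"
  else
    echo "FAIL: $tc (want exit $want, got $got)"
    return 1
  fi
}

# Demo: a negative testcase declaring exit_code: 1, paired with a
# command that fails, as a checker rejecting a bad package would.
printf 'name: demo negative case\nexit_code: 1\n' > /tmp/testcase-demo.yaml
run_case /tmp/testcase-demo.yaml false
```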

@aborrero (Contributor) left a review comment:
good work! some comments inline.

@debasishbsws debasishbsws requested a review from aborrero January 27, 2026 13:51
@aborrero (Contributor) left a review comment:
some cleanups are still required.

Tests the pipeline validators located in `pipelines/test/tw/` using test packages in `tests/`.

```bash
make test-pipelines
```

@aborrero (Contributor) commented inline:

Maybe add a prompt here so it has the same format as the other commands.

Claude code description:

Introduce a new testing framework for validating pipeline checkers with
both positive and negative test cases. This ensures pipeline validators
work correctly before release and prevents regressions.

- Add tests/docs-test.yaml with comprehensive docs pipeline validation
  - Positive test: giflib-doc (real Wolfi package with valid docs)
  - Negative tests: glibc (non-docs package), binaries, empty packages
  - All negative tests capture and display checker output for debugging
- Add tests/README.md documenting test structure and best practices
- Configure tests to use provider-priority: 0 for proper Wolfi precedence

- Restructure test targets into three categories:
  - test-melange: Tests main tw package
  - test-projects: Tests individual project tools
  - test-pipelines: Tests pipeline validators (new)
- Add test-all target to run complete test suite
- Add granular targets: build-pipeline-tests, run-pipeline-tests
- Fix MELANGE_TEST_OPTS to include proper repository configuration
  - Add TEST_DIR and TEST_OUT_DIR for test package isolation
  - Include both main and test package repositories
  - Add pipeline-dirs, keyring, and arch configuration
- Rename pattern rule from test-% to test-project/%
  - Prevents conflicts with test-pipelines, test-melange targets
  - Uses slash separator for clearer intent (e.g., test-project/gosh)
- Update clean target to remove test package output directory
- Add informative echo statements for better CI/CD visibility

- Update workflow to use test-projects (renamed from test)
- Add test-pipelines step to validate pipeline checkers
- Ensures all three test types run in CI

- Expand testing section with comprehensive test type documentation
- Document all make targets with usage examples
- Explain positive vs negative test concepts
- Add test files structure and purpose
- Rename "Testing locally" to "Testing with Original Repositories"
- Clarify workflow for testing changes in wolfi-os/enterprise-packages
- Add explanation of --repository-append usage
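As an illustration of the `--repository-append` workflow, a local test run might look like this (paths are placeholders, not the repository's actual layout):

```
melange test tests/docs-test.yaml \
  --repository-append ./packages \
  --repository-append https://packages.wolfi.dev/os \
  --keyring-append ./local-melange.rsa.pub \
  --keyring-append https://packages.wolfi.dev/os/wolfi-signing.rsa.pub
```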

- Ignore .DS_Store files (macOS)
- Ignore tests/packages/ directory (test build artifacts)

Previously, pipeline validators were tested manually or not at all. This
led to:
1. Regressions when modifying checkers
2. Inconsistent behavior across different package types
3. Difficulty validating edge cases

This infrastructure provides:
1. Automated validation of pipeline checkers
2. Both positive (should pass) and negative (should fail) test coverage
3. Clear documentation for adding new pipeline tests
4. Separation of test artifacts from main package builds
5. Reproducible local testing matching CI environment

The test-pipelines target runs a clean build to ensure tests use the
latest checker implementations, preventing false positives from stale
builds.

Verified all test targets work correctly:
- make test-melange: ✓
- make test-projects: ✓
- make test-pipelines: ✓
- make test-all: ✓
- make test-project/package-type-check: ✓

Signed-off-by: Debasish Biswas <debasishbsws.dev@gmail.com>
… staticpackage tests

Complete rewrite of tests/README.md to provide comprehensive guidance on
writing pipeline validation tests, emphasizing critical requirements and
common pitfalls. Add staticpackage-test.yaml with both positive and
negative test scenarios.

**New Structure:**
- Focus on "how to write tests" rather than "what tests exist"
- Step-by-step guidance for creating new pipeline tests
- Comprehensive examples with explanations

**Critical Configuration Rules Section:**
1. Always use version `0.0.0` - explains precedence behavior
2. Set `provider-priority: 0` - explains Wolfi package testing
3. Don't test main package - explains organizational benefits
4. Use subpackages for scenarios - explains test isolation

**Testing Real Wolfi Packages:**
- Detailed explanation of how to test real packages (giflib-doc, glibc)
- How version `0.0.0` + `provider-priority: 0` enables this
- Benefits: validates against real-world package structures
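Put together, the package header of a test yaml would look roughly like this (a sketch; the exact placement of `provider-priority` in the melange schema should be checked against the melange docs):

```yaml
package:
  name: docs-test
  version: 0.0.0          # never takes precedence over the real package
  epoch: 0
  dependencies:
    provider-priority: 0  # real Wolfi packages win during resolution
```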

**Positive Test Guidelines:**
- Simple, focused examples
- No special test logic needed
- Create realistic package content

**Negative Test Requirements (Critical Section):**
Four critical requirements with detailed explanations:
1. `set +e` - why it's needed to continue after checker fails
2. Capture output - debugging and documentation benefits
3. Validate failure - how to check exit codes correctly
4. Add package-type-check - why it must be in environment
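The four requirements above translate into a shell fragment of roughly this shape inside a negative test's pipeline (here `false` stands in for the real checker invocation, which is expected to fail on the package under test):

```shell
#!/bin/sh
# Sketch of a negative-test body.

set +e                              # 1. don't abort when the checker fails
output=$(false 2>&1)                # 2. capture the checker's output
rc=$?
set -e

echo "checker output: $output"      # surface output for debugging

# 3. validate that the checker actually failed
if [ "$rc" -eq 0 ]; then
  echo "ERROR: checker passed but was expected to fail"
  exit 1
fi
echo "OK: checker failed as expected (exit $rc)"
```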

**Common Mistakes Section:**
- 6 common errors with solutions
- Based on real development experience
- Helps prevent repetitive debugging

**Test Checklist:**
- Actionable 14-point checklist
- Covers all critical requirements
- Ensures consistency across test files

**Best Practices:**
- 8 practical guidelines
- Emphasizes maintainability and clarity
- Real-world testing strategies

**Simplified Environment:**
- Changed from `build-base` + `busybox` to just `busybox`
- `busybox` provides `/bin/sh` and basic utilities (sufficient)
- Reduces unnecessary dependencies

Complete test suite for static package pipeline validation:

**Positive Tests:**
1. `gdcm-static` - Real Wolfi static package (production validation)
2. `contains-only-static` - Synthetic package with only .a files

**Negative Tests:**
3. `glibc` - Real Wolfi non-static package (should be rejected)
4. `contains-static-and-more` - Synthetic with .so files (should be rejected)

All negative tests follow best practices:
- Use `set +e` to handle expected failures
- Capture and display checker output
- Validate rejection with proper exit codes

**Configuration:**
- Version `0.0.0` for proper precedence
- `provider-priority: 0` enables Wolfi package testing
- Minimal environment (busybox only)
- Main package has only log line

**Version Fix:**
- Changed from `0.0.1` → `0.0.0` for consistency
- Ensures proper precedence with Wolfi packages

**Environment Simplification:**
- Removed `build-base` (not needed for simple tests)
- Keep only `busybox` (provides /bin/sh)

**Pipeline Clarification:**
- Changed from empty echo to descriptive message
- Clearly states this is a test package

Added link to tests/README.md for detailed pipeline test documentation,
making it discoverable from main README.

The original tests/README.md was more of a catalog than a guide. It
listed what tests existed but didn't explain how to create new ones or
why certain patterns were important.

Signed-off-by: Debasish Biswas <debasishbsws.dev@gmail.com>
The tests/ directory contains pipeline validation test packages, not
project code with test targets. Exclude it from PROJECT_DIRS to prevent
`make test-projects` from attempting to run `make -C tests test`.

Fixes CI error:
  make[1]: *** No rule to make target 'test'.  Stop.
  make: *** [test-project/tests] Error 2

Signed-off-by: Debasish Biswas <debasishbsws.dev@gmail.com>
Signed-off-by: Debasish Biswas <debasishbsws.dev@gmail.com>
Previously, header-check would exit 0 (success) when a package contained
zero headers, reporting '0/0 passed'. This created a false positive where
packages without any headers would incorrectly pass validation.

This fix adds a HEADERS_TESTED counter to track whether any headers were
actually tested. If zero headers are found, the script now exits with an
error message indicating no headers were found in the specified packages
or files.
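The fix amounts to a guard of roughly this shape in the final report (a simplified sketch; the function and variable names are illustrative, not the actual script's):

```shell
#!/bin/sh
# Simplified sketch: the report fails when zero headers were tested,
# instead of printing "0/0 passed" and exiting 0.
report_headers() {
  tested="$1"; passed="$2"
  if [ "$tested" -eq 0 ]; then
    echo "ERROR: no headers found in the specified packages or files"
    return 1
  fi
  echo "$passed/$tested passed"
}

report_headers 0 0 && echo "unexpected pass" || echo "correctly failed"
report_headers 5 5
```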

Why manual tests are required:

The header-check pipeline validator cannot be properly tested through
standard melange package builds because:

1. **Edge cases require synthetic packages**: Testing the validator's
   behavior with zero headers, headers in wrong paths, or partially
   invalid headers requires constructing artificial packages that
   intentionally violate normal packaging standards. Real packages
   would never be built this way.

2. **Testing the validator itself, not packages**: These tests verify
   the header-check tool behaves correctly when encountering malformed
   or edge-case packages. The goal is to ensure the validator catches
   problems, not to test legitimate package builds.

3. **Negative test cases**: Several test cases (no headers, wrong paths)
   are designed to fail. Standard package tests expect success, making
   them unsuitable for testing validator error handling.

4. **Controlled test environment**: Manual test packages allow precise
   control over package contents to trigger specific validator code paths
   that would be difficult or impossible to test otherwise.

The header-check-test.yaml file provides three critical test scenarios:
- Packages with zero headers (tests error detection)
- Headers in non-standard paths like /opt/include (tests path validation)
- Mix of valid and invalid headers (tests partial failure reporting)

These manual tests ensure the header-check validator correctly identifies
packaging issues before packages reach production.

Signed-off-by: Debasish Biswas <debasishbsws.dev@gmail.com>
@debasishbsws (Member, Author) commented:

Why manual tests are required:

  1. Edge cases require synthetic packages: Testing the validator's
    behavior with zero headers, headers in wrong paths, or partially
    invalid headers requires constructing artificial packages. Real packages
    would never be built this way.

  2. Testing the validator itself, not packages: These tests verify
    the header-check tool behaves correctly when encountering malformed
    or edge-case packages. The goal is to ensure the validator catches
    problems.

  3. Negative test cases: Several test cases (no headers, wrong paths)
    are designed to fail. Standard package tests expect success.

The header-check-test.yaml file provides three critical test scenarios:

  • Packages with zero headers (tests error detection; this exposed a bug we found while testing)
  • Headers in non-standard paths like /opt/include (tests path validation)
  • Mix of valid and invalid headers (tests partial failure reporting)

While testing, I found and fixed this bug in commit ca52d2a: fix(header-check): Fail when no headers found instead of false positive.
Previously, header-check would exit 0 (success) when a package contained
zero headers, reporting '0/0 passed'. This created a false positive where
packages without any headers would incorrectly pass validation.

This fix adds a HEADERS_TESTED counter to track whether any headers were
actually tested. If zero headers are found, the script now exits with an
error message indicating no headers were found in the specified packages
or files.

@debasishbsws (Member, Author) commented:

@aborrero, as this PR was created first for the manual tests described in the PR description, I am leaving it as it was and creating a new PR with the autogenerated test infrastructure.

Signed-off-by: Debasish Biswas <debasishbsws.dev@gmail.com>
@debasishbsws (Member, Author) commented:

@aborrero, the draft PR is #286. I will start working on that after this one gets merged.

@aborrero (Contributor) left a review comment:
LGTM.

@debasishbsws debasishbsws merged commit 7c54b2d into chainguard-dev:main Jan 28, 2026
5 checks passed
@debasishbsws debasishbsws deleted the add-test-for-tw branch January 28, 2026 09:38