Integration Test Documentation and Skills #393
Conversation
- Add `.claude/skills/integration-tests.md` with proper skill frontmatter
- Add principle #11: prefer adding tests to existing files over creating new ones
- Add anti-pattern #9: no unnecessary new test files
- Add to PR review checklist
- Update repo structure to reflect consolidated test files (`test_memory_session.py` now covers provisioned + autoscaling)

Replaces the old `.claude/commands/integration-tests.md` command.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
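Principle #11 (prefer adding tests to existing files) can be sketched like this; the test name and body are hypothetical, standing in for the real contents of `test_memory_session.py`:

```python
import pytest


# Rather than adding a new test_memory_session_autoscaling.py file,
# extend the existing test module by parametrizing over capacity modes.
@pytest.mark.parametrize("capacity_mode", ["provisioned", "autoscaling"])
def test_memory_session_round_trip(capacity_mode: str) -> None:
    # Hypothetical body; the real test talks to a live Lakebase instance.
    assert capacity_mode in {"provisioned", "autoscaling"}
```

One parametrized test covers both instance types, so the consolidated file stays the single home for memory-session coverage.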
Co-authored-by: Isaac
> Existing gates: `RUN_VS_INTEGRATION_TESTS`, `RUN_GENIE_INTEGRATION_TESTS`, `RUN_MCP_INTEGRATION_TESTS`, `LAKEBASE_INTEGRATION_TESTS`.
>
> ### CI Runner
> - Tests run in `databricks-eng/ai-oss-integration-tests-runner` (private repo)
Is it okay for us to mention the repo name here, or is it a security concern? @annzhang-db
jennsun
left a comment
Mostly looks good. I had some suggestions to reduce the additional maintenance burden. I see value in having the repo structure listed, so I wonder if we can highlight that the LLM assistant should update the relevant sections after adding new tests.

We should also add some coverage to make sure tests for extra package dependencies are taken care of.
> 2. **Check existing tests**: Search for existing integration tests that cover the affected code paths.
> 3. **Identify gaps**: Determine what new tests are needed or what existing tests need updating.
> 4. **Write tests**: Follow the patterns and principles below exactly.
> 5. **Run linting**: Run `ruff check` and `ruff format` on any files you create or modify.
Can we add a step after linting like "Run the new tests locally to verify they pass before pushing", and maybe a pushing step after that? That would make it clear we should run the tests locally first. I see we have the "Verify locally (optional)" section, but it's not a required step in the workflow.
The "Verify locally" step actually only verifies that test collection works with the skip gate. Running the tests locally is more involved, since we would need the integration-test SP credentials available locally, along with any other relevant secrets from the GitHub runner.
- Replace sed with a cross-platform `uv run python` one-liner
- Remove Ann's testimonial quote from skill
- Replace file listing with high-level directory pointers in skill
- Fix "no new files" example to avoid citing existing integrations
- Add `pytest.importorskip` anti-pattern for extra dependencies
- Note UI trigger instructions are for human users
- Fix Lakebase description wording (specified via, not dedicated)
- Add maintenance note to directory structure in README
- Simplify AGENTS.md integration test section, point to README
- Use `uv run python -m pytest` consistently

Co-authored-by: Isaac
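The `pytest.importorskip` anti-pattern mentioned above is presumably about silent skips: if an extra dependency fails to install, every test in the module quietly skips instead of failing. A minimal illustration (the function and module names are deliberate placeholders):

```python
import pytest


def shows_silent_skip(module_name: str) -> bool:
    """Demonstrate how importorskip hides a missing dependency.

    At module scope, ``pytest.importorskip(...)`` raises pytest's
    Skipped exception when the import fails, so the whole test module
    is skipped -- CI stays green even if the extra was never installed.
    """
    try:
        pytest.importorskip(module_name)
    except pytest.skip.Exception:
        return True  # the missing extra would be silently skipped
    return False
```

In a gated integration run where the extra must be present, importing the dependency directly (and letting an `ImportError` fail the run) surfaces broken extras instead of hiding them.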
Also, you can push an empty commit to re-trigger your tests.
…testing

Co-authored-by: Isaac
jennsun
left a comment
LGTM - thanks for including coverage of the extra-dependency edge cases in testing.
Summary

- Add skill (`.claude/skills/integration-tests.md`)
- Add `tests/README.md` covering all 6 test areas: VS, Genie, MCP, FMAPI, Lakebase, OBO
- Update `CLAUDE.md` and `AGENTS.md` for AI agent repo context

Test plan

- Ran `/integration-tests`, used it to generate Lakebase autoscaling tests
- Verified `tests/README.md` renders on GitHub when opening the `tests/` directory