
Integration Test Documentation and Skills #393

Merged
dhruv0811 merged 17 commits into main from convert-integration-tests-to-skill on Apr 2, 2026

Conversation

Contributor

@dhruv0811 dhruv0811 commented Mar 24, 2026

Summary

  • Add Claude Code skill for integration test authoring (.claude/skills/integration-tests.md)
  • Add integration test documentation (tests/README.md) covering all 6 test areas: VS, Genie, MCP, FMAPI, Lakebase, OBO
  • Add CLAUDE.md and AGENTS.md for AI agent repo context

Test plan

  • Skill triggers correctly via /integration-tests; used it to generate Lakebase autoscaling tests
  • tests/README.md renders on GitHub when opening tests/ directory

- Add .claude/skills/integration-tests.md with proper skill frontmatter
- Add principle #11: prefer adding tests to existing files over creating new ones
- Add anti-pattern #9: no unnecessary new test files
- Add to PR review checklist
- Update repo structure to reflect consolidated test files
  (test_memory_session.py now covers provisioned + autoscaling)

Replaces the old .claude/commands/integration-tests.md command.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@dhruv0811 dhruv0811 changed the title from "Convert integration-tests command to Claude skill" to "Integration Test Documentation and Skills" on Mar 24, 2026
Comment thread AGENTS.md
Comment thread .claude/skills/integration-tests.md
- Add app-templates CUJ motivation to tests/README.md
- Add customer regression examples (PRs #269, #333) to FMAPI section
- Clarify OBO app lifecycle managed by CI runner
- Add ty check to AGENTS.md development section
- Add CI trigger instructions to integration-tests skill

Co-authored-by: Isaac
Existing gates: `RUN_VS_INTEGRATION_TESTS`, `RUN_GENIE_INTEGRATION_TESTS`, `RUN_MCP_INTEGRATION_TESTS`, `LAKEBASE_INTEGRATION_TESTS`.
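A minimal sketch of how such env-var gates might be read, assuming the common "truthy string" convention; the repo's actual conftest logic may differ, and `gate_enabled` is an illustrative name, not a helper from the repo:

```python
import os

# The four gate names listed above; only the names are taken from the PR.
GATES = (
    "RUN_VS_INTEGRATION_TESTS",
    "RUN_GENIE_INTEGRATION_TESTS",
    "RUN_MCP_INTEGRATION_TESTS",
    "LAKEBASE_INTEGRATION_TESTS",
)

def gate_enabled(name: str) -> bool:
    """Treat common truthy spellings of the env var as 'gate on'."""
    return os.environ.get(name, "").strip().lower() in {"1", "true", "yes"}

for gate in GATES:
    state = "enabled" if gate_enabled(gate) else "skipped"
    print(f"{gate}: {state}")
```

With none of the variables set, every gate reports "skipped", which is the safe default for CI forks without secrets.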

### CI Runner
- Tests run in `databricks-eng/ai-oss-integration-tests-runner` (private repo)
Contributor Author


Is it okay for us to mention the repo name here? Or is this a security concern? @annzhang-db

Contributor

@jennsun jennsun left a comment


Mostly looks good - had some suggestions for improvements we could make to reduce the additional maintenance burden (I see value in having the repo structure listed, so I wonder if we can highlight that the LLM assistant should update the relevant sections after adding new tests).

Also, we should add some coverage to make sure tests for extra package dependencies are taken care of.

Comment thread AGENTS.md Outdated
Comment thread .claude/skills/integration-tests.md Outdated
Comment thread .claude/skills/integration-tests.md Outdated
Comment thread .claude/skills/integration-tests.md
Comment thread .claude/skills/integration-tests.md Outdated
Comment thread .claude/skills/integration-tests.md
2. **Check existing tests**: Search for existing integration tests that cover the affected code paths.
3. **Identify gaps**: Determine what new tests are needed or what existing tests need updating.
4. **Write tests**: Follow the patterns and principles below exactly.
5. **Run linting**: Run `ruff check` and `ruff format` on any files you create or modify.
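The steps above can be sketched as a gated test skeleton. This is a hypothetical example, not a test from the repo: the gate name comes from the skill's gate list, but the module name, test body, and skip message are illustrative.

```python
import os

import pytest

# Skip the whole module unless the Genie gate is explicitly enabled,
# so plain `pytest --collect-only` still succeeds without secrets.
pytestmark = pytest.mark.skipif(
    os.environ.get("RUN_GENIE_INTEGRATION_TESTS") != "true",
    reason="Genie integration tests gated off; set RUN_GENIE_INTEGRATION_TESTS=true",
)

def test_genie_space_round_trip():
    # Placeholder assertion; a real test would exercise the Genie client here.
    assert True
```

Because the gate is a module-level `pytestmark` rather than an import-time check, collection never fails on machines without the integration credentials.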
Contributor


Can we add a step after linting like "Run the new tests locally to verify they pass before pushing"?

And maybe a pushing step after that, to make it clear we should run tests locally first - I see we have the "Verify locally (optional)" section, but it's not required in the workflow.

Contributor Author


The "verify locally" step actually only verifies that test collection works with the skip gate. It's a little more complex to run the tests locally, since we would need the integration-test SP credentials, along with any other relevant secrets from the GitHub runner.

Comment thread tests/README.md Outdated
Comment thread tests/README.md
Comment thread AGENTS.md Outdated
- Replace sed with cross-platform uv run python one-liner
- Remove Ann's testimonial quote from skill
- Replace file listing with high-level directory pointers in skill
- Fix "no new files" example to avoid citing existing integrations
- Add pytest.importorskip anti-pattern for extra dependencies
- Note UI trigger instructions are for human users
- Fix Lakebase description wording (specified via, not dedicated)
- Add maintenance note to directory structure in README
- Simplify AGENTS.md integration test section, point to README
- Use uv run python -m pytest consistently

Co-authored-by: Isaac
@dhruv0811 dhruv0811 requested a review from jennsun March 26, 2026 19:04
Comment thread .claude/skills/integration-tests.md Outdated
Comment thread .claude/skills/integration-tests.md Outdated
@annzhang-db
Contributor

Also, you can push an empty commit (e.g. `git commit --allow-empty -m "retrigger CI"`) to re-trigger your tests.

Contributor

@jennsun jennsun left a comment


LGTM - thanks for including coverage on testing for extra dependency edge cases

@dhruv0811 dhruv0811 merged commit adfe003 into main Apr 2, 2026
46 checks passed
@dhruv0811 dhruv0811 deleted the convert-integration-tests-to-skill branch April 2, 2026 17:16

3 participants