Conversation
drj11 left a comment
Sure, it seems reasonable for now.
Really it seems to me that these tests should be separated so that the test framework can "see" them individually, and can run and report on each one separately.
Possibly using parameterized tests: https://docs.pytest.org/en/stable/parametrize.html#basic-pytest-generate-tests-example
Tests that can't succeed should be skipped: https://docs.pytest.org/en/stable/skipping.html
I'm not suggesting you do that in this PR; it's another PR. But note that doing either of the things I suggest above (see the sketch below) involves really pinning down which test framework you want to go forward with.
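A minimal sketch of the parametrize-and-skip approach suggested above. The kernel names and the stand-in check function here are hypothetical, not the project's actual API:

```python
import pytest


def check_kernel_gradient_functions(kernel_name):
    # Hypothetical stand-in for the project's real gradient checker.
    assert kernel_name in {"rbf", "matern32", "linear"}


# Each kernel becomes its own test case, so the framework runs and
# reports a separate pass/fail for each one.
@pytest.mark.parametrize("kernel_name", ["rbf", "matern32", "linear"])
def test_kernel_gradients(kernel_name):
    check_kernel_gradient_functions(kernel_name)


# A test that cannot succeed yet is skipped rather than left to fail.
@pytest.mark.skip(reason="gradients not implemented for this kernel yet")
def test_periodic_kernel_gradients():
    check_kernel_gradient_functions("periodic")
```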
Thanks @drj11 - I completely agree, it's (hopefully) an improvement, but not ideal. If the resources are available, I'd favour moving from …
Kernel tests are executed against each kernel using the `check_kernel_gradient_functions` function, which itself calls and executes a number of tests. Currently, if one of these tests fails or errors, the subsequent tests do not run. This PR modifies the code so that if any one test fails or errors, a failure is reported but the remaining tests still run. This means a developer can fix the problems causing test failures in any order they like.
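A minimal sketch of the failure-collecting pattern this PR describes; the `checks` list and the kernel argument are hypothetical illustrations, not the project's actual signature:

```python
import traceback


def check_kernel_gradient_functions(kernel, checks):
    # Run every check even if an earlier one fails or errors,
    # collecting problems instead of stopping at the first one.
    failures = []
    for check in checks:
        try:
            check(kernel)
        except Exception:
            failures.append((check.__name__, traceback.format_exc()))
    # Report all collected failures together at the end, so a
    # developer can fix them in any order.
    if failures:
        details = "\n".join(f"{name}:\n{tb}" for name, tb in failures)
        raise AssertionError(f"{len(failures)} check(s) failed:\n{details}")
```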