trunner: Implementing pytest support #445
base: master
Conversation
Summary of Changes
Hello @lukkrusz, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request significantly extends the testing capabilities of the trunner framework by integrating pytest. It provides a dedicated pytest harness that allows users to define and execute tests using the popular pytest framework, with results seamlessly reported back to the TestRunner. The changes include new plugins to manage pytest's output and map its test outcomes to the framework's subtest structure, alongside a new TestType enum for improved test configuration and validation.
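For readers who haven't used pytest's plugin API, the sketch below shows the general shape of such a bridge: an object passed to pytest.main() whose hook methods record one outcome per test case. It is only an illustration of the mechanism, not the code from this PR; ResultCollector and the "tests/" path are made-up names.

```python
# Illustrative sketch only - not the PR's implementation.
import pytest


class ResultCollector:
    """Records one outcome per test case, keyed by pytest node id."""

    def __init__(self):
        self.outcomes = {}  # nodeid -> "passed" / "failed" / "skipped"

    def pytest_runtest_logreport(self, report):
        # The "call" phase carries the main result; setup failures and skips
        # never reach "call", so record those from the setup phase instead.
        if report.when == "call" or (report.when == "setup" and report.outcome != "passed"):
            self.outcomes[report.nodeid] = report.outcome


if __name__ == "__main__":
    collector = ResultCollector()
    exit_code = pytest.main(["-q", "tests/"], plugins=[collector])
    print(exit_code, collector.outcomes)
```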
Code Review
This pull request introduces pytest support, which is a great addition to the testing framework. The implementation is solid, with good use of pytest plugins to bridge with the existing TestRunner and to control output. The new TestType enum improves the configuration parsing by making it more robust and readable.
I've left a few comments with suggestions for improvement:
- In the sample test file, there's a fragile test that depends on execution order. I've suggested how to make it more robust.
- The error message for unknown test types in the configuration could be more descriptive.
- Most importantly, for failed pytest tests, the failure details were not being captured. I've suggested a change to include these details in the test report, which will be very helpful for debugging.
- There is some code duplication in the sample conftest.py file which could be refactored.
mateusz-bloch left a comment
Only a quick look.
I'm not sure what the expected behaviour was, but wouldn't it be better to parse specific failures per test case? For example, if there were hundreds of test cases, this would be more readable, and we would still have the full output available via -S. @damianloew
trunner/harness/pytest.py (outdated)
    result.status = Status.FAIL
elif all(sub.status == Status.SKIP for sub in result.subresults) and result.subresults:
    result.status = Status.SKIP
elif not result.subresults and exit_code != 0:
While I'm on it: in a case when the teardown fails, do we want to treat the entire case as FAIL, or just use the ultimate test result?
One could argue that the test case passed and it was only a post-test operation that failed.
As it is now, in case of such a failure the full pytest trace is printed, as in the attached screenshot.
The decision has been made to treat a setup/teardown ERROR as a FAIL of the entire test case.
elif not result.subresults and exit_code != 0:
    result.status = Status.FAIL
    result.msg = f"Pytest execution failed (Exit Code {exit_code})."
else:
From what I can see, xpass is currently taken as OK. Are we sure this is what we want? We might be missing information that could be useful
@damianloew, this is implemented based on the expected output displayed in the issue description. Would we want to display a message next to such a case (if there is one)?
e.g. test_always_xpass: OK [Always passes but it should not]
Technically, an xpass passes the test, albeit unexpectedly; however, we could invert the logic and treat xpass as a failure if we reach such a consensus.
Both approaches have pros and cons - I think we can stick to treating xpass (unexpected pass) as OK, so CI is not stopped when an issue marked with mark.xfail has since been resolved.
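For context, this is how xfail/xpass arise in plain pytest (an illustrative snippet, not part of this PR); note that pytest's own @pytest.mark.xfail(strict=True) (or xfail_strict in the ini file) already turns an unexpected pass into a failure, should the consensus ever change.

```python
import pytest


@pytest.mark.xfail(reason="known bug, tracked separately")
def test_known_bug():
    assert 1 + 1 == 3  # fails -> reported as XFAIL (expected failure)


@pytest.mark.xfail(reason="expected to fail, but the bug got fixed")
def test_fixed_bug():
    assert 1 + 1 == 2  # passes -> reported as XPASS (unexpected pass), run still succeeds
```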
trunner/harness/pytest.py (outdated)
    *options,
    "-v",
    "-s",
    "--tb=no",
Do we need to disable the traceback?
Traceback suppression can shorten the logs, thus improving readability when we hit many errors during a test run, but I could change it back to the default and support a kwarg to set it in test yamls. What do you think, @damianloew?
I agree that in case of failure we should provide the traceback output next to the result of the particular test case. In the current approach there can be a situation where the error message is not sufficient - when I simply removed the dut fixture, there was no information about that:
[screenshot]
Referring to @mateusz-bloch's comment about parsing specific failures per test case - I agree with that. We should provide a solution where we only have the message about the failure, not the whole pytest output, when a test fails. Right now, when there is at least one failure, the output looks very similar to running with the streaming option.
Our Python tests in the phoenix-rtos-tests repo are a bit outdated, but after quickly rewriting one test case to the new approach with subresults, the behaviour in case of failure would look like this:
[screenshot]
We want similar behaviour in pytest tests - a traceback/longer error message printed next to the test case result (without the whole streaming output when that option is not set). When the streaming option is active, I see no problem with showing the traceback twice - once in the streaming output and then next to the results.
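One possible shape of that behaviour, sketched under the assumption that outcomes are collected in a log-report hook (the class and attribute names below are illustrative, not the PR's code): keep the console output short (e.g. --tb=no) but still store the formatted traceback per failing case so it can be printed next to the result line, while the full stream stays behind -S.

```python
import pytest


class FailureDetailsPlugin:
    """Sketch: remember the formatted failure text for each failing test case."""

    def __init__(self):
        self.messages = {}  # nodeid -> formatted traceback / error text

    def pytest_runtest_logreport(self, report):
        if report.failed:
            # longreprtext is the string form of the failure for this phase
            # (setup/call/teardown), so broken fixtures are covered as well
            self.messages[report.nodeid] = report.longreprtext
```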
Sorry about the misleading screenshot - of course the overall result should be FAIL.
damianloew left a comment
First suggestions, just to get the changes in order.
  test_path = ctx.project_path / kwargs["path"]
- options = kwargs.get("options", "").split()
+ options = shlex.split(kwargs.get("options", ""))
A few words of explanation of why such a change is needed should be placed in the commit description.
I initially thought it was in the existing trunner code - if it's included in your new harness, then it should be in one commit and I don't see the need to explain it.
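For reference, the practical difference between the two calls (standard-library behaviour, illustrative option string):

```python
import shlex

opts = '-k "smoke and not slow" --timeout=30'
print(opts.split())       # ['-k', '"smoke', 'and', 'not', 'slow"', '--timeout=30']
print(shlex.split(opts))  # ['-k', 'smoke and not slow', '--timeout=30']
```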
trunner/harness/pytest.py (outdated)
def pytest_harness(dut: Dut, ctx: TestContext, result: TestResult, **kwargs) -> Optional[TestResult]:
    test_re = r"::(?P<name>[^\x1b]+?) (?P<status>PASSED|SKIPPED|FAILED|XFAIL|XPASS|ERROR)"
I don't see the reason for pushing the old changes that you modified in later commits. We try to keep the commit history minimal and rather avoid adding something and then removing it within the scope of one PR.
We stick to this approach because it makes the history simpler to read - showing only the crucial changes.
Commits like trunner: replace split with schlex.split or trunner: fix typo in target/base.py doc are good examples and should be separated as you did, but commits like sample/pytest: implement code review suggestions shouldn't be visible in git history. When you have suggestions, you should edit the particular commits via interactive rebase.
Of course, adding some basic changes in the first commits and then expanding them/adding new utilities in the next ones is fine.
Additionally, use the imperative mood as suggested in the golden rules.
JIRA: CI-614
…unction-scope fixture JIRA: CI-614
Adding TestType will improve code's readability JIRA: CI-614
force-pushed from 9c3ab8d to 10a1be0
Makes the delay more controllable (like changing to 0). Allows for better verbosity control of test related output. JIRA: CI-614
JIRA: CI-614
fix linter warning, fix improper use of 'specname' JIRA: CI-614
Using shlex.split() is generally safer for parsing command-line arguments as it correctly handles quoted strings. JIRA: CI-614
force-pushed from 10a1be0 to 2ead454
The previous xpass case had an incorrect reason and the new changes clarify the use. JIRA: CI-614
Major changes to pytest output handling, following a PR discussion JIRA: CI-614
By default, it is better for multiline message to start from a new line JIRA: CI-614
force-pushed from 2ead454 to 75b6ffa
nalajcie left a comment
Pointing out some issues in the current code. Please note that the commit list is too verbose, the final history should not include fixes to your own code (added within the same PR) - I would expect 2-3 commits in the final history.
Adding separate commits after review might be beneficial for tracking review progress, but they should be squashed sooner rather than later.
trunner/harness/pytest.py (outdated)
def get_logs(self) -> str:
    """Get the full, unsuppressed pytest log
    """
don't use a multiline docstring if not needed
trunner/harness/pytest.py (outdated)
def get_logs(self) -> str:
    """Get the full, unsuppressed pytest log
    """
    return self.buffer.getvalue()
does it work if we're streaming? (we're attaching the buffer only if _suppress == true)
It's indeed dead functionality.
We are removing the log retention, since the bits important to us are parsed into trunner anyway.
I will also make use of trunner logging for this purpose and modify the pytest output to fit our needs.
ok, the test stdout will still be caught by DUT wrapping? (I'm wondering what will finally be available in JUnit XML output)
I'm not sure if I follow. JUnit XML will contain testsuites, testcases, fail/skip messages followed by system-out traceback (in case of failure).
Each testcase result is being handled before the terminal output, internally.
The expected terminal output, as established with @damianloew, would be a mix of output from the DUT and test stdout; it would look like:
/sample/test/pytest.py::test_send_hello...
[DUT] Hello
/sample/test/pytest.py::test_send_hello PASS
trunner/harness/pytest.py (outdated)
self._result = result
self._test_outcomes = {}
self._broken_fixtures = set()
self._unknown_fixture_str = "[UNKNOWN FIXTURE]"
Seems like a class constant, not an instance attribute? Applies elsewhere too.
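I.e. the usual pattern would be something like (a sketch of applying the suggestion, not the final code):

```python
class PytestBridgePlugin:
    # class-level constant: shared by all instances, no per-instance assignment needed
    UNKNOWN_FIXTURE_STR = "[UNKNOWN FIXTURE]"
```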
trunner/harness/pytest.py (outdated)
else:
    msg = reason

self._test_outcomes[report.nodeid] = {"status": status, "msg": error_msg if report.failed else msg}
just set msg = error_msg in if report.failed:
trunner/harness/pytest.py (outdated)
fixture_name = self._extract_broken_fixture(error_msg)
self._test_outcomes[report.nodeid] = {
    "status": Status.FAIL,
    "msg": f"{error_msg}",  # Newline for improved readability
what newline?
trunner/config.py (outdated)
self.test.harness = PyHarness(self.ctx.target.dut, self.ctx, unity_harness, self.test.kwargs)


def _parse_pytest(self):
    self.test.shell = None
why overwriting shell? If the shell must not be set, then the parser should probably fail when the key is present?
can't we have shell for pytest-based tests?
We established that for now we keep the assumption that all pytest tests won't use the shell. The current pytest tests that we want to introduce into the trunner do not interact with the system shell, and the primary goal for now is to run them in the unified environment.
If we planned to write tests that interact with our system using pytest, we would extend this support. It would probably be a nice idea, but we want to carry out this process gradually.
Regarding failing - currently test.shell is always set: depending on the yaml config, ShellOptions is created with or without additional arguments (we always want to parse the prompt). I think we can keep this approach and just treat pytest tests separately. That's why we cannot raise a failure in such a case.
However, @lukkrusz, we should leave a comment noting that it's temporary - ultimately I'd require setting shell: False or something similar in test.yaml for tests which don't interact with a system shell.
When you make a design decision (like "interacting with the shell via pytest is not possible"), you need to write it down (at least in the code) - otherwise it looks like a bug / hack / unfinished implementation.
I feel that this is done only to make running tests on host-generic-pc easier; having a shell on other targets won't complicate running pytest-based tests at all, so this assumption can probably be removed in the future (I might be wrong, as I've only read the code).
I would appreciate adding at least one real test which actually interacts with the DUT in CI, as I'm not sure what the intended use case is.
I see that @lukkrusz left the following comment elsewhere:
# Specific case where we do not need the shell command
# TODO: In the future, we could support shell communication
However, we can also clarify that in the commit description and maybe express it in a better way.
The main reason for this support now is to provide the possibility to write or run the existing functional tests using pytest in our internal project, where we don't interact with a system console - so we are able to run them even if a port with this console is not available (release fw).
You've quoted the Host test runner implementation, so I consider it a limitation of this target, not an overall requirement. Overwriting self.test.shell = None while parsing a config (generic code) without any comment is a bad code pattern.
So we want to accept both the scenario where we don't have access to the system console - an empty DUT like on the host target - and the situation where a DUT device is available and we can parse some additional information there.
We will try to show it somehow in a test.
We discussed a new approach and I hope that setting shell to None and changing Host target won't be needed then. Thanks for the suggestions!
trunner/target/host.py (outdated)
builder.add(fail)
return builder.get_harness()
if test.type is not TestType.PYTEST:
    # Specific case where we do not need the shell command
The comment is invalid with regard to the if - here you have all the other cases.
It should probably be restructured to if pytest: [...] else: fail.
Still, I don't like hardcoding the test type here, but it's a host runner so we can keep it.
sample/pytest/test.yaml (outdated)
- name: pytest
  type: pytest
  kwargs:
    path: phoenix-rtos-tests/sample/pytest/test-pytest.py
IMHO this should be relative to the test.yaml, not to the CWD we expect the end user to run trunner from.
Maybe it should be a custom field for pytest test type (handled in a special way when parsing the config)?
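A possible shape of that resolution, assuming the config parser knows which test.yaml a test came from (the function and parameter names here are illustrative, not trunner's actual API):

```python
from pathlib import Path


def resolve_pytest_path(yaml_path: Path, raw_path: str) -> Path:
    """Interpret a relative 'path' against the directory containing test.yaml."""
    candidate = Path(raw_path)
    return candidate if candidate.is_absolute() else (yaml_path.parent / candidate).resolve()
```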
sample/pytest/conftest.py (outdated)
print(text)


class FakeDUT:
I'm not sure what we're trying to do here; this class doesn't follow the DUT interface, and IMHO naming it DUT is misleading. Since in real tests you should always use the DUT provided by the test runner, this is a bad example.
I don't know what the real-world examples are of objects we want to keep for the full session (the test cases should not depend on each other or on test case order) - maybe something regarding optimization, e.g. a long-standing protocol connection with some authorization done on first usage (e.g. ssh, where login can take "some time" and you don't want to do it in every test case because of the delays)?
Or maybe we would like to run some executable for the duration of the tests? (but this case should probably be handled by trunner if the need arises)
I see your point regarding the misleading name. The idea was to simulate a communication between a test and a device. I can change the name to avoid further confusion.
I also think that we can keep the device alive for the entire session without each test being dependent on one another. This way, we eliminate the need to re-initialize the connection/device for each case, thus potentially reducing the test runtime. It could also be a good way to test the device's longevity.
Implementing a private DUT for basic communication is an anti-pattern in pytest examples (you should always use the DUT fixture provided by trunner); show another possible use case, as described above.
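A hedged example of the long-lived-resource use case mentioned above: a session-scoped fixture that pays the connection/authorization cost once and is torn down after the last test (Connection is a stand-in class, not part of trunner):

```python
import pytest


class Connection:
    """Stand-in for an expensive resource, e.g. an authenticated SSH session."""

    def __init__(self):
        self.opened = True  # imagine a slow login happening here

    def close(self):
        self.opened = False


@pytest.fixture(scope="session")
def connection():
    conn = Connection()  # created once per test session
    yield conn
    conn.close()         # torn down after the last test that used it


def test_uses_connection(connection):
    assert connection.opened
```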
trunner/types.py (outdated)
# multi-line message that does not exceed width limit
elif remaining_lines:
-   out += ": " + first_line + "\n" + "\n".join(
+   out += ":\n" + "\n".join(
Why this change? I'm not sure it's within the scope of this PR, but it can probably be done here.
This used to be a lot simpler (see PR #385); maybe just don't align it at all and always use a \t\t prefix (now we would have different indentation based on some cryptic logic :p).
Yes, it can be done as a separate PR for a subtask of this issue instead.
We decided earlier that the double-tab indentation should be kept for output exceeding 120 characters in length, and that shift_len indentation should be used otherwise.
It was also established that we want the multi-line message to start from a new line every time.
overcomplicated code for minor visual difference
JIRA: CI-614
JIRA: CI-614
sample/pytest/main.c (outdated)
int process_command(char *input) {
	if (strlen(input) == 0) return 1;
	else if (strcmp(input, "EXIT") == 0) {
		return 0;
	} else if (strcmp(input, "ping") == 0) {
		printf("[Ping Complete]\n");
	} else printf("%s [OK]\n", input);
	return 1;
[clang-format-pr] reported by reviewdog 🐶
suggested fix:
int process_command(char *input)
{
	if (strlen(input) == 0)
		return 1;
	else if (strcmp(input, "EXIT") == 0) {
		return 0;
	}
	else if (strcmp(input, "ping") == 0) {
		printf("[Ping Complete]\n");
	}
	else
		printf("%s [OK]\n", input);
	return 1;
sample/pytest/main.c (outdated)
	return 1;
}


int main(int argc, char** argv)
[clang-format-pr] reported by reviewdog 🐶
suggested fix:
- int main(int argc, char** argv)
+ int main(int argc, char **argv)
sample/pytest/main.c (outdated)
	char c = '\0';
	char buffer[BUF_SZ];
	int pos = 0;
	int ret_code = 1;
[clang-format-pr] reported by reviewdog 🐶
suggested fix:
	char c = '\0';
	char buffer[BUF_SZ];
	int pos = 0;
	int ret_code = 1;
sample/pytest/main.c (outdated)
	int pos = 0;
	int ret_code = 1;

	printf("[Commence Fake Communication]\n");
[clang-format-pr] reported by reviewdog 🐶
suggested fix:
	printf("[Commence Fake Communication]\n");
sample/pytest/main.c (outdated)

	printf("[Commence Fake Communication]\n");

	memset(buffer, 0, BUF_SZ);
[clang-format-pr] reported by reviewdog 🐶
suggested fix:
	memset(buffer, 0, BUF_SZ);
sample/pytest/main.c (outdated)
	// Run the loop until `EXIT`
	while (ret_code) {
		if (read(STDIN_FILENO, &c, 1) != 1)
			return -1;
[clang-format-pr] reported by reviewdog 🐶
suggested fix:
	// Run the loop until `EXIT`
	while (ret_code) {
		if (read(STDIN_FILENO, &c, 1) != 1)
			return -1;
sample/pytest/main.c (outdated)
		if (c == '\n') {
			buffer[pos] = '\0';
			ret_code = process_command(buffer);
			memset(buffer, 0, pos);
			pos = 0;
		} else if (pos < BUF_SZ -1) {
			buffer[pos++] = c;
		} else {
			printf("[Failure!]\n");
			return -1;
		}
	}
[clang-format-pr] reported by reviewdog 🐶
suggested fix:
		if (c == '\n') {
			buffer[pos] = '\0';
			ret_code = process_command(buffer);
			memset(buffer, 0, pos);
			pos = 0;
		}
		else if (pos < BUF_SZ - 1) {
			buffer[pos++] = c;
		}
		else {
			printf("[Failure!]\n");
			return -1;
		}
	}
sample/pytest/main.c (outdated)
	printf("[Success!]\n");
	return 0;
[clang-format-pr] reported by reviewdog 🐶
suggested fix:
	printf("[Success!]\n");
	return 0;
damianloew left a comment
Suggestions for the code before the recent force push. I haven't seen much unnecessary code, which is good - changes are quite clear for now
trunner/types.py (outdated)
return NotImplemented


class TestType(str, Enum):
I don't remember whether it was mentioned somewhere else, but that's a change that should be added in a separate PR. Anyway, I don't see any issues with this new approach.
trunner/harness/pytest.py (outdated)
def pytest_harness(dut: Dut, ctx: TestContext, result: TestResult, **kwargs) -> TestResult:
    test_path = ctx.project_path / kwargs.get("path")
    options = kwargs.get("options", "").split()
It seems that we won't want to change pytest options per test (all of the command-line options are rather for configuring the whole pytest campaign), so we can probably remove that.
trunner/harness/pytest.py (outdated)
class PytestBridgePlugin:
    """Plugin that bridges the TestRunner and PyTest frameworks.

    Each unit test is run as a TestRunner's subtest, accurately
suggested change:
- Each unit test is run as a TestRunner's subtest, accurately
+ Each test case is run as a TestRunner's subtest, accurately
The "unit test" expression may be misleading.
trunner/harness/pytest.py (outdated)
@pytest.hookimpl(tryfirst=True)
def pytest_runtest_setup(self, item):
    for fixture in item.fixturenames:
        # If a fixture failed previously during setup, fail immediately
All the hooks are described using docstrings, let's move this info there.
trunner/harness/pytest.py (outdated)
if isinstance(report.longrepr, tuple):
    reason = report.longrepr[2].replace("Skipped: ", "")
elif hasattr(report, "wasxfail"):
    reason = report.wasxfail.replace("reason: ", "")
Probably unnecessary, as report.wasxfail just stores what we passed in the reason argument.
trunner/harness/pytest.py (outdated)
try:
    exit_code = pytest.main(cmd_args, plugins=[bridge_plugin, log_plugin])
except Exception as e:
Let's handle the various exception types separately - since you provide dut to be used in pytest tests, you have to be ready for failures typical for pexpect, similarly to pyharness (see the sketch below):
- UnicodeDecodeError - should be handled
- AssertionError - seems to be fine (pytest prints the traceback)
- EOF or TIMEOUT from pexpect - I am not sure how it behaves now, to be verified
- HarnessError - probably won't be raised from pytest, so we could skip that
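A sketch of how those cases could be separated (it assumes pexpect is used for DUT communication; result.fail_harness_exception comes from the diff above, while fail_decode is a hypothetical helper used only for illustration):

```python
import pexpect
import pytest


def run_pytest(cmd_args, plugins, result):
    """Sketch only: map typical failure modes to different result handlers."""
    try:
        return pytest.main(cmd_args, plugins=plugins)
    except UnicodeDecodeError as exc:
        result.fail_decode(exc)             # hypothetical helper, named for illustration
    except (pexpect.EOF, pexpect.TIMEOUT) as exc:
        result.fail_harness_exception(exc)  # pexpect-level communication failure
    except Exception as exc:
        result.fail_harness_exception(exc)  # truly unknown failure
    return None
```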
trunner/harness/pytest.py (outdated)
try:
    exit_code = pytest.main(cmd_args, plugins=[bridge_plugin, log_plugin])
except Exception as e:
    result.fail_harness_exception(e)
Rather unknown exception.
JIRA: CI-614
JIRA: CI-614
sample/pytest/main.c (outdated)
#define BUF_SZ 256

int process_command(char *input)
[clang-format-pr] reported by reviewdog 🐶
suggested fix:
int process_command(char *input)
sample/pytest/main.c (outdated)
	if (strcmp(input, "EXIT") == 0)
		return 0;
	else if (strcmp(input, "ping") == 0)
		printf("[Ping Complete]\n");
	else if (strlen(input) > 0)
		printf("%s [OK]\n", input);
	return 1;
[clang-format-pr] reported by reviewdog 🐶
suggested fix:
	if (strcmp(input, "EXIT") == 0)
		return 0;
	else if (strcmp(input, "ping") == 0)
		printf("[Ping Complete]\n");
	else if (strlen(input) > 0)
		printf("%s [OK]\n", input);
	return 1;
JIRA: CI-614
JIRA: CI-614
JIRA: CI-614
JIRA: CI-614
JIRA: CI-614

PyTest support implementation for our existing testing framework.
Description
The implementation includes
Motivation and Context
Types of changes
How Has This Been Tested?
Checklist:
Special treatment
Previous conversation: #443