
Conversation

@lukkrusz

@lukkrusz lukkrusz commented Jan 19, 2026

PyTest support implementation for our existing testing framework.

Description

The implementation includes:

  • pytest harness
  • pytest plugins:
    • plugin for IO Stream control
    • plugin bridging PyTest with our TestRunner (report)
  • sample pytest tests with conftest.py
  • TestType Enum for simpler and more controllable input validation (see the sketch below)
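
For illustration, a minimal sketch of what such a string-backed TestType enum could look like; the member set follows the types mentioned in this PR, while the from_str helper and its error message are assumptions rather than the actual implementation:

from enum import Enum


class TestType(str, Enum):
    # Member values are assumed; they mirror the test types discussed in this PR.
    HARNESS = "harness"
    UNITY = "unity"
    PYTEST = "pytest"

    @classmethod
    def from_str(cls, value: str) -> "TestType":
        # Hypothetical helper: validate the "type" field from test.yaml and
        # list the allowed values when the lookup fails.
        try:
            return cls(value.lower())
        except ValueError as exc:
            allowed = ", ".join(member.value for member in cls)
            raise ValueError(f"unknown test type {value!r}, expected one of: {allowed}") from exc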

Motivation and Context

Types of changes

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to change)
  • Chore (refactoring, style fixes, git/CI config, submodule management, no code logic changes)

How Has This Been Tested?

  • Already covered by automatic testing.
  • New test added: (add PR link here).
  • Tested by hand on: (list targets here).

Checklist:

  • My change requires a change to the documentation.
  • I have updated the documentation accordingly.
  • I have added tests to cover my changes.
  • All new and existing linter checks and tests passed.
  • My changes generate no new compilation warnings for any of the targets.

Special treatment

  • This PR needs additional PRs to work (#449).
  • I will merge this PR by myself when appropriate.

Previous conversation: #443

@gemini-code-assist

Summary of Changes

Hello @lukkrusz, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly extends the testing capabilities of the trunner framework by integrating Pytest. It provides a dedicated Pytest harness that allows users to define and execute tests using the popular Pytest framework, with results seamlessly reported back to the TestRunner. The changes include new plugins to manage Pytest's output and map its test outcomes to the framework's subtest structure, alongside a new TestType enum for improved test configuration and validation.

Highlights

  • Pytest Integration: Introduces full support for Pytest within the existing testing framework, enabling the use of Pytest for test execution.
  • Custom Pytest Plugins: Implements two key Pytest plugins: one for controlling and capturing I/O stream output, and another for bridging Pytest's detailed test reports with the framework's TestRunner subresult system.
  • TestType Enum: Adds a new TestType Enum to trunner/types.py for clearer and more robust input validation and configuration of different test types (e.g., HARNESS, PYTEST, UNITY).
  • Sample Pytest Tests: Includes example Pytest tests and a conftest.py file to demonstrate how to write and configure Pytest tests within the framework.


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces pytest support, which is a great addition to the testing framework. The implementation is solid, with good use of pytest plugins to bridge with the existing TestRunner and to control output. The new TestType enum improves the configuration parsing by making it more robust and readable.

I've left a few comments with suggestions for improvement:

  • In the sample test file, there's a fragile test that depends on execution order. I've suggested how to make it more robust.
  • The error message for unknown test types in the configuration could be more descriptive.
  • Most importantly, for failed pytest tests, the failure details were not being captured. I've suggested a change to include these details in the test report, which will be very helpful for debugging.
  • There is some code duplication in the sample conftest.py file which could be refactored.

@github-actions

github-actions bot commented Jan 19, 2026

Unit Test Results

9 725 tests (+263), 580 suites (+19), 1 file (±0)
9 096 ✅ passed (+223), 629 💤 skipped (+40), 0 ❌ failed (±0)
Duration: 50m 36s (-37s)

Results for commit 20229dc. ± Comparison against base commit bb76df5.

♻️ This comment has been updated with latest results.

Member

@mateusz-bloch mateusz-bloch left a comment


Just a quick look.

I'm not sure what the expected behaviour was, but wouldn't it be better to parse specific failures per test case? For example, if there were hundreds of test cases this would be more readable, and we would still have the full output available via -S. @damianloew

Image

result.status = Status.FAIL
elif all(sub.status == Status.SKIP for sub in result.subresults) and result.subresults:
result.status = Status.SKIP
elif not result.subresults and exit_code != 0:
Member

Hmmm, there is a case where a test may return a non-zero exit code and we would still treat it as OK. For example, when a cleanup fixture fails, pytest usually reports an ERROR. From what I can see, this is not being parsed either.

Example:

Image

Author

While I'm on it: when the teardown fails, do we want to treat the entire case as FAIL, or only the final test result?

One could argue that the test case itself passed and it was a post-test operation that failed.
As it stands, in the event of such a failure the full pytest trace is emitted, as shown in the provided screenshot.

Author

The decision has been made to treat setup/teardown ERROR as the entire test-case FAIL.
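
For reference, a minimal sketch of that decision, assuming trunner's Status enum as used in the quoted code (FAIL and SKIP appear there; the passing member name, the import path, and the helper itself are assumptions):

from trunner.types import Status  # import path assumed


def outcome_to_status(report) -> Status:
    # report is a pytest TestReport; setup/teardown ERRORs also arrive as failed reports
    if report.failed:
        return Status.FAIL
    if report.skipped:
        return Status.SKIP
    return Status.OK  # assumed name of the passing member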

elif not result.subresults and exit_code != 0:
result.status = Status.FAIL
result.msg = f"Pytest execution failed (Exit Code {exit_code})."
else:
Member

From what I can see, xpass is currently treated as OK. Are we sure this is what we want? We might be missing information that could be useful.

Author

@damianloew, this is implemented based on the expected output shown in the issue description. Would we want to display a message next to such a case (if there is one)?

e.g. test_always_xpass: OK [Always passes but it should not]

Technically, xpass means the test passed, albeit unexpectedly; however, we could invert the logic and treat xpass as a failure if we reach such a consensus.

Contributor

Both approaches have pros and cons - I think we can stick to treating xpass (unexpected pass) as OK, so that CI is not stopped once an issue we had marked with mark.xfail has been resolved.
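
For context, a minimal pytest example of the two outcomes discussed here (plain pytest.mark.xfail usage, not code from this PR):

import pytest


@pytest.mark.xfail(reason="tracked issue, expected to fail")
def test_expected_failure():
    assert 1 + 1 == 3  # fails as expected -> reported as XFAIL


@pytest.mark.xfail(reason="the underlying issue has since been fixed")
def test_unexpected_pass():
    assert 1 + 1 == 2  # passes despite the mark -> reported as XPASS, mapped to OK here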

*options,
"-v",
"-s",
"--tb=no",
Member

Do we need to disable the traceback?

Author

Suppressing the traceback can shorten the logs, which improves readability when we hit many errors during a test run, but I could change it back to the default and support a kwarg to set it in the test yamls. What do you think, @damianloew?
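
A possible shape of that kwarg, sketched under the assumption that the harness assembles the pytest command line as in the quoted snippet; the "tb" kwarg name is hypothetical:

def build_pytest_args(test_path, options, **kwargs):
    # Forward a traceback style from test.yaml instead of hardcoding "--tb=no";
    # accepted values would mirror pytest's own ("auto", "long", "short", "line", "native", "no").
    tb_style = kwargs.get("tb", "no")
    return [str(test_path), *options, "-v", "-s", f"--tb={tb_style}"]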

Contributor

I agree that in case of failure we should provide the traceback output next to the result of the particular test case. In the current approach there can be a situation where the error message is not sufficient - when I simply removed the dut fixture, there was no information about that:
Screenshot from 2026-01-20 11-33-52

Referring to @mateusz-bloch's comment about parsing specific failures per test case - I agree with that. We should provide a solution where, on failure, only the message about the failure is shown rather than the whole pytest output. Right now, with at least one failure, the output looks very similar to the streaming option.

Our python tests in the phoenix-rtos-tests repo are a bit outdated, but after quickly rewriting one test case to the new approach with subresults added, the behavior in case of failure would look like this:
Screenshot from 2026-01-20 12-07-18

We want similar behavior in pytest tests - the traceback/longer error message printed next to the test case result (without the whole streaming output when this option is not set). When the streaming option is active, I see no problem with showing the traceback twice - in the streaming output and then next to the results.

Contributor

Sorry about the misleading screenshot - of course the overall result should be FAIL.

Contributor

@damianloew damianloew left a comment


First suggestions, just to get the changes in order first.


test_path = ctx.project_path / kwargs["path"]
options = kwargs.get("options", "").split()
options = shlex.split(kwargs.get("options", ""))
Contributor

A few words of explanation of why such a change is needed would be good to include in the commit description.

Contributor

I initially thought this was in the existing trunner code - if it's included in your new harness, then it should be in one commit and I don't see the need to explain that.
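
For reference, the difference the shlex.split change addresses, visible as soon as an option value contains quotes:

import shlex

opts = '-k "smoke and not slow"'
print(opts.split())       # ['-k', '"smoke', 'and', 'not', 'slow"']  <- quoted expression broken apart
print(shlex.split(opts))  # ['-k', 'smoke and not slow']             <- quoting handled correctly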



def pytest_harness(dut: Dut, ctx: TestContext, result: TestResult, **kwargs) -> Optional[TestResult]:
test_re = r"::(?P<name>[^\x1b]+?) (?P<status>PASSED|SKIPPED|FAILED|XFAIL|XPASS|ERROR)"
Contributor

I don't see the reason for pushing the old changes that you modified in later commits. We try to keep the commit history minimal and rather avoid adding something and then removing it within the scope of one PR.

We stick to such an approach because it makes the history simpler to read - showing only the crucial changes.

Commits like trunner: replace split with shlex.split or trunner: fix typo in target/base.py doc are good examples and should be separated as you did, but commits like sample/pytest: implement code review suggestions shouldn't be visible in the git history. When you have suggestions, you should edit the particular commits via an interactive rebase.

Of course, adding some basic changes in the first commits and then expanding them/adding new utilities in the next ones is fine.

Additionally, use the imperative mood as suggested in the golden rules.

@lukkrusz lukkrusz force-pushed the lukkrusz/pytest_support_add branch from 9c3ab8d to 10a1be0 Compare January 22, 2026 11:11
Makes the delay more controllable (like changing to 0).
Allows for better verbosity control of test related output.

JIRA: CI-614
fix linter warning,
fix improper use of 'specname'

JIRA: CI-614
Using shlex.split() is generally safer for parsing command-line arguments as it correctly handles quoted strings.

JIRA: CI-614
@lukkrusz lukkrusz force-pushed the lukkrusz/pytest_support_add branch from 10a1be0 to 2ead454 Compare January 22, 2026 11:22
The previous xpass case had an incorrect reason and the new changes clarify the use.

JIRA: CI-614
Major changes to pytest output handling, following a PR discussion

JIRA: CI-614
By default, it is better for a multiline message to start from a new line

JIRA: CI-614
@lukkrusz lukkrusz force-pushed the lukkrusz/pytest_support_add branch from 2ead454 to 75b6ffa Compare January 22, 2026 11:39
Member

@nalajcie nalajcie left a comment


Pointing out some issues in the current code. Please note that the commit list is too verbose; the final history should not include fixes to your own code (added within the same PR) - I would expect 2-3 commits in the final history.

Adding separate commits after review might be beneficial for tracking review progress, but they should be squashed sooner rather than later.


def get_logs(self) -> str:
"""Get the full, unsuppressed pytest log
"""
Member

don't use a multiline docstring if it's not needed

def get_logs(self) -> str:
"""Get the full, unsuppressed pytest log
"""
return self.buffer.getvalue()
Member

does it work if we're streaming? (we're attaching the buffer only if _suppress == true)

Author

It's indeed dead functionality.

We are removing the log retention, since the bits that matter to us are parsed into trunner anyway.
I will also make use of trunner logging for this purpose and modify the pytest output to fit our needs.

Member

ok, the test stdout will still be caught by DUT wrapping? (I'm wondering what will finally be available in JUnit XML output)

Author

I'm not sure if I follow. JUnit XML will contain testsuites, testcases, fail/skip messages followed by system-out traceback (in case of failure).

Each testcase result is handled internally, before the terminal output.
The expected terminal output, as established with @damianloew, would be a mix of output from the DUT and the test stdout; it would look like:
/sample/test/pytest.py::test_send_hello...
[DUT] Hello
/sample/test/pytest.py::test_send_hello PASS

self._result = result
self._test_outcomes = {}
self._broken_fixtures = set()
self._unknown_fixture_str = "[UNKNOWN FIXTURE]"
Member

seems like a class const, not an instance attribute? Applies elsewhere as well.
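
A sketch of the suggested change, reusing the PytestBridgePlugin name and attributes from the snippets quoted in this review:

class PytestBridgePlugin:
    # Class-level constant: shared by all instances, no per-instance assignment needed.
    UNKNOWN_FIXTURE_STR = "[UNKNOWN FIXTURE]"

    def __init__(self, result):
        self._result = result          # per-instance state
        self._test_outcomes = {}       # per-instance state
        self._broken_fixtures = set()  # per-instance state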

else:
msg = reason

self._test_outcomes[report.nodeid] = {"status": status, "msg": error_msg if report.failed else msg}
Member

just set msg = error_msg in if report.failed:

fixture_name = self._extract_broken_fixture(error_msg)
self._test_outcomes[report.nodeid] = {
"status": Status.FAIL,
"msg": f"{error_msg}", # Newline for improved readability
Member

what newline?

self.test.harness = PyHarness(self.ctx.target.dut, self.ctx, unity_harness, self.test.kwargs)

def _parse_pytest(self):
self.test.shell = None
Member

why overwriting shell? If the shell must not be set, then the parser should probably fail when the key is present?

can't we have shell for pytest-based tests?

Contributor

We established that for now we keep the assumption that all pytest tests won't use the shell. The current pytest tests that we want to introduce into the trunner do not interact with the system shell, and the primary goal now is to run them in the unified environment.

If we planned to write tests that interact with our system using pytest, we would extend this support. It would probably be a nice idea, but we want to carry out this process gradually.

About failing - right now test.shell is always set: depending on the yaml config, ShellOptions is set with or without additional arguments (we always want to parse the prompt). I think we can keep this approach and just treat pytest tests separately. That's why we cannot raise a failure in such a case.

However, @lukkrusz, we should leave a comment noting that it's temporary - ultimately I'd require setting shell: False or something similar in test.yaml for tests that don't interact with a system shell.

Member

When you make a design decision (like "interacting with the shell via pytest is not possible") you need to write it down (at least in the code) - otherwise it looks like a bug / hack / unfinished implementation.
I feel that this is done only to make running tests on host-generic-pc easier; having a shell on other targets won't complicate running pytest-based tests at all, so this assumption can probably be removed in the future (I might be wrong, as I've only read the code).

I would appreciate adding at least one real test which actually interacts with the DUT in CI, as I'm not sure what the intended use case is.

Contributor

I see that @lukkrusz left the following comment elsewhere:

# Specific case where we do not need the shell command
# TODO: In the future, we could support shell communication

However, we can also clarify that in the commit description and maybe express it in a better way.

The main reason for this support now is to make it possible to write or run the existing functional tests using pytest in our internal project, where we don't interact with a system console - so they can run even if we don't have a port with this console available (release fw).

Member

You've quoted the Host test runner implementation, so I consider it a limitation of this target, not an overall requirement. Overwriting self.test.shell = None while parsing a config (generic code) without any comment is a bad code pattern.

Contributor

So we want to accept the scenario where we don't have access to the system console - an empty DUT like on the host target - but also the situation where a dut device is available and we can parse some additional information there.

We will try to show it somehow in a test.

Contributor

We discussed a new approach and I hope that setting shell to None and changing Host target won't be needed then. Thanks for the suggestions!

builder.add(fail)
return builder.get_harness()
if test.type is not TestType.PYTEST:
# Specific case where we do not need the shell command
Member

invalid comment in regard to the if - here you have all the other cases.

Probably should be restructured to if pytest: [...] else: Fail

Still, I don't like hardcoding test type here, but it's a host runner so we can keep it.
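
A sketch of that restructuring; the builder and fail names come from the quoted snippet, and the body of the pytest branch is left out:

if test.type is TestType.PYTEST:
    # pytest-based tests on the host runner do not need the shell command
    ...
else:
    builder.add(fail)
return builder.get_harness()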

- name: pytest
type: pytest
kwargs:
path: phoenix-rtos-tests/sample/pytest/test-pytest.py
Member

IMHO this should be relative to the test.yaml, not to the CWD we expect the end user to run trunner from.

Maybe it should be a custom field for pytest test type (handled in a special way when parsing the config)?

print(text)


class FakeDUT:
Member

I'm not sure what we're trying to do here; this class doesn't follow the DUT interface, and IMHO naming it DUT is misleading. Since in real tests you should always use the DUT provided by the test runner, this is a bad example.

I don't know what the real-world examples of objects we want to keep during the full session are (the testcases should not depend on each other or on the testcase order) - maybe something regarding optimization, e.g. you want a long-standing protocol connection with some authorization done on first use (e.g. ssh, where login can take "some time" and you don't want to do it in every testcase because of the delays)?

Or maybe we would like to run some executable for the duration of the tests? (but this case should probably be handled by trunner if the need arose)

Author

I see your point regarding the misleading name. The idea was to simulate communication between a test and a device. I can change the name to avoid further confusion.

I also think that we can keep the device alive for the entire session without the tests becoming dependent on one another. This way we eliminate the need to re-initialize the connection/device for each case, potentially reducing the test runtime. It could also be a good way to test the device's longevity.

Member

implementing a private DUT for basic communication is an anti-pattern in pytest examples (you should always use the DUT fixture provided by trunner); show another possible use case, as described above
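
One possible example along those lines: a session-scoped fixture holding an expensive, long-lived connection that is set up once and reused by every test case (while device I/O itself still goes through the dut fixture provided by trunner); open_control_channel and its argument are hypothetical:

import pytest


@pytest.fixture(scope="session")
def control_connection():
    # Costly login/handshake performed once per session; test cases reuse the
    # channel without depending on each other or on execution order.
    conn = open_control_channel("target-board.local")  # hypothetical helper and address
    yield conn
    conn.close()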

trunner/types.py Outdated
# multi-line message that does not exceed width limit
elif remaining_lines:
out += ": " + first_line + "\n" + "\n".join(
out += ":\n" + "\n".join(
Member

why this change? Not sure it's within the scope of this PR, though it probably can be done here.

This used to be a lot simpler (see PR #385) - maybe just don't align it at all and always use the \t\t prefix (now we would have different indentation based on some cryptic logic :p)

Author

@lukkrusz lukkrusz Jan 23, 2026


Yes, it can be done as a separate PR for a sub task of this issue instead.

We decided earlier that the double-tab indentation should be kept for output exceeding 120 characters in length and shift_len indentation should be used otherwise.
It was also established that we want the multi-line message to start from a new line every time.

Member

overcomplicated code for minor visual difference

@lukkrusz lukkrusz marked this pull request as draft January 23, 2026 15:47
Comment on lines 7 to 14
int process_command(char *input) {
if (strlen(input) == 0) return 1;
else if (strcmp(input, "EXIT") == 0) {
return 0;
} else if (strcmp(input, "ping") == 0) {
printf("[Ping Complete]\n");
} else printf("%s [OK]\n", input);
return 1;


[clang-format-pr] reported by reviewdog 🐶
suggested fix

Suggested change
int process_command(char *input) {
if (strlen(input) == 0) return 1;
else if (strcmp(input, "EXIT") == 0) {
return 0;
} else if (strcmp(input, "ping") == 0) {
printf("[Ping Complete]\n");
} else printf("%s [OK]\n", input);
return 1;
int process_command(char *input)
{
if (strlen(input) == 0)
return 1;
else if (strcmp(input, "EXIT") == 0) {
return 0;
}
else if (strcmp(input, "ping") == 0) {
printf("[Ping Complete]\n");
}
else
printf("%s [OK]\n", input);
return 1;

return 1;
}

int main(int argc, char** argv)


[clang-format-pr] reported by reviewdog 🐶
suggested fix

Suggested change
int main(int argc, char** argv)
int main(int argc, char **argv)

Comment on lines 19 to 22
char c = '\0';
char buffer[BUF_SZ];
int pos = 0;
int ret_code = 1;


[clang-format-pr] reported by reviewdog 🐶
suggested fix

Suggested change
char c = '\0';
char buffer[BUF_SZ];
int pos = 0;
int ret_code = 1;
char c = '\0';
char buffer[BUF_SZ];
int pos = 0;
int ret_code = 1;

int pos = 0;
int ret_code = 1;

printf("[Commence Fake Communication]\n");


[clang-format-pr] reported by reviewdog 🐶
suggested fix

Suggested change
printf("[Commence Fake Communication]\n");
printf("[Commence Fake Communication]\n");


printf("[Commence Fake Communication]\n");

memset(buffer, 0, BUF_SZ);


[clang-format-pr] reported by reviewdog 🐶
suggested fix

Suggested change
memset(buffer, 0, BUF_SZ);
memset(buffer, 0, BUF_SZ);

Comment on lines 28 to 31
// Run the loop until `EXIT`
while (ret_code) {
if (read(STDIN_FILENO, &c, 1) != 1)
return -1;


[clang-format-pr] reported by reviewdog 🐶
suggested fix

Suggested change
// Run the loop until `EXIT`
while (ret_code) {
if (read(STDIN_FILENO, &c, 1) != 1)
return -1;
// Run the loop until `EXIT`
while (ret_code) {
if (read(STDIN_FILENO, &c, 1) != 1)
return -1;

Comment on lines 33 to 44
if (c == '\n') {
buffer[pos] = '\0';
ret_code = process_command(buffer);
memset(buffer, 0, pos);
pos = 0;
} else if (pos < BUF_SZ -1) {
buffer[pos++] = c;
} else {
printf("[Failure!]\n");
return -1;
}
}


[clang-format-pr] reported by reviewdog 🐶
suggested fix

Suggested change
if (c == '\n') {
buffer[pos] = '\0';
ret_code = process_command(buffer);
memset(buffer, 0, pos);
pos = 0;
} else if (pos < BUF_SZ -1) {
buffer[pos++] = c;
} else {
printf("[Failure!]\n");
return -1;
}
}
if (c == '\n') {
buffer[pos] = '\0';
ret_code = process_command(buffer);
memset(buffer, 0, pos);
pos = 0;
}
else if (pos < BUF_SZ - 1) {
buffer[pos++] = c;
}
else {
printf("[Failure!]\n");
return -1;
}
}

Comment on lines 46 to 47
printf("[Success!]\n");
return 0;


[clang-format-pr] reported by reviewdog 🐶
suggested fix

Suggested change
printf("[Success!]\n");
return 0;
printf("[Success!]\n");
return 0;

Contributor

@damianloew damianloew left a comment


Suggestions for the code before the recent force push. I haven't seen much unnecessary code, which is good - the changes are quite clear so far.

trunner/types.py Outdated
return NotImplemented


class TestType(str, Enum):
Contributor

I don't remember whether it was mentioned somewhere else, but that's a change that should be added in a separate PR. Anyway, I don't see any issues with this new approach.




def pytest_harness(dut: Dut, ctx: TestContext, result: TestResult, **kwargs) -> TestResult:
test_path = ctx.project_path / kwargs.get("path")
options = kwargs.get("options", "").split()
Contributor

It seems that we won't want to change pytest options per test (all of the command-line options are rather for the whole pytest campaign configuration), so we can probably remove that.

class PytestBridgePlugin:
"""Plugin that bridges the TestRunner and PyTest frameworks.

Each unit test is run as a TestRunner's subtest, accurately
Contributor

Suggested change
Each unit test is run as a TestRunner's subtest, accurately
Each test case is run as a TestRunner's subtest, accurately

the "unit test" wording may be misleading

@pytest.hookimpl(tryfirst=True)
def pytest_runtest_setup(self, item):
for fixture in item.fixturenames:
# If a fixture failed previously during setup, fail immediately
Contributor

All the hooks are described using docstrings; let's move this info there.
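
A sketch of moving that information into the hook docstring (the broken-fixture bookkeeping itself is omitted):

import pytest


class PytestBridgePlugin:
    @pytest.hookimpl(tryfirst=True)
    def pytest_runtest_setup(self, item):
        """Fail the test immediately if one of its fixtures already broke during an earlier setup."""
        for fixture in item.fixturenames:
            ...  # broken-fixture handling as in the quoted snippet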

if isinstance(report.longrepr, tuple):
reason = report.longrepr[2].replace("Skipped: ", "")
elif hasattr(report, "wasxfail"):
reason = report.wasxfail.replace("reason: ", "")
Contributor

Probably unnecessary, as report.wasxfail just stores what we passed in the reason argument.


try:
exit_code = pytest.main(cmd_args, plugins=[bridge_plugin, log_plugin])
except Exception as e:
Contributor

Let's handle the various exception types separately - since you provide a dut to be used in pytest tests, you have to be ready for failures typical for pexpect, similarly to pyharness (see the sketch after this list):

  • UnicodeDecodeError - should be handled
  • AssertionError - seems to be fine (printing traceback from pytest)
  • EOF, or TIMEOUT from pexpect - I am not sure how it behaves now, to be verified
  • HarnessError - Probably won't be raised from pytest, so we could skip that
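
A sketch of such a split around pytest.main, following the list above; cmd_args, the plugins, and result come from the quoted snippet, and the TestResult helper names other than fail_harness_exception are assumptions:

import pexpect
import pytest

try:
    exit_code = pytest.main(cmd_args, plugins=[bridge_plugin, log_plugin])
except UnicodeDecodeError as e:
    result.fail_decode(e)                 # assumed helper for decode failures
except (pexpect.EOF, pexpect.TIMEOUT) as e:
    result.fail_pexpect(e)                # assumed helper for pexpect-style failures
except Exception as e:
    result.fail_unknown_exception(e)      # assumed helper; see the comment below about unknown exceptions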

try:
exit_code = pytest.main(cmd_args, plugins=[bridge_plugin, log_plugin])
except Exception as e:
result.fail_harness_exception(e)
Contributor

Rather unknown exception.


#define BUF_SZ 256

int process_command(char *input)


[clang-format-pr] reported by reviewdog 🐶
suggested fix

Suggested change
int process_command(char *input)
int process_command(char *input)

Comment on lines 9 to 15
if (strcmp(input, "EXIT") == 0)
return 0;
else if (strcmp(input, "ping") == 0)
printf("[Ping Complete]\n");
else if (strlen(input) > 0)
printf("%s [OK]\n", input);
return 1;


[clang-format-pr] reported by reviewdog 🐶
suggested fix

Suggested change
if (strcmp(input, "EXIT") == 0)
return 0;
else if (strcmp(input, "ping") == 0)
printf("[Ping Complete]\n");
else if (strlen(input) > 0)
printf("%s [OK]\n", input);
return 1;
if (strcmp(input, "EXIT") == 0)
return 0;
else if (strcmp(input, "ping") == 0)
printf("[Ping Complete]\n");
else if (strlen(input) > 0)
printf("%s [OK]\n", input);
return 1;
