
Conversation

@wmaroneAMD

Basic help is in the root; detailed documentation of the CI support script and its configuration is in docs/ci-aspeed.md.

This also refines the functional test structure to make it more machine-parseable. This is effectively a manual convention, since there is no test infrastructure to enforce it, but it ensures the script only has to key on four tokens.
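For illustration, a minimal sketch of what keying on four tokens could look like. The token spellings (PASS/FAIL/SKIP) and the end-of-run marker string below are assumptions based on the description here, not the firmware's actual output:

```python
# Hypothetical tally loop keying on four tokens: PASS, FAIL, SKIP, and an
# end-of-run marker. Token names are illustrative assumptions.
def tally(lines, end_marker="TESTS COMPLETE"):
    results = {"passed": 0, "failed": 0, "skipped": 0}
    for line in lines:
        if end_marker in line:  # stop at the delimiter; ignore trailing noise
            break
        if "PASS" in line:
            results["passed"] += 1
        elif "FAIL" in line:
            results["failed"] += 1
        elif "SKIP" in line:
            results["skipped"] += 1
    return results
```

Keying on a fixed, small token set like this is what makes the manual convention workable without test-infrastructure enforcement.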

rusty1968 and others added 9 commits November 19, 2025 15:19
Add GitHub Actions workflow for automated hardware testing on AST1060.
The workflow runs in two stages:

1. Precommit checks on ubuntu-22.04:
   - Verify Cargo.lock
   - Run cargo xtask precommit (format/lint/build)
   - Upload bloat reports

2. Hardware functional tests on self-hosted runner with AST1060:
   - Build firmware with test features (hmac, hash, rsa, ecdsa)
   - Generate UART boot image with proper 4-byte size header
   - Upload firmware via UART following AST1060 boot protocol
   - Monitor test execution and parse PASS/FAIL/SKIP results
   - Upload test logs and artifacts

The workflow supports manual triggering with test suite selection and
only runs hardware tests if precommit checks pass.
- Added instructions on how to configure the test host environment.
- Added scripts required to package the binary and kick off the test.
This fully encapsulates the test flow in Python. Use of tio has been removed
and replaced entirely by pyserial, eliminating the complication of an external
tool.

TODO: The test firmware needs to emit a clear delimiter when it is complete
that we can look for. As it stands, tests simply run and complete with
inconsistent syntax, and eventually we run out of tests and get stuck looking
at timer ISR test leftovers.

For now, the script times out after 600 seconds. All test output is
logged to a file within the working directory. The script also serves
as a tool to manipulate the GPIOs controlling boot functionality, and
can be configured to target different GPIOs if needed.
Made test outputs more consistent, but further refinements are necessary. Also added a test-end print and adjusted the script to key off of it.
The --upload-only parameter lets us target devices over which we have only
manual SRST and UART control.
Collaborator

@stevenlee7189 stevenlee7189 left a comment


Please see my review comment and test results below.

@stevenlee7189 stevenlee7189 self-requested a review January 5, 2026 09:39
return False

# Keep only last few lines in buffer
buffer = '\n'.join(lines[-10:])
Collaborator


The line buffer = '\n'.join(lines[-10:]) retains already-processed lines, causing the loop to re-scan them in the next iteration. This leads to double-counting of PASS/FAIL results.

Test execution completed!
Results: {'passed': 119, 'failed': 6, 'skipped': 0}
❌ Test execution failed!

Collaborator


I verified that applying the following patch fixes the issue by only retaining the last incomplete fragment:

diff --git a/scripts/uart-test-exec.py b/scripts/uart-test-exec.py
index 9530f94..05ba45c 100644
--- a/scripts/uart-test-exec.py
+++ b/scripts/uart-test-exec.py
@@ -309,7 +309,7 @@ class UartTestExecutor:

                 # Parse test results
                 lines = buffer.split('\n')
-                for line in lines:
+                for line in lines[:-1]:
                     if 'PASS' in line:
                         test_results['passed'] += 1
                     elif 'FAIL' in line:
@@ -328,7 +328,7 @@ class UartTestExecutor:
                         return False

                 # Keep only last few lines in buffer
-                buffer = '\n'.join(lines[-10:])
+                buffer = lines[-1]

With this fix, the counts are correct (showing 3 fails instead of 6):

Test execution completed!
Results: {'passed': 43, 'failed': 3, 'skipped': 0}
❌ Test execution failed!
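The double-count is easy to reproduce in isolation. This sketch is a simplified model of the script's loop, not the script itself; it parameterizes which lines get counted and which get retained, so the buggy and fixed behaviors can be compared directly:

```python
def count(chunks, count_lines, keep):
    """Simplified model: each iteration appends a chunk of serial output,
    splits the buffer into lines, counts PASS in count_lines(lines), then
    retains keep(lines) as the buffer for the next iteration."""
    buffer, passed = "", 0
    for chunk in chunks:
        buffer += chunk
        lines = buffer.split("\n")
        passed += sum("PASS" in line for line in count_lines(lines))
        buffer = keep(lines)
    return passed

chunks = ["test_a PASS\npartial", " line\n"]
# Buggy: count every line, retain the last 10 -- the already-counted
# "test_a PASS" line survives in the buffer and is counted again.
buggy = count(chunks, lambda ls: ls, lambda ls: "\n".join(ls[-10:]))
# Fixed: count only complete lines, retain only the trailing fragment.
fixed = count(chunks, lambda ls: ls[:-1], lambda ls: ls[-1])
```

With the single PASS in the input, the buggy variant reports 2 and the fixed variant reports 1, mirroring the doubled FAIL count in the logs above.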
