OA Example Test
Problem
Need a simple/automatable method for OA to determine whether an example it is using as a test is:
- In progress - pass/fail status yet to be determined.
  - Do we need a "progress meter"? OA could terminate an example run if the progress meter hasn't changed in X seconds.
- Passed - example completed successfully.
- Failed - example did not complete successfully.
  - Some additional information regarding the failure could be captured to help OA/example developers understand what went wrong.
Nice to have:
- Integrating/running an example requires no "copy-for-edit" steps.
Proposal
Examples can be updated to use vfile to access /proc/res/TestResults to store the information described above (a sketch of how OA might read the result back follows the list below):
- Pass/fail - can be indicated with a 4-byte string: PASS or FAIL.
- Progress meter - can be indicated with a 4-byte string: 0 - 100 (percent complete).
- Failure "extra info" - can be stored after the above in free-form text.
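A minimal reader-side sketch, assuming the layout proposed above (bytes 0-3 hold PASS/FAIL, bytes 4-7 hold the progress meter as a text percentage, bytes 8 and up hold the free-form extra info). How OA actually polls the resource is not specified here; the function name, buffer size, and return codes are illustrative only.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Returns 0 = passed, 1 = failed, 2 = still in progress, -1 = result not readable yet. */
int check_test_results(void)
{
    char buf[256];
    int fd = open("/proc/res/TestResults", O_RDONLY);
    if (fd < 0)
        return -1;                      /* resource not present or not readable */

    ssize_t n = read(fd, buf, sizeof(buf) - 1);
    close(fd);
    if (n < 8)
        return -1;                      /* example hasn't written a full header yet */
    buf[n] = '\0';

    if (memcmp(buf, "PASS", 4) == 0)
        return 0;
    if (memcmp(buf, "FAIL", 4) == 0) {
        printf("failure info: %s\n", n > 8 ? buf + 8 : "(none)");
        return 1;
    }
    /* Neither PASS nor FAIL yet: still in progress; bytes 4-7 hold the meter. */
    printf("in progress: %.4s%% complete\n", buf + 4);
    return 2;
}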
Discussion
AL: I think using examples as tests is problematic. The examples have a very specific audience, and anything that adds noise in that context would be detrimental to their primary purpose. I would not object to such additions if there were a way to strip out the test-specific code, but we'd then presumably have to run the examples twice: once as a test, once as an example.
Proposal for Applicability
There is another concern: how does OpenArbor determine whether an example can be integrated on a platform, or even built for a given target architecture?
The proposal here is that an example supplies an (optional) script that when called returns a yes or no answer.
The script would have some simple interface:
- applicable --target=XXX (or perhaps just applicable --targets) would either answer "yes, that target is supported" or list the targets that the example supports. Presumably an "all" value, or a reference to a generic script that returns all currently supported architectures, would be supplied to support the "applicable" script.
- applicable --platform name would work like --target, but would determine whether the example can be integrated on the named platform. Presumably the output would supply some rationale/explanation as to why it might return "no", e.g.:
  platform foo does not supply a driver-mumble.pia.xml file.
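As a hedged illustration, here is one way OA might invoke such a script and interpret its answer. The script path, flag spelling, and the exact "yes"/"no" answer format are assumptions for this sketch, not a settled interface.

#include <stdio.h>
#include <string.h>

/* Returns 1 if the example reports the target as supported, 0 if not,
   and -1 if the script could not be run (e.g. no "applicable" script supplied). */
int example_is_applicable(const char *target)
{
    char cmd[256];
    char line[256];
    int supported = 0;

    snprintf(cmd, sizeof(cmd), "./applicable --target=%s", target);

    FILE *p = popen(cmd, "r");
    if (p == NULL)
        return -1;

    if (fgets(line, sizeof(line), p) != NULL)
        supported = (strncmp(line, "yes", 3) == 0);
    /* Any further output could be logged as the rationale for a "no" answer. */
    pclose(p);
    return supported;
}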
Changes to examples
- XML
  - The example's PD XML would need to own the TestResults resource.
- Code - example source to do the above (could be captured in common/example-utils or somesuch):
// Possible includes for the snippet below.
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

int statusFd = open("/proc/res/TestResults", O_WRONLY);
if (statusFd < 0) {
    // Could not open the resource - output something to the videostream and exit?
}

// Pass/fail status not yet determined, example is in progress:
// bytes 0-3 = status, bytes 4-7 = progress meter, bytes 8+ = free-form extra info.
write(statusFd, "--------\0", 9);
...
// do example stuff, perhaps periodically updating the progress meter:
lseek(statusFd, 4, SEEK_SET);
write(statusFd, progressString, 4);
...
// write example status (PASS)
lseek(statusFd, 0, SEEK_SET);
write(statusFd, "PASS", 4);

// write example status (FAIL), plus free-form extra info describing the failure
lseek(statusFd, 0, SEEK_SET);
write(statusFd, "FAIL", 4);
lseek(statusFd, 8, SEEK_SET);
write(statusFd, "The frobulator didn't frob.", strlen("The frobulator didn't frob."));

// finish.
close(statusFd);
PCRs
- PCR:15828 - Automatable video capture.
- PCR:16021 - Update a driver to make it truly plug-and-play, to meet the "nice to have" goal.
Proposal #2
Add a screen check application to snapshot and report correctness of video output at known checkpoints.
- Code - example:
#include <deos.h>
#include <screen-check.h>
#include <videobuf.h>
ScreenCheck sc("Hello");         // screen check for the "Hello" example
VideoStream cout(0, 0, 2, 40);   // 2-row x 40-column video output region

int main() {
    sc.begin();
    cout << "Hello World!" << endl;
    sc.test("hello.sc.bcfg");    // snapshot the region and compare it against the expected output
    sc.end();
}
- The binary configuration file would be created by a simple script from a simple text file:
  - hello.sc.txt:
0,0,2,40       # first, the mask
************
0,0,2,40       # second, the expected
Hello World!
- The end.