OpenArbor Testing
Revision as of 22:34, 4 March 2021
Executing & Monitoring OpenArbor Automated Test Runs
OpenArbor testing involves having the "client" OpenArbor Test Harness GUI running on one machine (Test Monitoring Machine) and the "server" for the OpenArbor Test Harness running on a Test Machine. The following steps describe how to execute and monitor an OpenArbor automated test run for a DDS.
- Setup the Test Monitoring Machine
- Verify Test Machine is running the OpenArbor Test Harness Server:
- Connect to the Test Machine via VNC Viewer
- Start the OpenArbor Test Harness Server, if it is not already running.
- Cleanup the Test Machine
- Add a DDS and Test Config to the Test Machine
- Test the DDS
Updating the boot/hyperstart images on a target
On the Windows Test Machine, build a hyperstart image and boot image for the BSP. This example is for the zcu102-3 target board. The build produces:
composite.darc
deosBoot.bin
Start a "DDC-I cygwin64 DESK Bash Terminal"
ssh lcj@linux03.ddci.com -p 47734
cd /tftpboot/zcu102/
Start a second "DDC-I cygwin64 DESK Bash Terminal"
cd to the platform project's output folder
scp -P 47734 composite.darc lcj@linux03.ddci.com:/tftpboot/zcu102/composite-lcj.darc
In the first terminal:
rm composite-3.darc
ln -s composite-lcj.darc composite-3.darc
chmod 775 composite-lcj.darc
Exit from both terminals.
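The two terminals above boil down to copying the new image up and repointing a symlink. The swap can be rehearsed locally before touching the real /tftpboot; this sketch replays the same rm/ln/chmod commands in a scratch directory (the scratch directory is invented for illustration and stands in for /tftpboot/zcu102/ on linux03):

```shell
# Rehearse the symlink swap from the procedure above in a scratch
# directory standing in for /tftpboot/zcu102/ on linux03.
tftp_dir=$(mktemp -d)
cd "$tftp_dir"

# Pretend an old image and the board's link already exist on the server.
echo "old image" > composite-old.darc
ln -s composite-old.darc composite-3.darc

# The scp from the second terminal lands the new image here.
echo "new image" > composite-lcj.darc

# First terminal: repoint the zcu102-3 board's link at the new image.
rm composite-3.darc
ln -s composite-lcj.darc composite-3.darc
chmod 775 composite-lcj.darc
```

After the swap, composite-3.darc resolves to composite-lcj.darc, which is what the board's next TFTP boot will fetch on the real server.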
Cleanup the Test Machine
Since the test machines are shared, the general rule is to NOT nuke a test run that you did not initiate.
To remove ALL existing DDS' installed on any test machine, right-click the test machine name and select "Nuke Everything on <machineName>...". This will remove the DDS installation, test workspace and test logs.
To remove a particular DDS, right-click a DDS on a test machine and select "Nuke this DDS..."
Add a DDS and Test Config to the Test Machine
- In the OpenArbor Test Harness GUI on the Test Monitoring Machine, right-click on the Test Machine name, select "Add DDS", and select a DDS to test from the list of available DDS'.
- Right-click on the "Add"ed DDS, select "Add Test Config", and select a target board.
Adding a new "Test Config" (a.k.a Target Board)
When selecting "Add Test Config", the drop-down list of available test configurations for each target board is populated by the files in the "MessagesFiles" directory. This directory can be found here: C:/oaTestHarness/workspace/branches/"branch_name_under_test"/TestResources/MessagesFiles, where "branch_name_under_test" is the branch being tested e.g. "7.2.0". The test harness performs an "svn co" of this location using the branch name selected on the "SVN Branch for Test Resources" drop-down list on the DDS Manager tab to populate the list.
To add a new Test Config:
- Create a new text file named following the convention <product>_<arch>_<board name>, e.g. DEOS_ARM_DEOSNAI75ARM1, where <board name> matches the board's name on X9.
- Edit the file, specifying the projectType, deosPlatformName, target, and x9target:
#----------------deos_arm_DEOSNAI75ARM1.txt-----------------------------
projectType=Deos Executable Project
deosPlatformName=nai75arm1
target=ARM
x9target=DeosNAI75Arm1
- Check in the new file using the PCR for test updates for the release:
svn add DEOS_ARM_DEOSNAI75ARM1.txt
svn propset DDCI:pcr-required true DEOS_ARM_DEOSNAI75ARM1.txt
svn ci -m"PCR 3336 Added new messages file for arm board"
- The Test Config should now appear in the list.
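Because a messages file is just four key=value lines, it can be generated from its fields so the file name and contents stay consistent with the convention. A hypothetical helper (the function and its here-doc are mine; the field values come from the example above, and projectType is hardwired to the Deos value shown there):

```shell
# Hypothetical helper: emit a messages file following the
# <product>_<arch>_<board name> naming convention described above.
make_messages_file() {
  product=$1 arch=$2 board=$3 platform=$4 x9=$5
  file="${product}_${arch}_${board}.txt"
  cat > "$file" <<EOF
projectType=Deos Executable Project
deosPlatformName=${platform}
target=${arch}
x9target=${x9}
EOF
  echo "$file"
}

# Reproduce the DEOS_ARM_DEOSNAI75ARM1 example from this section.
make_messages_file DEOS ARM DEOSNAI75ARM1 nai75arm1 DeosNAI75Arm1
```

The generated file can then be svn add'ed and checked in exactly as shown above.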
Generating a license file for a new customer
The test harness will generate a license file based on the settings in the licenses.properties file. This file can be found here: C:/oaTestHarness/workspace/branches/"branch_name_under_test", where "branch_name_under_test" is the branch being tested e.g. "7.2.0". The default license file will contain the proper license features for X86 and PPC. If a customer needs other architectures, do the following:
- Edit the file to add the required architectures for the customer:
booyah=X86,PPC,ARM
- Check in the file using the PCR for test updates for the release:
svn ci -m"PCR 3336 Updated licenses.properties file for new booyah customer"
- Delete any existing license.lic file, e.g.:
C:/oaTestHarness/workspace/ddses/DDS-booyah-deos-multicore-20170228/license.lic
- Start a test run. A new license.lic file will be generated in the above location.
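Before starting the run, it is worth confirming that the customer's line in licenses.properties really lists every architecture the DDS needs, since a missing one only shows up later as a license failure. A quick check, sketched against a scratch copy of the file (the booyah line is the example from above; the loop and scratch path are illustrative):

```shell
# Sanity-check a customer's architecture list in licenses.properties.
workdir=$(mktemp -d)
printf 'booyah=X86,PPC,ARM\n' > "$workdir/licenses.properties"

# Extract booyah's value and confirm each required arch is present.
archs=$(sed -n 's/^booyah=//p' "$workdir/licenses.properties")
for need in X86 PPC ARM; do
  case ",$archs," in
    *",$need,"*) echo "$need: ok" ;;
    *)           echo "$need: MISSING" ;;
  esac
done
```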
Test the DDS on the Test Machine
The OpenArbor Test Harness GUI has several widgets to aid in running tests. The currently selected configuration will be displayed in the "Configuration:" text box. The "Tests:" text box is useful for manually entering subsets of tests to run (see #Tests textbox)
- Click the "Test" button
The configuration you selected will automatically be installed and the test system will start running tests for that DDS configuration.
For a particular release, the OpenArbor Test Plan and Report wiki page will indicate which DDS should be tested on which test machine. (ie. OpenArbor_5.3.0_Test_Plan_and_Report).
Installing a DDS on the Test Machine
Use the OpenArbor Test Harness GUI to install a DDS. Multiple DDS installations can exist on a test machine, but tests can only be executed for one DDS installation at a time. On the test machine, each DDS is installed in the C:\DDS-list directory in a folder matching its DDS name. For a particular DDS, you can test for multiple target boards as well. There is a separate test workspace directory and test result logs directory for each target board under test. Testing is *usually* done on an SVN branch corresponding to the release (ie. 12E). The OpenArbor Test Harness GUI allows a particular SVN branch of the tests to be checked out.
The OpenArbor Test Harness GUI has an "Installation" group of widgets which are used just to install a DDS, and/or a specific branch of the OpenArbor plugins. No tests will be started when using the "Install" button.
- Right-click a test machine name, select "Add DDS", and select a DDS to install from the list of available DDS'. For a particular release, the OpenArbor Test Plan and Report wiki page will indicate which DDS should be tested on which test machine. (ie. OpenArbor_12E_Test_Plan_and_Report).
- On the OpenArbor Test Harness tab, check the "Build and install OpenArbor from branch:" check box and select a branch to test (ie 12E).
- Right-click on the "Add"ed DDS, select "Add Test Config", and select a target board.
The configuration you selected will automatically be installed and the test system will start running tests for that DDS configuration.
Dry Run Testing
Once the DDS has been installed, you can stop the test system using the Stop button on the OpenArbor Test Harness tab. Using TightVNC (OpenArbor_Testing#Connecting_via_VNC_Viewer), you can access the test machine directly. You can then use the cygwin installer to add "unreleased" items to the installation as you wish. You can then restart the testing using the Test button.
Find the correct desk_setup.exe on a test machine here: C:\DDS-list\<DDS config name>\DDC-I\desk\bin (ie. C:\DDS-list\DDS-hosmer-deos-elbert-20121120\DDC-I\desk\bin)
OpenArbor Test Harness GUI
The OpenArbor Test Harness GUI consists of two parts, the Tree View and the Tab Viewer. The Tree View is a hierarchical view of all the available test machines. The Tab Viewer has two tabs, one for the OpenArbor Test Harness and one for viewing test logs.
Tree View
At the top level, there is one entry for each test machine. Each test machine indicates its "readiness" for usage via an icon and textual indication following the test machine name. If a test machine reports it is "(not available)", then the server script is not running on the machine. See OpenArbor_Testing#Setup_Instructions_for_the_Test_Monitoring_Machine_(a.k.a._Client)
Context Menu Options
Right-clicking on a test machine entry in the tree view provides a context menu with several menu options.
New Machine...
Use this option to add a new test machine to the Tree View. Each test machine must have RFT installed and an available RFT license.
Remove Machine <machineName>
Use this option to remove a test machine from the Tree View.
Add DDS>
This option will display a list of all available DDS' that exist on \\scorebuild\d$\DDCI_integration and \\nx3000\ship\dds\windows\{approved|other}. Select a DDS and it will be added to the tree view for the test machine. media:addingADDS.png
Nuke Everything on <machineName>...
Use this option to remove all DDS installations on this test machine, including any test logs or workspaces that exist. Basically, this will clean the test machine.
Kill OpenArbor/lmgrd on <machineName>
Use this option to stop the OpenArbor.exe and lmgrd.exe (license manager) on the machine. This is useful if the test gets into a "hung" state.
Add Test Config >
Once a DDS has been added, a Test Configuration for a particular rtos-architecture-target_board can be added for the DDS. A list of all known target boards is displayed. Selecting one will prepare that test config for use, but install/test will not start until the appropriate button is pressed. media:addingAConfig.png
Modify Config...
Once a config has been added, you can right-click on the config to modify it. This will allow you to override any values in the config, as well as specify additional values of your own. Some of the things you might want to do are:
- Lock targets as a different user: specify x9user=username.
- Change the ipaddress, bsp, or x9name.
Nuke this DDS...
Individual DDS installations can be removed using this option. All test logs and test workspaces will be removed.
Nuke config
Deletes a test config, with its workspaces and test results, and only that config. Other configs and the DDS are unaffected.
Tab Viewer
The Tab Viewer has two tabs, one for the OpenArbor Test Harness and one for viewing test logs.
OpenArbor Test Harness Tab
When a DDS entry in the Tree View is selected, the OpenArbor Test Harness tab shows all the settings for the selected DDS.
Installation
The installation section of the OpenArbor Test Harness tab is used to indicate what SVN branch should be used when installing the test system. The drop down defaults to the branch of the currently installed OpenArbor. To install OpenArbor for a particular branch, click the "Build and install OpenArbor from branch" checkbox and select an available branch from the drop down list.
The Install button will install the selected configuration using the setting in the Installation section.
Testing
The Testing section of the OpenArbor Test Harness tab shows what test configuration is currently under test.
Tests textbox
The "Tests:" textbox can be used to enter the name of a particular test (or set of tests to execute). If blank, the test system will run all the tests in the entire test suite.
Test names are essentially the fully qualified names of the tests (in fact, they have to be). Multiple tests can be added, separated by spaces. Any set of tests defined as a method in Launcher can be called by the method name. Sub-tests can be run all at once by providing just the parent test name.
- To run a single test
- com.ddci.openarbortests.other.PreferencesDialogTest
- To run a single sub-test
- com.ddci.openarbortests.compileRunDebug.Build.fibonacci
- To run a super-test
- com.ddci.openarbortests.compileRunDebug.Compile
- To run a launcher method containing one or more tests to run
- loadMldTests
- To run any combination thereof
- loadMldTests com.ddci.openarbortests.compileRunDebug.Compile com.ddci.openarbortests.compileRunDebug.Build.fibonacci com.ddci.openarbortests.other.PreferencesDialogTest
- OR loadMldTests \
com.ddci.openarbortests.compileRunDebug.Compile \
com.ddci.openarbortests.compileRunDebug.Build.fibonacci \
com.ddci.openarbortests.other.PreferencesDialogTest
These values are case-sensitive. An arbitrary number of values can be specified, separated by spaces. Multiline entry (for easier readability) is done by ending each line except the last with a \. Uniqueness is handled by the script itself; it doesn't matter if certain test sets overlap in some way.
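The stated rules (case-sensitive, space-separated, backslash continuations, overlapping sets tolerated) can be mimicked with standard text tools: fold the continuations, split on whitespace, and keep the first occurrence of each name. This is only an illustration of the rules, not the harness's actual implementation:

```shell
# Collapse a multiline, space-separated test list to unique names
# (an illustration of the rules above, not the harness's real code).
tests='loadMldTests \
com.ddci.openarbortests.compileRunDebug.Compile \
com.ddci.openarbortests.compileRunDebug.Compile \
com.ddci.openarbortests.other.PreferencesDialogTest'

# Drop the continuation backslashes, put one name per line, and keep
# only the first occurrence of each (overlap is harmless, as stated).
unique=$(printf '%s\n' "$tests" | tr -d '\\' | tr ' ' '\n' \
  | awk 'NF && !seen[$0]++')
printf '%s\n' "$unique"
```

The duplicated Compile entry above survives only once, which is exactly why overlapping test sets are harmless.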
The Test button starts the test system executing tests.
Examples to test exclusively
This section contains checkboxes that correspond to the groups of examples installed on a given DDS. If one or more of these checkboxes are selected, only those examples will be imported into the OpenArbor workspace. Compile, Run, and Debug tests will be performed on all of the examples in each selected category.
When any checkbox is active, the Tests textbox will be disabled since it will not be used when running the tests.
Log Viewer Tab
The Log Viewer tab shows the state of all the tests that have been run. You can also view the contents of a selected test, Rerun a particular test or Publish the test results. You can watch the tests run in the Log Viewer tab. Try clicking around on the underlined things.
Updating Tests
- Follow the development environment setup instructions if there isn't a development environment on the test machine.
- Follow the workspace setup instructions using the DDS and branch you are testing for your install location and workspace. The license file for self-hosting is in the DDS root.
- You can either create launch configurations or run the tests from the OpenArbor Test Harness to see if your changes worked.
- When checking in your changes, make sure you never store your credentials. From the command line, use --no-auth-cache with svn.
Publishing Test Results
In the Log Viewer tab, click Publish Test Results. In the Input Dialog, enter the formality, one of DryRuns or FormalRuns. The version is figured out automatically.
Document Test Results
Each OpenArbor release has a corresponding Test Plan and Report wiki page (ie. OpenArbor_12E_Test_Plan_and_Report). Update the wiki page for the release being tested with the results.
Create a Test Report
At the top of the test report wiki page, once the page is complete, click on the history tab. Copy the link for the latest revision (the top timestamp hyperlink) and send it to QA.
SWTBot Development/Testing
SWTBot Development
Getting Started
Follow the instructions at OpenArbor Development to setup the OpenArbor self-hosting development environment.
Writing Tests
- Take a look at the existing tests to see how they're organized. If you're creating a new test, put it in the appropriate package and add it to a group in the Launcher; again, see the existing code for how that should work.
- All tests extend DdciTest, so make sure to do that first.
- Go ahead and start writing a test now. Use the utility classes in the common package. Tests shouldn't need to know anything about the test harness, or directly access widgets. If the functionality you need isn't in an existing utility function, the preferred solution is to add it to the utility classes, again using utility functions wherever possible.
- When writing utility functions, try to use the SWTBot APIs wherever possible. Trust me, it will make your life easier. If you need something not in the APIs, the preferred pattern is to extend SWTBot so you are compatible with the rest of the utility functions. See SWTBotTextCanvas for an example.
- The utility function pattern is to keep inputs as general as possible, and outputs as specific as possible. Most utility functions that take a widget should take an AbstractSWTBot<? extends Widget>. This makes life easier for tests. If they return a widget, it should be as specific an SWTBot wrapper as possible. For example, if it always returns an SWTBotTreeItem, cast the return to that first. This makes life easier for utility functions.
- In general, most or all casting and specificity should stay in the utility functions. Tests don't need to know anything about that behavior.
- There is in fact a Spy view available. In a (probably non-SWTBot) launch configuration, make sure the org.eclipse.swtbot.eclipse.spy plugin is being loaded. Launch OpenArbor and open the EclipseSpy view. Press Ctrl+Shift to toggle it. You might want to pop it out (right click->detach) so you can see it while you're using OpenArbor.
Running/Debugging Tests
- Using the OpenArbor Test Harness, install a DDS, and add a test config.
- To run/debug a single test, edit the C:\oaTestHarness\workspace\ddses\<ddsName>\<ddsConfigName>\override.properties
- testsToRun=com.ddci.openarbortests.<test path>.<test name>
- Launch the DDS <product> Test Launch configuration. (See OpenArbor_Development#Development_Environment_Setup)
- Note that some tests launch a child OpenArbor in order to do stuff in another workspace. However, this child OpenArbor will use whatever is installed in the DDS you are running against.
- These are more difficult to work with. The debugger won't be attached to the child launch, and you need to manually build and install any test changes for them to take effect, as the child launch uses the installed OA instead of self-hosting. This may mean you also need to pay attention to the installed OA version vs. your self-hosted version.
Note that while you're running and developing tests, debugging has issues with SWTBot. In particular, if you stop at a breakpoint, you won't be able to reliably continue the test from that breakpoint. Stepping through a test may become unreliable when SWTBot tries to locate a widget.
Lest-We-Forget Howtos
To get the source code for other people's code to show up while debugging:
- Find which plugin jar the binary belongs to.
- If you get a Class File Editor, it should tell you where it says "The JAR file ... has no source attachment".
- If you don't get that information, try Alt+Shift+W, P to show the class file in the Package Explorer. Then you'll see the jar as an ancestor of the selected item.
- Find the plugin's source jar. It might be here: ftp://ftp.osuosl.org/pub/eclipse/eclipse/updates
- Put the source jar in your eclipse/plugins directory.
- Start eclipse with the -clean option.
If you still get a no-source error, try pointing to the source jar directly with the button in the class file editor.
Create a license file for the DDS release
DDS uses a FLEXnet licensing mechanism. We test each release for a particular customer by creating a served license file for the DDS release they will receive. By convention, the license file for a test run resides in C:\FLEXnet\License.lic.
- Stop the license server
- C:\FLEXnet\lmtools.exe
- Click Start/Stop/Reread tab
- Click Stop Server button
- Run the License Generator (Dbl-click the License Generator shortcut in the Tools folder)
- i.e., L:\LicenseGenerator_current\runme.bat where L: is mapped to \\nx3000\utils\flexnet
- At the bottom of the app, in the middle of the window, click the second button from the bottom that says: Read Existing License...
- Point the dialog at C:\FLEXnet\License.lic
- On the left hand side of the dialog, ensure the following (from top to bottom)
- An expiration (the default expiration is acceptable)
- Host: Windows (this is the default)
- Type: Floating
- Server host type: Disk Serial Number
- Server host name: <testmachine name>
- Server host value: <test machine disk serial number> (run dir/p in cmd.exe to find this out; the Volume Serial Number appears near the top of the listing)
- In the "middle" Ensure the correct set of feature are selected
| Release | DEOS | GCC | HEARTOS | OPENARBOR | SCORE | TADS | TRAC |
|---|---|---|---|---|---|---|---|
| Deos Release | All | MIPS, PPC, X86 | None | All | None | None | MIPS C,C++; PPC C,C++; X86 C,C++ |
| HeartOS Release | None | PPC | PPC | All | None | None | PPC |
| GCC Standalone Release | None | <Target> | None | All | None | None | <Target> C, C++ |
| SCORE Standalone Release | None | None | None | All | <Target>, SYS, GUI | If Target is 1750a, All | If Target is not 1750a, Select Target |
- Click the Generate License File button at the middle/bottom of the window
- Click "OK" when the license file is generated successfully
- Close the License Generator dialog
- Restart the license server
- C:\FLEXnet\lmtools.exe
- Click Start/Stop/Reread tab
- Click Start Server button
Updating an Installed Release
Sometimes a fix will be released during the testing cycle that does not necessitate reinstalling the entire build. Incremental updates for a specific target/platform combo can be made using desk-setup.exe. This is normally found in the desk/bin folder for Deos and HeartOS releases. However, it is not shipped with SCORE/GCC standalone, so in those cases you will need to copy desk-setup.exe out of a Deos or HeartOS release on scorebuild and drop it into the desk/bin folder, where it should work normally.
- Click the Run Desk Setup button
- Install from internet
- Make sure to choose the right repository for the release you are using, right now that's probably fourpeaks
- Click the view tab to switch to pending, and choose experimental
- Install the updates and rerun any tests
Target Board Setup
For testing SCORE & GCC Standalone on actual hardware, the target board must be connected to the test machine via a serial cable. The following target boards require a serial connection:
- Excimer
- 80x86 Bare Laptop
- 5554
- 5553
- Rattler
- ATB9200
There is currently one Deos target, DeosPPC750, that requires a special boot sequence. The details for that are outlined on the OpenArbor Test Plan and Report wiki, found here. All other Deos targets communicate over an Ethernet connection.
The RaytheonC44 board is currently connected to TestHP (10.0.1.116). A special JTAG server, found here: C:\C40_JTAG\start_server_for_port_5678.bat, must be running on TestHP in order for C3x4x JTAG Server connections to work. If started properly, the following is displayed:
- C:\C40_JTAG>score_c4x_jtag_server.exe -initialize processors.info -driver sdgo4x.dvr -port 5678
- Open_Server_Connection
- Opened port for Channel 1 on port 5678
The SCORE_TMS324C4X_JTAG.txt file contains the communication connection info for testing.
Rerunning Launcher and/or Specific Tests
Use the testsToRun property in the override.properties file (C:\DDS-list\<DDS>\<target>\). Just copy the full name from the main log. Test names are essentially the fully qualified names of the tests (in fact, they have to be). Multiple tests can be added, separated by spaces. Any set of tests defined as a method in Launcher can be called by the method name. Sub-tests can be run all at once by providing just the parent test name.
- To run a single test
- testsToRun=com.ddci.openarbortests.other.PreferencesDialogTest
- To run a single sub-test
- testsToRun=com.ddci.openarbortests.compileRunDebug.Build.fibonacci
- To run a super-test
- testsToRun=com.ddci.openarbortests.compileRunDebug.Compile
- To run a launcher method containing one or more tests to run
- testsToRun=loadMldTests
- To run any combination thereof
- testsToRun=loadMldTests com.ddci.openarbortests.compileRunDebug.Compile com.ddci.openarbortests.compileRunDebug.Build.fibonacci com.ddci.openarbortests.other.PreferencesDialogTest
- OR testsToRun=loadMldTests \
com.ddci.openarbortests.compileRunDebug.Compile \
com.ddci.openarbortests.compileRunDebug.Build.fibonacci \
com.ddci.openarbortests.other.PreferencesDialogTest
These values are case-sensitive. An arbitrary number of values can be specified, separated by spaces. Multiline entry (for easier readability) is done by ending each line except the last with a \. Uniqueness is handled by the script itself; it doesn't matter if certain test sets overlap in some way. This parameter is only needed when specific tests must be run; if it is not specified, loadScriptsNormally is run by default.
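The edit above can be scripted end-to-end: write the multiline testsToRun entry into override.properties, then fold the backslash continuations the way a Java-style properties reader would. The file lands in a scratch directory here rather than C:\DDS-list, and the folding is a simplification of full properties semantics:

```shell
# Write a multiline testsToRun entry to a scratch override.properties,
# then fold the trailing-backslash continuations into one value.
cfg_dir=$(mktemp -d)
cat > "$cfg_dir/override.properties" <<'EOF'
testsToRun=loadMldTests \
com.ddci.openarbortests.compileRunDebug.Compile \
com.ddci.openarbortests.other.PreferencesDialogTest
EOF

# A trailing backslash continues the logical line; join the pieces,
# then strip the key to see the effective test list.
value=$(awk '/\\$/ { sub(/\\$/, ""); printf "%s", $0; next } { print }' \
  "$cfg_dir/override.properties" | sed 's/^testsToRun=//')
echo "$value"
```

The folded value is the same space-separated list the single-line form would have produced, which is why the multiline form is purely a readability aid.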

