The Testus TestPlayer© is a versatile tool for the automatic generation of test cases and the analysis, visualization and evaluation of generated test suites.



TestPlayer© - Basic Operation



The Hello MBT World example below shows the essential steps for the automatic generation of a test suite and the visualization of the resulting test cases.

Usage models can be created in the Model editor section of the TestPlayer© Dashboard [1] by means of a graphical editor.

First, we model the behavior and actions of users when they invoke the Java Swing GUI application HelloMBTWorld.jar by means of a usage model. The model consists of the start state [, the states Clear (Clear button), Hello (Hello button), MBT (MBT button) and World (World button), and the end state ], which is reached by pressing the Bye button.

In addition, all edges between the usage states must be added to describe the dynamic usage behavior. As a result, we obtain an incomplete usage model that still lacks edge labels. An edge label is a pair (event, probability) consisting of the name of the transition event and the corresponding transition probability between the two usage states.

In our example, the transition events are called StartApp (calling the GUI application HelloMBTWorld.jar), ClickClear (pressing the Clear button), ClickHello (pressing the Hello button), ClickMBT (pressing the MBT button), ClickWorld (pressing the World button) and ClickBye (pressing the Bye button).
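Such a usage model can be sketched in code as a set of states with labeled, weighted transitions. In the following Python snippet the state and event names come from the example above, but the edge set and the transition probabilities are placeholders chosen purely for illustration, not values produced by the TestPlayer©:

```python
# Illustrative sketch of the HelloMBTWorld usage model.  Keys are states;
# values map (event, target_state) to a transition probability.  The edge
# set and probabilities below are hypothetical placeholders.
usage_model = {
    "[":     {("StartApp",   "Clear"): 1.0},
    "Clear": {("ClickHello", "Hello"): 0.25,
              ("ClickMBT",   "MBT"):   0.25,
              ("ClickWorld", "World"): 0.25,
              ("ClickBye",   "]"):     0.25},
    "Hello": {("ClickClear", "Clear"): 0.5, ("ClickMBT",   "MBT"):   0.5},
    "MBT":   {("ClickClear", "Clear"): 0.5, ("ClickWorld", "World"): 0.5},
    "World": {("ClickClear", "Clear"): 0.5, ("ClickBye",   "]"):     0.5},
    "]":     {},  # end state: no outgoing edges
}

# Sanity check: the outgoing probabilities of every non-final state sum to 1.
for state, edges in usage_model.items():
    assert not edges or abs(sum(edges.values()) - 1.0) < 1e-9, state
```

A dictionary of this shape is convenient because a test case generator only ever needs the outgoing edges of the current state.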

This brings us to the following incomplete usage model:

Subsequently, the generated model must be saved in the uncompressed (important!) XML format for further processing:



In the next step, the usage model is loaded into the model area (models) of the TestPlayer© via the dashboard Filemanager:


When executing dashboard actions, the first step in the Model settings section is to select the appropriate usage model, in our example HelloMBTWorld:

In order to automatically generate test cases from the incomplete usage model without edge labels, all transition events that cause a change of state, together with the associated transition probabilities, must be added to the edges of the usage model. The TestPlayer© performs this completion automatically when Check graph elements is selected in the dashboard:

The extended, complete usage model can be downloaded from the Dashboard Filemanager

and now looks like this:

In the last step of the modeling process, the generic event names must still be adapted to the specific application. As a result of this adaptation, the final, complete usage model of the Java GUI application HelloMBTWorld.jar is derived:

After uploading the usage model to the models section of the TestPlayer©, test suites for testing the application HelloMBTWorld.jar can be generated automatically via the TestPlayer© Dashboard [1]:


A look at the TestPlayer Settings in the Information section shows the following generation alternatives:

  • Selected model = HelloMBTWorld: the test suite is generated from the usage model HelloMBTWorld.xml
  • Profile usage = no: test cases are randomly generated according to a geometric distribution without using a special usage profile
  • Number of test cases = 100: the test suite contains 100 different test cases after generation
  • Generation strategy = fast overview: in addition to the generation of a test suite for the specified sort criterion in the test_cases directory of the file manager, additional diagrams and test case visualizations are created in the diagrams directory of the file manager
  • Sorting strategy = all strategies: test suites are generated for all sorting criteria.
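Conceptually, generation without a usage profile amounts to a random walk over the transition probabilities of the usage model, which is why the test case lengths follow a geometric-type distribution. The sketch below only illustrates this idea and is not the TestPlayer's implementation; the reduced toy model is a hypothetical simplification of the example:

```python
import random

def generate_test_case(model, start="[", end="]", rng=random):
    """Random walk from start to end; returns the sequence of fired events.

    `model` maps each state to a dict {(event, target_state): probability}.
    Illustrative sketch only, not the TestPlayer's algorithm.
    """
    state, events = start, []
    while state != end:
        pairs = list(model[state].keys())
        probs = list(model[state].values())
        event, state = rng.choices(pairs, weights=probs)[0]
        events.append(event)
    return events

# Hypothetical reduced model: after StartApp the user either presses
# Hello again (probability 0.5) or leaves via Bye (probability 0.5).
toy_model = {
    "[":   {("StartApp", "App"): 1.0},
    "App": {("ClickHello", "App"): 0.5, ("ClickBye", "]"): 0.5},
    "]":   {},
}

random.seed(0)
suite = [generate_test_case(toy_model) for _ in range(100)]
# Every test case starts the application and ends by pressing Bye.
assert all(tc[0] == "StartApp" and tc[-1] == "ClickBye" for tc in suite)
```

Because the walk leaves the App state with a fixed probability at each step, the number of ClickHello repetitions per test case is geometrically distributed.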

Pressing the Start button will execute the selected options and create the test cases in the directory test_cases:

The following diagrams show the results for the sorting strategy length, i.e. test cases are sorted by length (shortest first):

The statistics.txt file contains additional information about the test suite, such as the number of states (nodes) and state transitions of the usage model, the mean length of a test case in the test suite, or the number of test cases that cover all states of the usage model:
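The quantities listed in statistics.txt can be derived directly from the raw suite. The following sketch computes two of them, the mean test case length and the number of test cases needed to cover all states, for a small hypothetical suite (not actual TestPlayer output):

```python
# Hypothetical suite: each test case is the sequence of visited states.
suite = [
    ["[", "Clear", "]"],
    ["[", "Clear", "Hello", "MBT", "Clear", "]"],
    ["[", "Clear", "World", "]"],
]
states_in_model = {"[", "Clear", "Hello", "MBT", "World", "]"}

# Mean length of a test case (counted in visited states here).
mean_length = sum(len(tc) for tc in suite) / len(suite)

# Number of test cases, taken in order, until every state is covered.
covered, needed = set(), 0
for tc in suite:
    if covered >= states_in_model:
        break
    covered |= set(tc)
    needed += 1

print(f"mean test case length: {mean_length:.2f}")   # 4.33
print(f"test cases for state coverage: {needed}")    # 3
```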

The file test_cases_for_state_coverage.txt contains all the test cases that are needed to cover all states of the usage model, as well as some additional information, such as the date of generation:

The file test_cases_for_state_coverage.json.txt contains all the test cases that are needed to cover the states of the usage model in a compact JSON format (JavaScript Object Notation) for further processing of the test suite:
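A JSON representation is convenient for downstream tooling because it round-trips losslessly. The structure below is an assumption for illustration; the actual schema of test_cases_for_state_coverage.json.txt may differ:

```python
import json

# Hypothetical JSON structure for a state-coverage test suite.  The field
# names and layout are assumptions, not the TestPlayer's actual format.
suite_doc = {
    "model": "HelloMBTWorld",
    "coverage": "state",
    "test_cases": [
        {"id": 1, "events": ["StartApp", "ClickBye"]},
        {"id": 2, "events": ["StartApp", "ClickHello", "ClickMBT",
                             "ClickWorld", "ClickBye"]},
    ],
}

text = json.dumps(suite_doc, indent=2)   # serialize for further processing
parsed = json.loads(text)                # round-trip back into Python
assert parsed["test_cases"][1]["events"][-1] == "ClickBye"
```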

The diagrams directory contains further subdirectories for displaying the characteristic properties of the test suite:

  • analyze_testcases: various statistics for transition events and usage states
  • single_metrics: steady state distributions of usage states and visit frequencies
  • visualize_MCUM: XML files containing accumulated test cases for visualizing the state coverage of the usage model
  • visualize_testsuite: PNG files for visualizing the test suite by using various graph visualization methods

The next two diagrams show how to download the XML files for visualizing accumulated test cases as a compressed ZIP archive via the Filemanager:


After unpacking the ZIP archive, the XML files for visualizing the test cases can be loaded and viewed in the Model editor:


Afterwards, the XML files can be saved with the Model editor for further processing in various graphic formats:


Special metrics in the single_metrics directory can be used to analyze the characteristic properties of the test suite:

  • SSP: Comparison of the probability distribution of usage states in statistical equilibrium for the usage model and the relative frequencies of the corresponding usage states in the generated test suite. As you can see, the values for the individual usage states are matched very well.


  • SSV: Comparison of the average number of test cases required to visit a state, as predicted by the usage model and as observed during test execution, respectively. As you can see, the values for the individual usage states are matched very well.
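Behind the SSP metric stands the stationary (steady state) distribution of the Markov usage model, which can be computed by power iteration on the transition matrix and then compared with the relative visit frequencies observed in the generated suite. The two-state chain below is a hypothetical example, not the HelloMBTWorld model:

```python
import random

def stationary(P, iters=5000):
    """Stationary distribution of a row-stochastic matrix P (power iteration)."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

# Hypothetical 2-state chain (probabilities chosen for illustration).
P = [[0.9, 0.1],
     [0.5, 0.5]]
pi = stationary(P)
assert abs(pi[0] - 5/6) < 1e-6 and abs(pi[1] - 1/6) < 1e-6  # exact: (5/6, 1/6)

# Empirical visit frequencies from a long simulated walk should match the
# steady state probabilities closely -- the comparison the SSP plot makes.
random.seed(1)
state, visits, steps = 0, [0, 0], 200_000
for _ in range(steps):
    state = random.choices([0, 1], weights=P[state])[0]
    visits[state] += 1
freq = [v / steps for v in visits]
assert abs(freq[0] - pi[0]) < 0.01
```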


After the analysis of the test suite, the resulting test cases can be visualized. The edge labels show the triggering event and how often the edge is traversed during the test case. As a result, three test suites are displayed, generated according to the sort_l criterion: the 100 randomly generated test cases are first sorted by length, and then the shortest ones are selected until the desired coverage criterion is achieved.



  • Testsuite state coverage contains 4 (out of 100) test cases
The individual test cases are animated (emphasized by bold orange coloring) and show the current coverage of usage states as well as transitions between the states (represented by pale orange coloring). The number after the colon of the particular click event shows how often the state transition within the test case was executed.


  • Testsuite transition coverage contains 17 (out of 100) test cases
The animation and labeling follow the same conventions as for the state coverage test suite above.


  • Testsuite path coverage contains 57 (out of 100) test cases
The animation and labeling follow the same conventions as for the state coverage test suite above.
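The sort_l selection described above (sort the generated test cases by length, then keep the shortest ones that still add coverage) can be sketched as a small greedy filter. The suite and state names here are hypothetical, and this is not the TestPlayer's actual algorithm:

```python
def select_for_state_coverage(suite, all_states):
    """Pick test cases, shortest first, that each cover at least one new state."""
    chosen, covered = [], set()
    for tc in sorted(suite, key=len):
        new_states = set(tc) - covered
        if new_states:
            chosen.append(tc)
            covered |= new_states
        if covered >= all_states:
            break
    return chosen

# Hypothetical suite of four test cases (sequences of visited states).
suite = [
    ["[", "Clear", "]"],
    ["[", "Clear", "World", "]"],
    ["[", "Clear", "]"],                            # redundant duplicate
    ["[", "Clear", "Hello", "MBT", "Clear", "]"],
]
all_states = {"[", "Clear", "Hello", "MBT", "World", "]"}

picked = select_for_state_coverage(suite, all_states)
# Three of the four test cases suffice; the duplicate adds nothing new.
assert len(picked) == 3
assert picked.count(["[", "Clear", "]"]) == 1
```

The same filter, applied with transition pairs or whole paths instead of states as the coverage unit, yields the larger transition and path coverage suites.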

Publications:

  1. W. Dulz. A Comfortable TestPlayer© for Analyzing Statistical Usage Testing Strategies. Proceedings of the 6th ICSE/ACM Workshop on Automation of Software Test (AST '11), 2011.