Paper Review: Test Input Generation with Java PathFinder
The paper shows how model checking and symbolic execution can be used to generate test inputs that achieve structural coverage of code that manipulates complex data structures.
The key aspect of symbolic execution is the use of symbolic values instead of actual values.
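A rough sketch of what "symbolic values" buy you (hand-rolled for these notes, not JPF's actual machinery): instead of running a method on concrete ints, each branch on the symbolic inputs x and y splits the execution, and every feasible path accumulates a path condition over the symbolic names.

```java
import java.util.ArrayList;
import java.util.List;

public class PathConditions {
    // Sketch: enumerate the path conditions of
    //     if (x > y) { if (x > 0) A; else B; } else C;
    // by treating x and y as symbolic names and branching on each condition.
    public static List<String> explore() {
        List<String> pcs = new ArrayList<>();
        for (String first : new String[]{"x > y", "!(x > y)"}) {
            if (first.startsWith("!")) {        // else-branch: path to C
                pcs.add(first);
                continue;
            }
            for (String second : new String[]{"x > 0", "!(x > 0)"})
                pcs.add(first + " && " + second); // paths to A and B
        }
        return pcs;
    }

    public static void main(String[] args) {
        explore().forEach(System.out::println); // three feasible paths
    }
}
```

Each of the three strings characterizes one path through the method; a decision procedure (used in the real tool) would then check feasibility and produce witnesses.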
The case study is the red-black tree code in Java's TreeMap library.
We will show that although a program model checker (without relying on abstraction) cannot always achieve good code coverage when dealing with programs manipulating complex data, augmenting it with symbolic execution (which can be seen as a form of abstraction) can result in the generation of tests that will achieve high code coverage.
Whenever the environment is under-approximated during model checking (fewer behaviors are considered than are present in the actual environment), model checking becomes a form of testing.
To illustrate this idea, we show how one can test the Java TreeMap library by analyzing all sequences of put and remove calls on a set with at most N elements using the JPF model checker (Figure 7).
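My understanding of the driver, sketched in plain Java (explicit recursion stands in for JPF's nondeterministic choice, e.g. Verify.getInt; the key domain {0..N-1} and the sequence bound are my assumptions):

```java
import java.util.TreeMap;

public class PutRemoveDriver {
    static final int N = 2;   // keys drawn from {0..N-1}
    static int sequences = 0;

    // Enumerate every sequence of put/remove calls of length up to `depth`.
    // JPF would explore the same tree of call sequences via nondeterministic
    // choice inside one run; here we enumerate it explicitly.
    static void explore(TreeMap<Integer, Integer> map, int depth) {
        sequences++;                          // count each reached state
        if (depth == 0) return;
        for (int k = 0; k < N; k++) {
            TreeMap<Integer, Integer> afterPut = new TreeMap<>(map);
            afterPut.put(k, k);
            explore(afterPut, depth - 1);

            TreeMap<Integer, Integer> afterRemove = new TreeMap<>(map);
            afterRemove.remove(k);
            explore(afterRemove, depth - 1);
        }
    }

    public static int countSequences(int depth) {
        sequences = 0;
        explore(new TreeMap<>(), depth);
        return sequences;
    }

    public static void main(String[] args) {
        // 2N = 4 choices per step; depth 3 gives 1 + 4 + 16 + 64 = 85 prefixes
        System.out.println(countSequences(3));
    }
}
```

Note that unlike this sketch, JPF additionally matches states, so isomorphic trees reached by different call sequences are explored only once.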
In order to test the put and remove methods, we automatically generated all (non-isomorphic) input trees up to a given small size from the Java description of the method's precondition (i.e., the structural invariant).
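A simplified, Korat-style illustration of precondition-driven generation (my own sketch: it only counts binary-tree shapes, whereas the paper also enforces the red-black invariant as the precondition; distinct shapes are non-isomorphic by construction):

```java
public class TreeShapes {
    // Count every binary-tree shape with exactly n nodes by choosing how
    // many nodes go into the left subtree. This is the Catalan number C_n.
    public static int shapes(int n) {
        if (n == 0) return 1;                      // the empty tree
        int total = 0;
        for (int left = 0; left < n; left++)       // nodes in left subtree
            total += shapes(left) * shapes(n - 1 - left);
        return total;
    }

    // Total number of candidate input structures up to a given small size,
    // before filtering by the structural invariant (repOk).
    public static int upTo(int maxSize) {
        int total = 0;
        for (int n = 0; n <= maxSize; n++) total += shapes(n);
        return total;
    }

    public static void main(String[] args) {
        System.out.println(upTo(3)); // 1 + 1 + 2 + 5 shapes of sizes 0..3
    }
}
```

The point of generating from the invariant is exactly this pruning: only structures satisfying the precondition become test inputs, keeping the input set small even as size grows.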
What can you learn from this to make your research better? Choose the right case study / evaluation objective.
Details and Problems: From the presenters' point of view, what questions might the audience ask?
To generate inputs that meet a given testing criterion for a particular method under test, we model check the method.
Where does the “testing criterion” come from?
criterion: a set of properties the model checker should check for.
Counterexamples to the properties represent paths that satisfy the coverage criterion.
Here, the criterion is connected to the concept of "coverage": properties are used to encode coverage requirements.
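How I read the "coverage as property" trick, as a toy sketch (the method, branch, and bounded search are hypothetical; JPF's search replaces the brute-force loops): assert that a target branch is unreachable, and any counterexample the checker finds is precisely an input covering that branch.

```java
public class CoverageAsProperty {
    // Hypothetical method under test: we want an input covering the branch
    // guarded by (x > y && x % 2 == 0).
    static boolean target(int x, int y) {
        if (x > y && x % 2 == 0) return true;   // branch to be covered
        return false;
    }

    // The property handed to the checker is "the target branch never
    // executes"; a counterexample to it is a covering test input.
    public static int[] findCounterexample(int bound) {
        for (int x = -bound; x <= bound; x++)
            for (int y = -bound; y <= bound; y++)
                if (target(x, y))               // property violated here
                    return new int[]{x, y};
        return null;                            // property holds in bounds
    }

    public static void main(String[] args) {
        int[] cex = findCounterexample(3);
        System.out.println("covering input: x=" + cex[0] + ", y=" + cex[1]);
    }
}
```

One such property per coverage target (branch, statement, condition) turns the model checker's counterexample search into a test-input generator.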
Each path is represented by a set of constraints on the inputs.
Actual test inputs are concrete values that satisfy these constraints.
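A minimal sketch of that last step (my assumption: the paper uses a real decision procedure, whereas here the path constraint is a predicate and the "solver" is a bounded search):

```java
import java.util.function.BiPredicate;

public class ConstraintToTest {
    // A path constraint produced by symbolic execution, e.g. "x > y && x > 0",
    // encoded as a predicate over the two inputs.
    public static final BiPredicate<Integer, Integer> PATH =
        (x, y) -> x > y && x > 0;

    // Concretize: find real values satisfying the constraint by searching a
    // small domain (a stand-in for calling a constraint solver).
    public static int[] concretize(BiPredicate<Integer, Integer> pc, int bound) {
        for (int x = -bound; x <= bound; x++)
            for (int y = -bound; y <= bound; y++)
                if (pc.test(x, y)) return new int[]{x, y};
        return null;                              // constraint unsatisfiable in bounds
    }

    public static void main(String[] args) {
        int[] in = concretize(PATH, 2);
        System.out.println("test input: x=" + in[0] + ", y=" + in[1]);
    }
}
```

Running the method under test on the returned pair then exercises exactly the path whose constraint we solved.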
Similar to symbolic execution systems, model checkers have been used to find bugs in both the design and the implementation of software [10, 12, 19, 25, 29, 30]. These approaches often require a lot of manual effort to build test harnesses. However, the approaches are somewhat complementary to KLEE: the tests KLEE generates can be used to drive the model checked code, similar to the approach embraced by Java PathFinder [31, 37].
Also, I don’t understand how feedback-directed random testing (like Randoop) or concolic testing (like DART) is better than JPF.