Producing High-Quality Test Data

This part addresses the question of what makes test data valuable for functional tests. You will learn about the important concept of extreme and special values, and how to obtain test data that is highly compressed while still attaining high test coverage. The article also explains our novel idea for constructing a generator for such high-quality test data.

As explained in part 1 of this blog series, the overall challenge is to generate test data that comply with the complex constraints imposed by cross-field validation rules.

Positive Functional Tests

In order to assure, and possibly improve, their quality, software applications are usually subjected to dynamic functional tests, and form-centric applications are no exception to the rule. A functional test, as we understand it here, is an end-to-end test of an unmodified software application. Its purpose is to reduce the risk of software errors and to assure that the application’s functionality conforms to the specification.

In the simplest situation, a test driver feeds valid test data to the application, which is expected to work as specified. No error should occur. The aim of this kind of positive test is to demonstrate that the application is working correctly, as illustrated in the figure below.

Positive functional test of a form-centric application. The test driver feeds valid test data to the application. They pass the internal validator mechanism, and the application provides positive feedback to the test driver.

The question is: what kind of data is of high quality, i.e. is especially valuable for positive functional tests? This is what we are going to explore next.

Obtaining a High Test Coverage

When carrying out positive functional tests, we aim at putting the application under some sort of ‘pressure’. We want to make as sure as possible that the application works flawlessly and that all normal execution paths in the code are exercised. And if there are errors in the code, we want to maximize the likelihood that our tests uncover them.

Experience shows that it pays off to look out for challenging values for the fields in a form-based application, such as extreme or otherwise special values (ESVs). For a numeric field, the most important special value is 0, in view of the trouble this number causes when used as the denominator of a division. It also pays off to request very small values for some fields and very large ones for others, particularly when such numbers participate in additions or other numerical operations.

Let’s look at an example, namely a field representing an integer money amount of Euro without Cents. The first two (pseudo-) values that an ESV-generator will request for any field are a generic #filled and #empty, without actually specifying a value: we want each field to be filled in one data record and to be empty in another. Our field usually has a maximum length, say 5, thus the maximum and minimum integer values the ESV-generator will produce are 99999 and -9999 (the sign occupies one character). The most important special value is 0, and finally the generator will add the smallest positive and negative values, i.e. +1 and -1, to our set. Thus our seven-element ‘wish-list‘ for the Euro-amount becomes

#filled, #empty, 99999, -9999, 0, 1, -1

Of course, should the meta-information forbid the value 0, it will be dropped from our wish-list. Likewise, when negative values are not allowed, they will be dropped.
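The wish-list construction described above can be sketched in a few lines of Python. This is a simplified stand-in for illustration, not the actual ESV module of the R-TDG; the function name and parameters are assumptions:

```python
# Sketch of an ESV generator for an integer field, based on its maximum
# length and simple meta-information (hypothetical interface).

def integer_esvs(max_length, allow_zero=True, allow_negative=True):
    """Build the wish-list of extreme and special values (ESVs) for an
    integer field whose value may occupy at most max_length characters."""
    largest = 10 ** max_length - 1            # e.g. 99999 for length 5
    smallest = -(10 ** (max_length - 1) - 1)  # e.g. -9999 (sign uses one char)
    wish_list = ["#filled", "#empty", largest, smallest, 0, 1, -1]
    # Drop values that the field's meta-information forbids.
    if not allow_zero:
        wish_list.remove(0)
    if not allow_negative:
        wish_list = [v for v in wish_list if not (isinstance(v, int) and v < 0)]
    return wish_list

print(integer_esvs(5))
# ['#filled', '#empty', 99999, -9999, 0, 1, -1]
```

For a field that forbids negative values, the same call with `allow_negative=False` yields only the non-negative entries of the wish-list.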

For a field representing an amount of Euro with Cents, small amounts, such as 0,01 and -0,01 should be added to the wish-list.

When the field represents a simple yes-no-decision, we can even specify a complete wish-list of desirable values:

#filled, #empty, true, false

For a field representing a calendar date, we may want to put as many as 10 or 15 different special values onto our wish-list. And, in addition to the fixed values, we may want to add some random values to most of the wish-lists.
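A per-type dispatch for such wish-lists might look as follows. The boolean list matches the one given above; the date values are purely illustrative guesses at what an ESV module might pick (leap day, year boundaries), not the actual values used by the R-TDG:

```python
# Hedged sketch: wish-lists keyed by field type. The date entries are
# illustrative assumptions, not the real ESV catalogue.
import datetime

def wish_list(field_type):
    if field_type == "boolean":
        # For a yes-no decision the wish-list is already complete.
        return ["#filled", "#empty", True, False]
    if field_type == "date":
        return ["#filled", "#empty",
                datetime.date(2000, 2, 29),   # leap day
                datetime.date(1999, 12, 31),  # year boundary
                datetime.date(2000, 1, 1)]
    raise ValueError(f"no wish-list rule for type {field_type!r}")

print(wish_list("boolean"))
# ['#filled', '#empty', True, False]
```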

Obviously we need an ESV-generator which considers a field’s data type and other meta-information, and generates a list of ESVs for that field. Not surprisingly, our Rule-Based Test Data Generator (R-TDG), briefly introduced in part 1 of this blog series, comprises such a module.

Clearly, if each value in our wish-lists can be placed in at least one valid test data record, we shall achieve high test coverage when the data are used in functional tests. However, as we shall see in a moment, good test coverage is not our only requirement.

The Quest for High Compression

Suppose for the moment that we are able to generate test data in such a way that each test data record contains exactly one ESV, i.e. one value from our wish-lists. Assume further that the forms together contain 1000 fields, with an average of 5 ESVs per field. Then the number of data records required to assure the desired test coverage becomes as large as 5000. In reality, for reasons not explained here, the number of ESVs and thus the corresponding number of data records may be much larger. So many records simply cannot be used in a functional test of a form-centric web application, if only because of the substantial turn-around time of a single test case.

We therefore arrive at the conclusion that we need some form of “compression”, i.e. more than one value from our wish-list shall somehow be squeezed into a single test data record. Actually, we want as many values as possible from our wish-lists to appear in each data record in order to minimize the time-to-completion of the whole functional test.

At this point, at the latest, the cross-field constraints start to play a pivotal role: it is simply not possible to loop over all the fields and select an ESV from each wish-list independently of the values selected for the other fields.

Let us look at an example: suppose we have three fields A, B, and C of type Euro without Cent, and suppose that our wish-list of ESVs for each field comprises the three values -999, 0, and 9999. A naive test data generator would come up with 9 test data records. In each of these records, exactly one field would carry one of its ESVs:

-999 * *
0 * *
9999 * *
* -999 *
* 0 *
* 9999 *
* * -999
* * 0
* * 9999

The star indicates that we don’t care about the value of the field. It may or may not be one of our ESVs.
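The naive one-ESV-per-record strategy behind this table can be sketched as follows (a toy illustration using the field names and values from the example above):

```python
# Naive generation: one record per (field, ESV) pair, with '*' marking
# don't-care fields whose values we leave unspecified.

def naive_records(fields, esvs):
    records = []
    for i, _field in enumerate(fields):
        for value in esvs:
            record = ["*"] * len(fields)
            record[i] = value          # exactly one ESV per record
            records.append(record)
    return records

records = naive_records(["A", "B", "C"], [-999, 0, 9999])
print(len(records))  # 9 records, as in the table above
```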

The maximum compression we might be able to achieve is 3 ESVs per record:

-999 -999 -999
0 0 0
9999 9999 9999

However, in the presence of constraints, such a compression might not be achievable. Suppose there is a sum constraint C = A + B relating the three fields. Then the values for the fields cannot be selected independently of each other. What we can usually hope for is at most 2 ESVs per record. A compressed set of records might look like this:

-999 0 -999
9999 -9999 0
0 9999 9999
0 -999 -999

The full constraint-handling machinery of the R-TDG is capable of achieving the desired high compression, i.e. of delivering test data records that are both valid and densely populated with ESVs.
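To make the idea of compression concrete, here is a toy greedy sketch for the three-field example above: it packs as many still-uncovered ESVs as possible into each record while respecting the sum constraint C = A + B. This brute-force search is a stand-in for illustration only, not the R-TDG’s actual constraint-handling machinery:

```python
# Greedy compression under the constraint C = A + B: choose values for A and
# B (C is then fixed by the constraint) so that each record covers as many
# uncovered (field, ESV) pairs as possible.
from itertools import product

ESVS = [-999, 0, 9999]

def compress(esvs):
    uncovered = {(f, v) for f in range(3) for v in esvs}
    records = []
    while uncovered:
        best, best_gain = None, -1
        for a, b in product(esvs, repeat=2):
            rec = (a, b, a + b)   # the sum constraint determines C
            gain = sum((f, v) in uncovered for f, v in enumerate(rec))
            if gain > best_gain:
                best, best_gain = rec, gain
        records.append(best)
        uncovered -= set(enumerate(best))
    return records

for rec in compress(ESVS):
    print(rec)
```

Even this toy search covers all nine (field, ESV) pairs with only four valid records, mirroring the density of the compressed table above.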

Automated Test Data Generation

Let us briefly summarize what we have learnt so far about the problem of generating test data. The test data sets should be of high quality. By this, we mean:

  • high coverage of test cases, and also
  • high compression, as measured by the average density of extreme and special values (ESVs) in the data records.

How can the R-TDG fulfill these requirements and produce valuable test data for functional tests?

Constraint-based Test Data Generation

The key idea behind the R-TDG is a really simple one: produce the test data directly from the set of validation constraints. This idea is not particularly new. However, after decades of research, there are only very few solutions that work in practice, and these have a very limited range of applicability, see the figure below.

At mgm technology partners, we greatly benefit from the parallel development of the rule-based validation framework. This framework captures the validation logic of a form-centric application in a central rule-base, where individual constraints are associated with the validation rules (see right side of the figure below).

The traditional way of arriving at constraints (left side): the source code of the application is analyzed, and its conditional expressions are examined. A set of constraints is extracted, which forms the basis for the constraint solver inside the test data generator. The novel way of arriving at constraints (right side): the constraints are centrally collected in a rule base, from which a code generator produces the validator code for the application. The same validation rules are ingested by the constraint solver, which uses them to produce test data records.

The process of generating a test data record from constraints and the process of validating a data record are intimately related. Actually these processes are inverses of each other. What do I mean by that? First, consider the validation process as illustrated in the figure below.

Validation of a data record: The input (left) to the validator consists of the data record and the validation rules (bottom). The output (right) is a Boolean value: ‘valid’/true for a valid data set, and ‘invalid’/false (plus one or more error messages) for an invalid one.

In the validation framework, the rule base is compiled into executable validator code. When, in a positive test, a valid data record is being fed to the application, its validator will respond with “valid”. Conversely, when, in a negative test, an invalid data record is being fed in, the validator will respond with “not valid”. In addition, the application will usually return one or more error messages identifying the cause of the problem.
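The validator side of this picture can be sketched minimally as follows. The rule representation here, a list of (predicate, message) pairs, is an assumption for illustration and not the framework’s actual rule format:

```python
# Minimal validator sketch: a record either passes all rules ('valid'/True)
# or fails with one or more error messages ('invalid'/False).

RULES = [
    (lambda r: r["C"] == r["A"] + r["B"], "sum rule violated: C must equal A + B"),
    (lambda r: r["A"] >= -9999,           "A below the permitted minimum"),
]

def validate(record, rules=RULES):
    errors = [msg for check, msg in rules if not check(record)]
    return (not errors, errors)

print(validate({"A": -999, "B": 0, "C": -999}))  # (True, [])
print(validate({"A": 1, "B": 1, "C": 3}))        # invalid, with an error message
```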

Next, turn to the generation of test data (see the figure below). The same validation rules that are compiled into the validator are being interpreted by the Rule-Based Test Data Generator (R-TDG).

High-level view of the constraint-based generation of test data: The main input to the data generator consists of the same validation rules (top) that form the input to the validator. In addition there is auxiliary input (right) such as a Boolean value indicating whether the data set shall be valid or invalid, and if invalid, which error conditions are supposed to be violated. The output (left) of the data generator is a corresponding valid or invalid test data record.

When both valid and invalid records may be produced, we need a Boolean value stating whether a valid or an invalid data record is requested (see bottom of the figure above). When a valid record is requested, the R-TDG will produce one, if possible. When an invalid record is requested, the user will normally also specify a constraint to be violated. Again, if possible, the R-TDG will generate an invalid test data record that violates the specified constraint, and no other.
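The request interface just described can be sketched as follows. The function signature, the named rules, and the tiny brute-force search standing in for the real constraint solver are all assumptions for illustration:

```python
# Hypothetical sketch of the generator's request interface: ask for a valid
# record, or for an invalid one that violates exactly one named constraint.
from itertools import product

RULES = {
    "sum":      lambda r: r["C"] == r["A"] + r["B"],
    "positive": lambda r: r["A"] >= 0,
}
DOMAIN = range(-2, 3)  # tiny search space for the sketch

def generate(want_valid, violate=None):
    for a, b, c in product(DOMAIN, repeat=3):
        record = {"A": a, "B": b, "C": c}
        failed = {name for name, check in RULES.items() if not check(record)}
        if want_valid and not failed:
            return record                      # satisfies every rule
        if not want_valid and failed == {violate}:
            return record                      # violates only the named rule
    return None

print(generate(want_valid=True))
print(generate(want_valid=False, violate="sum"))
```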

Summary and Outlook

In this article I have discussed in some detail positive functional tests of form-centric applications. We have seen again that, in view of the complexity of the problem, severe time constraints, and particularly economic factors, it is necessary to generate the data sets for such tests automatically. In order to put the application under pressure, the test data records have to contain extreme or special values (ESVs) for all the fields on the forms. Due to the time limitations of functional tests, the test data sets have to be highly compressed, i.e. they should pack these ESVs into as few data records as possible. We have briefly discussed the general constraint-based approach used in the R-TDG for the automatic generation of test data. As we have seen, this approach actually inverts the process of data validation. How this inversion is accomplished will be the subject of another article in this blog series.
