Efficient QA through “Very Early Testing” – mgm’s Secret of Successful Early Testing

This entry is part of the series:
mgm’s Secret of Successful Early Testing

This two-part article describes the successful introduction of a proven quality assurance methodology on the development side, following the principle of “Very Early Testing”, in ERiC (ELSTER Rich Client), a project carried out by mgm technology partners together with the Bavarian Tax Administration (Bayerisches Landesamt für Steuern) and its teams. Within the framework of ELSTER [1], the German tax authorities provide the ERiC library to all software producers; it is embedded in all commercial and governmental software used to file tax reports. It validates, compresses and encrypts tax data for communication with the tax authorities. More than 100 million tax reports are filed via ERiC every year. Due to tax legislation, ERiC development must meet rigid requirements.
The article’s first part describes the very efficient QA method of “Very Early Testing”. The second part shows in detail how the method was introduced in the ERiC project.

“Very Early Testing” is a method aiming at efficient quality assurance for long-running, continuously evolving and growing business-critical solutions of medium or large size. Its basic idea is to shift the finding and eliminating of defects towards earlier phases of the development process, so that developers are not overloaded with extensive bug fixing in the stabilization phase of a project.

“Very Early Testing” can be implemented with many different development methodologies, agile as well as non-agile. What is crucial is working iteratively in short cycles. In an iterative development methodology such as ERiC’s, the development of a release is broken into several iterations. Each iteration produces a delivery in which some of the requirements are implemented and testable. QA tests the delivery during the following iteration, in parallel with the development of the next delivery. The final iteration, the QA phase, is used to stabilize the release; no features should be developed in that phase. To facilitate “Very Early Testing”, iterations have to be short, while each should still deliver a substantial amount of testable features.

The advantages of “Very Early Testing” are:

  • Defects are found earlier and are thus cheaper to repair. This is especially important when working with agile methods.
  • The state of the project is transparent at each delivery.
  • There is quicker feedback on the feasibility of features.
  • Integration and deployment tests occur early in the development cycle which makes hidden obstacles in these fields visible.
  • Towards the end of the project in the QA phase there is no unbearable amount of
    1. defect repair for the development team and
    2. test adaptation, bug reporting and retesting for the QA team.
Fig. 1: Life cycle of a feature test.

How to do “Very Early Testing” efficiently?

“Very Early Testing” requires and encourages close collaboration between the teams, as more deliveries of the current state of development have to be made than in classical approaches. For this reason, the development-related QA team is also organizationally embedded in the development project. To keep the overhead caused by these deliveries low, QA and development continuously coordinate the content and dates of the deliveries. In doing so, a suitable compromise has to be found between QA, which is eagerly waiting for something to test, and development, which wants to build complex features without having to deliver something testable in between.
Since the product is now tested after each delivery, and not only once at the end, testing must be efficient.

The lifecycle (Fig. 1) of a test for a new feature hence consists of

  • a full test of the feature after its completion
  • regression tests of the business relevant functionality after each iteration as soon as automation is completed
  • a manual regression test of the UI functionality at the end of the release
  • a usability test in the QA phase at the end of the release

We assume that user-friendliness has already received attention during the requirements phase and the design phase of the user interactions.

If defects are found in any test, the fix is usually retested only once after the repair.

How to keep the effort for QA at a constant level?

In long-running projects, a continuous flow of new requirements and changes leads to a constant increase in features, which in turn leads to a continuously growing number of regression tests; the complexity of the solution and of the test environment grows as well. To keep the effort per release at a constant level, QA tasks (e.g. maintenance, test execution and analysis of test results) must become more efficient over time. This requires ongoing improvement of the test infrastructure (test systems, test tools) and of the test process. The degree of automation of these QA measures has to be continuously increased to avoid ever more manual work, especially for regression tests. That is why the development-related QA team primarily consists of experienced software engineers. Hardly any test task is carried out manually any more.

Automated regression tests have to be designed first and foremost for efficient maintenance. Code duplication caused by independent test scripts covering similar scenarios has to be avoided. A useful strategy is to clearly separate test data from test procedures and to achieve path coverage through data variation. A disadvantage of this approach is that tests become more dependent on one another, which requires adapting tests immediately to new versions of the application under test; otherwise a considerable number of tests might not run. Once in a while, test code needs to be patched around defects in the application that cannot be fixed quickly.
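The separation of test data from test procedures can be sketched as a data-driven test: one generic procedure, with path coverage achieved by varying rows of data. The `validate_tax_report` function and its rules below are hypothetical stand-ins for illustration, not part of the actual ERiC API.

```python
def validate_tax_report(report: dict) -> list:
    """Toy validator: returns a list of error codes for a report."""
    errors = []
    if not report.get("tax_id"):
        errors.append("MISSING_TAX_ID")
    if report.get("year", 0) < 2000:
        errors.append("INVALID_YEAR")
    return errors

# Test data lives apart from the test procedure; covering a new path
# means adding a row, not writing a new script.
CASES = [
    ({"tax_id": "123", "year": 2024}, []),
    ({"tax_id": "",    "year": 2024}, ["MISSING_TAX_ID"]),
    ({"tax_id": "123", "year": 1999}, ["INVALID_YEAR"]),
    ({"tax_id": "",    "year": 1999}, ["MISSING_TAX_ID", "INVALID_YEAR"]),
]

def test_validation_paths():
    # One procedure iterates over all data variations.
    for report, expected in CASES:
        assert validate_tax_report(report) == expected
```

When a scenario changes, only the shared procedure or a data row is touched, which keeps maintenance effort low but also couples the tests to each other, as noted above.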

QA must analyze new requirements early on to study their impact on the overall test infrastructure and test design. This analysis leads to a concept for the efficient implementation of new tests and their integration into existing test sets.

  • Test cases are added to fill gaps in the test coverage according to a well-defined strategy for path and data coverage.
  • Coverage is defined on a functional level as seen from the user’s perspective.
  • Gaps in the coverage arise from new or changed requirements, or are found through the analysis of defects detected outside the QA measures, e.g. defects in production.
  • Test cases are not added for specific tests of defects found in production as this would lead to many specific test cases, which are difficult to maintain for large and evolving applications.
  • There is no 1:1 relation between new requirements, changes or defects and their tests. Instead, tests are related to the functions of the solution or product.

How to increase automation and improve the QA process wisely?

During the first iterations of each release, the effort for test execution is usually low. The resulting spare time is used to analyze new requirements and to increase, or at least maintain, the degree of automation of QA measures. The decision on which QA measure to automate is based on a comparison between the manual effort the measure requires per release and the effort to automate it. While it is very important to improve the degree of automation of each QA measure, it is also important to continuously improve the QA process as a whole:

  • For each QA measure per release and per iteration, a task is created in the task tracking system with a definition of the things to do and a due date relative to the duration of the release or the iteration.
  • At the beginning of a release all those tasks are copied from the last release and the due dates are adapted.
  • The person in charge of a task also has to check whether the task is still useful and whether it can be optimized. This can include, for example, improving or correcting the task’s specification.
  • At the end of each iteration and release, a “lessons learned” meeting is held in which the entire list of tasks is evaluated: dispensable tasks are dropped, automation of expensive tasks is discussed, and new tasks are added. Typical examples of new tasks are something that was forgotten in the last release, new test types due to new features, or newly discovered gaps in the test coverage.
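The release-rollover step described above — copying QA tasks from the last release and adapting their due dates relative to the new release — can be sketched as follows. The task fields are assumptions for illustration, not a real tracker’s schema.

```python
from datetime import date, timedelta

def roll_over_tasks(tasks, new_release_start):
    """Copy QA tasks into a new release, recomputing absolute due
    dates from each task's offset relative to the release start."""
    return [
        {
            "name": t["name"],
            "offset_days": t["offset_days"],
            "due": new_release_start + timedelta(days=t["offset_days"]),
        }
        for t in tasks
    ]

# Tasks from the previous release, with due dates stored as offsets
# (hypothetical examples):
previous = [
    {"name": "smoke test delivery 1", "offset_days": 14},
    {"name": "full regression run", "offset_days": 60},
]

# Copy them into a release starting 2015-04-01; due dates become
# 2015-04-15 and 2015-05-31.
new_tasks = roll_over_tasks(previous, date(2015, 4, 1))
```

Storing due dates as offsets keeps the task list reusable as a blueprint: only the release start date changes from release to release.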

The list of QA tasks in a release is very specific to the project. It is a blueprint of the QA concept of the respective project.
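The automate-or-not decision mentioned earlier — comparing the manual effort a measure costs per release with the effort to automate it — amounts to a simple break-even calculation. The numbers below are illustrative assumptions, not project data.

```python
def releases_to_break_even(manual_hours_per_release,
                           automation_hours,
                           maintenance_hours_per_release=0.0):
    """Number of releases after which automating a QA measure pays off."""
    saved_per_release = manual_hours_per_release - maintenance_hours_per_release
    if saved_per_release <= 0:
        # Maintaining the automation costs as much as doing it by hand:
        # automation never pays off.
        return float("inf")
    return automation_hours / saved_per_release

# Example: 8h manual effort per release, 40h to automate, 2h maintenance
# per release afterwards -> pays off after 40 / (8 - 2) ≈ 6.7 releases.
threshold = releases_to_break_even(8, 40, 2)
```

For a long-running project with many releases still ahead, even a break-even point several releases away usually justifies automating.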

This article was written together with Alexander von Helmersen of the Bavarian Tax Administration. It was first published in German in JavaMagazin, 2.2015.

Links and Literature

[1] ELSTER – The electronic tax return [Die elektronische Steuererklärung]: https://www.elster.de/ .

Martin Varendorff. A Practitioner’s Guide to Successful Software Testing, Part 1. Developers, Don’t Write Functional Tests!

Martin Varendorff. A Practitioner’s Guide to Successful Software Testing, Part 2. Why Functional Tests don’t belong in a Build Environment.
