Architecture Migration by Means of an Iterative Newbuild

Complex monolithic architectures are rarely intentional. As a rule they evolve historically, and the accumulation of legacy requirements increasingly slows down the teams that maintain them, often to the point where maintenance and further development become almost unjustifiably expensive. A hot-swap overhaul of the existing software architecture is therefore usually the only escape route from the maintenance trap and the only way to restore the system's efficiency.

Not even large-scale business applications start out as impenetrable monoliths. At the outset of a new development, applications are normally still manageable and easy to understand. Once the initially conceived architecture has been implemented and the application has gone live, an experienced team of skilled developers will therefore not need to devote much thought to software architecture issues. Instead, the team can keep an eye on the few initial dependencies without difficulty and adapt and expand the application effortlessly.

The strategy of maintaining an application in this way over a longer period does, however, involve risks. Since successful applications must constantly cater for new business and technical requirements, they will inevitably grow to a size and a level of complexity that, on the one hand, is no longer easy to grasp and, on the other, requires a larger team of developers.

This tendency is intensified by the lack of a prescribed target architecture. Unless care is taken from the outset of development to ensure that the software is strictly modularized, there will frequently be no clear interfaces for specific business functions. A similar situation arises if the architecture initially chosen is not adapted as soon as further requirements take shape and new functionality is instead merely accommodated in existing modules. That leads either to a surge in the size of individual business functions or to a proliferation of redundant code. Yet the essential countermeasures, restructuring the code, cutting functions more cleanly and establishing or adjusting the module boundaries, are often not undertaken because of time pressure, lack of expertise or fear of making alterations. In addition, the turnaround time between a code change and the point at which a developer can analyze its effects in a running environment can range from several hours to, in extreme cases, days.
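What a strict module cut with a clear interface can look like is sketched below in Python. The `billing` module and the `calculate_invoice` function are hypothetical illustrations, not taken from any real system: only the deliberately exported business function is part of the module's interface, while all internals remain private to it.

```python
# billing.py -- a hypothetical module cut along business lines:
# only the deliberately exported interface is visible to other modules.

__all__ = ["calculate_invoice"]

_TAX_RATE = 0.19  # internal detail, not part of the module's interface


def _net_total(items):
    # Internal helper; the underscore marks it as private to the module.
    return sum(items)


def calculate_invoice(items):
    """The single public business function of the billing module."""
    return round(_net_total(items) * (1 + _TAX_RATE), 2)
```

Because other modules can only depend on `calculate_invoice`, the internals can be restructured at any time without touching the rest of the application, which is exactly what a missing module cut prevents.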

If the interface has never been tailored to the specific application so as to keep the number of parameters passed to a minimum, the number of parameters that have to be taken into account when, say, launching a new feature can be substantial. Since unintended side effects cannot be ruled out, even minor adjustments may require the entire application to be considered and tested; testing all combinations of the parameters passed is no longer feasible. Another negative effect is that build processes for monolithic applications take a long time. That, at the latest, is when errors in the architecture or the design become apparent.
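The contrast between a grown, untailored interface and one cut down to a single use case can be sketched as follows. All names here are hypothetical, invented purely for illustration:

```python
from dataclasses import dataclass
from typing import List


# A grown, never-tailored entry point: every caller must understand, and
# every change must consider, parameters that are irrelevant to most uses.
def create_order_legacy(customer_id, items, currency, tax_mode,
                        export_mode, audit_level, retry_policy, locale,
                        legacy_flag_a, legacy_flag_b):
    raise NotImplementedError("stand-in for the grown monolith interface")


# A tailored alternative: a small, purpose-specific parameter object keeps
# the surface that has to be understood and tested to a minimum.
@dataclass
class NewOrder:
    customer_id: str
    items: List[str]


def create_order(order: NewOrder) -> str:
    """Hypothetical narrow interface for exactly one use case."""
    return f"order for {order.customer_id} with {len(order.items)} item(s)"
```

With two fields instead of ten parameters, the combinations that a test suite has to cover shrink dramatically, and a new feature only needs to extend `NewOrder` rather than every call site.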

The fact that, in complex monolithic structures, individual developers can for the most part no longer recognize and understand what the system does, and have to wait hours or days for feedback on whether their adaptations work, inevitably leads to frustration in the development teams. Yet, in my experience, the developers who work on the system are mostly not the ones who realize that work on the software is not running smoothly. Developers working on large systems tend to believe that complexity and sluggishness are normal in projects of this kind and must be accepted as such.

The impulse to question the status quo of the software architecture is more likely to come from the business department or from the management that provides the software development budget. Because every adaptation is so time-consuming, the development team's output inevitably drops. The justified question of why the developers are delivering fewer and fewer requirements while costing the same as before is thus the most frequent trigger for a review of the architecture.

Escape Routes from the Maintenance Trap: Big Bang or Iterative Newbuild

If a large-scale business application is caught in the maintenance trap described above, restructuring the legacy system in place is unlikely to be possible. A newbuild is as a rule the only way to introduce a new software architecture that makes efficient development work possible once more. In principle there are two ways to embark on a newbuild of this kind. The first is to start from scratch and build the new system as a "greenfield development" alongside the legacy system. Once the new application has reached an acceptable state, the legacy system is switched off at a clearly defined point in time and the new system is taken into service at the same moment in a Big Bang operation.

The advantage of this kind of newbuild is that it can be implemented in a relatively short time, because interim solutions are largely dispensed with and the costs appear calculable. It does, however, involve a risk: only once the application has gone live will it finally be clear whether it is actually suitable for operational use. Furthermore, the requirements can change abruptly while the new system is being developed, forcing the team working on the newbuild to react even before going live. They end up chasing the legacy system, as it were, and the supposed calculability of the Big Bang in terms of time and budget proves illusory.

The more promising approach is therefore, in my opinion, an iterative newbuild of the software. This first requires a definition of the target architecture in both business and technical terms. On this basis, modules can then be specified that, apart from a few purely technical background components such as process control, are cut primarily along business lines. Once the modules have been separated from each other, the development team can start to redevelop the first component. When it is ready to go live, the corresponding functionality in the legacy system is switched off and queries are relayed to the new module from then on. The new components and the legacy system run in parallel for a while and must interact seamlessly during this phase. As the remaining modules are gradually developed and brought into service, the share of the legacy system shrinks until it is replaced completely.
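The relay step in this transitional phase can be sketched as a thin routing layer in front of both systems. The function names and the `billing` example below are hypothetical, chosen only to illustrate the mechanism:

```python
# Minimal sketch of a routing layer for the transitional phase:
# requests for already-migrated business functions go to the new module,
# everything else is still relayed to the legacy monolith.

def legacy_handle(request: str) -> str:
    """Stand-in for the legacy system's entry point."""
    return f"legacy:{request}"


def new_billing_handle(request: str) -> str:
    """Stand-in for the first redeveloped module."""
    return f"new:{request}"


# Registry of business functions that have already been cut out and rebuilt;
# it grows with every module that goes live.
MIGRATED = {
    "billing": new_billing_handle,
}


def route(function: str, request: str) -> str:
    """Relay to the new module if one exists, otherwise to the monolith."""
    handler = MIGRATED.get(function, legacy_handle)
    return handler(request)
```

Switching a further function over then amounts to a single entry in `MIGRATED`, which is what makes each individual cutover small and reversible.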

In this scenario the two systems must run in parallel during the transitional phase, and this is frequently cited as an argument against an iterative newbuild: resources, it is argued, are allocated unnecessarily to a legacy system whose end is foreseeable. A further criticism is that a step-by-step system modernization takes longer than a Big Bang and that its conclusion is not always calculable.

This criticism is not entirely unfounded. Developing new modules step by step unquestionably takes longer, and the initial investment in this model is at first glance higher, because the parts of the legacy system that are still in use must continue to be maintained during the transitional phase. Functions are also developed for parallel operation that will later no longer be needed. But dividing the newbuild into many small steps reduces the complexity of the system modernization. Instead of having to hope, as with the Big Bang, that the switchover from the old to the new system will run smoothly, the risk is distributed across many Small Bangs, whose possible repercussions on the application and the business processes remain manageable.


Replacing an existing software architecture with a new one in a hot swap is an extremely complex undertaking. An iterative approach may initially require a heavier investment of time and money, but dividing the overall project into many manageable steps turns the largely incalculable risks of the Big Bang process into manageable ones. In this way the iterative newbuild offers an escape route from the maintenance trap.

Image source: Fotolia – tektur