Risk management in the age of AI: Why test management is becoming a key discipline


Artificial intelligence is changing the rules of the game in IT projects. AI components are increasingly being integrated into applications to automate processes, make predictions or support decision-making. At the same time, AI is finding its way into development itself: generative models produce code, thereby accelerating the entire development process.

Both application scenarios open up enormous potential, but also entail new risks. This is particularly true for companies in the public sector and regulated industries: anyone who wants to use AI successfully in software projects must understand and manage the risks during development and before go-live. This is the only way to ensure compliance, trust and business security.

AI as a risk multiplier

The risks associated with AI differ fundamentally from classic software errors. First, black-box effects arise because decisions are often difficult to trace or validate. For example, an AI-based credit scoring system may reject certain customer groups without the underlying weighting being explainable. Second, the models themselves are dynamic: through continuous learning processes, AI systems change their behaviour and, consequently, their response patterns – often in unpredictable ways.

The use of AI in development brings its own challenges. Automatically generated code can introduce security vulnerabilities such as remote code execution, or undermine quality standards – for example by relying on outdated libraries – if it is not systematically checked. What initially promises speed can, without clear controls, multiply risk instead.
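A deliberately simple sketch illustrates this pattern. The functions below are invented for this example and not taken from any real incident; they show how plausibly generated code can slip from convenience into a remote-code-execution risk, and what the hardened variant looks like:

```python
import ast
import subprocess

def archive_logs_unsafe(filename: str) -> None:
    # Risky pattern often seen in generated code: building a shell command
    # from input. A filename like "logs; rm -rf ~" would also execute the
    # second command – a path to remote code execution.
    subprocess.run(f"tar -czf backup.tar.gz {filename}", shell=True, check=True)

def archive_logs_safe(filename: str) -> None:
    # Safer variant: arguments passed as a list are never interpreted
    # by a shell, so the filename cannot inject commands.
    subprocess.run(["tar", "-czf", "backup.tar.gz", filename], check=True)

def parse_config_unsafe(text: str):
    # eval() executes arbitrary expressions,
    # e.g. "__import__('os').system('...')".
    return eval(text)

def parse_config_safe(text: str):
    # ast.literal_eval accepts only Python literals and rejects code.
    return ast.literal_eval(text)
```

Exactly this kind of difference is invisible in a quick review of generated code – which is why systematic checks, not spot checks, are needed.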

At the same time, regulatory pressure has been growing since the EU AI Act came into force, with its obligations being phased in gradually. In addition, the European supervisory authority EIOPA has issued a statement on AI governance and risk management in the insurance sector. BaFin is likewise expected, on the basis of its ongoing analyses, to sharpen its stance on the use of AI in the financial sector, including model validation, documentation and risk control.

This makes AI a risk multiplier in the development process. Those who use it uncritically increase the probability and scope of potential damage in an uncontrolled manner. Its added value can only be realised if structures and methods make these risks controllable – and this is exactly where modern test management comes in.

From classic to risk-oriented test management

Test management has long been one of the established disciplines in software development. While classic testing focuses primarily on completeness and (non-)functional correctness, test management has traditionally concentrated on planning, controlling and reporting these activities in a comprehensible manner. However, this approach is no longer sufficient for the use of AI.

Traditional test management is based on stable requirements and verifiable functions – it works in environments where behaviour can be reproduced deterministically. AI systems, however, follow different rules: their results can differ even with the same inputs, models change as a result of retraining or new data, and the internal logic remains opaque. Added to this are new risk dimensions such as distortions due to bias and regulatory requirements for traceability. Under these conditions, test management that is purely plan- and documentation-driven reaches its limits.
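What this means for testing can be sketched in a few lines. The example below is purely illustrative – `score_customer` stands in for any call to an AI model and is stubbed here with random noise, and the thresholds are invented – but it shows the shift from asserting one exact value to checking that results stay within an agreed corridor:

```python
import random
import statistics

def score_customer(customer_id: int) -> float:
    # Stub: a real system would call the deployed model here.
    # The noise simulates non-deterministic model behaviour.
    return 0.7 + random.gauss(0, 0.02)

def test_score_is_stable_within_tolerance():
    # Instead of asserting one exact output, sample repeatedly and check
    # that the results stay inside an agreed corridor.
    samples = [score_customer(42) for _ in range(100)]
    assert 0.6 <= statistics.mean(samples) <= 0.8
    assert statistics.stdev(samples) < 0.05  # variance/drift guardrail

if __name__ == "__main__":
    test_score_is_stable_within_tolerance()
    print("tolerance test passed")
```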

The consequence: test management and testing are evolving into a clearly risk-oriented discipline. Although risk-based testing is a well-known tool, it is gaining new strategic importance in view of the uncertainties associated with AI. However, it is crucial that this risk-based approach does not remain isolated in testing, but is anchored in planning, control and reporting via test management. More than ever, it is important to identify risks at an early stage and to prioritise according to criticality: a chatbot in customer service that provides inaccurate answers may cause annoyance, but it is manageable – more serious would be an AI module in an insurance application that calculates incorrect risk profiles, resulting in financial or regulatory damage.
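A minimal sketch makes this prioritisation logic tangible. It assumes a simple probability-times-impact scoring model on a 1–5 scale; the components and numbers are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Component:
    name: str
    failure_probability: int  # 1 (rare) .. 5 (frequent)
    impact: int               # 1 (annoyance) .. 5 (regulatory/financial damage)

    @property
    def risk_score(self) -> int:
        # Classic risk-based-testing heuristic: probability times impact.
        return self.failure_probability * self.impact

components = [
    Component("customer-service chatbot", failure_probability=4, impact=2),
    Component("insurance risk-profile module", failure_probability=2, impact=5),
    Component("report export", failure_probability=1, impact=1),
]

# Highest risk first: this ordering drives where test depth is invested.
for c in sorted(components, key=lambda c: c.risk_score, reverse=True):
    print(f"{c.name}: score {c.risk_score}")
# The insurance module (score 10) outranks the chatbot (score 8),
# matching the prioritisation argued for above.
```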

It is crucial to achieve the highest level of test depth where potential errors have the most serious impact – whether on business processes, user confidence or technical innovations. Test management is thus evolving into a strategic risk management tool – and is becoming a key discipline in the age of AI.

Test management tools as a success factor

Without methodological support, risk-oriented test management quickly becomes piecemeal. In complex projects, it is nearly impossible to comprehensively record risks manually, evaluate them on an ongoing basis, and consistently integrate the results into the test process. Specialised tools play a key role here: they create transparency, establish connections between risks and business applications, and provide decision-makers with a reliable basis for prioritisation.

One tool that consistently anchors risk management in the testing process is Q12-TMT. It offers a multi-layered, algorithm-based analysis that brings together risks from different perspectives and aggregates them into an overall picture. This creates a precise view of the risk landscape that goes far beyond individual, experience-based assessments. The analysis is automatically updated throughout the development cycle as new data becomes available, so that decisions always rest on a reliable, up-to-date basis.
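To illustrate the general idea of such an aggregation – explicitly not Q12-TMT's actual algorithm; the perspectives, ratings and weights here are invented – consider a simple weighted combination of risk ratings from several viewpoints:

```python
# Generic illustration of aggregating risk ratings from several
# perspectives into one comparable score. NOT Q12-TMT's algorithm.
ratings = {
    "payment workflow": {"business": 5, "technical": 3, "compliance": 4},
    "user profile page": {"business": 2, "technical": 2, "compliance": 1},
}
weights = {"business": 0.5, "technical": 0.3, "compliance": 0.2}

def aggregate(perspective_ratings: dict[str, int]) -> float:
    # A weighted average turns several partial views into one score.
    return sum(weights[p] * r for p, r in perspective_ratings.items())

for area, r in sorted(ratings.items(), key=lambda kv: aggregate(kv[1]), reverse=True):
    print(f"{area}: aggregated risk {aggregate(r):.1f}")
```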

For decision-makers, this means:

  • Cost savings through early defect detection,
  • verifiable compliance thanks to audit-proof documentation and transparent prioritisation,
  • targeted use of resources, because testing efforts are concentrated where they offer the greatest protection.

This turns test management into a scalable, strategic tool that takes quality and security to a new level, especially in AI projects.

Conclusion: Anyone who uses AI needs risk management in test management

Artificial intelligence is both an opportunity and a risk. Companies that want to turn it into a competitive advantage in software development projects must consistently anchor their risk management in test management. Only with methodological strength and the right tools, such as Q12-TMT, can speed, innovation and security be brought into alignment.

Let’s talk about how we can make your test management ready for AI.

Maria Kramer
Maria Kramer works at mgm technology partners as a project manager and test manager. Her focus in IT is on quality assurance during development and acceptance testing in classic and agile projects. With more than ten years of experience in software quality assurance, she manages manual quality assurance as a member of the cross-project quality team and is responsible for the strategic product development of the mgm Q12-TMT test management tool.