In threat modeling, various methods can be used to obtain an overall picture of an application’s vulnerabilities and the corresponding mitigation measures. Almost all available methods build on the fact that the design of a digital system begins with its architecture. This usually encompasses all known components within an application or IT system, how they are interconnected, and where trust boundaries lie. Early architectural decisions can therefore have a major impact.
Short & concise
- No single architecture is infallible from a threat modeling perspective.
- Organizations should familiarize themselves with the different architectural models to determine which is best suited for their business and security needs.
- Within certain architectures, modeling techniques such as PASTA and gamification can be combined with each other.
In short, deciding which architecture to use is critical to subsequent decisions about threats, their relative potential impact, and the countermeasures to be taken against them. In addition, the overall architecture of a system plays a role in deciding who takes responsibility for detected threats or defenses against vulnerabilities, and at what point in the system.
From an enterprise security planning perspective, it is important not only to model threats, but also to analyze them within an appropriate architectural structure. Therefore, when designing a new application, architectural decisions should be made considering their impact on threat modeling and analysis.
The Basic Question of Architecture: Zero Trust, Serverless or MACH
As mentioned earlier, organizations must make a decision when choosing an application architecture. In order to analyze which is the best from a threat modeling perspective, it is important to understand how an initial decision can impact later development. Below, we examine three common architectures to see how they can lead to threat modeling issues and where they offer advantages.
Zero Trust is a cybersecurity concept known for its strategic qualities. It implies that validation is required at virtually every stage of a digital process. Although it aims to eliminate implicit trust from systems, it can also be viewed from the other end: namely, as rethinking the way explicit trust is handled in a system.
Put simply, it’s about emphasizing mutual authentication between entities. Whenever one part of an application interacts with another in a zero-trust architecture, validation of input and throughput is expected. It provides an overall higher level of security, but does not distinguish between internal trusted elements and external untrusted elements.
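The principle of validating every interaction can be illustrated with a small sketch. The service names, the shared-secret token scheme, and the payload check below are all hypothetical simplifications; real zero-trust deployments typically rely on mTLS, signed tokens such as JWTs, and policy engines rather than a hand-rolled HMAC.

```python
import hashlib
import hmac

# Hypothetical shared secret for the sketch; production systems would use
# mTLS certificates or short-lived signed tokens instead.
SHARED_KEY = b"demo-key"

def sign(caller: str) -> str:
    """Issue a token the receiving service can later verify."""
    return hmac.new(SHARED_KEY, caller.encode(), hashlib.sha256).hexdigest()

def handle_request(caller: str, token: str, payload: dict) -> dict:
    """Zero trust in miniature: authenticate the caller AND validate the
    input on every call, even when the request originates from an
    'internal' service inside the network perimeter."""
    if not hmac.compare_digest(sign(caller), token):
        raise PermissionError(f"caller {caller!r} failed authentication")
    amount = payload.get("amount")
    if not isinstance(amount, int) or amount <= 0:
        raise ValueError("invalid payload")
    return {"status": "accepted", "amount": amount}

# An internal caller with a valid token and valid input succeeds ...
print(handle_request("billing", sign("billing"), {"amount": 5}))
# ... while a caller without a valid token is rejected, internal or not.
```

The point of the sketch is that `handle_request` never branches on *where* the call comes from, only on whether authentication and input validation succeed.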
For one thing, this type of architecture provides a high level of enterprise security. For another, it hinders potential lateral movement by attackers attempting to circumvent the countermeasures they encounter. It is also a proven approach against internal attacks, since insiders such as employees and other LAN users encounter the same countermeasures as outsiders. In addition, this architecture makes it relatively easy to migrate applications from the enterprise environment to the cloud.
The downside is that zero-trust architectures require more attention to security. They may even appear to be driven almost exclusively by security protocols and process checks. This, in turn, can lead to applications based on such an architecture running slowly, not being user-friendly, and being more expensive to both develop and maintain.
The second architectural approach widely used today is often referred to as serverless. In fact, it is a kind of industry buzzword that means a cloud-oriented application architecture. Of course, such an approach also relies on servers, just third-party servers via the cloud rather than on-premises servers in the enterprise.
Despite this somewhat confusing name, serverless architectures allow companies to eliminate layers in their applications. For example, databases, the operating system, runtime environments and file stores can all be managed in a single infrastructure, reducing the number of potential access points or vulnerabilities. This is because the cloud provider typically takes responsibility for the infrastructure and the execution of security processes.
One of the biggest advantages of a serverless architecture is that only the application layer of the system needs to be managed directly by the business in question. This sharpens the focus on business logic. Why? Because companies that lack the in-house technical capabilities to manage threats can effectively outsource them, shrinking their technical footprint.
On the other hand, when a serverless architecture is chosen, tasks must be delegated to third parties. Depending on the application, this can lead to legal risk mitigation issues, such as storing data on a server that is outside the organization’s jurisdiction or contracts. Despite the potential lock-in to a vendor, serverless architectures offer organizations a good way to balance their security and threat modeling requirements with some level of ceded control.
A set of technology principles behind many of the latest technology platforms available today is known as MACH. The acronym stands for microservices-based, API-first, cloud-native (or SaaS) and headless. Many companies choose this approach because they don’t have to reinvent the wheel every time they develop an application.
MACH is known as a best-of-breed technology platform and emphasizes scalability and flexibility. Many elements can be transferred from one application to another if they perform similar tasks. This architectural approach emphasizes the multi-channel communications that many organizations need, as well as reusability. It is therefore worth considering when many security requirements need to be addressed, as much of the architectural approach allows for repetition.
One of the advantages of MACH architectures from a threat modeling perspective is that they allow a high degree of control. This is achieved primarily through a modular approach and a strong focus on systems with secure and reusable APIs.
MACH architectural approaches also have some drawbacks that need to be considered. For example, this approach can lead to a conflict between agility and technical requirements. In addition, there may be a large number of stakeholders in MACH-based architectures, some of whom may perceive the system’s vulnerabilities differently. This may lead some to want to handle the backend microservices in a different architectural environment, even if MACH is used for the main system architecture. In this case, it is common to implement Zero Trust for such services.
Threat architecture and modeling
Although zero-trust architectures offer a higher level of security overall, threat modeling for them tends to be more expensive and time-consuming. Serverless architecture, in turn, offers organizations a good opportunity to balance their security and threat modeling needs against relinquishing some control. MACH, finally, offers a mix of opportunities and risks. As a result, each of the architectures described above will yield different results when used for threat modeling.
It helps here that there are many different approaches to threat modeling. Thus, despite some advantages and disadvantages of each architectural approach, threat modelers can apply specific techniques to enhance the positive aspects of each approach and mitigate the most negative aspects.
What threat modeling techniques can be used for application architecture?
Given the wide range of choices available for enterprise security threat modeling, it is worth briefly reviewing some of the methods in use today.
PASTA is an acronym that stands for “Process for Attack Simulation and Threat Analysis.” It is a seven-step process that takes a holistic approach to analyzing threats and mitigating them with countermeasures. It can work with any of the above architectural approaches and often delivers comprehensive results. However, PASTA can be time-consuming to implement.
For more details on the seven steps of the process, see the first Threat Modeling article.
Persona Non Grata
This threat modeling method, also known as PnG, asks system developers to look at threats from the attacker’s perspective. Modelers not only form an idea of who might attack an application but build an entire persona around them – much like certain marketing efforts to better understand customers.
This approach can help identify threats from a human perspective and is a good start to threat modeling. One drawback is that it is nowhere near as comprehensive as other methods and can hardly be considered complete.
The Operationally Critical Threat, Asset, and Vulnerability Evaluation method is commonly referred to simply as OCTAVE. This approach focuses on organizational infrastructure and processes; technical threats are therefore less of a focus. Modeling with OCTAVE typically involves threat assessments based on the damage an organization would suffer if its assets were lost or stolen.
This can mean, for example, identifying vulnerabilities in the infrastructure and taking countermeasures such as secure backups. One advantage of OCTAVE is that it gives decision makers a comprehensive view of their systems. A disadvantage, however, is that this method can miss some technical details within a given architecture.
The “Elevation of Privilege” card game
The Elevation of Privilege card game is a threat modeling approach developed by Microsoft. As the name implies, it uses a game – in this case, a deck of cards – to identify threats based on a worst-case scenario. Modelers play the cards and see which threat is most problematic and which is not. Playful threat modeling can distract modelers from their real goal. It can also lead to incomplete results. On the positive side, however, this threat modeling technique can be fun. It is also easy to learn and provides a good introduction to threat modeling.
STRIDE was originally invented by Microsoft and has become a de facto standard adopted by various tools and methodologies, including OWASP’s threat modeling tool Threat Dragon.
It is an acronym for the threats it covers: Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, and Elevation of Privilege. Risks are mapped to one or more of these threats. For evaluating the identified threats, there is the corresponding DREAD model, which in turn is an acronym for the severity dimensions: Damage, Reproducibility, Exploitability, Affected Users, and Discoverability. However, STRIDE can also be combined with other severity ratings.
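A common convention is to rate each DREAD dimension on a numeric scale and average the five values into a single severity score. The sketch below assumes a 1–10 scale and two hypothetical, STRIDE-classified threats for a web application; the scale and the example threats are illustrative, not prescribed by either model.

```python
from statistics import mean

def dread_score(damage: int, reproducibility: int, exploitability: int,
                affected_users: int, discoverability: int) -> float:
    """Average the five DREAD dimensions into one severity score.

    Assumes each dimension is rated 1 (low) to 10 (high); the scale
    itself is a convention, not fixed by the DREAD model."""
    return mean([damage, reproducibility, exploitability,
                 affected_users, discoverability])

# Hypothetical threats, each mapped to a STRIDE category.
threats = {
    "SQL injection in login form (Tampering)": dread_score(9, 8, 7, 9, 6),
    "Verbose error pages (Information Disclosure)": dread_score(3, 9, 5, 4, 8),
}

# Rank threats by severity, highest first.
for name, score in sorted(threats.items(), key=lambda kv: -kv[1]):
    print(f"{score:.1f}  {name}")
```

Averaging is only one possible aggregation; teams sometimes weight the dimensions differently, which is consistent with the note that STRIDE can be combined with other severity ratings.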
LINDDUN is an acronym for the threat categories in scope: linkability, identifiability, non-repudiation, detectability, disclosure of information, unawareness, and non-compliance. The process consists of three main steps:
- Modeling the system in scope. This involves making the scope and data flow transparent.
- Identification of threats using threat trees, a variant of the well-known attack trees. Threats are mapped to the system’s data flow.
- Finally, a prioritization function is applied to manage and address threats and perform remediation actions. The taxonomy used is not specified by LINDDUN.
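Since LINDDUN leaves the prioritization taxonomy open, one simple choice is a likelihood-times-impact ranking. The sketch below assumes 1–5 scales and invented example threats; both the scoring function and the examples are illustrative placeholders, not part of the LINDDUN specification.

```python
from dataclasses import dataclass

@dataclass
class PrivacyThreat:
    description: str
    category: str    # one of the LINDDUN threat categories
    likelihood: int  # assumed scale: 1 (rare) .. 5 (almost certain)
    impact: int      # assumed scale: 1 (negligible) .. 5 (severe)

    @property
    def risk(self) -> int:
        # One possible prioritization function; LINDDUN does not
        # prescribe a particular taxonomy or formula.
        return self.likelihood * self.impact

# Hypothetical privacy threats identified against a system's data flow.
threats = [
    PrivacyThreat("Session IDs allow cross-site tracking", "Linkability", 4, 3),
    PrivacyThreat("Logs retain raw IP addresses", "Identifiability", 5, 4),
    PrivacyThreat("Users are not told about analytics", "Unawareness", 3, 2),
]

# Address the highest-risk threats first.
for t in sorted(threats, key=lambda t: t.risk, reverse=True):
    print(f"risk={t.risk:2d}  [{t.category}] {t.description}")
```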
At a glance: Threat Modeling, Architecture and Techniques
To summarize, architecture enables a systematic way of thinking about application development and IT system security. However, different architectural approaches influence how threat modeling is performed down the road.
Architectural approaches do not in themselves represent an enterprise solution. Rather, they should be evaluated and assessed on a case-by-case basis for different business models. Finally, threat modeling techniques can be applied within different architectures, sometimes even using a layered approach, to create customized solutions that best fit a variety of business and operational priorities.
Contact the experts
Do you have questions about mgm’s Threat Modeling expertise? Contact us via email, call us or use our special contact form.