Introduction

I was recently asked this question, and it still bugs me: even though, given my experience, I have a fairly good idea of what a good threat model should look like, there is not just one 'good' final representation. A threat model is more than its outcomes (for simplification: vulnerabilities and their mitigations).

To explain, I'll use an analogy with software development itself, hoping it will help the audience, especially those in software development. When creating software, the final result is typically an executable file. But an executable alone, as good as it can be, is far from enough: it is too short-lived and not very useful over a long, or even a medium, period. Executable software, except in rare cases of extra-simple firmware running on an electronic singing postcard or driving the lights of your Christmas tree, is part of a wider ecosystem made up of evolving software, hardware, infrastructure, changing business requirements, components becoming obsolete, and so on.

To handle this ever-faster, continuously changing environment, successful software development does not focus only on producing a good executable once; it focuses on building a delivery infrastructure that keeps delivering useful versions of it over time. For this, we have 'delivery pipelines' whose complexity often exceeds that of the compiled executable itself. The created software is a series of executables, but also documentation, support data, and compliance evidence.

In the same way, we need to clearly differentiate the process and steps of creating a threat model from its various outcomes. Without this strategic distinction, the threat model will hardly keep pace with the software it refers to, or remain useful for the whole software delivery process, of which threat modeling is, of course, an integrated part.
A threat model is also only as "good" as the use cases predefined for it are good, useful and successful. We can define the best theoretical approach and framework, based on true presuppositions, and still not obtain a useful or complete threat model given the resources available. The many threat modeling techniques that have proliferated often fail in this sense: they are based on correct assumptions, but in practice they tend to be ineffective given the limited resources applied, and limited resources are an ever-present constraint. Optimisation and efficiency are key to success.
When we ask ourselves "what should a good threat model look like?", we need a fairly good idea of both what the actions in the process of creating the threat model should look like and what its various actionable extractions or representations should look like.
Logically, we start with the steps, the actions of creating a threat model:
- Scope definition
- High-level security requirements/compliance level
- Level of abstraction
- Assets and dataflow enumeration (in or out of scope)
- Trust zones definition
- Captured assumptions (fed in from the analysis below)
- Analysis with STRIDE or another taxonomy (CIA, LINDDUN, brainstorming...)
- Flaws and their mitigations results (with all the necessary attributes: in place yes/no...)
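As a rough sketch, the steps above could be captured in a minimal, generic data model; all the class and field names here are hypothetical, not a prescribed schema:

```python
from dataclasses import dataclass, field

@dataclass
class Finding:
    """One flaw found during analysis, with its mitigation status."""
    id: str            # stable ID, e.g. "TM-001"
    category: str      # STRIDE class, e.g. "Spoofing"
    description: str
    mitigation: str
    in_place: bool     # mitigation implemented yes/no

@dataclass
class ThreatModel:
    """Container for the outputs of the execution steps."""
    scope: str
    abstraction_level: str                       # e.g. "C4 container"
    assets: list = field(default_factory=list)
    trust_zones: list = field(default_factory=list)
    assumptions: list = field(default_factory=list)
    findings: list = field(default_factory=list)

tm = ThreatModel(scope="Payment API", abstraction_level="C4 container")
tm.findings.append(Finding("TM-001", "Spoofing",
                           "Client tokens accepted without audience check",
                           "Validate the token audience claim",
                           in_place=False))
```

The point of keeping the model this plain is that every representation listed next can be generated from it mechanically.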
The items below are potential representations (or extractions, or artifacts) of the data defined in the execution steps of the threat model:
- Full report: a more readable version of the repository of the steps executed to define the threat model, for internal use and also containing all the confidential information.
- Unmitigated vulnerabilities: this is the treasure here, as it defines priorities in software development. The input comes from the analysis step; the extraction should take the form of tracked tickets (in Jira we could call them stories, bugs, security issues...). It should also take the form of a summary report, and be mergeable with other sources of risk management and security testing (static analysis results, pentests, software composition analysis, etc.).
- Non-confidential/distributable version of the TM, useful for compliance, open source, reputation and documentation (as in examples 1* and 2* below).
- Metrics, KPIs and tracking data (threat modeling practice maturity, completeness, remediations...)
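As a minimal sketch of the "unmitigated vulnerabilities" extraction above: filter the findings and shape them as tickets. The field names are illustrative, not any tracker's actual import schema:

```python
# Findings as produced by the analysis step (hypothetical data).
findings = [
    {"id": "TM-001", "category": "Spoofing", "mitigated": False,
     "description": "Client tokens accepted without audience check"},
    {"id": "TM-002", "category": "Tampering", "mitigated": True,
     "description": "Config file writable by all users"},
]

def to_tickets(findings):
    """Turn unmitigated findings into ticket-shaped records that a
    tracker import (e.g. a Jira CSV) could consume."""
    return [
        {"type": "Security Issue",
         "summary": f"[{f['id']}] {f['category']}: {f['description']}",
         "labels": ["threat-model", f["category"].lower()]}
        for f in findings if not f["mitigated"]
    ]

tickets = to_tickets(findings)  # only TM-001 becomes a ticket
```

Because the ticket carries the finding's stable ID, its remediation status can later be merged back with pentest or static-analysis results keyed the same way.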
On data structure
Representing the findings of the threat modeling scope definition and analysis steps with a simple and fairly generic data structure (tuples) that can be graphed is the strategy I consider the winner. But this is not the exact topic of this writing, and it deserves its own. In any case, from the examples you can figure out what the simple data structures involved are.
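For illustration, a minimal sketch of what such tuples could look like (component, flow and zone names are hypothetical). The same edge list doubles as a graph, e.g. emitted as Graphviz DOT:

```python
# Dataflows as (source, flow description, destination) tuples.
flows = [
    ("Browser",      "HTTPS: credentials", "Web Frontend"),
    ("Web Frontend", "SQL: user query",    "Database"),
    ("Web Frontend", "gRPC: audit event",  "Audit Service"),
]

# Trust-zone membership as (component, zone) tuples.
zones = [
    ("Browser",       "Internet"),
    ("Web Frontend",  "DMZ"),
    ("Database",      "Internal"),
    ("Audit Service", "DMZ"),
]

zone_of = dict(zones)

# Flows that cross a trust boundary deserve the closest analysis.
crossing = [(s, d) for s, f, d in flows if zone_of[s] != zone_of[d]]

# The same tuples, rendered as a Graphviz DOT graph.
dot = ("digraph TM {\n"
       + "".join(f'  "{s}" -> "{d}" [label="{f}"];\n' for s, f, d in flows)
       + "}")
```

Keeping everything in tuples means the diagram is generated from the data, never drawn and maintained by hand.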
Level of abstraction
This is a strategic initial step: getting it wrong will result in a failure of the secure design efforts. Threat modeling, like other development practices (e.g. testing), can go on indefinitely and infinitely... but of course the resources to implement the TM are never infinite. We know the good findings of threat modeling follow a Pareto distribution: most of the benefits are concentrated in the first part of the effort (the lower-hanging fruit). While we need to define a limit on the effort and resources, on the other hand we still need to define a criterion of completeness.

Precisely defining a level of abstraction (every abstraction is a simplification of the real system) is a way to transition from an endless exercise of finding "whatever can go wrong" to a finite exercise where we gain confidence that, at a precise level of abstraction, we apply a consistent analysis that will not overlook crucial security features or threats. Without this holistic approach, the risk is to apply a lot of effort to some security features while ignoring others, similar to fortifying a defensive wall of a medieval castle while leaving a gate or another passage accessible to attackers.

While predicting the total effort required is not a precise science, defining a level of abstraction and enumerating the systems, assets, data flows and zones of trust will give us a good approximation of the total effort and of our advancement. The level of abstraction also helps define which methodologies best suit the analysis. For example, a big data flow diagram that represents the whole of a company's data flows will be far more abstract than a single system developed as a set of micro-services. For multi-system analysis, an uber threat modeling framework like the PASTA approach would be more suitable.
For a software platform or service, the analysis should be performed at a more detailed level of abstraction, for example the C4-model convention at container or system level. The methodology of analysis, in this case, could follow a framework like STRIDE. The C4 container level is, by the way, the most common level of abstraction TM is applied to, at least in my experience, and the following examples follow that single-system level of abstraction. This single-system level of abstraction is usually close to the collective understanding of one (or a few) development teams working on creating a software product. A level of abstraction could also be more fine-grained; then we talk about abuse cases and in-story agile threat modeling. Combining different levels of analysis in a consistent framework, with consistent, or at least compatible, data structures and IDs, is a key feature for having a maintained and actionable TM that grows incrementally and consistently over time. Threat modeling is not a one-time heroic effort!
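A minimal sketch of one possible ID scheme linking the two levels just mentioned, where a fine-grained abuse case references its container-level parent threat (all IDs, titles and field names here are hypothetical):

```python
# Container-level threats keep stable IDs...
threats = {
    "TM-API-003": {"level": "container",
                   "title": "Token replay against the public API"},
}

# ...and finer-grained items (in-story abuse cases) reference them
# as a parent, so both levels stay navigable in one model.
abuse_cases = [
    {"id": "TM-API-003.1", "parent": "TM-API-003", "level": "story",
     "title": "Replay a captured bearer token within its lifetime"},
]

def children(parent_id, items):
    """All finer-grained items attached to a given parent threat."""
    return [i for i in items if i["parent"] == parent_id]
```

With compatible IDs, a story-level finding closed in a sprint automatically updates the status of the container-level threat it belongs to.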
More about this topic in https://blog.secpillars.com/2019/06/practical-threat-modeling-series-part-4.html
Examples

There are not many examples of threat models freely available, and even fewer complete with sensitive information and details about the analysis phase. The threat model examples listed below should be considered extractions from the whole threat model source. Nevertheless, they are useful for seeing their different styles while keeping the same logical structure: a scope section and vulnerabilities/mitigations. Procedural analysis is omitted in the examples, as they are public extraction artifacts. Some of the analysis guidelines can still be found here: https://github.com/TrustedFirmwareWebsite/website/blob/master/docs/TF-M_Generic_Threat_Model.pdf
1* - TrustedFirmware.org

TrustedFirmware.org is a reference implementation of secure firmware for ARM platforms, leveraging security features of the hardware architecture. The trusted firmware provides services to hypervisors (for virtualization), operating systems and processes. I personally facilitated the execution with the developer team in 2019. It has lately been made publicly available.
A threat model extraction can be found here:
License: provided under a BSD-3-Clause license (below)
2* - OAuth 2.0 Threat Model

Another valuable example freely available is the "OAuth 2.0 Threat Model and Security Considerations", available at: https://datatracker.ietf.org/doc/html/rfc6819
The other examples presented below are purely hypothetical, used for ease of comprehension without requiring specific system knowledge.
Continues in Part 2: https://blog.secpillars.com/2022/05/threat-modeling-what-good-should-look_26.html