
Practical Threat Modeling series, part 6 - STRIDE vulnerabilities enumeration

We saw in part 3 what STRIDE is; now we are going to apply a two-step process to list the vulnerabilities for the in-scope assets described in part 5. Keep in mind that the threat model should be useful both to the development team, to perform security analysis and understand what can go wrong, and to other stakeholders, to document and raise awareness of the security status, rationale and risks of the system.
For our methodology example, the system is a simple e-commerce website with a database containing products and user/customer identities, the same architecture used previously:

Two-step process for listing vulnerabilities

Listing all the identified vulnerabilities and their associated mitigations can be a rather complex and long task. As usual, we divide a complex task into smaller, easier steps. My advice is a two-step process. STEP 1 consists of associating threats from the taxonomy (in our case STRIDE) with assets. STEP 2 details the various vulnerabilities and their metadata. This split is also useful when STEP 1 is accomplished as a team collective effort (together with high-level design and asset definition), based on consensus, after which the STEP 2 work is divided and delegated to more specialized engineers and further iterations of work.

Rule: avoid "security assumptions".

I strongly advise against making "assumptions" about the countermeasures and protection mechanisms "already" in place, whether you are performing threat modeling during or after the implementation phase (e.g. as technical debt). For example, it may be the case that DF1 is HTTPS (TLS) enabled. We'll see that this is one common mitigation for many STRIDE problems in that data flow.
The reasoning of a threat model should not be "security assumed":
-"We don't have a spoofing problem because we assume we have a TLS-protected connection in place."
The way the threat model should document it is rather "threat assumed":
-"We have a spoofing problem, so we need authentication of the server (SSL/TLS connection with a valid certificate)."
Both statements are reasonable and "right" but, big BUT here, the first is security analysis GAME OVER, neither documenting nor managing risk, whereas the second one exposes the conscious and documented decision behind it:
  • why we have that protection (we'll see how many threat/vulnerabilities TLS prevents)
  • how we rate and score the many impacts associated with the lack of it
  • possible alternative and countermeasures
  • ...and many other considerations
That reasoning, in more complex enterprise systems, makes the difference between "security by faith" / "obscurity" and clear visibility, at all levels of management (and for supply chain customers), of the degree of residual risk and of the security analysis.
This also helps the maintenance, evolution and design refactoring of the application, reducing the probability of unexpected interactions exposing security vulnerabilities.

STEP 1: Map high-level STRIDE threats to assets

Here's an example of a generic threat-to-asset mapping:

Let's focus on DF1 (Data Flow 1), our first in-scope asset. The communication is between a lower trust zone (Trust 0) and a higher trust zone (Trust 1). We also know that this data flow is a request/response bidirectional communication. For example, the "yes" in line 2, column 4 of this table simply means: "A network attacker could tamper with the request to the API GATEWAY".

To accomplish STEP 1 we ask ourselves a few basic questions, one for each letter of the S.T.R.I.D.E. acronym:
  • S for Spoofing: is authentication needed? Is the client sure the server (API GATEWAY) is not a fake one? Should the API GATEWAY verify the identity of the client? 
  • T for Tampering: Do we need integrity in DF1? Yes (the client CAN mess with the data it sends...)
  • R for Repudiation: Do we need accountability for what happened? Yes (e.g. we need to activate logs)
  • I for Information disclosure: Do we have sensitive info and need to hide this data from someone? Yes
  • D For Denial of Service: Do we need to provide continuity of service? Yes (Activate anti-DoS features)
  • E for Elevation of Privileges: Is authorization needed? Are there multiple execution levels/ungranted permissions to mess with? No (the API GATEWAY is a serverless component managed by AWS).
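The answers to these questions can be captured in a simple structure. A minimal sketch in Python (the variable and function names are illustrative assumptions, not from any real threat modeling tool):

```python
# STEP 1 (illustrative sketch): map STRIDE threat categories to an asset.
STRIDE = [
    "Spoofing",
    "Tampering",
    "Repudiation",
    "Information disclosure",
    "Denial of service",
    "Elevation of privilege",
]

# Answers for DF1 as discussed above; Elevation of privilege is "no"
# because the API GATEWAY is a managed serverless component.
df1_answers = {
    "Spoofing": True,
    "Tampering": True,
    "Repudiation": True,
    "Information disclosure": True,
    "Denial of service": True,
    "Elevation of privilege": False,
}

def threats_for(asset_answers):
    """Return the threat categories marked 'yes' for an asset."""
    return [t for t in STRIDE if asset_answers.get(t)]

# All categories except Elevation of privilege apply to DF1.
print(threats_for(df1_answers))
```

A table like the one shown earlier is just this mapping repeated for every in-scope asset, one row per asset and one column per STRIDE category.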

Root cause rule for STEP 1

Especially for Elevation of Privileges (EoP): it is often the case that EoP is the root cause of tampering, spoofing, DoS and all sorts of threats. In other words, because you have EoP you have all sorts of vulnerabilities. While ALL the threats should be accounted for in the impact scoring of a single vulnerability, for the context of STEP 1 we are going to record "yes" only for the root cause, in this case Elevation of Privileges, even though it may itself cause tampering, repudiation and other problems.
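The root-cause rule could be sketched as a tiny filter over the categories marked "yes" (illustrative only; the derived threats are not discarded, they simply move to the impact scoring of STEP 2):

```python
def step1_row(threats: set) -> set:
    """Root-cause filter for the STEP 1 table: when Elevation of
    privilege is the root cause, record only it; the threats it
    causes are still accounted for later, in impact scoring."""
    ROOT_CAUSE = "Elevation of privilege"
    if ROOT_CAUSE in threats:
        return {ROOT_CAUSE}
    return threats

# An asset where EoP is the root cause collapses to a single "yes".
print(step1_row({"Elevation of privilege", "Tampering", "Repudiation"}))
```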
Accomplishing STEP 1 already achieves an informative view of the design threat model. It is also possible to have the architecture diagram itself carry this information*:

STEP 2: Vulnerability list

The aim here is to create a detailed table (or, better, a list of tool-tracked objects) of vulnerabilities with detailed descriptions and their associated mitigations:

The meaning of the previous example would be: "A tampered request URL parameter could hit an unprotected exposed service in the PHP server logic, causing an SSRF (server-side request forgery) that could lead to a severe impact by causing data corruption and a repudiation problem (reducing accountability by preventing log writes)... and to prevent that we create a whitelist of URLs that the API GATEWAY can forward from the public internet."
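The whitelist mitigation described above could be sketched as a simple check. This is a hypothetical Python illustration (the post does not show the gateway's actual logic; the host names and function name are assumptions):

```python
from urllib.parse import urlparse

# Hypothetical allowlist of internal targets the gateway may forward to.
ALLOWED_HOSTS = {
    "products.internal.example.com",
    "users.internal.example.com",
}

def is_forwardable(url: str) -> bool:
    """Reject any request URL whose host is not explicitly allowlisted,
    blocking SSRF attempts carried in tampered URL parameters."""
    try:
        parsed = urlparse(url)
    except ValueError:
        return False
    return parsed.scheme == "https" and parsed.hostname in ALLOWED_HOSTS

print(is_forwardable("https://products.internal.example.com/items"))  # True
print(is_forwardable("http://169.254.169.254/latest/meta-data/"))     # False
```

Note the allow-by-default vs. deny-by-default choice: an allowlist (deny by default) is the safer design for SSRF, since any host not explicitly listed is rejected.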

Even if it is plausible to execute STEP 2 "right away", skipping STEP 1, there are several risks and pitfalls in doing so. With the two-step approach, STEP 1 can be done by the joint team in a "consensus way", and then specific engineers can expand it into STEP 2. This allows scaling the execution of the threat model itself while keeping agreement and a coherent approach across the whole team.

STEP 2 should not be constrained by STEP 1: if you know from experience that something can go wrong, and it does not emerge from the STRIDE taxonomy, document it as a vulnerability anyway (perhaps leaving the STRIDE column empty). It is often useful to add an extra sub-step to contemplate possible uncaught vulnerabilities in a brainstorming fashion.

The vulnerability collection we created is not "frozen in time" and is never complete; it should be something more "alive". During the implementation phase, or in reduced-scope analyses, similar vulnerabilities can be documented and described in an Agile fashion (part 4 of this series).
The mitigation definition, a column in our example, can be more complex or postponed altogether to another step or phase, focusing first on listing the vulnerabilities and then on dealing with them, depending also on the maturity level of the threat modeling you want to perform.
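Treating each vulnerability as a tracked object, as suggested above, might look like the following sketch (the field names are my assumptions, not a standard schema); making the mitigation optional reflects that it can be postponed to a later phase:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Vulnerability:
    asset: str                  # e.g. "DF1"
    stride: Optional[str]       # may be empty if found outside the taxonomy
    description: str
    impact: str
    mitigation: Optional[str] = None  # can be filled in a later step/phase

# The SSRF example from STEP 2, recorded without a mitigation yet:
v = Vulnerability(
    asset="DF1",
    stride="Tampering",
    description="Tampered request URL parameter reaches an unprotected "
                "exposed service in the PHP server logic (SSRF).",
    impact="Data corruption; repudiation (log writing avoided).",
)

# Mitigation defined later, when that phase is reached:
v.mitigation = "Whitelist of URLs the API GATEWAY can forward."
```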

End or start?

After we create the list of vulnerabilities at this high (design) level, we have addressed the first three of the four questions that Adam Shostack and others use to explain the threat modeling process: What are we building? What can go wrong? What are we doing about it? Somehow we need to close the loop: "Are we doing a good job?" may be the next question to close the circle.
Of course, having arrived at this point, the challenge is to refine the threat modeling process to provide the highest value in terms of security and quality. This is really just the starting point for contemplating security in development in a mindful, structured and coherent way. It is an initial level of maturity that can be improved by mastering awareness of the threats, vulnerabilities and their mitigations. Customizations will come later, including custom taxonomies that better represent the threats of the system, tools to track associated actions and Key Performance Indicators, reporting, and even better terminology and a standard language accepted across organizations.

A special thanks to Geoff Hill for his support, advice and sharing of his vast experience in the field*

