
The AppSec terminology catastrophe


When dealing with complex, hierarchical organizations made up of tens or hundreds of different teams and projects, management naturally needs to be able to track security issues and risk. Documenting "the criticality of a bug" somehow, without the right concepts in place, is not just a problem: it is an organizational catastrophe.
As a developer who got involved in security, I find that one of the biggest challenges in delivering training is aligning on a common language with dev teams. The same challenge arises, of course, when implementing security practices in the SDL (Secure Development Lifecycle).
Not having the right set of concepts, with accepted, shared, unambiguous terms for them, is both a symptom and a cause of a lack of security management, in a vicious spiral. This missing concept/term structure, sometimes called a thesaurus, is reflected in development issue trackers, first of all Jira. Jira, on the other hand, offers enough flexibility to support the structure of concepts I am going to introduce here.

Notable mentions of similar efforts in defining a more complete and useful SDL language are:
Image 1: ISO 15408 Evaluation criteria for IT security
Image 2: Microsoft SDL definition and terms
The ISO 15408 terminology proves particularly useful for Threat Modeling.
The Microsoft terminology was an excellent early effort to define an SDL and its terminology, but it is nowadays misaligned with "agile" and DevOps terminology.
In general, I find it too much, too soon, for hooking secure development practices into running teams that already have an established way of communicating, one that is, of course, tied to their issue-tracking tool such as Jira.

Simplifying is good, but oversimplifying is bad!

Inspired by the work at the OWASP Open Security Summit 2018 (Photobox-group-security presentation), I propose a limited set of well-defined concepts (each may have synonyms) that are few enough to be accepted and managed with standard tools, yet sufficient to represent the complexity needed to articulate a mature secure development lifecycle.

Image 3: SDL terminology proposed


This proposed terminology has the following advantages.

Is simple enough 

...and tells a story that is easy to memorize. It starts from a concept very familiar to any dev team member: the bug. And it helps communication across the responsibility chain, from the very technical to a very understandable risk/impact. For example, saying "we have an unescaped use of a query API" is not the same as saying "we have a SQL injection", which is not the same as saying "we could leak customer data and face big fines and reputation damage". All three sentences relate to the same "tiny" defect, yet there are big differences in how each is managed and communicated. Dev tools tend to collapse all this metadata into a single "Bug" issue. That is simple... but too simple to handle properly in a coherent SDL fashion.
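To make the bug-level and vulnerability-level views concrete, here is a minimal, hypothetical sketch (using Python's built-in sqlite3, with names of my own choosing) of the same defect: an unescaped use of a query API next to its parameterized fix.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

def find_user_vulnerable(name):
    # Bug level: "unescaped use of a query API" -- user input is
    # concatenated straight into the SQL string.
    query = "SELECT email FROM users WHERE name = '%s'" % name
    return conn.execute(query).fetchall()

def find_user_fixed(name):
    # Mitigation: parameterized query; the driver handles escaping.
    return conn.execute(
        "SELECT email FROM users WHERE name = ?", (name,)
    ).fetchall()

# Vulnerability level: the same call becomes a SQL injection when the
# input closes the quote -- here it dumps every user's email.
print(find_user_vulnerable("x' OR '1'='1"))  # leaks all rows
print(find_user_fixed("x' OR '1'='1"))       # safely returns []
```

The risk-level sentence ("we could leak customer data") is exactly what the injected query above demonstrates, which is why all three phrasings deserve their own concept rather than a single collapsed "Bug" issue.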

Covers Threat Modeling 

These concepts/semantics, separated in this way, allow harmonizing defect management with design-level Threat Modeling and day-by-day development, closing, or at least narrowing, the gap between security managers and dev teams... did I say DevSecOps?

Separates Security Requirements from Mitigations

It makes a clear distinction between Security Requirements and Security Controls. Microsoft created one of the first SDLs and, after 10+ years, it is still used as a reference implementation/model.
In a clearly waterfall style, it puts Requirements before Design, and Design is where Threat Modeling happens. But should Threat Modeling not produce a set of security requirements? How can the result come earlier than the activity that produces it? Here things tend to get messy. The functional/non-functional requirement narrative, together with the plain-English meaning of the verb "to require" (something similar to "to ask for"), tends to confuse. In the Microsoft terminology, a "Security Requirement" is something required "a priori": it may come from compliance (e.g. PCI-DSS), from a "waterproof" mandate, or from FuSa (Functional Safety). For required things, nobody should be able to say "we are fine not doing this"; someone could go to jail for saying that! On the contrary, calling "Static Code Analysis" a "security requirement" and then saying "we are fine not doing it, we are doing external manual code review instead" shows that a particular security measure, whose need can come from threat modeling or from an incident in production, is in effect a Mitigation/Countermeasure/Control, not a requirement. Of course, you can "require" one particular mitigation... but that is English, not SDL.

Where to go from here

In future posts, I am going to investigate how these concepts carry their own specific metadata and live inside security practices like Threat Modeling, tracking tools like Jira, and the SDL in general, as well as how to integrate risk tracking with Top-Down plan definition and a Bottom-Up, coherent, and systematic reporting system.



