
Content-Security-Policy and third party JS, a Threat Model view, PART 1

A lot has been done to defeat XSS and other potential vulnerabilities in today's frameworks (contextual encoding in React, Angular, etc.) and development practices. Still, XSS, and in general the ability for an attacker to execute arbitrary code (mostly JavaScript), remains in the OWASP Top 10 and at the root of many cybersecurity incidents.
As mobile applications have also become wrappers on top of browser-based web applications, the attack surface for XSS and related malicious code injection widens even more.
Given this situation, leveraging browser mechanisms to restrict the JavaScript that the client can execute to only the expected code sounds like a great solution. Content-Security-Policy (some of its advanced features) has this potential but is mostly unused. I see two main reasons for this; the advantage is hard to defend:
1) It needs to be fully understood, and it needs clear evidence and arguments to justify it (that's why I'm writing this article).
2) It is not free: it complicates the delivery of the code, creating interdependence between otherwise loosely coupled architectural components, in this case the components creating the response headers (frontend application / application container / reverse proxy / API gateway / WAF...). Decoupling, especially with third-party components, is good in software development: treating a complex system as the union of independent smaller artifacts is a clear advantage and an application of the "divide and conquer" principle. BUT it may not be good for security, because an attacker in particular sees the system in a holistic way. The tension between defence in depth (overlapping mitigations) and the decoupling principle is evident here; no easy win/win solution, sorry! DevOps (DevSecOps) and continuous integration systems may be an opportunity to handle this situation in a more convenient way, but understand that it will never be completely "free"... yes, security costs!

In this article series, the feature we're going to threat model is the JavaScript restriction; specifically, we're going to compare the threat assessments (potential vulnerabilities and their mitigations) of four different web application configurations, S0 to S3. "S" stands for security.

Configuration S0: no JS restriction

This is the most common, default situation.
Developers will be happy: they can deliver any feature with nothing impinging on them. The same is true for third-party JavaScript (i.e. your fancy in-site chat, your marketing-based optimization add-on, your payment processor... and the nasty hackers).

Configuration S1: allowed domains/URLs lists

This may look advanced for some companies, but it is really just the same kind of abuse protection you would put on your S3 bucket or API!
Just pointing at allowed domains (e.g. a CSP source list of allowed domains) is still considered configuration S1.
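As a sketch, an S1 policy might look like the following response header; the vendor domain names here are of course placeholders:

```http
Content-Security-Policy: script-src 'self' https://chat.example-vendor.com https://cdn.example-payments.com
```

Note the limit of this configuration: if any allowed domain starts serving malicious JavaScript (e.g. a compromised CDN), the browser will still happily execute it.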

Configuration S2: allowed hashes (e.g. SHA-256) of specific JS code in the HTML code

At this point, some might feel at security-superhero level; you are definitely in the first decile of security. British Airways would have saved some hundred million if it had applied this mitigation in a timely fashion. It would not, however, have saved you from "Hack Brief: A Card-Skimming Hacker Group Hit 17K Domains—and Counting!"
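This configuration corresponds to what browsers implement as Subresource Integrity (SRI): the page pins the exact hash of the external file it expects, and the browser refuses to execute the file if the fetched bytes do not match. A sketch, with a placeholder URL and placeholder hash value:

```html
<script src="https://cdn.example-vendor.com/widget.js"
        integrity="sha256-AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA="
        crossorigin="anonymous"></script>
```

The protection only covers the resources you pin: script elements injected elsewhere in the page are not constrained by it.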

Configuration S3: hash restrictions in the Content-Security-Policy header

If you want your company to distinguish itself from the others as the most secure (it will cost you money and likely minor availability issues), your web application should implement this configuration. Some developers may disagree with you, with reused and apparently good arguments like "but at this point, the attacker can also do this and this...". This argument does not stand up, and this is why a clear, useful, always imperfect threat model can clarify things.
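To make the S3 idea concrete, here is a minimal sketch (the inline script is a made-up example) of how a CSP hash source is derived: the base64 encoding of the SHA-256 digest of the exact script bytes.

```python
import base64
import hashlib

# Hypothetical inline script we want to allow; the hash covers the exact
# bytes of the script, whitespace included.
script = "console.log('hello');"

digest = hashlib.sha256(script.encode("utf-8")).digest()
csp_source = "sha256-" + base64.b64encode(digest).decode("ascii")

# The resulting header allows this exact script and nothing else.
header = f"Content-Security-Policy: script-src '{csp_source}'"
print(header)
```

Any change to the script, even a single character, changes the hash, so a tampered or injected script simply will not run.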

This is the high-level view of our system (Target of evaluation):

Some notes about the sketched architecture:
  • DF stands for Data Flow
  • The dashed rectangles are "trust boundaries"; a boundary labelled Trust N is less trusted than one labelled with a value greater than N
  • DF1 represents the main HTTP request
  • DF2 is the main HTML response
  • DF3 is the request to the third-party service (JavaScript inclusion)
  • DF4 is the returned JavaScript content and all subsequent content responses (a JS file may include another JS file)

Attackers in scope

In this analysis, we're considering the following attackers (threat agents):
  1. at1: an attacker sending a link to the legitimate domain that the legitimate client may use (e.g. exploiting reflected XSS)
  2. at2: an attacker compromising the content of the third-party web server
  3. at3: an attacker compromising the first-party web content (exploiting stored XSS, file inclusion or another vulnerability in the code integration supply chain)

Threat analysis

Based on this scope, we're going to threat model the "SX configurations" to assess the potential vulnerabilities and the (in this case often lacking) mitigations in place.
To describe the results of the analysis we're going to use this simple, self-describing set of fields: ID, DataFlow, Attacker, Threat Type, Threat Description, Mitigation, Mitigation In place.

ID: 1
DataFlow: DF1
Attacker: at1
Threat Type: Spoofing
Threat Description: CSRF (Cross-Site Request Forgery): the server cannot correctly authenticate the originator of the request, which could be some other malicious website or a clicked link
Mitigation: CSRF token or stateless double-submit cookie implementation*
Mitigation In place: not by default (depends on frameworks)
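The stateless double-submit idea can be sketched as follows (names are made up and this is not a full implementation; a real deployment would also bind the token to the session):

```python
import hmac
import secrets

def issue_token() -> str:
    # Sent to the client twice: as a cookie and as a hidden form field.
    return secrets.token_urlsafe(32)

def verify_request(cookie_token: str, form_token: str) -> bool:
    # A forged cross-site request can set the form field but cannot read
    # the victim's cookie, so the two copies will not match.
    return hmac.compare_digest(cookie_token, form_token)

token = issue_token()
print(verify_request(token, token))          # legitimate request
print(verify_request(token, issue_token()))  # forged request
```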

ID: 2
DataFlow: DF1, DF2
Attacker: at1
Threat Type: Tampering
Threat Description: Reflected or DOM-based XSS (Cross-Site Scripting): the server renders malformed content based on a specially crafted malicious link or web reference
Mitigation: 1) Correct input encoding processing in the output page
2) Limit the JS that can be executed in the browser
Mitigation In place: not by default (encoding depends on frameworks)
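Output encoding in a minimal sketch; Python's stdlib html.escape stands in here for whatever contextual encoder your framework provides:

```python
import html

# Attacker-controlled input reflected into an HTML page.
user_input = "<script>alert(1)</script>"

# Encoding turns the markup into inert text before rendering.
safe = html.escape(user_input)
print(safe)  # &lt;script&gt;alert(1)&lt;/script&gt;
```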

ID: 3
DataFlow: DF3
Attacker: at1
Threat Type: Spoofing
Threat Description: same as ID 1
Mitigation: custom delegated authentication (OAuth, or in-DOM non-cookie-based credentials)
Mitigation In place: not by default

ID: 4
DataFlow: DF3
Attacker: at1
Threat Type: Tampering
Threat Description: HTTP parameter pollution *
Mitigation: correct URL encoding (after parsing of location.href... not many devs implement this!)
Mitigation In place: not by default (very common vulnerability: see the BlueClosure testing tool*)

ID: 5
DataFlow: DF4
Attacker: at2
Threat Type: Tampering
Threat Description: The third-party server gets compromised
Mitigation: JS hash whitelist restriction
Mitigation In place: not by default

ID: 6
DataFlow: DF2
Attacker: at3
Threat Type: Tampering
Threat Description: The first-party server gets compromised: e.g. stored XSS, dev credentials stolen (a single internal dev attacking or being compromised)
Mitigation: JS hash whitelist restriction + response headers decoupled from the dev source tree: WAF / API GW / F5 *
Mitigation In place: not by default
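The "decoupled headers" mitigation can be sketched, for instance, at a reverse proxy; here is an nginx fragment (with a placeholder hash value) that injects the policy outside the developers' source tree:

```nginx
# Added at the proxy layer; developers cannot change it from application code.
add_header Content-Security-Policy "script-src 'sha256-AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA='" always;
```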

We want to focus on the potential vulnerabilities that involve JavaScript execution: those are 2, 5 and 6.
If we do not consider the mitigation "correct input encoding processing in the output page", given that it is not CSP-based and is utopian (XSS is 99% based on wrong encoding and has always been around; see the OWASP Top 10 2017), we can compare the different JS restriction configurations and how they match the potential vulnerabilities:


Following this reasoning, it is easy to spot that only S3 gives a configuration strategy that really mitigates threats 2, 5 and 6. The other configurations tend to give a false sense of secure implementation, and without this analysis developers can easily argue that S3 is not needed or overkill. It is not a trivial configuration, and it costs in terms of integration, processes and operations, but it is the only configuration that would have saved hundreds of millions to the two companies in the incident examples.