
Defensive Security Hackathon, or how developers happily studied Secure Coding for 30+ hours in a row

In my career, I have first attended and later delivered secure coding training for developers. I consider developers one of the most intelligent professional categories... yes, they can code :)
Still, when it comes to secure coding training, it is very difficult to maintain attention. Usually, the training is structured as one, two, or three full 8-hour days. It is perfectly normal, after just a couple of hours (if you are strong), to experience attention difficulties and basically fall asleep.
The request initially came from a big South-East Asian bank. They wanted to organize a security hackathon for developers. That's what I did, in collaboration with the Maya7 security consultancy. Over two months I created a custom-made basic banking application, built with the same languages and frameworks the bank uses. Then, during two days and one night, a series of "missions" was assigned to the participant teams. Those "missions", or "tasks", were of course about implementing security features: from input validation to SSL certificate pinning; from JavaScript web integration and XSS to JWT token management. Every mission came with a set of specific documentation, similar to what a trainer would explain in a class... but believe me, the active engagement was much higher: the participants studied the material themselves and implemented it right away in the code.
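To give a flavor of what a "mission" looked like, here is a minimal sketch in the event's own stack (plain Java, no external libraries). It shows contextual HTML output encoding, the core defense a team would implement for an XSS mission before writing user input into a JSP page. The class and method names are illustrative, not taken from the actual hackathon codebase.

```java
// Hypothetical example of an XSS-mission deliverable: encode the characters
// that are dangerous in an HTML text context before rendering user input.
// Names (HtmlEncoder, encodeForHtml) are illustrative, not from the event.
public final class HtmlEncoder {

    public static String encodeForHtml(String input) {
        if (input == null) {
            return "";
        }
        StringBuilder out = new StringBuilder(input.length());
        for (int i = 0; i < input.length(); i++) {
            char c = input.charAt(i);
            switch (c) {
                case '<':  out.append("&lt;");   break;
                case '>':  out.append("&gt;");   break;
                case '&':  out.append("&amp;");  break;
                case '"':  out.append("&quot;"); break;
                case '\'': out.append("&#x27;"); break;
                default:   out.append(c);
            }
        }
        return out.toString();
    }

    public static void main(String[] args) {
        // A JSP would call this before echoing a request parameter into the page.
        System.out.println(encodeForHtml("<script>alert('x')</script>"));
        // prints: &lt;script&gt;alert(&#x27;x&#x27;)&lt;/script&gt;
    }
}
```

The mission documentation would then point to the matching rule (encode for the specific output context), exactly the kind of material a trainer would otherwise lecture on.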

The main goal of the hackathon was not to celebrate the best developers, but to improve the secure coding capabilities of all participants!

Let Devs be Devs

Usually, hackathons follow a CTF (capture the flag) format. But developers are creators more than exploiters. They need to understand how to create secure code even more than how to exploit insecure code. Based on my experience (and on market data about breaches), creating secure code is enormously more difficult than exploiting insecure code. A cheap analogy: securing code is like closing the thousands of doors of a building, while exploiting it is like entering through the one door left open.

It needs to be custom-made


Initially, I looked for something ready to go, to avoid spending weeks implementing a base website on top of which to build the hackathon tasks. Candidates included amazing products like OWASP WebGoat and Juice Shop, and other intentionally flawed applications ready to be exploited. Those presented two main drawbacks:

  • The "answers" can be found online, and participants can simply train on the exercises before the hackathon (in itself a good thing, but unfair when money prizes are involved)
  • There is still a "distance" from the everyday work of the participating dev teams

How it worked:

There were 7 teams (3 devs each); each team started from a private Git repository containing the initial banking application (Java with a JSP presentation layer running on a Tomcat server, and some JavaScript in it).
Every one to three hours a new "mission" was assigned. It came with documentation and a requirement to implement, following the best practices in that document, and the result had to be committed before the time slot ended. Once a team committed its code, the organizing team (made up of code reviewers and support people) reviewed it and, based on common evaluation criteria, assigned points for that mission. A nice scoreboard showed the results. The best teams won money prizes; all teams received a small memento for participating.

Difficulties:

  • Not easily repeatable: since money prizes are involved, the tasks of every such hackathon should be really different from the previous one
  • Not scalable: every time a mission/task was completed and committed, I performed a secure code review against the prepared evaluation criteria. One reviewer can handle at most 6 or 7 teams at a time.
  • Needs a support team, including a help chat and a question channel, where all teams ask and all teams see the answers.

Outcomes:

  • Evaluation of the teams' current security skills
  • Expertise
  • Reputation
  • Practical goals!
  • Cheaper (!!!)
  • Team/network building
  • ...and most of all: it was FUN!

Evolution:

  • To be fair to the teams, the evaluation criteria should be very specific. Even better: create a set of unit tests, to make the evaluations more impersonal and scalable.
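The unit-test idea above could be sketched like this: a small grading harness that exercises a method the teams were asked to implement and tallies points per passing check. Everything here is hypothetical (the method `isValidAmount`, the scoring scheme); a real grader would run a suite of such checks against each team's committed code.

```java
// Hypothetical sketch of automated mission grading. The validation method is a
// stand-in for code the teams would implement; the point tally is illustrative.
public final class MissionGrader {

    // Stand-in for a team-implemented deliverable: allow-list validation of a
    // monetary amount (1-9 digits, optional two decimal places).
    static boolean isValidAmount(String raw) {
        return raw != null && raw.matches("\\d{1,9}(\\.\\d{2})?");
    }

    public static void main(String[] args) {
        int points = 0;
        if (isValidAmount("100.00")) points++;            // well-formed amount accepted
        if (!isValidAmount("100; DROP TABLE t")) points++; // injection attempt rejected
        if (!isValidAmount(null)) points++;                // null handled safely
        System.out.println("Mission score: " + points + "/3");
        // prints: Mission score: 3/3
    }
}
```

Running checks like these mechanically removes the reviewer bottleneck described under "Difficulties" and makes the scoring impersonal by construction.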

Some photos of the event:













