Cookie Security

Cookies are small pieces of data that a server can send to your browser to store configuration or personal data. The browser automatically sends them along with each request to that same server. Their contents are usually very interesting to hackers, so it's important to know how to secure these cookies. Fortunately there is a lot you can do to improve cookie security. So… what do you need to know?

What data do you store?

If you want to store sensitive data, think very hard about whether you really need to store that particular bit of data in a cookie. By using cookies, you may avoid expensive requests to the server, but the data may also get outdated. Data is always more secure if it is not stored on the client side.

Assuming you decided that you really do need cookies, you need to make sure that you configure them correctly. Cookies have several attributes and flags for this purpose. Below are the ones you need to know about when considering cookie security.

Session Cookie vs. Persistent Cookie

First of all, decide how long your cookie should be valid. The more sensitive the data, the sooner it should expire. Cookies allow you to specify this through the 'Expires' and 'Max-Age' attributes. By definition, setting either of these attributes makes the cookie persistent. This means that (as long as the expiration is in the future) the cookie survives a browser restart. If both attributes are omitted, you get a non-persistent cookie, or session cookie. This means the cookie is automatically removed when your session ends (i.e. when the browser is closed).

  • The more sensitive the data, the sooner you’ll want the cookie to expire, so if you explicitly want to set an expiration or max-age, choose a date in the next few months, weeks or even hours, rather than years.
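For example, with Python's standard http.cookies module (the cookie name and value here are made up for illustration), the only difference between a session cookie and a persistent cookie is the presence of an expiration attribute:

```python
from http.cookies import SimpleCookie

# Session cookie: no Expires/Max-Age, removed when the browser closes.
session = SimpleCookie()
session["theme"] = "dark"

# Persistent cookie: Max-Age makes it survive a browser restart.
persistent = SimpleCookie()
persistent["theme"] = "dark"
persistent["theme"]["max-age"] = 3600  # expire after one hour, not years

print(session["theme"].OutputString())     # theme=dark
print(persistent["theme"].OutputString())  # theme=dark; Max-Age=3600
```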

Secure Flag

If the browser sends cookies over unencrypted connections, it will be possible for hackers to eavesdrop on your connection and read (or even change) the contents of your cookies. To prevent this, send cookies over encrypted connections only.

Setting the Secure flag prevents the cookie from ever being sent over an unencrypted connection. It basically tells the browser never to add the cookie to any request to the server that does not use an encrypted channel. The cookie will only be added to connections such as HTTPS (HTTP over Transport Layer Security (TLS)). Note that it is up to the browser to decide what it considers 'secure'. Typically the browser considers a connection secure if the protocol uses a secure transport layer. This also means that a browser may decide to send the cookie when the connection is secured with a self-signed or expired certificate.

  • You should always set the Secure flag in your cookies when they contain sensitive data, unless your website uses an insecure connection, but in that case you have much bigger problems.

You might think that setting this flag is not relevant if your server always uses HTTPS, but that is not true. It means that the server never sends unencrypted data (including cookies) to the browser, but the other direction is not guaranteed. E.g. a network attacker could intercept outbound HTTP requests and redirect them to capture the plaintext cookies.

Even if the server uses HTTP Strict Transport Security (HSTS) and includes subdomains and the domain is on the preload list, it’s a good practice to still set the Secure Flag. Not all browsers and user agents use the preload list, so an initial request to your domain could still use an unencrypted channel.
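As an illustration (again using Python's standard http.cookies module, with a made-up cookie name and value), setting the Secure flag is trivial:

```python
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["sessionId"] = "s3cr3t"
cookie["sessionId"]["secure"] = True  # never send over an unencrypted channel

# The resulting Set-Cookie header value:
print(cookie["sessionId"].OutputString())  # sessionId=s3cr3t; Secure
```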

HTTPOnly Flag

The HTTPOnly flag prevents scripts from reading the cookie. As the name implies, the browser will only use the cookie in HTTP requests. This prevents hackers from using XSS vulnerabilities to learn the contents of the cookie. A session ID, for example, never needs to be read by client-side script, so for sessionId cookies you can always set the HTTPOnly flag.

  • Set the HTTPOnly flag for all cookies that don’t need to be accessed by script.
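Combined with the Secure flag from the previous section, a typical session-ID cookie would be set like this (a Python sketch with made-up values):

```python
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["sessionId"] = "s3cr3t"
cookie["sessionId"]["httponly"] = True  # hidden from client-side script
cookie["sessionId"]["secure"] = True    # only sent over encrypted connections

header = cookie["sessionId"].OutputString()
print(header)  # contains both the Secure and HttpOnly flags
```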

It's good to know that a hacker has other techniques to learn the contents of the cookie. Even if the HTTPOnly flag is set, script can be used to learn the contents. Ever heard of the HTTP TRACE method? It is a method (like GET and POST) that is intended for debugging. When using the TRACE method in a request, the server just echoes the exact contents of the request back to you (including cookies), so you can see what your browser sent. This is great for debugging! It can however also be used by malicious scripts, because even though your script cannot read the cookie directly, it can read the response of a TRACE request. This is called Cross-Site Tracing (XST).

  • Besides setting the HTTPOnly flag, you should always disable the TRACE method on any non-development server.
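How to disable TRACE depends on your web server; in Apache httpd, for example, it is a single directive (a sketch, check your own server's documentation):

```apache
# httpd.conf — disable the HTTP TRACE method server-wide
TraceEnable off
```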

Another thing to keep in mind is that there are other tools that echo HTTP requests. E.g., there are Docker containers that echo the HTTP request to help you debug your microservices. While very useful in development environments, such services should never end up in Production.

SameSite Flag

The SameSite flag is an experimental flag, which Google added in Chrome 51. It aims to mitigate the risk of CSRF. When the server sets its value to 'strict', the browser will not send the cookie to your website if the request comes from a different domain, not even when the user directly clicks a link. For example, a bank doesn't want financial transactions to be initiated through a link on a different domain, so there it makes sense. Facebook, on the other hand, practically lives from letting users click 'like' buttons on other domains, so they won't use this. Setting the SameSite flag to 'lax' makes the browser a bit more lenient: it only blocks the cookie for cross-site requests with 'unsafe' HTTP methods like POST.

  • Set SameSite to ‘strict’ if linking from other sites is not necessary. Set it to ‘lax’ otherwise.
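Again in Python (a sketch with made-up values; the samesite attribute requires Python 3.8 or later):

```python
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["sessionId"] = "s3cr3t"
cookie["sessionId"]["samesite"] = "Strict"  # or "Lax" if cross-site links must work

print(cookie["sessionId"].OutputString())  # sessionId=s3cr3t; SameSite=Strict
```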

HostOnly Flag

The HostOnly flag specifies whether the cookie is accessible by subdomains or not. It is an implicit flag that the browser sets if the Domain attribute is empty. E.g. if example.com sets a cookie without a Domain attribute, it is a HostOnly cookie, and subdomains such as sub.example.com will not be able to read it. If example.com sets a cookie with Domain=example.com, it is no longer a HostOnly cookie, and all subdomains of example.com will have access to its contents.

  • Leave the Domain attribute empty, unless you explicitly want to share the contents of the cookie with all subdomains and know it’s safe to share the contents with all of them.
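To illustrate the difference (a Python sketch, using the reserved example.com domain):

```python
from http.cookies import SimpleCookie

# No Domain attribute: an implicit HostOnly cookie, not shared with subdomains.
host_only = SimpleCookie()
host_only["prefs"] = "compact"

# Explicit Domain attribute: shared with every subdomain of example.com.
shared = SimpleCookie()
shared["prefs"] = "compact"
shared["prefs"]["domain"] = "example.com"

print(host_only["prefs"].OutputString())  # prefs=compact
print(shared["prefs"].OutputString())     # prefs=compact; Domain=example.com
```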


So, to summarize:

  • Don’t store sensitive data in cookies unless you absolutely have to.
  • Use Session cookies if possible. Otherwise set a strict expiration.
  • Use the HttpOnly and the Secure flags of cookies.
  • Set the SameSite flag to prevent other websites from sending your cookies along with their requests.
  • Leave the Domain attribute empty to prevent subdomains from using the cookie.

After that, your cookie data should be much safer.

You can find more information in the cookie specification (RFC 6265).


Getting Docker Security Right

I started working with Docker in my job at TOPdesk almost a year ago. Security is an interest of mine, so I did some research. You can’t look at Docker without thinking about Microservices, although they are separate topics. It is often said that Microservices can greatly improve your security. But also, that if you do it wrong, security can actually get worse.
So, what do you need to do to improve (Docker) security, rather than undermine it? For most security concerns there is already a good solution, although not all of them are widely adopted. Let's have a look at our concerns and how we take care of them.

What Base Images to Use?

Perhaps our biggest concern was where our developers would download their base images, and especially: which images will they select? If you look on Docker Hub, there are a lot of images to start with. However, as with any software you download from the Internet, there can be all kinds of nasty surprises in them. A good way to look at this is the "Tower of Trust", as explained by Rory McCune. Basically he says that to trust the software you download (e.g. a Docker base image), you need to trust the developers of that software. You also need to trust the developers of all the dependencies, the people involved in hosting that software on their servers, the people who are responsible for the repository software, and everyone involved in the infrastructure between that server and your computer, including your own ICT department. That's a lot of people!


You can't get to know all these people and figure out whether to trust them or not. What we can do is give our developers guidelines on how to select base images. For example, it helps to use a well-known Docker registry, like Docker Hub, to select the images. The chances of compromise on such a registry are smaller than when someone hosts a registry on some private server. 'Official images' will (in general) get security updates more quickly than others. Furthermore, if more people use an image, bugs or vulnerabilities are more likely to be found and fixed. We try to keep the number of images we use small. This makes it a little easier to keep an overview of all the known vulnerabilities. For now we are using a whitelist of images, since we don't expect many different types of base images are needed anyway. We'll see how that goes in the future.

How to keep Track of Vulnerabilities?

Most Docker images contain known vulnerabilities, so it is important to keep track of them. You should analyze those vulnerabilities to see if they are relevant for your situation, and take measures to prevent misuse, because they may open the door for visitors with bad intentions. We usually use the scans from Docker Hub, but the results are hard to include in our continuous delivery pipelines. Also, there are rumours that those scans will not remain free to use in the future. Not sure if this is true.


We're looking into scanning our images with other tools like Clair. It's easier to make the scan part of the pipeline and scan on every build, especially since we can run the whole CVE scan in a Docker container itself. Also, we get the feeling that the results are more accurate than those from Docker Hub. There's also a whole lot of commercial solutions that we're looking into.

As soon as we have the scan results in the pipeline, we’ll decide whether we want to ‘block’ the pipeline when new issues show up. Alternatively, we can decide to initially just monitor the situation.

Limit a User's Permissions inside the Container?

In case a hacker could somehow compromise a container, we don’t want him to be able to do much harm. So, how do we limit the permissions of the user inside the container?
It turns out that Docker has already done some great work to minimize the impact in the default situation. Even if you don't change anything about permissions, you start with a (root) user that hardly has any permissions at all. Docker uses the Linux system of kernel capabilities:

Linux Kernel Capabilities

Linux kernel capabilities enable a more fine-grained permission system. Docker uses this system by starting containers with a specific, limited set of capabilities. This means that the root user can, for example, set the setuid bit, but not perform all kinds of network-related operations. It is possible to add or remove individual capabilities if needed, but it is advised to only do so when strictly necessary. Capabilities that you add but don't need will only help hackers do more harm if they manage to compromise your container. Therefore, never add all capabilities at the same time. Docker sets a secure default by enabling only a few of the most-used capabilities.
For more info on kernel capabilities, see the capabilities(7) man page.
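An even stricter approach than the default is to drop all capabilities and add back only what the service actually needs (an illustrative command; the image name and required capability depend on your service):

```shell
# Start with zero capabilities, then re-add only the one this service needs
docker run --cap-drop=ALL --cap-add=NET_BIND_SERVICE my-service
```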

Run as Root User inside the Container?

Ideally we don't want our container to run as root inside. Docker advises this in its guidelines.
However, apart from this general advice we found very little useful discussion or examples on this topic, so we’re not so sure that this is something we should tell our developers. Yes, we encourage the use of a different, less-privileged user, but we won’t enforce it yet. Also, the fact that the root user has limited capabilities helps.
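For reference, switching to a less-privileged user takes only two extra lines in a Dockerfile (a minimal sketch; the base image and user name are arbitrary):

```dockerfile
FROM debian:stable-slim
# Create an unprivileged user and run the container as that user
RUN useradd --create-home appuser
USER appuser
```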

Getting the Right Image in the Right Place

When pushing or pulling images over networks (especially over the Internet), you always run the risk of a man-in-the-middle attack. You need to trust that the repository you want to communicate with is actually the repository you're communicating with. Also, you need to trust that the image you are pulling was pushed by the correct publisher. Docker solves this with its Content Trust system.

Publishers can decide to sign their images when publishing them. The signature will be linked to the tag of the image, so for each new version of the image that the publisher pushes, a new unique signature is created.
On the consumer side, users can also decide to activate Content Trust. As soon as you activate Content Trust, you can only 'see' (pull, push, build, create and run) images that are signed. If the origin of the image cannot be verified, you cannot download it to your computer anymore. If you want to benefit from this system, you should activate Content Trust on the machine where you build and push your images. Keep in mind, however, that most companies do not sign their images yet, so not every image out there will be available to you.
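Enabling it is a matter of setting an environment variable before you push or pull:

```shell
# Enable Docker Content Trust for this shell session
export DOCKER_CONTENT_TRUST=1
# Pulls and pushes now fail unless the image tag is signed
docker pull alpine:latest
```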

For more info, see Docker's Content Trust documentation.

Other options

Although the above is only about Docker Security, it is good to remember that inside the Docker container there is a Linux system, so all the regular security measures on a Linux system still apply.

For even more ideas on how to improve security, have a look at the CIS benchmark for Docker.



Code Reviews

Recently, I’ve read several articles, and heard multiple discussions on the quality of code reviews. To order my thoughts on this topic, I decided to write down my own ideas. Perhaps it helps someone, or it might lead to even more discussions.

So, what is a good code review? Obviously it depends on the situation: how big is the code change, how important is the feature, how many people are going to read that particular piece of code in the future, are there deadlines, etc. Let's focus on the situation where there's a reasonable amount of time available (no emergency fixes), for a feature change of average importance, in a medium-sized team. Note that when I talk about a 'code review', I usually don't just review the 'code', but also all the other parts my colleague has worked on. In my opinion the reviewer should, for example, also look at design and documentation, and check whether the acceptance requirements for the story have been met.

Before the review starts

Before looking at the review phase, let’s have a look at what happens just before. When does a developer decide it’s time for a code review? I think it is always the responsibility of the developer to deliver high quality code, so it is also the responsibility of the developer to decide if a review is necessary at all. You ask for a review because you want to learn, because you want to avoid overlooking mistakes, and because you want to share knowledge. Not because there’s some process that says all changes need reviewing. I guess I would ask for a code review in almost all situations, unless the change is about a typo in some documentation.

You might wonder: “Is a review needed when you already pair programmed the code?” Again, since it’s the responsibility of the developer(s) who wrote the code, they should decide. If you do a lot of pair programming with the same person, there’s a risk you start thinking with the same mindset, increasing the chances you miss something (even though pair programming is supposed to fix just that). Because a review means knowledge sharing too, or because the resulting code is hugely important, it may be beneficial to indeed ask yet another person to have a look.

Since I’m asking the reviewer for a favor, I want to make it as easy as possible, so I tell him (or her) where the changes can be found, where the documentation is, and of course that I’m available for questions (although I hope that my work is so self-explanatory that everything is immediately clear).

Ideally for the reviewer, the code changes are easy to oversee. Many changesets will make the work of the reviewer a lot harder. On the other hand, when implementing a bigger story, I don't want to disturb my colleagues for every minor change. In case there was more work involved, I don't give a list of 27 changesets, but rather just refer to the files where the new feature or bug fix was implemented. After all, it's not just the changes I made, but the resulting code in its context that should be reviewed. It's also advisable to put functional changes in separate changesets from refactorings. There seem to be best practices that say a review should contain no more than 200 lines of code, but I find that number rather arbitrary. Still, keep in mind that the reviewer is human. The easier it is to oversee the changes, the higher the quality of the feedback you will get.

Doing the review

How do you start a review? Usually I start by reading the story that was implemented. You need enough background information and context to understand what the code is about. It would be a nice experiment, though, to start with reading the code and see if you can deduce the functional requirements from it. Ideally the code is so clear that this is indeed possible. However, I don't often encounter such code. Perhaps if there is enough time, you can let a second reviewer follow this approach as a test, after the first review results have been processed.

Assuming you started with reading the requirements, you can now have a look at the code. I like to just start reading and see if I can understand the code. I always like the description of good code that Ward Cunningham gave in Robert C. Martin's book 'Clean Code'. Basically he says that when reading clean code, you should be able to read the code, nod your head: "that's how I would do it too", and move on to the next topic. If I don't understand the meaning of the code after reading it once (even though I'm familiar with the topic), improvements are clearly possible, and there will be remarks in my report. Usually I try to think of a way to improve the code, so I can give suggestions: use better variable or function names, split functions, or any other refactoring actions. Even if I can't come up with a better solution, I still point it out to my colleague. Perhaps by discussing it, together we can find the best way to solve it.

I don’t just read the changed lines of code, but also the context. The changes should ‘fit in the context’. Also, I expect the developer to follow the Boy Scout rule, and ‘leave the campground cleaner than he found it’. So if there’s code in the context that doesn’t follow current standards, I expect those parts to be updated as well, as far as that is practically possible at least. Making remarks about what I notice is little work. There can always be discussions with the developer later about whether it makes sense to put in the extra effort to further clean up the code. Perhaps he had already noticed, and had reasons not to do it yet.

Besides 'understanding' the code, I check the coding standards, although that is hopefully also checked by some automated tool on the build server. It's annoying if your report consists mainly of remarks about missing spaces and arguments about whether the brace should be on the next line or not. Perhaps for new developers this is necessary, but ideally this is not needed for the average review. Usually I combine this step with looking at the SOLID principles, DRY/WET code and all the other code quality related acronyms.

A bit harder is checking the architecture. Minor changes in the code may mean that the structure of the code is no longer appropriate. Sometimes this can lead to big changes and refactorings. A simple feature that was implemented in hours may result in days of extra work. Especially if that much refactoring wasn’t foreseen, this may be a problem. Still, it’s best to put it in your report and discuss later what to do. Not doing the restructuring means technical debt, and increases the probability of the Broken Window Syndrome. I try to look for architectural improvements separate from the first reading of the code, because now you need a bird’s-eye view of the code.

Depending on the Definition-of-Done, and whether the story is supposed to be ‘done’ after this review, I may also look at things like unit tests or code coverage. I don’t care about a specific percentage of coverage, but I do care that at least all important logic is covered.

It's also good to mention that a review is not the same as testing, nor does it include testing. It is very well possible to find bugs just by looking at the code. It certainly is a sign that you had a good look, but I don't primarily try to find bugs when doing a review. I do try to follow the logic in the code, and I try to 'map' this in my mind to the functional requirements mentioned in the story, but that still isn't the same as testing. After the review someone should still test the functionality and verify that the requirements have been met.

How much time do you spend on a typical review? Obviously, it depends. It depends on the amount of code (and other stuff) that needs reviewing, it depends on the time pressure (although this should never be used as a single factor to rush the review), it depends on the importance of the feature, and the impact in case of issues. Currently in my team, I spend between 1 and 3 hours on a review. Spending less time means you may miss obvious mistakes, resulting in code decay. You may also fail to understand the code, meaning that more knowledge sharing is needed later. Or worse, the next time you work on the code, you introduce bugs because of false assumptions. Spending more time does not necessarily mean finding more issues, and may result in time lost. Every time you need to find the right balance. If you notice that code quality is decreasing over time, it may be a good idea to consider spending more time on the reviews.

Reporting the results back

Now you need to report back the results. Remember that this is all about communication. Some people will take your remarks personally and feel offended, so make sure you communicate your positive intentions. You may use the Feedback Sandwich or any other feedback technique.

Don't focus solely on the things that are wrong or have to be improved. Also mention what you like about the code. Perhaps there are aspects of the coding that your colleague improved on, compared to the previous piece of code you reviewed from him. Perhaps he improved existing code or cleaned up legacy code. It may be worth mentioning that you can see that he is improving as a developer. Respect your colleague for choosing you, even if you were the only person available.

Depending on the seniority of your colleague, you can also consider adding more explanations to your remarks. For newer team members it’s good to refer to coding standards or all kinds of practices the team uses.

Processing the results

Finally, the results should be processed. Some remarks may be easy to fix: a better variable name, fixing a typo, or enhancing the documentation. Other remarks may take more time: restructuring a class or refactoring the entire architecture. The code review may also lead to discussions. Not every suggestion for improvement is straightforward, and different developers tend to have different opinions. When receiving the results, always think for yourself and check whether you really understand and agree with the remarks or suggestions. In my experience, this is the opportunity that leads to the most interesting discussions. It will help you really dive into the details and become a better programmer.


To conclude, it may be clear that I take reviews seriously. It's one of the best tools a developer has to improve code quality. For me there are several important aspects of code reviews that I don't hear often enough from other developers. I think that as a developer, you are responsible for your own code. You are the one who should decide whether a double check from a colleague is needed or not. Therefore, reviewing code from a colleague is a favor, not an obligation or something that is required because a manager said so. Furthermore, I think a code review should include review of documentation and other deliverables too. Finally, communication is key.

Good luck!


Total Scheduling Engineering Culture

My team at Raet is a bit different than other teams. We use a different programming language, and we're on a different operating system. We do our own maintenance, deployments, releases and monitoring. This brings extra work, but also has many benefits. We rarely have issues and are able to roll out new releases and patches in no time. We are 'in control'. We are only a small step away from continuous delivery and have automated many parts of the process that other teams are still struggling with manually on a daily basis. I'm proud of what my team has accomplished so far, and I think in general people are happy with their work. We often get asked how we do this, so, inspired by the Spotify Engineering Culture videos (part 1 and part 2), I thought I'd try to write down my thoughts about the culture in our team.

Like the Spotify team, we haven’t figured it all out. We are definitely not perfect and we have a lot to learn, but we’re trying to move forward all the time. One of the most important factors for running smoothly, is the fact that everybody feels responsible. Responsible if something goes wrong, and responsible for successes. This does not mean that if something breaks, 7 people will jump at it to fix it, but that if someone notices something that’s wrong, he or she will never leave it thinking ‘not my job’.

In order to take responsibility, you need control over the process and environment and that’s what we have. It means, when there’s an issue, we don’t need to run to the manager to get permission before we can do anything. Also we don’t need to go to an Operations department to beg for some of their busy busy highly valued time, and please get access to the server to be able to analyze the issue. Yes, currently we still need to ask our manager for permission before doing an actual patch, to discuss the risk of the fix and decide about extra communication to the customer, but fortunately this rarely slows us down.

But it's not just about the patches. We are able to make any change to our system. When we see opportunities to improve the process or the product, we discuss this within the team and then we just do it. Especially when there are no dependencies on other teams, we can act very quickly. Now you might think: isn't it a big risk if the team can access everything on its own, without 'the all-knowing mighty manager who never makes mistakes and always knows what's best to do'? In theory yes, but we have one rule: "Making mistakes is allowed, but not learning from them is not". Also, not everyone in the team has the same permissions, and not everyone can do everything. This has been enough to create a very stable environment for the past few years.

Having fun is another important aspect of our culture. This can be achieved by doing fun things, like having lunch together, but also by doing fun projects. There should be a balance between doing responsible (and sometimes boring) stuff, and fun stuff. We don't have a 20% rule at Raet (yet), but that does not mean that there's no room for good initiatives. In my team I try to allow people to work on innovative ideas when they come up with something. Fun projects can definitely be useful, as long as the ultimate goal is to improve customer intimacy. Fun is required for innovation: unhappy people are not expected to come up with great ideas. As long as people keep coming up with great ideas to improve, I know we're still taking that extra step to stay ahead of the competition. That way we keep the end user happy, and also ourselves.

To make sure everybody is happy and stays happy, we do frequent retrospectives. We use the MSGL (mad/sad/glad/learned) board to discuss the past few weeks. When, during the sprint, someone is mad, sad or glad about something and wants to discuss it during the next retrospective, he puts it on the board. Also, when someone has learned something that he wants to share, he adds it to the board. This way you get it out of your head and you can continue with what you were doing. And when during the sprint we see people have been working too hard, we suggest they go home early, …because they deserve it. Commitment to the team is great, but staying healthy is even more important. And if you see that other people in the team care about you, that likely increases the commitment again.

One of the nice things in our team is that everybody has their own specialties and preferences. We have a guy who likes front-end GUI work, a girl who prefers backend work, someone for mobile, a tester, and a scrum master who likes agile coaching and keeping everybody happy. I am the architect, and I like to work on continuous delivery and security. As said before, these interests don't mean we don't work on anything else, but they help to decide who can work on what. Pair programming and code reviews make sure that your knowledge is not limited to just your own interests. Also, at the end of the sprint, everybody just works on what needs finishing, whatever their preferences.

As a team, it is important to try to stay as independent as possible. Not just regarding procedures and controlling our own server, but also regarding our software. We don't want dependencies on other teams or products. When we need to build an interface for some other product, we build it in such a way that it doesn't break if the other product is down, or if the other product has not released some update yet. We're always backwards compatible. That way we can move forward independently and go as quickly as we can. Other teams just need to try to keep up.

Training is also important. There are many ways we try to keep our knowledge up to date. One way is by going to conferences. That's where most of the ideas come from, but it can also be expensive, so unfortunately we can't do that every week. Besides, after a conference, you need some time to actually use the newly gained knowledge and ideas and start working on it. In the meantime, we do knowledge sharing, within the team, and when possible also between teams. For example, we have workshops where we discuss some book. Anyone who is interested can join. Everybody reads the same chapters, and we discuss the interesting parts. This helps us understand better how to use the theory in practice and it's a fun way of teaching others. Also, several team members are active in guilds where they share knowledge and find new inspiration for our own team. E.g. we have a scrum master guild, a security guild, a testing guild and a UX guild.

We still have plenty of challenges and struggles. One challenge lies in continuous delivery. We’ve already come quite far, but the last bits are always the hardest. Continuous Delivery and continuous deployment would be a huge improvement for the quality and the efficiency of our work. For now we have continuous integration. On each code commit, unit tests and integration tests are run. If successful we automatically deploy to our Test environment. Every night we do an additional nightly build that also includes a vulnerability scan. Separately we have static code analysis, which unfortunately is not yet linked to the build. Features are released through feature toggles. Often first to a pilot group and later to the rest of our customers. The last bits are in automating the SQL scripts, automatically deploying to Acceptance and Production, and most importantly, gaining enough trust in the quality by increasing the amount and relevance of automated tests.

Another challenge is that we have a team that is getting bigger and more distributed. Communication is vital and issues arise quickly if we forget to update our team members. Different people from different countries have different expectations, communication styles and backgrounds. It takes time to adjust and let the team grow.

Perhaps the biggest challenge we have is autonomy. We're the only team at Raet working as a DevOps team, so keeping our autonomy is really difficult. Often new restrictive rules get applied to all teams, including ours, even when they don't make sense for us. Fortunately, in the long run, everybody moves forward, so we're getting there sooner or later…

Total Scheduling Engineering Culture

Automated Vulnerability Scan with OWASP ZAP

A few months ago, I set myself the goal of automating our vulnerability scan and running it as part of our nightly builds. At that time I had just started checking the different scanners that are out there, so I wasn't attached to a particular scanner yet. I ended up with OWASP ZAP. Why? Because it's free, it has an easy-to-use API and in general it's just a great scanner. Maybe it's not as complete as some of the expensive ones out there, but a very good start nonetheless. And because it is open source, there's plenty of help available online.


I set a few objectives for this project. Obviously it had to be an automated process, so no user intervention to start the scan or interpret the results. The final result should be a 'yes' or a 'no', meaning: 'yes', everything is secure, there were no vulnerabilities detected, or 'no', there are potential vulnerabilities that need to be fixed or marked as false positives. Furthermore, the scan should have more or less repeatable results, so we can check whether previous findings are fixed the next day. The whole scan should run in more or less constant time, or at least not differ too much per run, and it should be done in a few hours. This last objective proved to be the most difficult, because running the standard scan with all the options and without any constraints would easily run for several weeks, which is just not very practical for a 'nightly' scan.

There were also a few constraints. For example, it should work with our build server, TFS, and the details about the vulnerabilities shouldn't be sent across the network for everyone to see. A limited number of people have access to the scan results. It is fine, however, to send a summary to the build server.

Basic Setup

These objectives and constraints led me to the following setup. I'll first give a high-level overview and go into the details later.
ZAP is running on a Windows Server 2012 VM. Only a few people have access to this machine, so the final report can stay safely on it. I created a simple PHP web service that uses the ZAP API and can be called by TFS. The web service returns either a simple 'OK', or a summary of the findings.

ZAP Scan - setup

By the way, I can't recommend the use of Windows here. I bumped into so many problems with installing stuff, memory issues, missing DLLs, etc. Partly this may be caused by my lack of experience with this particular Windows Server version. The main reason for me to select this one is that requesting a Linux VM would have taken years in our company, instead of 'just 4 weeks'. If you want to automate a vulnerability scan in a similar way, take the OS that you feel most comfortable with. ZAP is very flexible and can adapt easily to your OS of choice.


The rest of this blog post will be written in a way that should allow others to follow the same steps and automate their vulnerability scan too.

Before you can automate anything, it’s best to first have a working manual scan that satisfies the objectives. For this we need to do a few things:

  • Setup a ZAP Context
  • Create an initializing Zest script
  • Create a scan policy
  • Tune parameters

ZAP Context

The ZAP context is the basis of the configuration. It tells ZAP which URLs are in scope and which aren't. It tells ZAP how authentication works, and how it can decide whether it's logged in or out. Also, you can specify which technology you would like to target. By doing a manual scan first, you can build up the ZAP context.

Initial Zest script

The ZAP spider needs to have an initial request in the site tree before it can do anything. Now, unless I'm missing something, the spider does not seem to be able to use the login URL to initiate this. It seems a bit odd, but the only way I could find to automate creating that first request is by using a Zest script that does it.

Create a scan policy

The scan policy tells ZAP how strong the scan should be. If you want to scan mainly for XSS, then you set the strength of those tests higher, and of other tests lower. Because we are doing a nightly scan that should finish within a few hours, I set the strength a bit lower overall.

Tune parameters

Finally, you should tune the parameters, so that the whole scan, including initialization, spidering and active scanning takes less than a few hours. Some ways to minimize the length of the scan are:

  • Decrease the max depth for the spider
  • Decrease the max children for the spider
  • Use lower strength for the scan policy
  • Take uninteresting parts of the site out of scope


Okay, on to the interesting part. You can use the ZAP API with any programming language, but I used PHP, because that's what I do on a daily basis. For PHP a library is available for easy use of the API, but before I found that out, I had already written my own. It's just simple HTTP GET requests after all.
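My own wrapper is not much more than URL building plus a GET request. A minimal sketch (the host, port and API key are assumptions here — I reuse the key from the popen example further down; adjust them to your own ZAP instance):

```php
<?php
// Build the URL for a ZAP API call. ZAP is assumed to listen on
// localhost:8080 with API key '123456789abcdefg'; change these
// to match your own setup.
function zapApiUrl(string $path, array $params = []): string
{
    $params['apikey'] = '123456789abcdefg';
    return 'http://localhost:8080' . $path . '?' . http_build_query($params);
}

// Perform the call and decode the JSON answer into an array.
function zapApiCall(string $path, array $params = []): array
{
    $json = file_get_contents(zapApiUrl($path, $params));
    return json_decode($json, true);
}
```

Shutting ZAP down then becomes a one-liner: `zapApiCall('/JSON/core/action/shutdown/');`.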

As said before, the biggest challenge was having the whole scan run automatically in more or less constant, or at least predictable, time. In the end it turned out that this means starting from scratch every time.

First I started with running ZAP in GUI mode under my own user account, so I could see what the API calls were doing. The API is available for web service calls from the outside anyway. However, that caused problems with the session size. Although I started a new session every time (and used the overwrite option), session files kept growing. My session file grew to more than 40GB and the server crashed. I think this is a bug in ZAP. Manually deleting the files doesn't work, because they are locked. So, I had to restart ZAP on every run, which by the way is also an advantage, because that way it will automatically work when the server is rebooted. Now, because ZAP will be restarted from the web service script, it cannot use GUI mode anymore. Fortunately, you can start ZAP in headless (daemon) mode.

Debugging the script in headless mode is a little more difficult, but if you want to debug, it shouldn’t be a problem to temporarily disable the restarting and use the GUI version to see what the API calls are doing.

Another challenge was loading the context, because a bug caused the structural parameters not to be loaded when importing the context file. Since our web application uses a structural parameter for differentiating between pages, that was absolutely required. Fortunately, the ZAP team was right on time with version 2.4.1 in which that issue was fixed. Kudos to the ZAP team!

Finally, it was difficult to get the spidering automated. As I said before, the spidering needs an initial request in the site tree, but by using a simple standalone Zest script, that first request is easily generated.

My API calls can be broken down into four parts:

  • Initialization
  • Spidering
  • Active Scanning
  • Reporting findings

Initialization
(re)start ZAP
start new session
load context
do first request

So, first you want to restart ZAP. If it's still running, shut it down (API call: /JSON/core/action/shutdown/). If it wasn't running, you'll get a timeout, but the result is the same. If you need to delete old session files that have grown too big, now is the time to do this. Starting ZAP asynchronously in the background, in a way that the PHP script isn't waiting for it to finish, turned out to be quite a challenge on Windows, but this is how I managed to get it to work:

  popen('start zap.exe -daemon -config api.key=123456789abcdefg', 'r');

Now you start a new session (/JSON/core/action/newSession/). Set the parameters to overwrite the previous session, because you don't need the previous one anymore; results from every scan will be archived at the end. Optionally check whether the context is already loaded by requesting a list of currently loaded contexts (/JSON/context/view/contextList/). If it hasn't been loaded, load it (/JSON/context/action/importContext/).

Now run the Zest script, so the spidering can start. First you load it (/JSON/script/action/load/), then you run it (/JSON/script/action/runStandAloneScript/).

For later calls, you need to know both the contextId and the userId, so retrieve those also (/JSON/context/view/context/, and /JSON/users/view/usersList/).
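Put together, the initialization phase boils down to firing a handful of API calls in a fixed order. A sketch (the context file path, script names and API key are placeholders, and error handling is reduced to the bare minimum):

```php
<?php
$base = 'http://localhost:8080';
$key  = 'apikey=123456789abcdefg';

// The initialization calls, in the order described above. The context
// file and Zest script paths are placeholders for your own configuration.
$initCalls = [
    '/JSON/core/action/shutdown/',                               // stop a running ZAP (a timeout here is fine)
    // ... restart ZAP with popen() and wait for it to come up ...
    '/JSON/core/action/newSession/?name=nightly&overwrite=true', // fresh session
    '/JSON/context/action/importContext/?contextFile=C%3A%5Czap%5Cnightly.context',
    '/JSON/script/action/load/?scriptName=init&scriptType=standalone&scriptEngine=Mozilla+Zest&fileName=C%3A%5Czap%5Cinit.zst',
    '/JSON/script/action/runStandAloneScript/?scriptName=init',  // puts the first request in the site tree
];

foreach ($initCalls as $call) {
    $sep = strpos($call, '?') === false ? '?' : '&';
    @file_get_contents($base . $call . $sep . $key);             // '@': the shutdown call may time out
}
```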

Spidering
set spider options
start spider scan
  check spider status
while (status is 'RUNNING' and max time limit not reached)

Before starting the actual spidering, set the maximum spider depth (/JSON/spider/action/setOptionMaxDepth/), because spidering too deep requires a lot of extra time. I set it to 3 for a nightly scan. That will result in about 95-98% of all interesting URLs. Further scanning will just result in more of the same pages with different parameters.

Now you can start spidering (/JSON/spider/action/scanAsUser/). For later calls you will need the scan id, which can be found by requesting a list of current scans (/JSON/spider/view/scans/).

Because you don't know exactly how long the spidering takes, you can check the status every few seconds and take action if needed (/JSON/spider/view/scans/). When I run the scan on the command line, I also write the progress percentage to the screen. Furthermore, if for some reason the spider scan is running (much) longer than expected, you can just cancel it (/JSON/spider/action/stop/) and continue with the results you have so far.
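This wait-with-timeout logic is the same for the spider and (later) the active scan, so it can go into a small helper. A sketch, where the status callback is an assumption about how you wire things up — in my case it wraps the /JSON/spider/view/scans/ (or ascan) call:

```php
<?php
// Poll a scan until it reports 100%, or until $maxSeconds have passed.
// $getStatus must return the scan's progress percentage (0-100).
// Returns true when the scan finished on its own, false on timeout,
// in which case the caller should issue the .../action/stop/ call
// and continue with the partial results.
function waitForScan(callable $getStatus, int $maxSeconds, int $pollInterval = 5): bool
{
    $start = time();
    while ($getStatus() < 100) {
        if (time() - $start >= $maxSeconds) {
            return false; // took too long: caller stops the scan
        }
        sleep($pollInterval);
    }
    return true;
}
```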

Active Scanning
load scan policy
start active scan
  check active scan status
while (status is 'RUNNING' and time limit not reached)

API calls for the active scan are similar to the spidering. First you add the required scan policy file (/JSON/ascan/action/addScanPolicy/). Then you start the scan (/JSON/ascan/action/scanAsUser/), and you wait until it is finished by checking the status every few seconds (/JSON/ascan/view/scans/), or until it has taken too long, after which you actively stop it (/JSON/ascan/action/stop/).

Report Findings
get all alerts
filter out false positives
if there are alerts
  export the alerts
  return summary
else
  return 'OK'

After the scan has finished, you check how many issues there are (/JSON/core/view/numberOfAlerts/). Hopefully there are none, and you can just return the highly anticipated and hoped-for 'OK' to the build server.

However, if there are issues, you need to retrieve them (/JSON/core/view/alerts/) and report on them. Currently I use a list of regular expressions that analyze the resulting 'parameter', 'evidence', 'url' and 'alert' fields and filter out the false positives. I know you can mark alerts as 'false positive' in ZAP, but as far as I know you can only mark one item at a time, which is annoying if ZAP tells you that you need a specific security header that you left out for a reason, and it tells you this for all 1000+ different URLs. Also, the false positives don't seem to be part of the context but of the ZAP installation, which makes it difficult to transfer them between ZAP installations. So, for now I prefer the regex option.
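A stripped-down sketch of that regex filter (the patterns in the usage example below are made up — ours are specific to our application):

```php
<?php
// Drop alerts that match one of the known-false-positive patterns.
// Each pattern is matched against a string built from the alert's
// 'alert', 'url', 'param' and 'evidence' fields.
function filterFalsePositives(array $alerts, array $patterns): array
{
    return array_values(array_filter($alerts, function (array $alert) use ($patterns) {
        $haystack = implode(' ', [
            $alert['alert']    ?? '',
            $alert['url']      ?? '',
            $alert['param']    ?? '',
            $alert['evidence'] ?? '',
        ]);
        foreach ($patterns as $pattern) {
            if (preg_match($pattern, $haystack)) {
                return false; // known false positive: filter it out
            }
        }
        return true; // a real finding: keep it
    }));
}
```

For example, a pattern like `/X-Content-Type-Options.*\.png/` would suppress missing-header alerts on static images while leaving real findings untouched.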

For more details, I export a standard HTML report locally (/OTHER/core/other/htmlreport). This can later be used to reproduce the issues, and fix them.

After all the filtering is done, I return the summary to the build server. Because our team is using Slack as a collaboration and discussion tool, I decided to also send the summary there, so everybody in the team knows work needs to be done. Results may look like this:

Security Scan Slack Summary

Future improvements

Now that I have this working, I can start thinking about the future. I have a few wishes to further improve on this:

  • Better handling of false positives. The regular expressions are still not ideal, because the false positives are still exported to the HTML report. I guess the best way would be to have a ZAP add-on that allows you to graphically filter out multiple false positives at a time, while perhaps even adding remarks on why something is a false positive. Obviously the configuration for this should be easily exportable to different installations of ZAP, and allow for easy backup.
  • Optimize the scan policy, so the scan becomes more efficient.
  • Add (Zest) scripts for more specific testing.
  • Make the whole web service and configuration more generic, so we can use this for other products as well. Actually, I already started working on this one.
  • Check out the ZAP code and start contributing…

White Chocolate Mousse

White chocolate mousse: one of those things you can wake almost anyone for in the middle of the night. No, I'm not exaggerating. Wait until you try this recipe. A friend of mine hates white chocolate, but she almost begged me to make more of it. I consider this positive feedback.

Some advice upfront: only make this recipe if you have at least 3 or 4 guests or friends to help you eat it, because this stuff contains quite a lot of calories. Eat at your own risk!

By the way, I based this recipe on one I found online, which I optimized to meet my own standards and preferences.

First the ingredients:

  • 2 egg whites
  • 2 egg yolks
  • 200 gr white chocolate
  • 2 dl whipped cream
  • 1 sheet of gelatin

The quality of the mousse depends mainly on the quality of the white chocolate, so don’t go for the cheapest type of chocolate.

The preparation takes approximately 15-20 minutes, but you also need to put the end result in the fridge for at least 2-3 hours before serving.

Let’s start:

First put the gelatin in cold water, and leave it there until you need it.

Cut the chocolate in small pieces and put them in a bowl.


Whip the cream until stiff peaks start to form. Put the whipped cream in the fridge.

Beat the egg whites (with clean whisks!) until stiff.

Beat the egg yolks until creamy.

Put the bowl with the chocolate in the microwave and melt it. Depending on the type of microwave this will take a few minutes. Just to be on the safe side, take it out every 30-60 seconds to stir, and to check that it isn't burning. As soon as everything has melted, wring out the gelatin and add it to the chocolate. Put this in the microwave for 15 seconds and stir again.

Add the egg yolks, … and stir.

Add half the whipped cream, … and stir.

Add the egg whites, … and stir.

Add the rest of the whipped cream, … and stir once more.

That’s it. You can put the resulting mixture in some nice glasses and put them in the fridge for a few hours. It tastes great with banana and chocolate sprinkles.


Bon appétit!



When I first heard about clickjacking, I was amazed at how easy it is to use this type of attack and what damage it can do. Later I was amazed at how easy it is to secure your site against clickjacking. Now I’m just amazed at how many websites are still vulnerable. I’ve been thinking about it for some time and the only reason I can come up with is a lack of awareness, so here’s my contribution to making this world a little better (safer).

First some background: in a clickjacking attack, a hacker attempts to 'hijack' clicks and send them to a different component than the one the user expects them to go to, causing all kinds of actions on behalf of the unknowing user. This may be sending emails, creating users, transferring money, allowing unrestricted access to your webcam… or worse. Basically any action can be executed unknowingly if a site is not properly protected. While clickjacking attempts to hijack clicks, it is also possible to hijack other mouse events, or keystrokes. This is all part of something we call 'UI redressing'. In a redress attack, a hacker will 'redress' a site, so it looks different from what the user is accustomed to. The user probably won't even know that he has loaded that site. Usually the hacker does this by loading the site in an iframe and making that iframe completely invisible, while showing something different behind it. It is also possible to show something on top of the site, and pass all events on to the element behind it (in this case an element inside the iframe).

Clickjacking illustration (image borrowed from OWASP)

Let’s illustrate this with a simple example page a hacker can make:

    <iframe src="" style="opacity: 0; position: absolute; top: -123px; left: -95px;" height="384" width="584" scrolling="no"></iframe>
    <button>Click for free iPad</button>

The vulnerable site is loaded in an iframe, with opacity set to 0, making it invisible. It is positioned in such a way, that the location where the hacker wants a user to click is right behind some shiny button.

A nice real-world example that I like is the clickjacking vulnerability in the Adobe Flash settings page in 2011 that allowed a hacker to view webcams from any user around the world without their knowledge. Feross Aboukhadijeh explained how he found the bug and created a simple game to demonstrate it. This also shows that clickjacking does not just have to be about a single click, but can also mean complex patterns of clicks or other interactions.


So, how do you prevent all this? The basic idea is to control whether your site can be loaded in an <iframe> (or <frame>, <object>, <embed> or <applet>) at all, and if it can, on which domains.

Never load in an iframe

If your site should never be loaded in an iframe, prevention is easiest. There are two things you need to do: 1. add some HTTP headers, and 2. add JavaScript protection.

First the HTTP headers. Add these to any response to the browser. Oh, and don’t forget that your mobile site needs them too! And also remember any .swf files you’re using, or other objects that can be included through their own request URL.

X-Frame-Options: DENY
Content-Security-Policy: frame-ancestors 'none'

The first header is old and not very flexible, but it's the only one that works in most versions of Internet Explorer and some other browsers. The second header is newer and therefore unfortunately not yet (fully) supported in all browsers. So you will want to use both headers.
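If your backend is PHP, sending both headers takes just a couple of lines. A small helper could look like this (a sketch — the function name is my own invention, and where you hook it in depends on your framework):

```php
<?php
// Emit both anti-clickjacking headers. Pass null to deny framing
// completely, or a full origin such as 'https://portal.example.com'
// to allow framing from that origin only. The header lines are also
// returned, so the function is easy to test.
function sendAntiClickjackHeaders(?string $allowedOrigin = null): array
{
    if ($allowedOrigin === null) {
        $headers = ['X-Frame-Options: DENY',
                    "Content-Security-Policy: frame-ancestors 'none'"];
    } else {
        $headers = ["X-Frame-Options: ALLOW-FROM $allowedOrigin",
                    "Content-Security-Policy: frame-ancestors $allowedOrigin"];
    }
    foreach ($headers as $h) {
        header($h);
    }
    return $headers;
}
```

Call it before any output is sent, on every response — including the mobile site.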

Second, you should also add a frame-busting script that checks whether your site is the top-level window and redirects the user to safety if it isn't.

<style id="antiClickjack">body{display:none !important;}</style>
<script type="text/javascript">
   if (self === top) {
       var antiClickjack = document.getElementById("antiClickjack");
       antiClickjack.parentNode.removeChild(antiClickjack);
   } else {
       top.location = self.location;
   }
</script>

There are many different frame-busting scripts, but according to the OWASP anti-clickjacking cheat sheet, this is currently the 'best-for-now' version. It will first 'hide' the whole site with a simple styling rule, and then, if your site appears not to be in an iframe, it will remove that rule again. Hackers have many ways of loading your page in an iframe and breaking the JavaScript, but because of the styling rule no harm can be done if they try. To give you an idea why JavaScript alone is not enough: hackers could run the site in a sandboxed iframe and disable all JavaScript, or they can use the browser's anti-XSS protection to disable the frame-busting script, or (in older browsers) they can redefine the meaning of the JavaScript variable 'location', thereby changing the behavior of the script.

If you put the above styling and script in the head-section of your page, you should be safe. Well, … as safe as can be expected on the Internet. And again, don’t forget to include the script also in your mobile site.

Only load in iframes on a specific domain

Sometimes it is required that your site can run in an iframe. Often this is only necessary on sites that are on the same domain, for example because your web application runs in a web portal that is created somewhere else in your organization. In this case no JavaScript solution is possible, because the same-origin policy forbids us from seeing the domain of the parent page, so there is no way to check whether that page is allowed to include your site. Fortunately the HTTP headers have options for this situation.

X-Frame-Options: SAMEORIGIN
Content-Security-Policy: frame-ancestors 'self'

If your site needs to run in an iframe on a specific domain, simply specify the domain:

X-Frame-Options: ALLOW-FROM https://example.com
Content-Security-Policy: frame-ancestors example.com

Note that in the X-Frame-Options header you need to be more explicit and also specify the protocol.


What if your site needs to run on multiple domains? Unfortunately the X-Frame-Options header is not that flexible, so you just can't specify this. A possibility might be to detect which parent domain is used (e.g. through an explicit referrer parameter) and set the X-Frame-Options header accordingly if that domain is allowed. Otherwise, you need to rely on Content-Security-Policy, which does accept a list of origins. In practice, that would mean that Internet Explorer (at least until IE11) won't be able to give your users any safety regarding clickjacking. You can find out which browsers support Content-Security-Policy online.

Content-Security-Policy: frame-ancestors example.com example.org

So, depending on your situation, all you need to do is add some JavaScript and one or two HTTP headers. No excuses! There are security vulnerabilities that are a lot harder to solve.

Just for the sake of completeness, I should mention that there are also ways users can protect themselves from the browser side through all kinds of browser plugins, but that's a whole different story. Perhaps a topic for a new blog post?