I started working with Docker in my job at TOPdesk almost a year ago. Security is an interest of mine, so I did some research. You can’t look at Docker without thinking about microservices, although they are separate topics. It is often said that microservices can greatly improve your security, but also that, if you do it wrong, security can actually get worse.
So, what do you need to do to improve (Docker) security, rather than undermine it? For most security concerns there is already a good solution, although not all of them are widely adopted. Let’s have a look at our concerns and how we take care of them.
What Base Images to Use?
Perhaps our biggest concern was about where our developers would download their base images, and especially: which images will they select? If you look on Docker Hub, there are a lot of images to start with. However, as with any software you download from the Internet, there can be all kinds of nasty surprises in them. A good way to look at this is the “Tower of Trust”, as explained by Rory McCune (https://youtu.be/Wn190b4EJWk). Basically he says that to trust the software you download (e.g. a Docker base image), you need to trust the developers of that software. You also need to trust the developers of all the dependencies, the people involved in hosting that software on their servers, the people responsible for the repository software, and everyone involved in the infrastructure between that server and your computer, including your own ICT department. That’s a lot of people!
You can’t get to know all these people and figure out whether to trust them. What we can do is give our developers guidelines on how to select base images. For example, it helps to select images from a well-known Docker registry, like Docker Hub. The chances of a compromise on such a registry are smaller than on a registry someone hosts on a private server. ‘Official images’ will, in general, get security updates more quickly than others. Furthermore, if more people use an image, bugs or vulnerabilities are more likely to be found and fixed. We try to keep the number of images we use small; this makes it a little easier to keep an overview of all the known vulnerabilities. For now we are using a whitelist of images, since we don’t expect many different types of base images will be needed anyway. We’ll see how that goes in the future.
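As a small illustration of these guidelines, a Dockerfile can pin an official base image to an explicit tag, or even to a content digest, so you always know exactly what you are building on. The image name is just an example, and the digest is a placeholder you would fill in after verifying the image yourself:

```
# Use an official image from Docker Hub, pinned to an explicit tag
# instead of the moving 'latest' tag.
FROM debian:9-slim

# Even stricter: pin the exact image content by digest, so the base
# can never change underneath you (placeholder digest, fill in your own):
# FROM debian@sha256:<digest-of-the-image-you-verified>
```

Pinning by digest trades convenience for certainty: you no longer pick up base image updates automatically, so it works best combined with regular vulnerability scanning.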
How to keep Track of Vulnerabilities?
Most Docker images contain known vulnerabilities, so it is important to keep track of them. Analyze those vulnerabilities to see if they are relevant for your situation and, where they are, take measures to prevent misuse, because they may open the door to visitors with bad intentions. We currently use the scans from Docker Hub, but the results are hard to include in our continuous delivery pipelines. There are also rumours that those scans will not remain ‘free-to-use’ in the future, although we’re not sure if that is true.
We’re looking into scanning our images with other tools like Clair (https://github.com/coreos/clair). It’s easier to make the scan part of the pipeline and scan on every build, especially since we can run the whole CVE scan in a Docker container itself. Also, we get the feeling that the results are more accurate than those from Docker Hub. There’s also a whole lot of commercial solutions that we’re looking into.
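As a sketch of what this could look like in a pipeline, the commands below run Clair and its vulnerability database as containers, then scan a freshly built image with the community clair-scanner tool. The image names, tags, and flags are illustrative and depend on the exact versions you pick, so check the tools’ documentation before relying on this:

```shell
# Start Clair's vulnerability database and the Clair API itself
# (community-maintained images; names and tags are illustrative).
docker run -d --name clair-db arminc/clair-db:latest
docker run -d --name clair -p 6060:6060 \
    --link clair-db:postgres arminc/clair-local-scan:latest

# Build the image we want to check, then scan it.
docker build -t myapp:latest .

# clair-scanner exits with a non-zero status when it finds issues,
# which is exactly what a CI pipeline needs to fail a build.
clair-scanner --clair="http://localhost:6060" --ip="$(hostname -i)" myapp:latest
```

The non-zero exit code is the key property here: it lets the pipeline choose between ‘blocking’ on new issues or merely logging them.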
As soon as we have the scan results in the pipeline, we’ll decide whether we want to ‘block’ the pipeline when new issues show up. Alternatively, we can decide to initially just monitor the situation.
Limit a User’s Permissions inside the Container?
If a hacker could somehow compromise a container, we don’t want them to be able to do much harm. So, how do we limit the permissions of the user inside the container?
It turns out that Docker has already done some great work to minimize the impact in the default situation. Even if you change nothing about permissions, you start with a (root) user that has hardly any permissions at all. Docker uses the Linux system of kernel capabilities: https://github.com/docker/labs/tree/master/security/capabilities.
Linux Kernel Capabilities
Linux kernel capabilities enable a more fine-grained permission system. Docker links into this system by starting containers with a specific set of capabilities (https://github.com/docker/docker/blob/master/oci/defaults_linux.go#L64-L79). This means that the root user can, for example, set the setuid bit, but cannot perform all kinds of network-related operations. It is possible to add or remove individual capabilities if needed, but it is advised to only do so when strictly necessary. Capabilities that you add but don’t need will only help hackers do more harm if they manage to compromise your container. Therefore, never add all capabilities at once. Docker sets a secure default by enabling only a few of the most commonly used capabilities.
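For example, you can drop every capability and then whitelist only what the process actually needs. The official nginx image is a reasonable illustration: it binds port 80 as root and then drops to an unprivileged worker user, so it needs a handful of capabilities but not the full default set. The exact set differs per image, so test before locking anything down:

```shell
# Drop all default capabilities, then add back only what nginx needs
# to bind a privileged port and switch to its unprivileged worker user.
docker run --cap-drop=ALL \
           --cap-add=NET_BIND_SERVICE \
           --cap-add=CHOWN \
           --cap-add=SETUID \
           --cap-add=SETGID \
           -p 80:80 nginx

# The opposite of this advice; never do it in production:
# docker run --cap-add=ALL some-image
```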
For more info on kernel capabilities, look here: http://man7.org/linux/man-pages/man7/capabilities.7.html (Linux Kernel Capabilities)
Run as Root User inside the Container?
Ideally we don’t want our container to run as root inside. Docker advises this in its guidelines: https://docs.docker.com/engine/userguide/eng-image/dockerfile_best-practices/#user
However, apart from this general advice we found very little useful discussion or examples on this topic, so we’re not sure this is something we should require of our developers. Yes, we encourage the use of a different, less-privileged user, but we won’t enforce it yet. The fact that the root user has limited capabilities helps here as well.
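For those who do want to follow the advice, the Dockerfile pattern is small: create an unprivileged user and switch to it with the USER instruction. The user name, base image, and binary path below are just examples:

```
FROM debian:9-slim

# Create a dedicated system user and group without a login shell
# or home directory.
RUN groupadd -r myapp && useradd -r -g myapp myapp

COPY app /usr/local/bin/app

# Everything after this line, including the container's main process,
# runs as the unprivileged 'myapp' user instead of root.
USER myapp
CMD ["/usr/local/bin/app"]
```

One practical caveat: files the process needs to write must be owned by (or writable for) that user, which is usually where this pattern takes a bit of extra work.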
Getting the Right Image in the Right Place
When pushing or pulling containers over networks (especially over the Internet), you always run the risk of a man-in-the-middle attack. You need to trust that the repository you want to communicate with is actually the repository you’re communicating with. You also need to trust that the image you are pulling was pushed by the correct publisher. Docker solves this with its Content Trust system.
Publishers can decide to sign their images when publishing them. The signature will be linked to the tag of the image, so for each new version of the image that the publisher pushes, a new unique signature is created.
From the consumer side, users can decide to activate Content Trust mode. As soon as you do, you can only ‘see’ (pull, push, build, create and run) images that are signed; if the origin of an image cannot be verified, you can no longer download it. If you want to benefit from this system, you should activate Content Trust on the machine where you build and push your images. Keep in mind, however, that most companies do not sign their images yet, so not every image out there will be available to you.
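In practice, enabling Content Trust is a single environment variable; with it set, pulls of unsigned images are refused and pushes are signed automatically. The image names below are illustrative:

```shell
# Turn on Content Trust for this shell session.
export DOCKER_CONTENT_TRUST=1

# Pulling a signed official image works as usual...
docker pull debian:9

# ...but pulling an unsigned image now fails, with an error along the
# lines of "remote trust data does not exist".
docker pull some-unsigned/image

# Publishers sign automatically on push while the variable is set;
# Docker prompts for the signing keys' passphrases the first time.
docker push myorg/myimage:1.0
```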
For more info on Content Trust, look here: https://docs.docker.com/engine/security/trust/content_trust/
Although the above is only about Docker security, it is good to remember that inside a Docker container there is a Linux system, so all the regular security measures for a Linux system still apply.
For even more ideas on how to improve security, have a look at the CIS benchmark for Docker: https://www.cisecurity.org/benchmark/docker/