Containerisation is becoming an increasingly common way to deploy applications in environments that are adopting a DevOps approach. But is it secure? This post will briefly explore some of the positives and negatives containers bring.

What are containers?

Containers are the next step on from the virtualisation we all know and love – Virtual Machines (VMs). In essence, containers virtualise and isolate applications within an operating system (whilst still sharing a common kernel), rather than virtualising the entire operating system (OS) as well. Containers offer massive benefits in portability, speed of deployment and standardisation.

Container vs Virtual Machine

Let’s say you had two separate web applications you needed to run on an internal web server. You could build a VM, install all the relevant software and libraries it needs, deploy your code and run it. But what if the second application requires a different version of the same framework? Or the two packages cause problems with each other? Or maybe you need to add an additional server behind a load balancer to cope with demand – you’d have to build a new VM, with a new OS and re-deploy everything – a lot of overhead.

Enter containers! Containers are basically an immutable image that contains all of your code and dependent software/libraries in its own little package. Instances of containers don’t interfere with each other and have their own separate view of the OS and its resources. The container platform (such as Docker) abstracts the OS from the application, a bit like VMware ESXi abstracts the hardware from a virtual machine. If you need to deploy a new containerised application, you can be confident that it will work the same way on your production server as it does in your test environment, as everything it needs is contained within the image.
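As a concrete sketch, here is what an image definition might look like for a hypothetical Python web app (the image name, paths and commands are illustrative, not a production-ready Dockerfile):

```dockerfile
# Hypothetical image for a small Python web app - everything the app
# needs (runtime, libraries, code) is baked into one immutable image.
FROM python:3.12-slim
COPY requirements.txt .
# Library versions are fixed at build time, not patched on a live server
RUN pip install -r requirements.txt
COPY app/ /app
CMD ["python", "/app/main.py"]
```

Build this once and the resulting image runs identically in test and production, because the dependencies travel inside it.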

Additionally, if you need to scale out, container images can be deployed much more quickly than building an entire new VM. They are perfectly suited to cloud-type environments.
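To give a flavour of that speed, scaling is usually just asking the platform for more replicas of the same image. A hypothetical Docker Compose fragment (service and image names are made up) might declare:

```yaml
services:
  web:
    image: myorg/webapp:1.4.2   # hypothetical immutable image
    deploy:
      replicas: 3               # the platform starts identical copies in seconds
```

With Docker Compose you can also scale on the fly with `docker compose up -d --scale web=5` – no new OS builds, no re-deployment of anything else.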

Security pros of containers

One big advantage of containers is their immutable nature. They are a fixed configuration, with specific versions of software, specific code and specific resource requirements. You are no longer patching live servers, making changes to accommodate new features or leaving legacy unused software/rules lying around. You simply build an entirely new package, deploy it and delete the old instances.

Containers also allow for easier server build standards and baselining. The underlying host operating system (which runs the container platform – be it Windows or Linux) can be one standard build across all nodes. You no longer have to keep track of lots of software packages on specific servers, as the hosts have very little installed – most software lives within the container images. This helps to reduce your overall attack surface.

Containers (and a general DevOps approach) bring with them faster deployment times and responses to issues. Let’s say you find a vulnerability in the web framework your application uses. Traditionally you’d have to deploy the new version to a test environment, update the required components, perform testing, then repeat the process in production whilst hoping that your test environment was an accurate representation of live (something that can be quite difficult).

This process usually involves multiple teams (developers and infrastructure) and can be slow and cumbersome. With containers, there is no “patching” per se. The developer simply rebuilds the container image with the up-to-date software in place, and you can be confident that whatever testing is carried out on that image holds in production too. This helpfully results in reduced patching requirements for your infrastructure teams and no more “well, it worked on my machine”!
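As a sketch of what “patching” becomes in this world (the file and image tags below are hypothetical), fixing a vulnerable framework is often just changing the base image tag and rebuilding:

```shell
# Hypothetical Dockerfile pinned to a base image with a known vulnerability
printf 'FROM python:3.12.1-slim\nCOPY app/ /app\n' > Dockerfile.example

# "Patching" = point at the fixed base image, then rebuild and redeploy
sed -i 's/python:3.12.1-slim/python:3.12.4-slim/' Dockerfile.example

grep '^FROM' Dockerfile.example
```

The old instances are then simply replaced with containers built from the new image – nothing is modified in place.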

Security cons of containers

As with most things in life – containers are a double-edged sword. Whilst there are clear benefits, they also bring challenges.

Take the benefit above of reduced server patching – whilst true, you are now reliant on your DevOps teams choosing secure software and updating builds when vulnerabilities appear – vulnerability management becomes harder.

Traditionally with vulnerability management you’d scan an IP, find a load of vulnerabilities on a server and know what you need to do to resolve them. But with containers, by the time a vulnerability scan has finished, the affected application may not even be running on that server anymore (due to their dynamic nature). How do you know which application had the problem?

There is a disconnect between a host server and the applications that may or may not be running on it. Vulnerability management has to change to take this into account and needs to be present throughout the whole development life cycle. Applications need to be tested for vulnerabilities early on.
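In practice, “early on” usually means scanning the image in the build pipeline, before it ever reaches a server. As a sketch, a CI step using Trivy, a popular open-source image scanner (the pipeline syntax and image name here are illustrative), might look like:

```yaml
# Hypothetical CI step: scan the freshly built image before it is pushed.
- name: Scan image for known CVEs
  run: trivy image --exit-code 1 --severity HIGH,CRITICAL myorg/webapp:latest
```

Failing the build on high-severity findings means the vulnerable image never gets deployed, which neatly sidesteps the “which host was it running on?” problem.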

Another potential downfall of containers is the layered approach they take to pulling the required software components into an image. As mentioned above, a container image contains all the relevant libraries etc. the application needs to run – but where have these come from? Often they will be pulled from public repositories, but are your developers validating what they contain? Are they pulling them from reputable sources? How do you know that the customised image of Apache they used didn’t actually contain a malicious backdoor? Image authenticity, authorship and integrity become hugely important for this reason.
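Registries address part of this by identifying images with content-addressable digests, which you can pin in a Dockerfile (e.g. `FROM nginx@sha256:<digest>`). The sketch below simulates the same integrity check locally with `sha256sum` on a stand-in file (file names are illustrative):

```shell
# Stand-in for an image layer pulled from somewhere
printf 'pretend this is an image layer\n' > layer.tar
# Record the known-good digest at the time you vetted the artefact
sha256sum layer.tar | cut -d' ' -f1 > layer.sha256

# Later, before deploying, verify the artefact has not been tampered with:
if [ "$(sha256sum layer.tar | cut -d' ' -f1)" = "$(cat layer.sha256)" ]; then
  echo "integrity OK"
else
  echo "DIGEST MISMATCH - do not deploy"
fi
```

The principle is the same at registry scale: if the content changes, the digest changes, and the pull or deploy should fail.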

Container isolation is another potential concern. Application containers are simply not as isolated as virtual machines. Containers rely on a shared kernel, and a compromised application could allow an attacker to compromise other containers or the kernel itself – something that would be harder to do if the applications were running on separate virtual machines.


The points above aim to give you just a taste of some of the pros and cons of containers. There is a lot more to consider. Security has to be at the forefront of your mind if you are considering going down the route of application containerisation.

One good resource I’ve come across recently is NIST Special Publication 800-190, the Application Container Security Guide. It goes through some of the main potential issues with containers, as well as recommendations on how to address them. Some of the key recommendations include:

  • Use container-specific host OSs – Instead of deploying a generic Windows or Linux OS on which to run your containerised software, consider one specifically designed for the job (such as CoreOS or Google’s Container-Optimized OS). These are examples of minimalistic OSs with a lot of software stripped out, which helps to reduce your attack surface.
  • Group containers by purpose, sensitivity or threat posture – If you’ve got a public web application, it probably isn’t a great idea to host it on the same host OS as your internal payroll application. Containers aren’t as isolated as VMs, so taking a defence-in-depth approach with grouping can be a sensible thing to do. In large environments this will require the use of an orchestration tool.
  • Adopt container-specific vulnerability management tools – For the reasons outlined earlier, traditional vulnerability management tools will struggle with the dynamic and immutable nature of containerised applications.
  • Adopt the right vulnerability management processes – Going hand in hand with the above, you need to ensure the right processes are in place for your DevOps teams to integrate vulnerability management early on in the software development lifecycle, and to react to new vulnerabilities quickly and efficiently.
  • Image management tools and processes – Image management is probably one of the key concerns. Processes and tools need to be in place to ensure that images are pulled from reputable sources and validated, and that their integrity is verified to ensure they have not been modified.
  • Prevent embedding of cleartext secrets – Often there is a need for secrets within your applications – perhaps a connection string for a database server or an API key. However, embedding these in the clear within container images poses a risk. Instead, orchestration or other privileged access management tools should be utilised to prevent this behaviour.
  • Orchestration security – Most large deployments of containerised software will use some form of “orchestrator” that controls how images are deployed and where. It is equally important that these are correctly secured too.
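On the cleartext secrets point, even a naive scan of your build files can catch the worst offenders. A minimal sketch (the file name and patterns are illustrative – real tools such as gitleaks or trufflehog are far more thorough):

```shell
# Write a hypothetical Dockerfile with a secret baked in, then scan it.
cat > Dockerfile.secrets <<'EOF'
FROM alpine:3.20
ENV DB_PASSWORD=SuperSecret123
EOF

# Naive pattern match for the most obvious cleartext secrets:
if grep -Eq '(PASSWORD|API_KEY|SECRET)=' Dockerfile.secrets; then
  echo "possible cleartext secret found"
fi
```

Secrets flagged this way should instead be injected at runtime by the orchestrator or a dedicated secrets manager, never baked into the image.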

Don’t forget the underlying host OSs as well. It is easy to get tied up in focusing on container/application-specific problems, but at the end of the day the software is all running on an underlying host OS – it is important this is secured and maintained correctly to prevent compromise of the higher layers of abstraction as well.


This was only a very quick look at some of the security aspects around containerised applications. It isn’t intended to scare people off – adoption of application containers can be a really good thing for an organisation, but it is incredibly important that sufficient planning is done up-front to ensure you address the security concerns.

Hopefully, if nothing else, this post has provided some food for thought. It is by no means definitive, and I’d recommend anyone considering going down the route of containers to do further research. The NIST paper referenced above is a good starting point to give you an idea of some of the challenges you may face.