In recent years, containers have become an increasingly common way to package and deliver our applications. It is therefore important to understand how to guarantee the security of our containers and, consequently, of our applications. We won't take too much of your time revisiting points we have already covered; for those, we suggest visiting our blog and looking for our other container-related articles.
It is not rare to hear that, simply by using Docker, we are making our systems, and therefore our whole process, more secure. (It is worth mentioning that here the word "system" has a generic meaning: a set of small components that together form a whole.)
With Docker, we can indeed make our entire process, and our systems, more secure. However, that does not mean everything is solved simply by running our applications in Docker containers. It is not uncommon to hear people in the development and security markets say that Docker containers are just applications running in a sandboxed environment, and this leads them to believe that, when using containers, their host systems will be protected.
Of course, if you are running Docker on a controlled system and following security best practices, you will have far fewer concerns, but some care is still needed. What worries us is that we often see professionals treating a container as simply a lighter way to run a virtual machine, as if there were no direct connection between the running container and the host system. In general, containers provide weaker isolation than virtual machines, and are therefore more exposed.
If you believe that containers should be treated like any other service, and, for instance, you were running an Apache server in a container, you would probably take the same care with the container as with the service itself, right? Basically, you would:
- Eliminate container privileges that are not strictly necessary;
- Avoid running the services as system root whenever possible;
- Look at the container root as you would at your system root.
And that is the bare minimum!
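As a sketch of what those three points can look like in practice, the snippet below composes a least-privilege `docker run` invocation for a hypothetical Apache container. The image tag `httpd:2.4` and the UID `1000` are illustrative placeholders, not something from this article; the command is echoed rather than executed so you can inspect it first.

```shell
# Least-privilege flags for a hypothetical Apache container:
#   --user            run the service as an unprivileged UID, not root
#   --cap-drop=ALL    drop every Linux capability...
#   --cap-add=...     ...then re-add only the ability to bind port 80
#   --read-only       mount the container's root filesystem read-only
HARDEN_FLAGS="--user 1000:1000 --cap-drop=ALL --cap-add=NET_BIND_SERVICE --read-only"

# The full command we would run (echoed here so the sketch is inspectable):
echo "docker run -d $HARDEN_FLAGS httpd:2.4"
```

Dropping all capabilities and re-adding only `NET_BIND_SERVICE` covers the first bullet, `--user` covers the second, and mounting the container's root filesystem read-only is in the spirit of the third.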
What we have been advising is to treat the execution of containers and their privileged processes with the same care you would give privileged processes running outside a container.
So, basically, don't run random container images on your system. This behavior reminds us a lot of pulling in third-party libraries and components without proper validation or care, just because the source seems trustworthy.
Also, for the more experienced, it recalls the early days of Linux adoption: sysadmins would hear about a new service for Linux, hunt down the packages, often in unreliable repositories, install them anyway and, following the guidelines, run them with elevated privileges. Not good, huh?
The same is happening now with containers: everyone believes they don't need to validate, or even question, what is packed inside them.
For those of you who have never been curious enough to read Docker's documentation, here is a very interesting excerpt:
“Containers are lightweight because they don’t need the extra load of a hypervisor, but run directly within the host machine’s kernel”.
If you read attentively, you will have noticed the part where the documentation makes it very clear that containers “run directly within the host machine’s kernel”! Did you notice? Containers have direct access to the kernel of their host system. That alone should make anyone aware of the extreme care required when working with containers.
So, what could go wrong?
Well, if you have used a Linux distribution, you know that some repositories are maintained by a company, such as Red Hat or Canonical (Ubuntu), and that others are maintained by communities of developers, which can be very diligent but can also let some things slip through.
Here, for example, if you use a distribution like Red Hat, you know that the company offers administrators and users a trusted repository from which they can download security updates that fix vulnerabilities. So here is our first tip: run only trusted containers!
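One concrete way to push your own environment toward "trusted containers only" is Docker Content Trust, a Docker feature that makes the client refuse image tags that are not signed by a publisher you trust. A minimal sketch (which registry and images you trust is, of course, up to your environment):

```shell
# With Docker Content Trust enabled, `docker pull` and `docker run`
# refuse image tags that are not signed by a trusted publisher.
export DOCKER_CONTENT_TRUST=1

# Any subsequent pull of an unsigned tag would now fail, e.g.:
#   docker pull registry.example.com/team/app:latest
echo "content trust enabled: DOCKER_CONTENT_TRUST=$DOCKER_CONTENT_TRUST"
```

This does not validate what is *inside* a signed image, so it complements, rather than replaces, reviewing what you run.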
As a basic concept: don't rely on container technology alone to solve your security problems, because by itself it will not protect your host.
Then, what is the problem?
To understand what the problem is, we have to understand one of the fundamental concepts behind container technology.
Containers are basically processes executed in isolation: roughly, it is as if we were running each service inside a sandbox.
This isolation, roughly speaking, is done through the use of namespaces, and in Linux, not everything is namespaced.
Currently, Docker technology uses only 5 namespaces: process (PID), network, mount, hostname (UTS) and shared memory (IPC). Just so you understand: on a virtual machine, your application is not in direct contact with the host kernel and has no direct access to kernel file systems such as /sys, /sys/fs and /proc/*; container systems, on the other hand, can have this kind of access.
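You can see the namespace mechanism directly on any Linux host: every process exposes the namespaces it belongs to as symlinks under `/proc/<pid>/ns`. This assumes a Linux machine; the exact entries vary with kernel version.

```shell
# Each symlink below identifies one namespace this shell process belongs to.
# A containerized process simply points at *different* pid/net/mnt/uts/ipc
# namespaces than the host, but it is still the same kernel underneath.
ls -l /proc/self/ns
```

Two processes share a namespace exactly when the corresponding symlinks carry the same inode number, which is how tools detect whether a process is "inside" a container.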
Ok, but the question remains: what is the problem?
Well, looking at the structure of a VM (virtual machine): to subvert process execution and reach higher kernel privilege levels, an attacker needs to compromise a series of layers between the system, the virtual machine and, finally, the host. In the case of container execution, this access is already direct, you see? Here is a list of some kernel subsystems that do not have a namespace:
- file systems under /sys
- /proc/sys, /proc/sysrq-trigger, /proc/irq, /proc/bus
This means that if an attacker manages to compromise a container that is running with the wrong privileges, access to these subsystems will be much easier to obtain.
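To make this concrete: `/proc/sys` exposes live kernel tunables, and because it is not namespaced, a container that is allowed to write there is changing the host's kernel state. On a Linux host you can read that shared state like this (read-only, so it is safe to run):

```shell
# These values come from the one kernel shared by the host and every container.
cat /proc/sys/kernel/ostype     # the shared kernel's OS type: Linux
cat /proc/sys/vm/swappiness     # a host-wide memory tunable
```

By default Docker mounts `/proc/sys` read-only inside containers; it is privileged or misconfigured containers that regain write access to paths like these.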
So, answering this topic's question: the problem is that we have seen many container users failing to configure containers correctly, and this can lead to serious system compromise.
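As an illustration of what "not setting them up correctly" means, the sketch below spells out the kind of invocation to avoid. It is only echoed, never executed, and `some/untrusted-image` is a placeholder:

```shell
# DO NOT run containers like this: --privileged hands the container nearly
# every capability and device on the host, and bind-mounting / exposes the
# entire host filesystem inside the container.
DANGEROUS="docker run --privileged -v /:/host some/untrusted-image"
echo "avoid: $DANGEROUS"
```

Combined with the non-namespaced subsystems listed above, a command like this gives a compromised container an almost direct path to the host.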
None of this makes container technology unusable, nor do we suggest you need to abandon it. What we want is to alert you to the need for a better understanding of how the technology works and how it can best be used to deliver all of its benefits.
We understand and believe that the best approach is to understand the technology we are using and to configure it in the best possible way.
So we want to shed some light on this issue and say that looking at host security can be a big step toward ensuring the security of the entire system.
Also, try to understand how your containers can best be created and configured to ensure more secure execution.
We hope to see you in another article.