A while ago, around the release of version 3 of the Linux kernel, containerisation on Linux gained traction through LXC (Linux Containers), a userspace toolset built on kernel features such as namespaces and cgroups.
The idea behind a container is similar to the idea of a virtual machine. With virtualisation, you have a server (the ‘host’) running something like KVM or VMware. The machines running under it are called guests: fully self-contained computers, running on top of the host.
Containers take this concept to the next level. They help developers especially (but also systems administrators) deploy applications or services rapidly. Containers are very small Linux machines that run as a normal Linux process (in userspace). An average Linux server is gigabytes in size, and has a kernel chock-full of handy drivers for all sorts of hardware and so forth. A container, conversely, is only a few hundred megabytes in size. A VM will boot in minutes; a container will start up, including its intended application, in a few seconds or less. It runs just as you’d expect any normal Linux application to run – as a process you can see using the ps command.
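You can see this for yourself with a quick sketch like the one below (it assumes Docker is installed on the host; the container name `web` is just an example):

```shell
# Start an nginx container in the background -- it is up in seconds
docker run -d --name web nginx

# The containerised nginx shows up as an ordinary process on the host
ps aux | grep nginx

# Clean up the example container
docker rm -f web
```

Notice there is no boot sequence to wait for: the `docker run` returns as soon as the process is started.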
The diagram below, from the folks at CoreOS, shows the relationship between containers and the operating system well. At the top you can see a normal Linux server, running all of the usual services you might imagine, such as Java and the nginx web server; there will likely be a few other applications running too.
In Figure 2, you can see an example of an operating system that is geared towards containerisation. It runs only a limited, minimal set of applications, such as the ssh server. All of the other applications now run within completely isolated containers.
Containers are ideally single purpose in nature: imagine you have an application that uses JBoss and PostgreSQL. You’d likely have one container for the JBoss component and another for PostgreSQL. The two containers are isolated from each other, but it is possible to link them so they can talk to each other. Containers are also designed to be ephemeral: once a container’s purpose has been served, you blow it away – you can always spin up another one in seconds. This also means that data stored inside a container is not persistent. If you want data, such as databases or web sites, to remain, you put it in a storage volume exported from the host server. Security best practice follows the same pattern: you update your base image with any new patches or security hardening procedures, roll your application on top, then destroy last week’s container and use the new, more secure one the next week.
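The JBoss-and-PostgreSQL setup described above might be sketched like this with Docker (assuming Docker is installed; the container, volume, and network names are hypothetical, and the images stand in for whatever your application actually uses):

```shell
# Create a named volume so database data outlives any one container
docker volume create pgdata

# Run PostgreSQL with its data directory on the persistent volume
docker run -d --name db -v pgdata:/var/lib/postgresql/data postgres

# Put both containers on a user-defined network so the app can reach "db"
# (this replaces the older --link mechanism for connecting containers)
docker network create appnet
docker network connect appnet db
docker run -d --name app --network appnet jboss/wildfly

# When a patched base image ships, destroy and recreate the app container
docker rm -f app
docker pull jboss/wildfly
docker run -d --name app --network appnet jboss/wildfly
```

Because the database files live on the `pgdata` volume, the `db` container can be destroyed and recreated in the same way without losing data.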
Containers fit neatly with the whole SaaS (software as a service) model, and they also enable developers and operations staff to work together in a friendlier way (read: the DevOps way). But let’s not get ahead of ourselves here – tooling isn’t going to fix cultural problems, and that’s a whole ‘nother topic for another website!
Docker has been around for a while now. Docker brought to Linux containers what LXC could not: ease of use. Once developers found out how empowering it was to spin up a Docker container on their laptop and be guaranteed it would operate in exactly the same way on a server somewhere else, it quickly became a no-brainer.
Docker has its critics. Some (like the CoreOS team) believe that Docker isn’t secure enough and is becoming too commercialised (that’s why they made rkt as a competitor to Docker). But love it or hate it, Docker has a huge following, and not just in companies you’d expect, like Amazon and Google.
You can read more about containerisation over at the following websites: