Docker is a container technology that lets you run multiple isolated applications on the same server and operating system. It also makes it easy to package and ship applications. Let's analyze why Docker could become a replacement for virtual machines and where it is better (or worse), quite apart from its undeniably convenient deployment tooling.
VM hypervisors emulate virtual hardware and consume system resources heavily, since each VM runs its own separate operating system. Docker containers, conversely, share a single operating system: each container holds only its application and that application's dependencies. This makes containers very lightweight, as we don't set up a separate operating system for each one. With Docker you can run many more applications (containers) on a single piece of physical hardware than you can with VMs. Docker is gaining popularity extremely rapidly because it is light and simple, and because it can save a great deal of money on hardware and electricity.
There are two major parts of the (Linux) operating system that make Docker work.
Docker is built on top of LXC, which carves the operating system up into containers that all share a single kernel. This means that with Docker all containers share one operating system, and that operating system is Linux. (I've heard, by the way, that the Windows platform has recently adopted Docker too.)
The LXC technology (LinuX Containers), which Docker is based on, relies on kernel features such as cgroups (together with security mechanisms like SELinux) to keep everyone in their own sandbox. You can restrict hardware resources: CPU, memory, disk. It can also manage networking and filesystem mounts for you. In other words, you can run multiple isolated Linux containers on one host. Linux Containers serve as a lightweight alternative to VMs: there is no hypervisor, and a container is not a VM but a set of ordinary processes on the host.
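As a sketch of what such a restriction looks like at the kernel level, the snippet below builds the key/value pairs a container manager would write into the cgroup (v1) control files to cap a container's memory and CPU. The helper function and the concrete numbers are illustrative assumptions, not Docker's actual code; only the control-file names come from the kernel's cgroup interface.

```python
# Sketch: map high-level resource limits onto cgroup-v1 control files.
# A container manager writes pairs like these under /sys/fs/cgroup/...
# to confine all processes placed into that cgroup.

def cgroup_limits(memory_bytes, cpu_percent, period_us=100_000):
    """Return cgroup control-file names and the values to write into them."""
    quota_us = period_us * cpu_percent // 100  # CPU time allowed per period
    return {
        "memory.limit_in_bytes": str(memory_bytes),  # hard memory ceiling
        "cpu.cfs_period_us": str(period_us),         # scheduling period
        "cpu.cfs_quota_us": str(quota_us),           # quota within the period
    }

# Example: cap a container at 512 MB of RAM and half a CPU.
print(cgroup_limits(memory_bytes=512 * 1024 * 1024, cpu_percent=50))
```

Writing these values into the files has an immediate effect on every process in the group, which is exactly what makes container limits cheap compared to sizing a whole VM.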
Assume you have a container image of 1 GB. With virtualization you need that 1 GB multiplied by the number of VMs required, because each VM carries its own copy of the operating system. With LXC the 1 GB is shared: a thousand containers need only a little over 1 GB of space for the container OS, assuming they all run from the same OS image. Furthermore, LXC containers perform better than VMs.
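The arithmetic above can be made concrete. The per-container overhead here is a made-up assumption (a few megabytes for each container's writable layer), but the shape of the comparison holds:

```python
# Back-of-the-envelope storage comparison: full VMs vs. shared-image containers.
# Assumed numbers: a 1 GB OS image and ~5 MB of writable layer per container.

os_image_gb = 1.0          # base OS image
per_container_gb = 0.005   # writable layer per container (assumption)
n = 1000                   # number of instances

vm_storage_gb = os_image_gb * n                      # each VM copies the OS
lxc_storage_gb = os_image_gb + per_container_gb * n  # one shared OS + diffs

print(f"VMs:        {vm_storage_gb:.0f} GB")   # 1000 GB
print(f"Containers: {lxc_storage_gb:.0f} GB")  # 6 GB
```

Even if you quadruple the assumed per-container overhead, the container total stays orders of magnitude below the VM total.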
Another technology used in Docker is AuFS, an advanced multi-layered unification filesystem. Docker saves images for you, and they are represented as incremental updates to baseline snapshots. You might have a base snapshot of, say, a Linux image, make minor changes to it, and save the result as a diff. AuFS then merges all these incremental snapshots into a single layered snapshot. That is the idea.
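The layering idea can be sketched in a few lines. This toy model (plain dicts standing in for filesystem layers; all names invented here) is not AuFS itself, but it shows how reads fall through to the base snapshot while writes land only in the top, writable layer:

```python
# Toy model of a union/layered filesystem like AuFS: a stack of read-only
# layers plus one writable layer on top. Reads search top-down; writes are
# copy-on-write into the writable layer, leaving the base image untouched.

base = {"/etc/hostname": "linux-base", "/bin/sh": "<binary>"}  # base snapshot
diff = {"/etc/hostname": "my-container"}                       # saved diff
writable = {}                                                  # container's layer

layers = [writable, diff, base]  # top layer first

def read(path):
    for layer in layers:           # first layer that has the file wins
        if path in layer:
            return layer[path]
    raise FileNotFoundError(path)

def write(path, data):
    writable[path] = data          # lower layers stay read-only

print(read("/etc/hostname"))   # "my-container": served from the diff layer
write("/etc/hostname", "patched")
print(read("/etc/hostname"))   # "patched": now served from the writable layer
print(base["/etc/hostname"])   # "linux-base": the base snapshot is unchanged
```

Because the read-only layers are never modified, many containers can stack their own writable layer on the same base snapshot, which is what makes the storage sharing from the previous paragraph possible.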
When to use Docker
If you need full isolation of resources, a virtual machine is the way to go. However, if you need to run hundreds of isolated processes on an average host, then Docker is a good fit. There are some use cases where Docker is a clear win. I see it as a good tool for development, where you run tons of containers on a single machine, running different deployments from a single Docker image in parallel. For example, you can host a large number of development environments on the same host: Dev, QA and Performance environments, even on old hardware. Have a look at the Fig project, which is intended for exactly these purposes and works along with Docker. It has a straightforward tutorial and super easy commands.
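For a taste of Fig, a minimal config might look like the two-service example below. The service names, image, and port here are assumptions for illustration, modeled on the style of Fig's own documentation:

```yaml
# fig.yml: one web service built from the current directory, linked to a db
web:
  build: .
  ports:
    - "8000:8000"
  links:
    - db
db:
  image: postgres
```

With a file like this in place, a single `fig up` starts both containers together, which is what makes spinning up a whole environment per developer so cheap.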
Docker is another technique for performing the same tasks as virtual machines, but with the restriction of a shared Linux OS. It is not entirely mature yet, however. A kernel exploit could crash the shared operating system and bring down all of the containers at once; this is the main drawback of containers, whereas VMs ensure complete isolation. And although many companies already use Docker in production, you might want to wait before doing so yourself. You can definitely start playing around with it in your development and continuous-integration process, though.