Dockerizing Spray HTTP Server


This is a continuation of the previous article. This series shows how simple it is to create a lightweight HTTP server based on the Spray framework, package it into a Docker image, and run multiple instances on a single machine with no additional dependencies.

Implementing a lightweight RESTful HTTP Service

The whole project can be found on GitHub. For the impatient: clone it and jump right to the next section. We're going to use Scala and the Akka framework along with the SBT build tool. From Spray we will use the spray-routing module, which provides a simple routing DSL for elegantly defining RESTful web services and works on top of the spray-can HTTP server.
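
For reference, a minimal build.sbt for such a project could look roughly like the following (a sketch – the library versions are assumptions, so use whatever matches your Scala and Akka setup):

name := "docker-spray-http-server"

version := "1.0"

scalaVersion := "2.11.4"

libraryDependencies ++= Seq(
  "io.spray"          %% "spray-can"     % "1.3.2",  // spray-can HTTP server
  "io.spray"          %% "spray-routing" % "1.3.2",  // routing DSL
  "com.typesafe.akka" %% "akka-actor"    % "2.3.8"   // actor system underneath
)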

Ok, let’s get started.

import akka.actor.ActorSystem
import spray.routing._
import akka.util.Timeout
import scala.concurrent.duration._
import scala.util.{Failure, Success}

object HttpServer extends App with SimpleRoutingApp {

  implicit val actorSystem = ActorSystem()
  implicit val timeout = Timeout(1.second)
  import actorSystem.dispatcher

  startServer(interface = "localhost", port = 8080) {

    // GET /welcome --> "Welcome!" response
    get {
      path("welcome") {
        complete {
          <html>
            <h1>Welcome!</h1>
            <p><a href="/terminate?method=post">Stop the server</a></p>
          </html>
        }
      }
    } ~
      // POST /terminate --> "The server is stopping" response
      (post | parameter('method ! "post")) {
        path("terminate") {
          complete {
            actorSystem.scheduler.scheduleOnce(1.second)(actorSystem.shutdown())(actorSystem.dispatcher)
            "The server is stopping"
          }
        }
      }
  }.onComplete {
    case Success(_) =>
      println("Successfully started")
    case Failure(ex) =>
      println(ex.getMessage)
      actorSystem.shutdown()
  }
}

The REST API is as follows:

  • GET /welcome –> responds with a “Welcome!” page containing a hyperlink that stops the server.
  • POST /terminate –> stops the server.

The DSL describing this API sits inside the startServer method. And that's it!

I didn’t want to show the full power of Spray because this article is solely about Docker.

Let's run it (e.g. with sbt run) and check:

curl http://localhost:8080/welcome

Dockerizing the server

Because the Docker Engine uses Linux-specific kernel features, I'm going to use a lightweight virtual machine (boot2docker) to run it on OS X. If you're on OS X too, just download it, install it and run it – easy peasy. Before dockerizing, make sure you've set the following three environment variables so that your Docker client can connect to the VM:

export DOCKER_HOST=tcp://192.168.59.103:2376
export DOCKER_CERT_PATH=/Users/ivan/.boot2docker/certs/boot2docker-vm
export DOCKER_TLS_VERIFY=1
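
On recent boot2docker releases you don't have to craft these by hand – the tool can print them for you (assuming boot2docker 1.3 or newer):

boot2docker up
eval "$(boot2docker shellinit)"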

Apart from that, the workflow doesn't differ much from running Docker natively on Linux.

We use a trusted automated Java build (the OpenJDK Java 7 JRE Dockerfile image) as the base of our image.

First, you will need to create a Dockerfile for the image in your project:

# Our base image
FROM dockerfile/java

WORKDIR /

USER daemon

# Add the assembled jar to the image
ADD target/scala-2.11/docker-spray-http-server-assembly-1.0.jar /app/server.jar

# Run the jar when a container starts
ENTRYPOINT [ "java", "-jar", "/app/server.jar" ]

# HTTP port
EXPOSE 8080

Build your project as a single jar file:

DockerSprayHttpServer$ sbt assembly
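
The assembly task is provided by the sbt-assembly plugin, so the project needs to enable it first – roughly the following line in project/plugins.sbt (the version here is an assumption):

addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.12.0")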

And now navigate to the project’s folder and run:

DockerSprayHttpServer$ docker build .

This sends the build context to the Docker daemon, which builds the image and stores it locally.
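
If you'd rather refer to the image by a name instead of a raw ID, you can also tag it at build time (the name is just an example); in this article we'll keep using the ID that docker build prints:

DockerSprayHttpServer$ docker build -t spray-http-server .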

Running multiple instances on a single machine

Run the following command to see the available Docker images:

DockerSprayHttpServer$ docker images

(Screenshot: output of docker images)

a83cda03f529 – the image we've just created – has no repository tag yet. We're going to run multiple instances from it.

First, run the first instance:

DockerSprayHttpServer$ docker run -d -p 10001:8080 a83cda03f529

Note that we have mapped container port 8080 to host port 10001.

Now, let’s verify the container is running:

DockerSprayHttpServer$ docker ps

(Screenshot: output of docker ps showing the running container)

We exposed port 8080 for the HTTP server; when the app runs in a Docker container, that port is mapped onto a port of the Docker host. On top of this I am on OS X, so the Docker host is the boot2docker VM. Remember the three env vars we set at the beginning? One of them is DOCKER_HOST.

You can actually check this IP as follows:

DockerSprayHttpServer$ boot2docker ip

We need to use this IP address (on OS X, that is – on Linux you would talk to the Docker host directly).

Let’s test it:

DockerSprayHttpServer$ curl $(boot2docker ip):10001/welcome

Great!
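
Starting more instances is just a matter of picking other host ports, for example:

DockerSprayHttpServer$ docker run -d -p 10002:8080 a83cda03f529
DockerSprayHttpServer$ curl $(boot2docker ip):10002/welcome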

You can run as many containers as you want! They are completely isolated. For Scala developers, by the way, I've found a nice contribution: you can dockerize your artifacts right from SBT – see the sketch below.
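
As an illustration of that SBT route, the Dockerfile above could be expressed directly in build.sbt – a sketch assuming the sbt-docker plugin together with sbt-assembly (setting names may differ between plugin versions):

enablePlugins(DockerPlugin)

dockerfile in docker := {
  // the fat jar produced by sbt-assembly
  val jar = (assemblyOutputPath in assembly).value
  val jarTargetPath = s"/app/${jar.name}"
  new Dockerfile {
    from("dockerfile/java")                    // same base image as before
    add(jar, jarTargetPath)                    // put the jar into the image
    entryPoint("java", "-jar", jarTargetPath)  // run it on container start
    expose(8080)                               // HTTP port
  }
}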


Docker: A bird’s-eye view

Introduction

Docker is a container technology that, first and foremost, lets you run multiple applications on the same server and operating system. It also makes it easy to package and ship applications. Let's analyze why Docker could become a replacement for virtual machines and where it is better (or not). Its deployment tooling, at least, is undoubtedly easy to use.


The gist

VM hypervisors emulate virtual hardware and consume system resources heavily – each VM carries its own separate operating system. Conversely, Docker containers share a single operating system: each container holds only its application and that application's dependencies. This makes containers very lightweight, since we don't set up a separate operating system for each one, and it lets you run many more applications (containers) on a single piece of physical hardware than you could with VMs. Docker is gaining popularity extremely quickly because it's light and simple, and it can save a great deal of money on hardware and electricity.

There are two major parts of the (Linux) operating system that make Docker work.

LXC

Docker is built on top of LXC, which lets it divide up the operating system's kernel resources among containers. In practice this means Docker runs on Linux and all containers share a single operating system. I've heard, by the way, that Docker support has recently been announced for the Windows platform too.

The LXC technology (LinuX Containers), which Docker is based on, relies on kernel control groups (cgroups) – with SELinux policies on top – to keep everyone in their own sandbox. You can restrict hardware resources: CPU, memory, disk. It can also manage networking and filesystem mounts for you. In other words, you can run multiple isolated Linux containers on one host. Linux Containers serve as a lightweight alternative to VMs: they don't require a hypervisor, because a container is not a VM but an isolated slice of an ordinary host.
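
Docker exposes some of these cgroup limits directly on the command line; for instance, you can cap a container's memory and CPU share when starting it (just an illustration):

docker run -d -m 256m --cpu-shares 512 <image>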

Assume you have a container image of 1 GB. With full virtualization you would need that 1 GB multiplied by the number of VMs required. With LXC the containers share the single gigabyte: a thousand containers need just a little over 1 GB of space for the container OS, assuming they all run from the same OS image. Furthermore, LXC containers generally perform better than VMs.

AuFS

Another technology used in Docker is AuFS – an advanced multi-layered unification filesystem. Docker stores images as incremental updates on top of baseline snapshots: you take a base snapshot of, say, a Linux distribution, make minor changes to it, and save the result as a diff. AuFS then merges all of these layers so that they appear as a single filesystem. That is the idea.
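
You can see these layers for any local image with docker history – each line of its output is one incremental layer on top of the one below it:

docker history <image>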

When to use Docker

If you need full isolation of resources, a virtual machine is the way to go. However, if you need to run hundreds of isolated processes on an average host, Docker is a good fit. I see it as a great tool for development, where you run lots of containers on a single machine and try things out in parallel with different deployments built from a single Docker image – for example, Dev, QA and Performance environments on the same (even old) hardware. Have a look at the Fig project, which is intended exactly for this and works on top of Docker; it has a straightforward tutorial and very simple commands.

Conclusion

Docker is another technique for performing many of the same tasks as virtual machines, with the restriction of a single shared Linux OS. It is not quite mature yet, however. There may be exploits that crash the shared kernel and bring down all of the processes at once – this is the main drawback of containers, whereas VMs provide complete isolation. So even though many companies already use Docker in production, you might want to wait before doing the same, but you can definitely start playing with it in your development and continuous integration process.

References

Docker

Containers and Docker: How secure are they 

PaaS under the hood

FIG

Getting started with Docker Orchestration using Fig