
How To Distribute Application State with Akka Cluster - Part 2: Docker and Local Deploy

Proof Of Concept With Akka Cluster

In this series of blog posts, we’ll walk you through a working Proof of Concept (PoC) built with the Scala programming language and Lightbend’s open source Akka toolkit. In Part 1: Getting Started, we walked through building, testing, and running the PoC locally, with instrumentation and monitoring wired in from the very beginning using Lightbend Telemetry. In this Part 2: Docker and Local Deploy, we deploy the PoC to Docker on our local machine and test it. In Part 3: Kubernetes & Monitoring, we move our PoC to Kubernetes (on Minikube) and review some monitoring options. Finally, in Part 4: Source Code, we look at the Scala source code required to create the PoC.

In the previous part, we introduced our PoC and its repository, which implement a highly consistent, performant distributed cache backed by persistence, built using Akka, Apache Cassandra, and Scala. We also showed you how to test it, run it locally, load test it with Gatling, and monitor it via Lightbend’s Telemetry tooling. In this part, we’ll containerize our PoC and then deploy it locally in Docker. Finally, we’ll load test and monitor our PoC as we did in the first installment.

Deploying to Docker

As we move to running the PoC in Docker, we switch to the following new configurations, one for each node type, which are passed to the JVM through environment variables in docker-compose.yml:

  • cluster-application-docker.conf
  • endpoint-application-docker.conf

We’ll also containerize our application into a single image using sbt and the sbt-native-packager plugin.
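
For reference, only a few build settings are typically needed to let sbt-native-packager produce a Docker image. The snippet below is a minimal sketch, not the PoC’s exact build: the plugin version, base image, and exposed ports are assumptions, so check the project’s own build.sbt and project/plugins.sbt for the real values.

// project/plugins.sbt (plugin version is an assumption)
addSbtPlugin("com.typesafe.sbt" % "sbt-native-packager" % "1.7.6")

// build.sbt (sketch): enable Docker packaging for the application
enablePlugins(JavaAppPackaging, DockerPlugin)

dockerBaseImage    := "openjdk:11-jre-slim"   // JRE base image (assumed)
dockerExposedPorts ++= Seq(8082, 2552)        // HTTP endpoint and Akka remoting ports (assumed)

With settings along these lines in place, sbt docker:publishLocal (used in the steps below) builds the image and publishes it to your local Docker daemon.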

To Deploy to Docker:

  1. Start Lightbend’s Developer Sandbox for Elasticsearch that you downloaded earlier.
    From a new terminal, change into the unzipped directory for the downloaded developer sandbox scripts (cinnamon-elasticsearch-docker-sandbox-<version>), and issue the command:

     
      docker-compose up
     

    If you don’t have a Lightbend Subscription, skip this step.

  2. Create a Docker image of our PoC in your local Docker environment:
    From a new terminal window, go to the root directory of the PoC project and issue the following command:
      sbt docker:publishLocal
      
  3. Start the PoC in Docker:
    In the same terminal that you just used in the previous step, issue the following command:
      docker-compose up -d
     

    Note: the Docker network name used by Lightbend’s Developer Sandbox for Elasticsearch must match the network specified in our docker-compose.yml file. At the time of this writing, the Docker network is named cinnamon-elasticsearch-docker-sandbox-2122_sandbox.


    If you don’t have a Lightbend Subscription, use the following Docker command instead:
      docker-compose -f unsup-docker-compose up -d
  4. Find the IP address of the Endpoint container:
    In the same terminal that you just used in the previous step, issue the following command (a one-line alternative using docker inspect is shown after these steps):
      docker network inspect akka-typed-blog-distributed-state_statepoc

    Look through the list of containers and find the one named akka-typed-blog-distributed-state_endpoint_1, which should look like this:
    …
    "b49d2347797e6a0b4ff3ce3e9d6f3e92407df4d0d5fc26d0118a0669cde07e03": {
                    "Name": "akka-typed-blog-distributed-state_endpoint_1",
                    "EndpointID": "9dfa6df7d3d383e22cf4269fb323360a787b3bff8eebfede8aeb60f6f3172242",
                    "MacAddress": "02:42:ac:1d:00:05",
                    "IPv4Address": "172.29.0.5/16",
                    "IPv6Address": ""
                }
    ...
    

    In this case the IP address of the endpoint container is 172.29.0.5, which we’ll need for load testing with Gatling in the next step.
  5. Next, we want to initialize the Cassandra database and verify proper operation of the PoC. This can be done by persisting an event using curl and the IP address we just retrieved.
    In the same terminal that you just used in the previous step, issue the following command (substituting the IP address retrieved in the previous step for 172.29.0.5):
    curl -d '{"artifactId":1, "userId":"Michael"}' -H "Content-Type: application/json" -X POST http://172.29.0.5:8082/artifactState/setArtifactReadByUser

    Next, verify the state has been persisted with a query:
    curl 'http://172.29.0.5:8082/artifactState/getAllStates?artifactId=1&userId=Michael'

    The response should look like this:
    {"artifactId":1,"artifactInUserFeed":false,"artifactRead":true,"userId":"Michael"}

Docker Containers Created

The project’s docker-compose.yml creates four containers in the akka-typed-blog-distributed-state_statepoc network:

  1. akka-typed-blog-distributed-state_cassandra-db_1
  2. akka-typed-blog-distributed-state_cluster_1
  3. akka-typed-blog-distributed-state_seed_1
  4. akka-typed-blog-distributed-state_endpoint_1

The purpose of the two containers suffixed with cassandra-db_1 and endpoint_1 should be obvious. But what is the difference between cluster_1 and seed_1? Both run the same “cluster” node type, but seed_1 additionally acts as a seed node, which is configured through an environment variable. Seed nodes are the initial contact points that other nodes use to join and form a single Akka Cluster.

Because new nodes join by contacting a seed node, a seed node that has failed or become partitioned can prevent other nodes from joining the cluster, particularly when it is the only one. Therefore, it would be a good idea to run more than one seed node in production on Docker.
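
To make that concrete, seed nodes are usually listed in the Akka configuration, with the seed host injected from the environment when the container starts. The sketch below is illustrative only: the actor-system name, configuration keys, and environment variable are assumptions, not the PoC’s actual settings.

# sketch only; names below are illustrative, not the PoC’s actual keys
clustering {
  seed-host = "seed"               # default: the seed container's hostname
  seed-host = ${?SEED_NODE_HOST}   # overridden by an environment variable, if set
  port = 2552
}

akka.cluster.seed-nodes = [
  "akka://StatePoC@"${clustering.seed-host}":"${clustering.port}
]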

Load testing with Gatling

As we discussed in the last post, Gatling is a highly capable load testing tool, built on top of Akka, that we use to exercise the PoC’s HTTP endpoints.

In our previous example, we relied on the PoC running on localhost, but now that it’s running in Docker we need to update the Gatling application.conf with the IP address of the endpoint container found earlier. For example, if you comment out the localhost entry, uncomment the sample line for Docker, and update it with that IP address, application.conf should look something like this:

loadtest {
  # provides the base URL: http://localhost:8082
#  baseUrl = "http://localhost:8082"
  # sample baseURL when running locally in docker
  baseUrl = "http://172.29.0.5:8082"
}
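
Under the hood, a Gatling simulation is just a Scala class describing scenarios against those endpoints. The sketch below is for illustration only: the class and scenario names are hypothetical, the injection profile is arbitrary, and the PoC ships its own simulations in the gatling directory; it simply reads the baseUrl from the loadtest block above and exercises the same two routes we hit with curl.

import com.typesafe.config.ConfigFactory
import io.gatling.core.Predef._
import io.gatling.http.Predef._
import scala.concurrent.duration._

// Illustrative sketch of a Gatling simulation (not the PoC's own).
class ArtifactStateSketchSimulation extends Simulation {

  // Pick up the endpoint address from the loadtest block in application.conf.
  private val baseUrl = ConfigFactory.load().getString("loadtest.baseUrl")

  private val httpProtocol =
    http.baseUrl(baseUrl).contentTypeHeader("application/json")

  // Persist an event, then query it back, mirroring the curl commands above.
  private val scn = scenario("artifact-state")
    .exec(
      http("setArtifactReadByUser")
        .post("/artifactState/setArtifactReadByUser")
        .body(StringBody("""{"artifactId":1, "userId":"Michael"}"""))
    )
    .exec(
      http("getAllStates")
        .get("/artifactState/getAllStates?artifactId=1&userId=Michael")
    )

  // Ramp up 50 virtual users over 30 seconds (arbitrary numbers).
  setUp(scn.inject(rampUsers(50).during(30.seconds))).protocols(httpProtocol)
}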

To start the Gatling load test:

  1. In a new terminal, switch to the gatling directory under the PoC’s root directory.
  2. Enter the command:
    sbt gatling:test
    While the load test is running, we recommend jumping to the next section and taking a look at the metrics being collected in real time by Lightbend Telemetry.
  3. Once the test has completed, a message is displayed referring you to the specific report. You can find all reports created by Gatling in the ./gatling/target/gatling directory, which is created each time a load test is run.

Browsing through the Metrics captured by Lightbend Telemetry

As discussed in the previous post, you can find the Grafana Dashboards provided by Lightbend’s Developer Sandbox for Elasticsearch at http://localhost:3000.

Directions for using the Grafana Dashboards can be found here.

We recommend taking a look at the following dashboards while the Gatling load test is running (see the previous section):

  • Akka Actors
  • Akka and Lagom Persistence
  • Akka HTTP and Play Servers
  • Akka Cluster
  • Akka Cluster Sharding
  • Akka Dispatchers
  • JVM Metrics

Shutting Down:

  1. Enter the command:
    docker-compose down
  2. In the terminal window running Lightbend’s Developer Sandbox for Elasticsearch, press Control+C.

Conclusion

In this blog post, we showed you how to use sbt-native-packager to containerize the PoC and then deploy and run it locally in Docker. We also showed you how to run load tests with Gatling and how to view the PoC’s instrumentation using Lightbend’s Developer Sandbox for Elasticsearch.

In future blog posts, we’ll show you how to deploy and scale the PoC on Kubernetes, as well as take deep dives into the source code, including Akka 2.6.x functional Typed Persistent Actors, and the configuration that makes the PoC work.

Next up in this series is Part 3: Kubernetes & Monitoring, in which we move our PoC to Kubernetes (on Minikube), review some monitoring options, and dive deep into some Kubernetes deployment YAML files. Or, if you'd like to set up a demo or speak to someone about using Akka Cluster in your organization, click below to get started in that direction:

SEE AKKA CLUSTER IN ACTION

 
