In this series of blog posts, we’ll walk you through a working Proof of Concept (PoC) built using Lightbend’s open source Scala programming language with the Akka distributed toolkit. In Part 1: Getting Started, we walked through building, testing, and running the PoC locally, with instrumentation and monitoring wired in from the very beginning using Lightbend Telemetry. In this Part 2: Docker and Local Deploy, we containerize the PoC, deploy it to Docker on our local machine, and test it. In Part 3: Kubernetes & Monitoring, we move our PoC to Kubernetes (on Minikube) and review some monitoring options. Finally, in Part 4: Source Code, we look at the Scala source code required to create the PoC.
In the previous part, we introduced our PoC and its associated repository, which implements a highly consistent and performant distributed cache with a persistence backing using Akka, Apache Cassandra, and Scala. We also showed you how to test it, run it locally, load test it via Gatling, and monitor it via Lightbend’s Telemetry tooling. In this part, we’ll containerize our PoC and deploy it locally in Docker. Finally, we’ll load test and monitor the PoC as we did in the first installment.
As we move to running the PoC in Docker, we’re switching to the following new configurations for each respective node type, which are provided to the JVM through the docker-compose.yml environment variables:
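As an illustrative sketch only (the service names, image name, and variable names below are assumptions for this post, not copied from the project’s actual docker-compose.yml), per-node JVM settings can be passed through Compose environment variables like this:

```yaml
# Hypothetical excerpt from a docker-compose.yml -- names are illustrative.
services:
  seed:
    image: akka-typed-blog-distributed-state:0.1.0   # assumed image name
    environment:
      # Picked up by the sbt-native-packager start script and passed to the JVM
      JAVA_OPTS: "-Dconfig.resource=cluster-application-docker.conf"
      CLUSTER_IP: seed                # hostname this node advertises to the cluster
  endpoint:
    image: akka-typed-blog-distributed-state:0.1.0
    ports:
      - "8082:8082"                   # HTTP endpoint exposed to the host
    environment:
      JAVA_OPTS: "-Dconfig.resource=endpoint-application-docker.conf"
      CLUSTER_IP: endpoint
```

Because the start script generated by sbt-native-packager honors `JAVA_OPTS`, a single image can play every node role, with the role selected entirely by configuration.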
We’ll also containerize our application into a single image using sbt and the sbt-native-packager plugin.
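For reference, wiring in sbt-native-packager typically looks like the following. The plugin version, base image, and ports shown here are illustrative, not taken from the project’s exact build:

```scala
// project/plugins.sbt -- add the packaging plugin (version is illustrative)
addSbtPlugin("com.typesafe.sbt" % "sbt-native-packager" % "1.7.6")

// build.sbt -- enable Docker packaging for the application
enablePlugins(JavaAppPackaging, DockerPlugin)

dockerBaseImage    := "openjdk:8-jre-slim"   // illustrative base image
dockerExposedPorts := Seq(8082, 2552)        // HTTP endpoint and Akka remoting
```

With this in place, `sbt docker:publishLocal` builds the image and publishes it to your local Docker daemon.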
docker-compose up
If you don’t have a Lightbend Subscription, you can skip this step.
sbt docker:publishLocal
docker-compose up -d
Note: the Docker network name used by Lightbend’s Developer Sandbox for Elasticsearch must match what is specified in our docker-compose.yml file. At the time of this writing, the Docker network is named cinnamon-elasticsearch-docker-sandbox-2122_sandbox.
docker-compose -f unsup-docker-compose.yml up -d
docker network inspect akka-typed-blog-distributed-state_statepoc
…
"b49d2347797e6a0b4ff3ce3e9d6f3e92407df4d0d5fc26d0118a0669cde07e03": {
"Name": "akka-typed-blog-distributed-state_endpoint_1",
"EndpointID": "9dfa6df7d3d383e22cf4269fb323360a787b3bff8eebfede8aeb60f6f3172242",
"MacAddress": "02:42:ac:1d:00:05",
"IPv4Address": "172.29.0.5/16",
"IPv6Address": ""
}
...
curl -d '{"artifactId":1, "userId":"Michael"}' -H "Content-Type: application/json" -X POST http://172.29.0.5:8082/artifactState/setArtifactReadByUser
curl 'http://172.29.0.5:8082/artifactState/getAllStates?artifactId=1&userId=Michael'
{"artifactId":1,"artifactInUserFeed":false,"artifactRead":true,"userId":"Michael"}
As noted above, four containers are created in the akka-typed-blog-distributed-state_statepoc network by the project’s docker-compose.yml. These include:

- akka-typed-blog-distributed-state_cassandra-db_1
- akka-typed-blog-distributed-state_seed_1
- akka-typed-blog-distributed-state_cluster_1
- akka-typed-blog-distributed-state_endpoint_1
The purpose of the two containers suffixed with cassandra-db_1 and endpoint_1 should be obvious. But what is the difference between cluster_1 and seed_1? Both are cluster nodes, but seed_1 provides the functionality of a seed node, which is configured through an environment variable. Seed nodes are the initial contact points that new nodes use to join and form a single Akka Cluster.
Once a cluster is formed, a failed or partitioned seed node can prevent new nodes from joining through it. It’s therefore a good idea to run more than one seed node in production on Docker.
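In Akka, seed nodes are listed under `akka.cluster.seed-nodes` in the configuration. A minimal sketch follows; the actor-system name (`StatePoC`) and hostnames here are illustrative, not the project’s actual values:

```hocon
akka {
  cluster {
    # Each entry is an initial contact point; joining nodes try them in order.
    # Listing two seed nodes means new members can still join even if one
    # seed is down or partitioned away.
    seed-nodes = [
      "akka://StatePoC@seed-1:2552",
      "akka://StatePoC@seed-2:2552"
    ]
  }
}
```

In the Docker setup described above, the seed address is instead supplied through an environment variable in docker-compose.yml, which amounts to the same configuration at runtime.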
As we discussed in the last post, Gatling is a highly capable load testing tool built on top of Akka that is used to load test HTTP endpoints.
In our previous example, we relied on the PoC running on localhost. Now that we’re running the PoC in Docker, we need to update our application.conf with the IP address of the endpoint found in the previous step. If you comment out the localhost entry, un-comment the sample line for Docker, and update it with that IP address, application.conf should look something like this:
loadtest {
# provides the base URL: http://localhost:8082
# baseUrl = "http://localhost:8082"
# sample baseURL when running locally in docker
baseUrl = "http://172.29.0.5:8082"
}
sbt gatling:test
As discussed in the previous post, you can find the Grafana Dashboards provided by Lightbend’s Developer Sandbox for Elasticsearch at http://localhost:3000.
Directions for using the Grafana Dashboards can be found here.
We recommend that you take a look at the following dashboards while the Gatling load test (previous section) is running.
docker-compose down
In this post, we showed you how to use sbt-native-packager to containerize the PoC, and then deploy and run it locally in Docker. We also showed you how to run load testing with Gatling, and how to view the instrumentation for the PoC using Lightbend’s Developer Sandbox for Elasticsearch.
In future blog posts, we’ll show you how to deploy and scale the PoC on Kubernetes, and we’ll take deep dives into the source code, including Akka 2.6.x functional Typed Persistent Actors and the configuration that makes the PoC work.
Next up in this series is Part 3: Kubernetes & Monitoring, in which we move our PoC to Kubernetes (on Minikube), review some monitoring options, and dive deep into some Kubernetes deployment YAML files. Or, if you'd like to set up a demo or speak to someone about using Akka Cluster in your organization, click below to get started in that direction: