In this series of blog posts, we walk you through a working Proof of Concept (PoC) built using Lightbend’s open source Scala programming language with the Akka distributed toolkit. In Part 1: Getting Started, we walked through building, testing, and running the PoC locally, with instrumentation and monitoring wired in from the very beginning using Lightbend Telemetry. In Part 2: Docker and Local Deploy, we deployed to Docker on our local machine and tested. In this Part 3: Kubernetes & Monitoring, we move our PoC to Kubernetes (on Minikube) and review some monitoring options. Finally, in Part 4: Source Code, we look at the Scala source code required to create the PoC.
In the previous post, we showed you how to test, run locally, package, deploy, and run our PoC in Docker using the associated repository; how to load test it via Gatling; and how to monitor it via Lightbend Telemetry tooling and sandbox. In this part, we introduce Lightbend Console for Kubernetes (K8s), and then deploy our PoC in Minikube (the desktop version of K8s) using the YAML files provided in the repository. Again, we’ll load test with Gatling, but this time we’ll monitor our PoC in Lightbend Console. Finally, we’ll do a deep dive into the K8s YAML files used for deployment.
Lightbend believes that Kubernetes is the best way to manage and orchestrate containers in the cloud; however, it doesn’t address the needs of programming and Reactive systems development. Akka on the other hand provides tools to solve the challenges associated with building distributed applications. When combined, Akka wonderfully complements Kubernetes by doing the heavy lifting of managing distributed state and communication, maintaining application consistency and delivery guarantees, while Kubernetes manages the underlying infrastructure.
As we move our focus to K8s, we need to shift to new tools. One of those is Lightbend Console, which enables you to observe and monitor Akka-based applications running on K8s.
Running Lightbend Console in Minikube adds some requirements, such as more memory and Helm, so we recommend starting with the Minikube setup here. Once you’ve completed these prerequisites and successfully installed the Console, you can move on to deploying the PoC.
If you don’t have a Lightbend Subscription, you’ll need to skip this step.
Now that we have Minikube up and running with Lightbend Console we’re ready to deploy our PoC.
The repository contains a subdirectory called K8s, which holds the YAML files for each of the three separate deployments required to deploy and run our PoC on K8s: cassandra, endpoints, and nodes (a.k.a. cluster in previous installments). The directory structure looks like this:
K8s/
  cassandra/
    cassandra-db-deployment.yaml
    cassandra-db-service.yaml
  endpoints/
    endpoint-deployment.yaml
    endpoint-role-binding.yaml
    endpoint-service-account.yaml
    endpoint-service.yaml
  nodes/
    akka-cluster-member-role.yaml
    node-deployment.yaml
    node-role-binding.yaml
    node-service-account.yaml
    node-service.yaml
Each of these directories contains YAML files for a deployment and a service. However, both endpoints and nodes require additional role definitions for proper operation and cluster formation.
To deploy the PoC in Minikube, first point your shell at Minikube’s Docker daemon and publish the image locally, then apply the three sets of YAML files from the K8s directory:
eval $(minikube docker-env)
sbt docker:publishLocal
cd K8s
kubectl apply -f cassandra
kubectl apply -f nodes
kubectl apply -f endpoints
If you don’t have a Lightbend Subscription, you’ll need to change the configuration files referenced in the deployment YAML files for nodes and endpoints respectively:
On line 45 of nodes/node-deployment.yaml, use the file name nonsup-cluster-application-k8s.conf instead of cluster-application-k8s.conf (see the example below).
On line 26 of endpoints/endpoint-deployment.yaml, use the file name nonsup-endpoint-application-k8s.conf instead of endpoint-application-k8s.conf.
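For example, the nodes change amounts to swapping the config resource passed via JAVA_OPTS in the deployment’s env block (shown in full later in this post):

- name: JAVA_OPTS
  value: "-Dconfig.resource=nonsup-cluster-application-k8s.conf"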
Open Lightbend Console in your browser with:
minikube service expose-es-console -n lightbend
Next, find the external URL of the PoC’s endpoint service:
minikube service endpoint --url
You can then exercise the PoC’s API with curl, substituting the endpoint URL returned above for <endpoint-url>:
curl -d '{"artifactId":1, "userId":"Michael"}' -H "Content-Type: application/json" -X POST <endpoint-url>/artifactState/setArtifactReadByUser
curl '<endpoint-url>/artifactState/getAllStates?artifactId=1&userId=Michael'
The second call should return something like:
{"artifactId":1,"artifactInUserFeed":false,"artifactRead":true,"userId":"Michael"}
In K8s, a NodePort exposes a service on each node’s IP at a static port (the NodePort), whereas an Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster.
In a normal production environment, an Ingress is the more appropriate approach to exposing services outside of the cluster. However, this functionality isn’t currently provided out of the box, so you need to install additional software to support an Ingress controller such as NGINX, which is a common way to provide a load balancer in front of a K8s internal service. Many cloud providers also provide load balancers that can be added to K8s. It’s also worth noting that some distributions, such as OpenShift, provide built-in load balancing through Routes.
In a local development environment running Minikube, a NodePort is a simple and easy approach since no additional software is required and internal ports are mapped to the host’s external ports. In Minikube, a node is normally just a virtual machine running on your laptop.
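For comparison, here’s a minimal sketch of what an Ingress for the endpoint service might look like; this is hypothetical, isn’t part of the repository, and assumes an Ingress controller such as NGINX is already installed:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: endpoint-ingress
spec:
  rules:
  - http:
      paths:
      - path: /artifactState
        pathType: Prefix
        backend:
          service:
            name: endpoint
            port:
              number: 8082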
As we discussed in the two previous posts, Gatling is a highly capable load testing tool built on top of Akka that is used to load test HTTP endpoints.
In our previous example, we relied upon the PoC running in Docker, but now that we’re running the PoC in Minikube we’ll need to update our application.conf with the endpoint URL we found above via minikube service endpoint --url. For example, if you comment out the localhost entry, un-comment the sample line for Minikube, and then update it with the IP address found in the previous step, the application.conf should look something like this:
loadtest {
  # provides the base URL: http://localhost:8082
  # baseUrl = "http://localhost:8082"
  # sample baseURL when running locally in docker
  # baseUrl = "http://172.18.0.5:8082"
  # sample baseURL when running locally in minikube
  baseUrl = "http://192.168.39.237:30082"
}
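To see how that setting is typically consumed, here’s a minimal sketch of a Gatling simulation that reads loadtest.baseUrl via Typesafe Config; the PoC’s actual ArtifactStateScenario may differ, and the class and request names below are illustrative only:

import scala.concurrent.duration._
import com.typesafe.config.ConfigFactory
import io.gatling.core.Predef._
import io.gatling.http.Predef._

class ExampleStateSimulation extends Simulation {
  // read the base URL configured in loadtest.baseUrl above
  private val baseUrl = ConfigFactory.load().getString("loadtest.baseUrl")
  private val httpProtocol = http.baseUrl(baseUrl)

  private val scn = scenario("getAllStates")
    .exec(http("getAllStates")
      .get("/artifactState/getAllStates?artifactId=1&userId=Michael"))

  setUp(scn.inject(rampUsers(1000) during (5.minutes)))
    .protocols(httpProtocol)
}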
In a new terminal, switch to the gatling directory from the PoC’s root directory.
Enter the command:
sbt gatling:test
While the load test is running, we recommend jumping to the next section and taking a look at the metrics being collected in real time by Lightbend Telemetry in Lightbend Console.
Once the test has completed, a message is displayed referring you to the specific report. You can find all reports created by Gatling in the ./gatling/target/gatling directory, which is created each time a load test is run.
If you take a look at the file gatling/src/test/scala/com/lightbend/gatling/ArtifactStateScenario, you’ll see the default setup near the bottom:
scn.inject(rampUsers(1000) during (5 minutes))
This setup ramps up to 1000 users over a period of five minutes. Normally this test should run without any errors. You can easily modify the number of users, the ramp-up time, or both, and run it again to see how things behave.
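For example, other injection steps from the Gatling DSL can be combined; the numbers here are arbitrary:

scn.inject(
  atOnceUsers(100),                             // start 100 users immediately
  rampUsersPerSec(10) to 50 during (2.minutes)  // then ramp the arrival rate from 10 to 50 users per second
)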
Another experiment you can try is scaling the number of node replicas up and down. For example, from the K8s directory, enter the following command to scale replicas up from three to five:
kubectl scale --replicas=5 -f nodes/node-deployment.yaml
To scale nodes back down, simply reuse the command above and change the replica count back to the original value of three.
You can also try scaling up the number of endpoint replicas. For example, from the K8s directory, enter the following command to scale replicas up from one to two:
kubectl scale --replicas=2 -f endpoints/endpoint-deployment.yaml
Note: to properly scale endpoints you should really be using a properly configured load balancer, through an Ingress, to route traffic instead of the NodePort we’re currently using. OpenShift / Minishift provides built-in Routes that support load balancing.
Ultimately, you’ll probably find that Cassandra becomes your bottleneck since it’s only running in a single pod.
A number of other setups have been provided in ArtifactStateScenario for your experimentation, but they have been commented out. One in particular comes directly from the Gatling documentation here, and should overload the PoC and cause a number of timeouts. Look through the various metrics and try to determine where the problems originate.
You can find the Grafana home page provided by Lightbend Console by clicking on the Grafana icon in the top left corner of the Console.
We recommend taking a look at the following dashboards while the Gatling load test (previous section) is running.
As a reminder, the repository’s K8s subdirectory contains the YAML files for the three deployments required to run our PoC on K8s: Cassandra, Nodes, and Endpoints. Each directory contains YAML files for a deployment and a service, and both endpoints and nodes require additional roles for proper operation and cluster formation. Let’s walk through each in turn.
We’re not going to spend a lot of time here on Cassandra, as we simply used the Kubernetes Kompose utility to convert our docker-compose-cassandra.yml file into K8s YAML files. It’s usually not recommended to run production databases inside of K8s unless they’re managed by an operator. However, for our PoC, these YAML files work well enough to bring up a single Cassandra pod in K8s.
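For reference, the conversion itself is a one-liner, assuming the kompose CLI is installed:

kompose convert -f docker-compose-cassandra.yml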
We’re starting with nodes here because the akka-cluster-member-role.yaml is used by both nodes and endpoints:
akka-cluster-member-role.yaml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: akka-cluster-member
rules:
- apiGroups: [""] # "" indicates the core API group
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
This YAML file is used to create the akka-cluster-member role that’s required by Akka Management, which uses the K8s API to find the other pods (nodes) for proper cluster formation.
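For reference, this discovery is typically driven by Akka Cluster Bootstrap configuration along these lines; this is a sketch, and the exact keys and label selector in the PoC’s cluster-application-k8s.conf may differ:

akka.management.cluster.bootstrap {
  contact-point-discovery {
    # discover peer pods through the Kubernetes API, which is why the role above is needed
    discovery-method = kubernetes-api
  }
}
akka.discovery.kubernetes-api {
  # assumed selector, matching the app label used in the deployments below
  pod-label-selector = "app=ArtifactStateCluster"
}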
There’s a lot going on in node-deployment.yaml, so we’ll highlight individual pieces of the file. The deployment starts with traditional metadata settings:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: node
spec:
  replicas: 3
  selector:
    matchLabels:
      app: ArtifactStateCluster
  template:
    metadata:
      labels:
        app: ArtifactStateCluster
        tag: clusternode
      annotations:
        prometheus.io/scrape: 'true'
Note above: the deployment creates three replicas, the matchLabels selector ties the deployment to pods labeled app: ArtifactStateCluster, and the prometheus.io/scrape: 'true' annotation marks the pods so that Prometheus (used by Lightbend Console) will scrape them for metrics.
Next comes the spec section of the deployment:
spec:
  serviceAccountName: nodes-sa
  containers:
  - name: node
    image: akka-typed-blog-distributed-state/cluster:0.1.0
Note above: the pods run under the nodes-sa service account (bound below to the akka-cluster-member role), and the container image is akka-typed-blog-distributed-state/cluster:0.1.0, which we published into Minikube’s Docker daemon earlier with sbt docker:publishLocal.
Next come the K8s health probe configurations:
#health
readinessProbe:
  httpGet:
    path: /ready
    port: akka-mgmt-http
  initialDelaySeconds: 10
  periodSeconds: 5
livenessProbe:
  httpGet:
    path: /alive
    port: akka-mgmt-http
  initialDelaySeconds: 90
  periodSeconds: 30
#health
Note above: Akka Management, which we’ve included in our build, provides built-in implementations for K8s health readiness and liveness probes. You simply need to provide the proper configuration in your deployment YAML file to take advantage of these features. Since we’re using Akka Cluster, we need to give Akka enough time to find all nodes and then form the cluster before accepting outside requests.
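These routes are served by Akka Management’s HTTP endpoint. As a rough sketch (not necessarily the PoC’s exact code), an application typically starts it like this when the ActorSystem boots:

import akka.actor.ActorSystem
import akka.management.scaladsl.AkkaManagement
import akka.management.cluster.bootstrap.ClusterBootstrap

object Management {
  def start(system: ActorSystem): Unit = {
    AkkaManagement(system).start()   // serves the /ready and /alive health routes on port 8558
    ClusterBootstrap(system).start() // discovers peer pods via the K8s API to form the cluster
  }
}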
Next, let’s look at the environment variables the deployment passes to the container:
env:
- name: HOSTNAME
  valueFrom:
    fieldRef:
      apiVersion: v1
      fieldPath: status.podIP
- name: CASSANDRA_CONTACT_POINT1
  value: "cassandra-db"
- name: JAVA_OPTS
  value: "-Dconfig.resource=cluster-application-k8s.conf"
Note above: HOSTNAME is set to the pod’s IP via the downward API (status.podIP), CASSANDRA_CONTACT_POINT1 points at the cassandra-db service, and JAVA_OPTS selects which config resource the JVM loads. Environment variables are accessed in Typesafe Config files using the ${?ENV_NAME} substitution syntax.
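As a purely hypothetical illustration (these keys aren’t taken from the PoC’s config), the substitution pattern looks like this:

# default value, overridden whenever CASSANDRA_CONTACT_POINT1 is set in the environment
cassandra.contact-point = "localhost"
cassandra.contact-point = ${?CASSANDRA_CONTACT_POINT1}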
Next comes port naming:
ports:
# akka remoting
- name: remoting
  containerPort: 2552
  protocol: TCP
# external http
- name: akka-mgmt-http
  containerPort: 8558
  protocol: TCP
- name: node-metrics
  containerPort: 9001
restartPolicy: Always
Note above: each container port is given a name; akka-mgmt-http (8558) is referenced by the health probes above and by the service’s targetPort below, remoting (2552) is used for Akka Cluster communication, and node-metrics (9001) exposes the metrics scraped by Prometheus for Lightbend Console. Next is node-role-binding.yaml:
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nodes-akka-cluster
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: akka-cluster-member
subjects:
- kind: ServiceAccount
  namespace: default
  name: nodes-sa
node-role-binding.yaml binds the akka-cluster-member role to the service account nodes-sa.
apiVersion: v1
kind: Service
metadata:
  name: node
spec:
  type: NodePort
  ports:
  - name: akka-mgmt-http
    protocol: TCP
    port: 8558
    targetPort: akka-mgmt-http
    nodePort: 30558
  selector:
    tag: clusternode
node-service.yaml exposes an external NodePort on port 30558 on each running node; in our case this is the single VM hosting Minikube. The incoming traffic is forwarded, in round-robin fashion, to the pods identified by the selector tag: clusternode.
Port 8558 is being used for Akka Management’s HTTP interface. To find out more please see the documentation here.
For example, if you wanted to see what Akka views as cluster membership you could enter the following in a terminal to find the IP:
$ minikube service node --url
http://192.168.39.73:30558
Then, using the response above you could use the following to find Akka’s cluster membership:
$ curl http://192.168.39.73:30558/cluster/members | python -m json.tool
The result might look like this:
{
    "leader": "akka://ArtifactStateCluster@172.17.0.11:2552",
    "members": [
        {
            "node": "akka://ArtifactStateCluster@172.17.0.11:2552",
            "nodeUid": "5081129479427795119",
            "roles": [
                "sharded",
                "k8s",
                "dc-default"
            ],
            "status": "Up"
        },
        {
            "node": "akka://ArtifactStateCluster@172.17.0.13:2552",
            "nodeUid": "-869599694544277744",
            "roles": [
                "sharded",
                "k8s",
                "dc-default"
            ],
            "status": "Up"
        },
        {
            "node": "akka://ArtifactStateCluster@172.17.0.14:2552",
            "nodeUid": "-9132943336702189323",
            "roles": [
                "endpoint",
                "k8s",
                "dc-default"
            ],
            "status": "Up"
        },
        {
            "node": "akka://ArtifactStateCluster@172.17.0.15:2552",
            "nodeUid": "4657711064622987928",
            "roles": [
                "sharded",
                "k8s",
                "dc-default"
            ],
            "status": "Up"
        }
    ],
    "oldest": "akka://ArtifactStateCluster@172.17.0.11:2552",
    "oldestPerRole": {
        "dc-default": "akka://ArtifactStateCluster@172.17.0.11:2552",
        "endpoint": "akka://ArtifactStateCluster@172.17.0.14:2552",
        "k8s": "akka://ArtifactStateCluster@172.17.0.11:2552",
        "sharded": "akka://ArtifactStateCluster@172.17.0.11:2552"
    },
    "selfNode": "akka://ArtifactStateCluster@172.17.0.13:2552",
    "unreachable": []
}
If you repeat this command several times, you’ll notice that the “selfNode” changes in round-robin fashion through each of the members.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nodes-sa
node-service-account.yaml simply creates the service account “nodes-sa” used for bindings between roles and the deployment.
That pretty much covers the nodes YAML files; next, we’ll take a look at the endpoints YAML files.
The endpoints YAML files follow the same basic pattern as the nodes, not to mention that both deployments use the exact same Docker image, so we’re just going to highlight some of the differences in this section.
First, let's take a look at role bindings, as we’re re-using the akka-cluster-member role.
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: endpoint-akka-cluster
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: akka-cluster-member
subjects:
- kind: ServiceAccount
  namespace: default
  name: endpoints-sa
We’re still using a unique service account for endpoints, but because Akka Management uses the K8s API during the cluster bootstrapping process to find other pods, we’re sharing the same akka-cluster-member role between nodes and endpoints.
Another difference with endpoints is that we’re exposing two NodePorts in our service definition.
apiVersion: v1
kind: Service
metadata:
  name: endpoint
spec:
  type: NodePort
  ports:
  - name: "8082"
    protocol: TCP
    port: 8082
    targetPort: 8082
    nodePort: 30082
  - name: akka-mgmt-http
    protocol: TCP
    port: 8558
    targetPort: akka-mgmt-http
    nodePort: 30559
  selector:
    tag: endpoint
In the case of endpoints, we’re exposing ports 8082 and 8558 on NodePorts 30082 and 30559 respectively. Port 30082 serves our PoC’s API, while port 30559 provides access to Akka Management. We’re already using port 30558 for akka-mgmt-http on the nodes, so we had to choose a different port here. It’s also possible to omit the nodePort: setting and let K8s assign an available port.
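If you do let K8s pick the ports, a quick way to see what was assigned is the standard kubectl command:

kubectl get service endpoint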
You can verify the IP and ports of the endpoint service with the following Minikube command in a terminal:
$ minikube service endpoint --url
http://192.168.39.73:30082
http://192.168.39.73:30559
As you can see, the IP and both ports are being reported.
To optionally undeploy the PoC use these commands in a terminal from the K8s directory:
kubectl delete -f endpoints
kubectl delete -f nodes
kubectl delete -f cassandra
You can find instructions on uninstalling Lightbend Console here.
To shut down Minikube enter the following in a terminal:
minikube stop
Note: you can simply stop Minikube without stopping or undeploying the PoC. To delete the Minikube VM entirely, along with everything deployed on it, use:
minikube delete
In this blog post, we looked at how to use sbt’s Native Packager to build our PoC and deploy it to Minikube / Kubernetes. We also showed you how to run load tests with Gatling, experiment with scaling, and view the instrumentation for the PoC using Lightbend Console. Finally, we did a deep dive into the K8s deployment YAML files we’re using for the PoC.
Next up in this series is Part 4: Source Code, in which we look at the Scala source code, including Akka’s 2.6.x functional Typed Persistent Actors, required to create the PoC. Or, if you'd like to set up a demo or speak to someone about using Akka Cluster in your organization, click below to get started in that direction: