Microservices - reactive communication patterns
Project showcasing different microservice communication styles using Scala, Akka, Play and other tools from the Scala ecosystem.
There are several ways to get this template.

Open the reactive-microservices template in the Lightbend Activator UI. Already have Lightbend Activator (get it here)? Launch the UI, then select reactive-microservices in the list of templates.

Download the reactive-microservices project as a zip archive. If you haven't installed Activator, you can get the code by downloading the template bundle. In your File Explorer, navigate into the directory that the template was extracted to, right-click on the file named "activator.bat", then select "Open", and if prompted with a warning, click to continue. Or from a command line:

C:\Users\typesafe\reactive-microservices> activator ui

This will start Lightbend Activator and open this template in your browser.

Create the reactive-microservices project from the command line. If you have Lightbend Activator, use its command line mode to create a new project from this template: run activator new PROJECTNAME reactive-microservices.
The creator of this template maintains it at https://github.com/theiterators/reactive-microservices#master.
We've included the text of this template's tutorial below, but it may work better if you view it inside Activator on your computer. Activator tutorials are often designed to be interactive.
Reactive microservices is an activator template completely devoted to the microservices architecture. It lets you learn about microservices in general — different patterns, communication protocols and 'tastes' of microservices. All these concepts are demonstrated using Scala, Akka, Play and other tools from the Scala ecosystem. For the sake of clarity, we skipped topics related to deployment and operations — that's a great subject for another big activator template.
To feel comfortable while playing with this template, make sure you know the basics of Akka HTTP, which is a cornerstone of this project. We recently released an Akka HTTP activator template that may help you start. At least a brief knowledge of Akka remoting, Akka persistence, Akka streams and Play Framework websockets is also highly recommended. Anyway, don't worry — all these technologies (and more!) will be discussed along the way, but we won't dig into the details.
This activator template consists of 9 runnable subprojects — the microservices:
Take some time to review the application.conf files located in the resources subdirectory of each microservice. You can also look at the docker-compose.yml file, which contains preconfigured Docker images for all the required databases.
Before running the PostgreSQL-backed services (such as auth-password) you need to run the SQL migration scripts located in the postgres directory. If you want to use non-default names, please tweak the application.conf files. You can also tweak and use this script in your console:
psql -h localhost -U postgres -f ./postgres/init.sql &&
psql -h localhost -U postgres -f ./postgres/auth_entry.sql &&
psql -h localhost -U postgres -f ./postgres/identity.sql
Run docker-compose up in the project's main directory to launch the databases, or, if you are using your own database instances, make sure you have PostgreSQL, MongoDB and Redis up and running.
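The exact contents of docker-compose.yml are in the repository; a minimal sketch of such a file for the three stores might look like the following (the image tags and port mappings here are illustrative assumptions, not the template's exact configuration):

```yaml
# Hypothetical docker-compose sketch: three databases on their default ports.
postgres:
  image: postgres
  ports:
    - "5432:5432"
mongo:
  image: mongo
  ports:
    - "27017:27017"
redis:
  image: redis
  ports:
    - "6379:6379"
```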
You can run each service separately, but we also provided an sbt task that runs them all at once. Due to some issues with Play/sbt cooperation, metrics-collector and btc-ws should be run separately. In order to run them in one sbt CLI instance, use these commands:
; project btc-ws; run 9000
; project metrics-collector; run 5001
Everything else should work out of the box. Enjoy!
There's no formal or widespread definition of a microservice. However, microservices are usually defined as a software architectural style in which a system is composed of multiple services. Those services are small (or at least smaller than in typical, monolithic applications), can be independently deployed, and communicate using (lightweight) protocols. A well–defined microservice should be organized around business capabilities or — to put it in Domain–Driven Design wording — should encapsulate a Bounded Context. Sometimes it is said that microservices are the Single Responsibility Principle applied to architecture.
Applications based on a microservice architecture are essentially distributed systems. This means that they can scale horizontally in a much more flexible way than monolithic systems — instead of replicating the whole heavyweight process, one can spawn multiple instances of just the services that are under load. This guarantees better hardware utilization — and money savings. Another important consequence of moving from a monolith to microservices is the need to design for failure, which can result in a truly reactive system. Like it or not, while designing a distributed system you have to take failure into account — otherwise you will see your system fall apart.
However, the technical benefits of introducing a microservice–based architecture are far less important than the social ones. MSA enables truly cross–functional teams organized around the business capabilities implemented by microservices. This in turn allows easy team scaling and makes team members care more about the actual business behind the technology. This agility, together with the autonomy of teams, usually results in a shorter time–to–market. In fact, most of the early adopters of microservices (Netflix, Amazon, SoundCloud, Tumblr) point to the ability to deliver faster as the main selling point of microservices.
A shorter time–to–market stems not only from well–organized teams but also from other features of MSA. First of all, microservices are supposed to be easier to understand (and thus maintain) than monolithic systems. Smaller size also means that microservices can easily be rewritten (or simply disposed of) instead of being refactored, which is usually expensive and results in sub–optimal code quality. The autonomy of teams enables a polyglot approach, which provides better utilization of tools, and bounded contexts are supposed to increase code reusability. Microservices should also be fun for developers, like everything that's new and challenging.
A shift from a monolithic to a microservices architecture is a serious step. Distributed systems are totally different from monolithic ones and have their own dilemmas and problems. First and foremost, microservices are all about communication and protocols. One should be aware that microservices don't magically suppress complexity — they just move it from the code to the communication layer. Different communication protocols (synchronous and asynchronous) and transport guarantees will be the main subject of this tutorial. Another important subject worth mentioning while discussing microservices is polyglot persistence — how to embrace multiple different data stores without losing consistency and performance. You can check out this approach in our activator template. A very common question asked while developing microservices is 'how big is a microservice?' — we'll discuss it later. The microservice approach requires some boilerplate; one may be tempted to share code using shared libraries, which introduces coupling — why and when would you like to do that? We'll touch on that too. There's also a multitude of problems which are out of the scope of this template, like: testing (why, when and how to do it?), the polyglot approach (is it worth the cost?), operations, contract management (what's my API, who are my collaborators, how can I contact them?), API versioning (how to stay backward compatible?), logging & debugging, and security. Feel encouraged to enrich this activator template with suitable examples and tutorials — and let us know!
To present different concepts related to microservices we built an authentication system. The idea behind it is really simple — you can sign in/sign up using arbitrarily chosen authentication methods (currently email–password, Facebook OAuth and codecard). The number of authentication methods used indicates the strength of the user's token. A user that presents a valid authentication token can access the business applications behind the authentication system. To test if our authentication system actually works, we integrated a simple application that, after signing in, lets users subscribe and get real–time notifications about bitcoin market events such as rate changes, volume above/below a certain level etc.
If you want to have a closer look at this schema go here.
The complete system consists of 9 microservices:
Synchronous communication is a data transfer method in which receiving and sending are governed by strict timing signals. It usually takes the form of a request–response protocol — a party sends data only when it's explicitly asked for it. That's how a typical HTTP service works. Synchronous protocols are very popular because they're easy to understand and analyze. However, they have one significant drawback — they don't scale well. First of all, synchronous protocols introduce liveness issues — when you ask for something, you have to be prepared that your call may explicitly fail or, even worse, you may never get any response at all (that's why you need timeouts). Secondly, if you ask for something, you usually have to wait for the response before continuing processing. Several such calls and you'll end up waiting most of the time instead of doing something actually useful. Having said that, the world wide web we use daily is mostly synchronous — you click the 'Log in' button and wait for the 'Login successful' confirmation box. Request–response — that's how vanilla HTTP works. Synchronous communication is also useful (well, almost unavoidable) while designing a microservice–based system — you'll learn why and when.
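The ask-and-wait shape of a synchronous call, including the timeout it requires, can be sketched in plain Scala (the service, token format and helper names below are made up for illustration; a real call would go over HTTP):

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global
import scala.util.Try

// A hypothetical downstream call: in a real service this would be an
// HTTP request to another microservice.
def fetchToken(userId: String): Future[String] =
  Future { s"token-for-$userId" }

// Synchronous style: the caller blocks until a response arrives or the
// timeout fires -- it cannot do useful work in the meantime, and the
// Try captures both explicit failures and the timeout case.
def login(userId: String): Try[String] =
  Try(Await.result(fetchToken(userId), 2.seconds))

val result = login("alice")
```

The `Await.result` here is exactly the cost the paragraph describes: one blocked thread per in-flight call.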
identity-manager is a microservice for issuing and managing existing identities in the system. The code of this service is very straightforward; it consists of:
auth-codecard and auth-password are services for signing in/signing up using a codecard and email–password respectively. These are user-facing services, which means that the SPA frontend application communicates with them directly. They're synchronous by design — when a user logs in or registers, she has to wait for the operation to complete before proceeding. These services are much more complex, and thus structured:
auth-fb is a service for signing in/signing up with Facebook OAuth and is very similar to auth-codecard and auth-password. However, it lacks a separate Repository layer. Microservices are all about agility and flexibility, so we don't have to follow the same rules all the time. auth-fb uses Redis — a database with a very simple interface — and writing a dedicated class for interacting with it would be overkill. That's why all the database access is done inside the Service. Looking at auth-fb, it's easy to notice how the polyglot persistence approach helps in developing microservices — each authentication method service is responsible for (and only for) its own registration data and can use the best persistence method to store it (a simple key–value store in the case of auth-fb). So–called 'distributed truth' makes systems a little harder to maintain and use (you can't query all the datastores at once, data access is non–uniform etc.) but it enables better tool utilization, scaling, decoupling and resilience (of course, when done right).
token-manager is the focal point of our authentication system. Authentication method services get fresh tokens for logged-in users from it, and business services verify tokens presented by users to check identities. token-manager is built using the previously presented layered architecture (routes — service — repository) but it lacks a gateway, as it doesn't initiate communication with other services. It's the most important service in the whole project, so it's equipped with custom metrics reporting — there's processing time reporting for every request and success/failure reporting for each action. You'll learn how it works under the hood later in this tutorial.
In the microservice architecture you usually have services that offer a public API accessible by clients and an internal API that is used only by other services. Sometimes a service offers some features that are public and others that are strictly private. That's the case with token-manager — you want your clients to be able to log out (delete a token) but you don't want them to be able to add a new token without going through one of the auth services. This could easily be handled by a proxy server (like HAProxy or nginx), but sometimes you may want to do more complex transformations (e.g. change the API). In this case you should write a proxy service. The session manager is an exemplary proxy service that changes the API and hides internals.
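The reshaping such a proxy does can be sketched with plain Scala types (the request types below are hypothetical stand-ins; the real session manager works on HTTP requests):

```scala
// What clients are allowed to ask for: only logout is public.
sealed trait PublicRequest
final case class Logout(token: String) extends PublicRequest

// The internal API of the token service: richer than the public one.
sealed trait InternalRequest
final case class DeleteToken(token: String)   extends InternalRequest
final case class CreateToken(identityId: Long) extends InternalRequest

// The proxy translates the public API into the internal one. It never
// emits CreateToken, so token creation stays reachable only from the
// auth services -- the internals are hidden behind the translation.
def proxy(req: PublicRequest): InternalRequest = req match {
  case Logout(token) => DeleteToken(token)
}

val forwarded = proxy(Logout("abc123"))
```

The type system makes the hiding explicit: there is simply no public request that maps to `CreateToken`.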
Asynchronous communication, unlike synchronous, is not restricted by any timing signals. It is usually implemented by message passing (vs request–response) — 'telling' (vs 'asking'). Asynchronous protocols usually scale better, as communicating parties don't have to wait. However, asynchronous message passing is unnatural and can get really complex, thus it's very often complicated and hard to debug. Nonetheless, truly reactive systems should rely on asynchronous message–passing — as stated in the Reactive Manifesto.
Earlier you saw a nice interface for reporting metrics. Let's see how it's built. metrics-common is a shared library that consists of:
metrics-collector is the receiving end of the metrics subsystem. It's a Play app that receives metrics, stores them in a database and presents them to system administrators via websockets. The websockets part is really straightforward — it passes received metrics to a websocket. The receiving endpoint is much more interesting. It's of course built using Akka HTTP, but with a customized flow instead of typical routing. The flow starts with broadcasting each request to two subflows: requestResponseFlow simply maps every request to a 200 HTTP response, while requestMetricFlow turns requests into Metrics or, in case of error, into an 'empty' element. requestResponseFlow is the actual output of the main flow, while requestMetricFlow is broadcast to two subscribers: journalerSubscriber, an actor subscriber that saves received metrics to MongoDB (e.g. for further analysis), and wsSubscriber, another actor subscriber that broadcasts received metrics to all connected websockets via an Akka router.
If you want to have a closer look at this schema go here.
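The broadcast shape of that flow can be sketched without Akka Streams as two functions fed from the same input (the types and the metric wire format below are illustrative assumptions; the real flow is built from Akka Streams stages):

```scala
// Stand-ins for the real types: the actual service parses Metric
// instances out of Akka HTTP requests.
final case class Request(body: String)
final case class Metric(name: String, value: Long)

// requestMetricFlow: parse a metric, or drop the element on error
// (the 'empty' case from the text).
def toMetric(req: Request): Option[Metric] =
  req.body.split(':') match {
    case Array(name, value) => Some(Metric(name, value.trim.toLong))
    case _                  => None
  }

// requestResponseFlow: every request maps to a 200 response.
def toResponse(req: Request): Int = 200

// The broadcast: each incoming request is fed to BOTH flows --
// responses go back to the caller, metrics go on to the subscribers.
def handle(reqs: List[Request]): (List[Int], List[Metric]) =
  (reqs.map(toResponse), reqs.flatMap(toMetric))

val (responses, metrics) =
  handle(List(Request("requests-per-second: 42"), Request("garbage")))
```

Note how the malformed request still gets its 200 response but produces no metric, mirroring the 'empty' element in the real flow.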
Another way to provide asynchronicity in the world of webservers is websockets. Websockets are similar to TCP sockets, but additionally they provide minimal framing. This makes them ideal for asynchronous message passing. You've seen them in action in metrics-collector, but this time we'll analyse a more complex service.
btc-ws is a façade service for the business part of our application. It allows users, after authentication, to manage subscriptions for BTC market events and receive alerts. While one could argue that users' actions can be handled synchronously, market events are asynchronous and should be handled as such. Websockets are a perfect fit for such a case — otherwise we would have to use HTTP polling, which is inefficient. The first thing to notice in the btc-ws code is a long block of mappings needed for websocket message to Scala object translations. The websocket initialization code is really straightforward — it retrieves the user's identity based on the presented token and opens a websocket handled by the WebSocketHandler actor. You'll learn what happens there shortly.
The 'go–to' tool when it comes to asynchronous message passing in the Scala world is of course Akka. It has great capabilities and a convenient interface, but it comes with a cost. If you choose Akka as the communication protocol for a significant part of your project, you lose many of the benefits of the polyglot approach. First of all, if you want to interoperate with Akka, you have to be on the JVM and preferably use Java or Scala. Your code may also become more tightly coupled — Akka encourages the usage of shared libraries and data structures. As a result, in certain cases it might be better to consider using lightweight message queues such as RabbitMQ or Kafka to avoid the aforementioned drawbacks, but if you're sure you won't be leaving the JVM anytime soon, Akka is definitely the best choice. That was also our decision for the fully asynchronous part of our system.
The WebSocketHandler actor is modelled as a simple state machine to handle all the possible failures. Let's see the state diagram.
When the websocket connects and the actor starts, it sends a request to the btc-users supervisor to get a remote command handler (the waitForHandler state); in case of failure it simply disconnects the websocket — there's nothing more we can do here. After successfully acquiring the handler (the waitForSubscriptions state), the websocket actor requests the list of existing subscriptions for the user — a handler failure is treated the same as before. The list of subscriptions causes the websocket actor to enter operational handling of commands (which are routed to the handler) and market events from the handler (which are routed to the websocket) — the handleUser state; in case of a timeout we assume something bad happened to the handler, so we switch back to the waitForHandler state. Notice how clean a protocol we've got — the btc-ws microservice doesn't know any implementation details of btc-users — it just sends messages with clear semantics.
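The state machine described above can be sketched as a plain transition function (state and event names follow the text; message payloads and heartbeats are omitted, so this is a simplification, not the actor's actual code):

```scala
// States of the handler, modelled after the diagram.
sealed trait State
case object WaitForHandler       extends State
case object WaitForSubscriptions extends State
case object HandleUser           extends State
case object Disconnected         extends State

// Events that drive the transitions.
sealed trait Event
case object HandlerAcquired     extends Event
case object SubscriptionsListed extends Event
case object HandlerFailed       extends Event
case object Timeout             extends Event

def transition(state: State, event: Event): State = (state, event) match {
  case (WaitForHandler, HandlerAcquired)           => WaitForSubscriptions
  case (WaitForHandler, HandlerFailed)             => Disconnected
  case (WaitForSubscriptions, SubscriptionsListed) => HandleUser
  case (WaitForSubscriptions, HandlerFailed)       => Disconnected
  // Timeout while operational: assume the handler died, reacquire it.
  case (HandleUser, Timeout)                       => WaitForHandler
  case (s, _)                                      => s
}

// A successful session start-up walks through both waiting states:
val ready = List[Event](HandlerAcquired, SubscriptionsListed)
  .foldLeft(WaitForHandler: State)(transition)
```

Keeping the transitions in one total function like this makes the failure paths as explicit as the happy path.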
btc-users is a microservice completely based on Akka. It consists of three actor types:
UsersManager plays the role of a supervisor. It responds to requests from btc-ws to create a handler (a UserHandler actor) for the user with a given id.
UserHandler is where all the heavy lifting happens. It's a persistent actor that processes subscription requests and issues alarms based on the ticker from the BTC market. First and foremost, we once again leveraged the polyglot persistence approach — we used Akka persistence to persist subscription settings. Subscribe/unsubscribe actions are a perfect case for event sourcing, and that's exactly how we implemented it — see receiveRecover. Besides handling subscribe/unsubscribe requests, UserHandler responds to QuerySubscriptions and broadcasts market alarms. The UserHandler actor manages its lifetime similarly to how btc-ws does — with heartbeats and timeouts.
DataFetcher is a very simple actor that fetches the BTC ticker every few seconds and broadcasts it to all UserHandler actors via an Akka router. That's it — that's how subscriptions are managed and market alarms are issued.
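The event-sourced part of UserHandler can be sketched as a fold over the journal, which is essentially what receiveRecover does in a persistent actor (the event and subscription types here are illustrative, not the template's exact definitions):

```scala
// A subscription to a BTC market event, e.g. "alert when rate crosses 500".
final case class Subscription(currency: String, threshold: BigDecimal)

// Persisted events: only facts that happened, never current state.
sealed trait SubscriptionEvent
final case class Subscribed(sub: Subscription)   extends SubscriptionEvent
final case class Unsubscribed(sub: Subscription) extends SubscriptionEvent

// Recovery replays the whole journal: folding the events rebuilds the
// current subscription set, just as receiveRecover replays persisted
// events into the actor's state.
def recover(journal: List[SubscriptionEvent]): Set[Subscription] =
  journal.foldLeft(Set.empty[Subscription]) {
    case (subs, Subscribed(s))   => subs + s
    case (subs, Unsubscribed(s)) => subs - s
  }

val journal = List(
  Subscribed(Subscription("BTC", BigDecimal(400))),
  Subscribed(Subscription("BTC", BigDecimal(500))),
  Unsubscribed(Subscription("BTC", BigDecimal(400)))
)
val current = recover(journal)
```

Because only events are persisted, the full subscription history stays available for free, which is what makes subscribe/unsubscribe such a natural fit for event sourcing.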
During our tour of microservices we hopefully showed the full power that comes from mixing different approaches, techniques and tools. Scala's toolbelt (and especially Akka and Play) is, without question, ready for building reactive microservice–based distributed systems. However, before migrating your whole project to MSA, make sure you deeply understand all the dilemmas and problems of microservices, particularly the ones we had to omit to keep this activator template concise, like: eventual consistency, testing, operations & deployment, contract management, versioning, monitoring, logging & debugging and security. Good luck, have fun and let us know about your adventures on the microservice way!