Why we rewrote Cloudflow's CLI from Go to Scala

For a few months after I started working at Lightbend, my first assignments revolved around a little-known yet super-powerful product called Cloudflow.

TL;DR: Cloudflow (at the core of Akka Data Pipelines) quickly brings developers up to speed in building multi-component streaming applications with Akka Streams, Spark, and Flink on Kubernetes.

Such applications are built on top of the most popular JVM streaming engines, and Cloudflow provides a batteries-included set of libraries, tools, and plugins for going from prototyping to production.

One of Cloudflow's strengths is its CLI, which comes in the form of a kubectl plugin: with it you can use the familiar kubectl CLI and the kubectl cloudflow command to manage and deploy your Cloudflow applications to the cluster.

The CLI additionally performs a large number of checks and validations that prevent the user from deploying a misconfigured application, and it automates several actions that would otherwise require specific, deep knowledge to perform manually.

At the time I joined Lightbend, the CLI was a classic Go application that had grown organically to the point where technical debt was preventing the engineering team from easily adding functionality and fixing bugs.

Additionally, Go support for Cloudflow's configuration format, HOCON (Human-Optimized Config Object Notation), is pretty poor, and it caused many issues that were not easily fixable.

With this in mind, we decided to completely rewrite the CLI, guided by these requirements:

Requirement                                        Technology chosen
Programming language the team is comfortable with  Scala/Java
Native performance                                 GraalVM AOT
Industry-standard libraries                        Fabric8 Kubernetes Client
Solid ecosystem/stack                              HOCON / Scopt / Airframe Log / Pureconfig

Give me the code!

The demo project we refer to is directly extracted from the Cloudflow CLI and adapted to the example in the article Kubectl's plugins in Java, which happened to be published during our CLI rewrite. For further reference, you should also check out the kubectl plugins documentation.

The code is available here:

For reference, the full code of the Cloudflow CLI is open source here:

For the impatient: you can generate the binary (tested on macOS) by running:

sbt "graalvm-native-image:packageBin"

and use it:

export PATH="$PATH:$PWD/target/graalvm-native-image"
kubectl lp --help
kubectl lp version
kubectl lp list

The libraries we use

In the example, we make use of a small set of great libraries that, among others, markedly helped make this project successful.

  • Scopt is a lovely library for command-line option parsing. It supports sub-commands well, which is important to give a more “Kubernetes-like” experience.
  • Airframe Log is a highly customizable logging library used for “internal” logging.
  • Fabric8 Kubernetes Client is a rock-solid library that covers the entire Kubernetes API, without which nothing would be possible.
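As a sketch of how sub-commands can be wired up with Scopt, here is a minimal parser in the style of scopt 4. The `list` and `version` commands, the option names, and the `Config` fields are illustrative only, not the actual Cloudflow CLI definitions:

```scala
// Minimal scopt 4 sketch of a kubectl-plugin-style CLI with sub-commands.
// All command/option names here are illustrative.
import scopt.OParser

case class Config(command: String = "", output: String = "classic")

object CliParser {
  private val builder = OParser.builder[Config]
  import builder._

  val parser: OParser[Unit, Config] = OParser.sequence(
    programName("kubectl-lp"),
    opt[String]('o', "output")
      .action((v, c) => c.copy(output = v))
      .text("output format: classic, json, yaml"),
    cmd("list")
      .action((_, c) => c.copy(command = "list"))
      .text("list the deployed applications"),
    cmd("version")
      .action((_, c) => c.copy(command = "version"))
      .text("print the CLI version")
  )

  // Returns None (and prints usage) when the arguments don't parse.
  def parse(args: Array[String]): Option[Config] =
    OParser.parse(parser, args, Config())
}
```

Defining each sub-command with `cmd(...)` is what gives the plugin the familiar "Kubernetes-like" `kubectl lp <command>` feel.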

In the "real" Cloudflow CLI we make use of an additional library worth mentioning.

  • Pureconfig is used for validating configurations, as well as providing a typesafe interface for generating valid HOCON. The quality of the produced error messages is astonishing and gives the user a precise indication of what should be fixed.
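To illustrate the kind of validation this gives us, here is a sketch that parses HOCON into a typed case class, assuming pureconfig-generic's automatic derivation. The `AppConfig` shape below is hypothetical, not Cloudflow's actual configuration schema:

```scala
// Hypothetical configuration shape; Cloudflow's real schema is richer.
import pureconfig._
import pureconfig.generic.auto._

final case class StreamletConfig(replicas: Int)
final case class AppConfig(name: String, streamlet: StreamletConfig)

object ConfigCheck {
  // Parse HOCON into a typed case class; failures come back as
  // structured, human-readable messages instead of exceptions.
  def load(hocon: String): Either[error.ConfigReaderFailures, AppConfig] =
    ConfigSource.string(hocon).load[AppConfig]
}
```

A malformed value (say, `replicas = two`) produces a `Left` with an error that names the exact path and the expected type, which is what makes the CLI's configuration errors so precise.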

Why GraalVM

We reserve a special section for GraalVM, our biggest blessing and curse; in this post, we refer to it exclusively as the Ahead-Of-Time (AOT) compiler.

GraalVM is the big enabler of this project, letting us compile Scala code, with both Scala and Java dependencies, directly down to native binaries, meaning users won't incur the JVM's long start-up time.

It's an amazing project, but it comes with a few challenges that we faced and managed to overcome.

Unfortunately, GraalVM native-image compilation must be configured, and this configuration can become costly and risky to maintain (especially when using reflection-heavy libraries such as Jackson). Luckily, we completely own the boundaries of our CLI, so we can automate the generation of that configuration by training it against a real cluster.

This boils down to running sbt regenerateGraalVMConfig every time we make a significant change to the CLI. This command runs an alternative Main of our application, intended to cover most of the possible code paths and invariants while recording the assisted configuration.
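A sketch of what such a training entry point can look like: run the JVM with GraalVM's tracing agent attached so it records reflection and resource usage while a list of representative commands is replayed. `GenerateConfigMain`, `runCli`, and the command list below are illustrative stand-ins, not the actual Cloudflow implementation, which also exercises commands against a live cluster:

```scala
// Illustrative training Main: exercise as many code paths as possible
// while the GraalVM tracing agent records the native-image configuration,
// e.g. java -agentlib:native-image-agent=config-output-dir=src/graal ...
object GenerateConfigMain {

  // Stand-in for the real CLI entry point.
  def runCli(args: Array[String]): Unit =
    println(s"running: ${args.mkString(" ")}")

  def main(args: Array[String]): Unit = {
    val trainingCommands = List(
      Array("version"),
      Array("list"),
      Array("list", "-o", "json"),
      Array("list", "-o", "yaml")
    )
    trainingCommands.foreach(runCli)
  }
}
```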

Another challenge with GraalVM is the long compilation time: currently, the CLI takes about 5 minutes to build the native package with the command:

sbt "graalvm-native-image:packageBin"

While this is a real time sink in the very early days of development, when you try out different configurations and dependencies, once you have a more stable build you can still run the application on the JVM and, in particular, in the sbt interactive shell.

The ability to run on the JVM during development gives us the freedom to quickly try out Kubernetes commands against a cluster by directly executing sbt "run <command>", and we can even use all the standard Java debugging tools.

The last issue we had the pleasure of overcoming is the fact that GraalVM doesn't support cross-compilation. In this case, we simply leverage CI (GitHub Actions) with different runners to compile natively on each target architecture. We based our setup on this prior work.

Benefits of our work

Not only challenges but also delights have emerged from this experience. Being able to directly use Scala and Java dependencies has kept us in the comfort zone of a well-known ecosystem.

Having an “internal” logging system that kicks in when necessary, but normally stays hidden and gives the user the very same smooth experience as a “traditional” CLI, is a big advantage.

In case something goes wrong, we can expose all the internal stack traces by simply adding the option -v trace at the end of our command.

This makes debugging user issues pretty straightforward while preserving the same ergonomics as the original CLI implemented in Go.

You can try it yourself with:

kubectl lp -v trace

A nice little feature is that Jackson provides out-of-the-box serializers for case classes to JSON and YAML. This way, we can provide machine-readable output for any command with little effort.
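A minimal sketch of how this works, assuming jackson-module-scala and the Jackson YAML dataformat module are on the classpath; the `AppSummary` result type is illustrative, not the CLI's real output model:

```scala
import com.fasterxml.jackson.databind.ObjectMapper
import com.fasterxml.jackson.dataformat.yaml.YAMLFactory
import com.fasterxml.jackson.module.scala.DefaultScalaModule

// Illustrative result type; the real CLI serializes its own case classes.
final case class AppSummary(name: String, namespace: String, status: String)

object Renderers {
  // DefaultScalaModule teaches Jackson how to handle case classes.
  private val json = new ObjectMapper().registerModule(DefaultScalaModule)
  private val yaml =
    new ObjectMapper(new YAMLFactory()).registerModule(DefaultScalaModule)

  def toJson(value: Any): String = json.writeValueAsString(value)
  def toYaml(value: Any): String = yaml.writeValueAsString(value)
}
```

With the two mappers in place, wiring `-o json` / `-o yaml` to the right renderer is all that's left.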

Try it yourself with:

kubectl lp -o json
kubectl lp -o yaml

Another very useful by-product is that we can re-use the CLI directly from ScalaTest (importing it as a dependency), giving us full-Scala integration tests, with access to helpful error messages when something fails, without the need to run external processes. We also plan to port the Cloudflow operator to use the very same CRD/CR definitions, immediately eliminating the possibility of discrepancies between the CLI and the operator itself.

Last but not least, we are especially proud of the new structure of the code. There is a strong enough separation of concerns that makes everything easily understandable and testable in isolation without much boilerplate. Implementing a new command is now a matter of separately implementing:

  • parsing the command from the command-line
  • defining what is going to be performed during its execution
  • rendering the result back to the user
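The separation of the last two concerns can be sketched as follows. The trait and the version command below are simplified illustrations, not the actual Cloudflow CLI types:

```scala
import scala.util.{Success, Try}

// Simplified sketch of the separation of concerns: a parser (not shown)
// builds a Command, execute() performs it, render() formats the result.
trait Command[Result] {
  def execute(): Try[Result]
  def render(result: Result): String
}

final case class VersionResult(version: String)

final case class VersionCommand(cliVersion: String)
    extends Command[VersionResult] {
  def execute(): Try[VersionResult] = Success(VersionResult(cliVersion))
  def render(result: VersionResult): String = s"Version: ${result.version}"
}

object Runner {
  // Execution and rendering stay independent, so each can be
  // unit-tested in isolation without boilerplate.
  def run[R](cmd: Command[R]): String =
    cmd.execute().map(cmd.render).getOrElse("command failed")
}
```

Because each piece is a plain value or function, a new command only needs its own small parser fragment, an `execute()` body, and a `render()` body.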


Rewriting a kubectl plugin from a standard Go stack to a more comfortable Scala-based approach has been challenging and rewarding. We are really happy with the result, and we improved on the original in several ways:

  • Improved the user experience in terms of ergonomics, feedback, and validations
  • More features and functionality have already been added easily
  • The code-base is far more familiar and easier to grasp

Our success has been reinforced by the fact that Cloudflow users migrated to the new CLI without noticing the switch, and then started to use and enjoy the new functionality.

If you'd like to learn more about Cloudflow, check out this business brief:



