Lightbend recently introduced Cloudflow, an open source framework hosted at cloudflow.io that addresses the full application lifecycle for developing, deploying, and operating streaming data pipelines on Kubernetes.

Cloudflow enables users to quickly develop, orchestrate, and operate distributed streaming applications on Kubernetes. It lets you break a streaming application down into smaller components and wire them together with schema-based contracts, and it integrates with popular streaming engines such as Akka Streams, Apache Spark, and Apache Flink. Cloudflow also comes with a comprehensive CLI tool for managing, scaling, and configuring your streaming applications at runtime. With these powerful abstractions, Cloudflow lets you define, build, deploy, and evolve even the most complex streaming applications.

  • Develop: Focus only on business logic, leave the boilerplate to us.
  • Build: We provide all the tooling for going from business logic to deployable Docker image.
  • Deploy: We provide Kubernetes tooling to deploy your distributed system with a single command, and manage durable connections between processing stages.
  • Operate: With a Lightbend subscription, you get all the tools you need to provide insights, observability, and lifecycle management for evolving your distributed streaming application.

As data pipelines become first-class citizens in microservices architectures, Cloudflow gives developers data-optimized programming abstractions and run-time tooling for Kubernetes. In a nutshell, Cloudflow is an application development toolkit comprising:

  • An API definition for Streamlet, the core abstraction in Cloudflow.
  • An extensible set of runtime implementations for Streamlets. Cloudflow today supports popular streaming runtimes such as Spark Structured Streaming, Flink, and Akka Streams.
  • A Streamlet composition model driven by a blueprint definition.
  • A sandbox execution mode that accelerates the development and testing of your applications.
  • A set of sbt plugins that package your application into a deployable container.
  • The Cloudflow operator, which is a Kubernetes operator that manages your application lifecycle.
  • A CLI, in the form of a kubectl plugin, that facilitates manual and scripted management of the application.
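The Streamlet idea can be pictured with a small, self-contained sketch. Note that this is plain Scala illustrating the concept of typed inlets and outlets and per-record logic; the types below (`Inlet`, `Outlet`, `Streamlet`) are simplified stand-ins, not the actual Cloudflow API.

```scala
// Conceptual sketch of Cloudflow's Streamlet model in plain Scala.
// These types are simplified stand-ins, NOT the real cloudflow API:
// a streamlet declares typed inlets and outlets, and a blueprint
// wires an outlet of one streamlet to a compatible inlet of another.

final case class Inlet[T](name: String)
final case class Outlet[T](name: String)

// A streamlet transforms records arriving on its inlet into
// records emitted on its outlet.
trait Streamlet[In, Out] {
  def inlet: Inlet[In]
  def outlet: Outlet[Out]
  def logic(record: In): Out
}

final case class SensorData(deviceId: String, value: Double)
final case class Metric(deviceId: String, normalized: Double)

// Hypothetical streamlet: normalizes raw sensor readings into metrics.
object MetricsProcessor extends Streamlet[SensorData, Metric] {
  val inlet  = Inlet[SensorData]("in")
  val outlet = Outlet[Metric]("out")
  def logic(record: SensorData): Metric =
    Metric(record.deviceId, record.value / 100.0)
}

object Demo extends App {
  println(MetricsProcessor.logic(SensorData("turbine-1", 42.0)))
}
```

In real Cloudflow applications the inlets and outlets carry Avro-backed schemas, which is what makes the schema-based contracts between streamlets enforceable at composition time.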

The different parts of Cloudflow work in unison to dramatically accelerate your application development efforts, reducing the time required to create, package, and deploy an application from weeks to hours.
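The blueprint mentioned above is what ties these parts together: it names the streamlet instances in an application and declares the connections between their outlets and inlets. As a rough illustration, a blueprint is a small HOCON-style configuration file along the following lines; the streamlet class names and instance names here are hypothetical.

```
blueprint {
  streamlets {
    // instance-name = fully.qualified.StreamletClass (hypothetical names)
    sensor-ingress = sensors.SensorDataIngress
    metrics        = sensors.MetricsProcessor
  }
  connections {
    // wire an outlet to one or more compatible inlets
    sensor-ingress.out = [metrics.in]
  }
}
```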

To read more about Cloudflow concepts, check out our documentation.

Getting Started with Cloudflow

Cloudflow’s source code is available on GitHub at https://github.com/lightbend/cloudflow.

The project documentation contains all you need to know to get started with Cloudflow. Developers can get started in one of two ways:

  • Deploying to a local JVM: If you don’t have access to a Kubernetes cluster and cannot create one, you can still see what it is like to develop a Cloudflow application. The Sandbox provides a local runner for applications.
  • Deploying to Kubernetes: We provide an install script to configure a GKE cluster. If you already have a GKE cluster, you must make sure the prerequisites are met. Cloudflow itself only depends on Kubernetes, and if you want to run somewhere besides GKE, change the install script accordingly.

Regardless of which approach you use, you can start with our sample application, which simulates processing data from a wind turbine farm.
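To give a feel for what the sample does, the kind of per-record processing it performs can be sketched in plain Scala as below. This is an illustration only, not the sample's actual code; the field names and validity thresholds are hypothetical.

```scala
// Hypothetical illustration of wind-turbine data processing:
// validate simulated turbine measurements and split them into
// valid and invalid streams. Field names and limits are invented
// for this sketch; they are not taken from the sample application.

final case class Measurement(turbineId: String, windSpeedMps: Double, powerKw: Double)

object TurbineValidation {
  // A reading is plausible if wind speed and power output are
  // non-negative and within loose physical bounds (hypothetical).
  def isValid(m: Measurement): Boolean =
    m.windSpeedMps >= 0 && m.windSpeedMps < 120 && m.powerKw >= 0

  // Split a batch into (valid, invalid) readings.
  def partitionReadings(ms: Seq[Measurement]): (Seq[Measurement], Seq[Measurement]) =
    ms.partition(isValid)
}

object SampleDemo extends App {
  val readings = Seq(
    Measurement("t-1", 12.3, 850.0),
    Measurement("t-2", -4.0, 300.0) // negative wind speed: invalid
  )
  val (valid, invalid) = TurbineValidation.partitionReadings(readings)
  println(s"valid=${valid.size} invalid=${invalid.size}")
}
```

In the real application, each stage of this kind of validation pipeline would be its own streamlet, connected to the next through the blueprint.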

Beyond Open Source: Production Tooling and Support

Lightbend offers commercial features for Cloudflow that help you productionize Cloudflow applications and operate their full lifecycle.

Our UI provides a clear, end-to-end picture of the health and performance of your application. It makes it easy to quickly pinpoint unhealthy streamlets and visualize processing bottlenecks, and context-sensitive charts let you drill into the specifics of a problematic streamlet.

The production tooling provides additional Kubernetes operators to augment the operational experience of Cloudflow.

  • An installation operator will remove the need to run bash scripts and Helm charts from outside the cluster, allowing for a much smoother installation experience.
  • The same operator will support upgrading installations to new versions or patches, along with other installation-related operations.
  • An application-level operator controls metrics export and UI integration, and will be the basis for roadmap features like metadata management and autoscaling.

A subscription to Lightbend Platform comes with these features, full support, and, of course, the right to use the rest of the Lightbend Platform.

Join the Community!

We welcome discussion, questions, and contributions. There are two ways to interact with our community.

Please use whichever forum you find most convenient and appropriate.

Please check our Code of Conduct. Be kind, courteous, and empathetic and you’ll be good!

GO TO CLOUDFLOW.IO
