
What Every Technical Leader Should Know About Streaming Data Pipelines

Brad Murdoch EVP, Strategy and Product, Lightbend, Inc.

Does your enterprise need to improve customer service, optimize supply chains, or enhance manufacturing operations? Download this white paper and see how building real-time streaming data pipelines can drive competitive advantage.

Organizations invest in big data to get more insight into their business. They then further invest in data science teams to build and train machine learning (ML) models for making better business decisions. Both are excellent steps, but the greatest value is realized when that data is truly operationalized for real-time business decisions and customer optimization. And that requires streaming data pipelines.

What are streaming data pipelines?

For context, let’s say you want to determine the optimal location to open your next warehouse. Lots of data is gathered: predicted customer locations, historical buying trends, local tax breaks, regional cost of labor, weather patterns, and proximity to key links in the supply chain. This data is used to train a ML model, which takes additional time to refine before coming up with suitable options.

But what if you needed a decision made a thousand times a second, such as fraud prevention for customer orders? How do we apply fraud detection algorithms without slowing order processing, which must remain virtually instantaneous?

Enterprises are under increasing pressure to build real-time intelligence into their business activities: recommendation engines, real-time personalization, real-time risk analysis, real-time supply chain optimization, IoT operational controls, and the replacement of financial services batch or overnight processes. In short, every business can benefit from better use of its data.

Currently, regardless of where raw data comes from—customer data, market data, device data, sensor data, social media feeds, application logs, or transaction logs—extracting value from it often requires several processing stages that may look something like this:

  • Bring the data into the system
  • Cleanse the data of extraneous components
  • Merge with other relevant data points
  • Enrich it with data from existing internal systems
  • Apply analytics to the data
  • Score it against an ML model
  • Pass the results to another application

Linking all these stages together produces what is referred to as a data pipeline. If, however, the data is processed continuously as it is generated or as it arrives in your system, then this becomes a streaming data pipeline.
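The stages above can be sketched as a chain of small processing steps, each consuming and emitting records so data flows through continuously rather than in batches. Here is a minimal, illustrative Python sketch—the stage functions, field names, and the toy rule-based "model" standing in for a real ML scorer are all hypothetical:

```python
# A minimal streaming-pipeline sketch: each stage is a generator that
# consumes and yields records, so data flows through continuously.

def ingest(raw_events):
    # Bring the data into the system
    for event in raw_events:
        yield event

def cleanse(events):
    # Cleanse: drop records missing required fields
    for e in events:
        if e.get("order_id") and e.get("amount") is not None:
            yield e

def enrich(events, customer_db):
    # Enrich with data from an existing internal system
    for e in events:
        e["segment"] = customer_db.get(e["customer_id"], "unknown")
        yield e

def score(events):
    # Toy stand-in for scoring against a trained ML model
    for e in events:
        e["fraud_score"] = 0.9 if e["amount"] > 10_000 else 0.1
        yield e

def run_pipeline(raw_events, customer_db):
    # Chain the stages; in a real system the results would be passed
    # on to another application rather than collected into a list.
    stream = score(enrich(cleanse(ingest(raw_events)), customer_db))
    return list(stream)

events = [
    {"order_id": 1, "customer_id": "c1", "amount": 120.0},
    {"order_id": 2, "customer_id": "c2", "amount": 25_000.0},
    {"customer_id": "c3"},  # malformed; cleansed away
]
results = run_pipeline(events, {"c1": "retail", "c2": "wholesale"})
```

Because every stage is lazy, a record can be scored as soon as it arrives, without waiting for a batch to fill—the essential difference between a batch pipeline and a streaming one.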

Is building streaming data pipelines hard?

Typically, streaming data pipelines introduce significant complexity for application developers, data engineers, and DevOps teams. Different processing engines are often required for different stages of the pipeline, and applying ML models to a real-time stream of data is not exactly a simple exercise.

Additionally, and most importantly, the data never stops coming, so these systems need to be “Always On”. Building reliable, scalable streaming systems is extremely difficult on its own, before even considering data and system faults. Given a never-ending stream of data, even the edge cases that occur very rarely will eventually occur! Planning for failure, and recovering completely and gracefully, is critical...and, again, difficult.
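In practice, the “Always On” requirement translates into supervision around each stage: transient failures are retried, and records that keep failing are set aside so the stream keeps flowing instead of halting. A hypothetical sketch of this pattern—the stage function and dead-letter list are illustrative, not a real framework API:

```python
def process_with_retries(records, stage, dead_letters, max_retries=3):
    # Apply `stage` to each record. Transient failures are retried up
    # to `max_retries` times; records that keep failing are parked in
    # `dead_letters` for later inspection, so the never-ending stream
    # keeps flowing instead of crashing on one bad record.
    for record in records:
        for attempt in range(max_retries):
            try:
                yield stage(record)
                break
            except ValueError:
                if attempt == max_retries - 1:
                    dead_letters.append(record)

def flaky_stage(record):
    # Illustrative stage: one "poison" record always fails
    if record == "bad":
        raise ValueError("cannot process")
    return record.upper()

dead = []
survivors = list(process_with_retries(["a", "bad", "b"], flaky_stage, dead))
```

Streaming frameworks provide far richer versions of this idea (supervision strategies, checkpointing, exactly-once delivery), but the principle is the same: failure handling must be designed in from the start, not bolted on.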

So, yes, building streaming data pipelines and running them successfully in a production environment is hard. But despite the challenges, the business benefits are definitely worth it. Need proof?

Some examples

Capital One digitally transformed their auto loan approvals process from an average of 55 hours down to sub-second, applying 12 ML models concurrently to the data as it’s streaming through their platform. This new process provides a far better customer experience and dramatically reduces the risk associated with their loan portfolio.

Hewlett Packard Enterprise (HPE) is now delivering near real-time insights to their customers using data gathered from over 20 billion sensors sending trillions of metrics each day. Analyzing this streaming data in real time gives HPE the ability to transform their customers’ experience by monitoring infrastructure, predicting possible problems, and recommending ways to enhance performance.

Credit Karma uses a real time streaming data architecture to provide hyper-personalized data analytics to their users. With this architecture Credit Karma significantly scaled its ML model processing, enabling the company to present more information faster to users who were looking to improve their credit scores.

Tell me more

Not all solutions are equal. For more information on getting your enterprise started with real-time streaming data pipelines, download this white paper—and get on the path to future-proofing your business with the ability to make critical decisions and provide unsurpassed customer experiences in real time.
