From Java EE To Cloud Native:
The End Of The Heavyweight Era

How to modernize traditional Java EE applications
for cloud-native infrastructure


Why Read This Report

In a world where business models face constant disruption, digital business imperatives are driving architects and technology leaders to embrace modernization to remain competitive. However, traditional monoliths running on Java EE middleware were not designed with cloud-native elasticity and development agility in mind, making them a bad fit for today’s needs.

This report helps architects and technology leaders understand the business impact of modernizing existing Java EE legacy systems. In the modern world of streaming data and multicore cloud computing, businesses need to be prepared for cloud-native approaches and microservices-based architectures in order to survive.

Key Takeaways

Java EE Middleware Servers: The Wrong Approach For Today’s Cloud-Native Infrastructures

Over the years, the use of legacy technologies and expensive Java EE middleware servers has resulted in the pervasiveness of large, monolithic applications. Enterprises are becoming bogged down with long release cycles and increasingly complex applications, leaving teams unable to reach a high level of development productivity while they firefight production systems full of interdependencies that were never designed for cloud infrastructure.

Distributed, Reactive Systems Unlock Your Cloud ROI

Achieving ROI in the cloud starts with designing distributed architectures and decomposing monoliths into individual, decoupled microservices, ideally based on the characteristics defined by the Reactive Manifesto. Distributed microservices enable enterprises to be flexible—able to adapt to complex environments—and quickly roll out new changes without rigid dependencies and coordination. These systems are built for flexibility and resiliency, not just efficiency and robustness. They scale massively at any given moment without compromising infrastructure. Many distributed systems, in particular Reactive Systems, are capable of rapidly identifying, reporting, and self-healing in the face of failure at any system level.


Cloud Native And The Future Of Java EE

By 2019, fewer than 35% of all new business applications
will be deployed in Java EE application servers.

Anne Thomas, Distinguished Analyst at Gartner Group

Global 5000 enterprises that never before considered themselves as technology companies are now faced with digital business imperatives that force them to modernize their infrastructure. On the path to becoming a digital, on-demand provider, development speed is the ultimate competitive advantage. With technology and business innovation inextricably intertwined, businesses must adapt with greater agility than ever before.

Technology leaders (e.g. Amazon, Microsoft, Google, LinkedIn, etc.) and industry analysts (Gartner, Forrester Research, RedMonk) agree that modern system architecture must embrace a cloud-first strategy to capture the benefits of development agility and cost efficiency. Modern systems need to be optimized to reduce latency and architected for resilience and elasticity. As online consumption continues to grow at an exponential rate, modern systems require a highly flexible infrastructure design that can scale at levels far higher than previous conceptions of peak traffic.

For the majority of use cases, however, the Global 2000 do not have the luxury of starting with greenfield infrastructure the way digital natives like LinkedIn, Netflix, or Airbnb did. Changes need to be made within existing frameworks to keep pace with these new web-scale organizations. This presents a challenge to the many organizations with huge investments in legacy Java EE infrastructure, where technical debt and monolithic system architectures require modernization in order to confront the following business risks:

  • Development agility that fails to meet business demands, causing slow and infrequent releases
  • Monolithic applications that are difficult and expensive to scale and aren’t optimized for cloud infrastructure
  • Batch-oriented approaches that limit the ability to react to real-time insights and streaming ‘data in motion’

Complexity kills development velocity and fosters infrequent releases

For a decade or more, enterprise development teams have built their Java EE projects inside large, monolithic application server containers without much regard to the individual lifecycle of their module or component.

Hooking into startup and shutdown events was simple, as accessing other components was just an injected instance away. It was comparably easy to map objects into relational databases or connect to other systems via messaging. One of the greatest advantages of this architecture was transactionality, which was synchronous, easy to implement, and simple to visualize and monitor. Projects were released twice per year, had multi-year lifespans, multi-month test cycles, and large teams to manage everything.

But those days are now at an end. Lacking agility, applications grew enormous, accumulating technical debt, slowing development velocity, and lengthening release cycles. Here’s how:

  • Development team agility is constantly blocked. With no simple development model to support modern systems, the traditional compile-build-deploy cycle for every service cripples productivity. At best, reusability and componentization in Java EE are achieved by sharing packaged bundles between projects. These designs ultimately rely on a single database schema, with project ROI calculated on comparably long production uptimes.
  • Big teams and heavy apps create long release cycles. With ongoing maintenance and the constant addition of new features, monoliths continually grow in size. The burden each team incurs to create, maintain, and manage its app therefore keeps increasing, slowing productivity. Team structures are also heavily influenced by these monolithic software architectures, with multi-month test cycles being perhaps the most visible proof. Projects with life spans longer than five years tend to have huge bug and feature databases. And because development outside the container is impossible, test coverage is poor: there are rarely acceptance tests, and hardly any written business requirements or identifiable domains in design and usability.
  • Complex code bases and fearful engineers lead to technical debt. Instead of business-driven components, the classical monolith has a very technical design and struggles to keep up with constantly changing business requirements. Production releases often occur only twice a year, and introducing new features or hotfixes outside the official release process is a risky venture. Under the motto “never change a running system,” inherited complexity leads to a very cautious update process, carried out by engineers who are understandably afraid of breaking anything.

Scaling monoliths is too expensive for the cloud

From a production perspective, the classical monolith relies on heavyweight infrastructure and rarely scales inside the application server itself. Scaling, therefore, requires vast engineering resources, making it a clunky, expensive, and inefficient process.

  • Monoliths are difficult and expensive to scale. Java EE applications are bound to the thread-per-request model, making it difficult for them to scale to large numbers of nodes, and clustering relies on vendor-specific features because it is not part of the Java EE specification. Servicing a growing number of users requires complete replication of the application server stack, including the underlying infrastructure, and at larger scale the use of vendor-proprietary clustering features becomes mandatory for many installations.
  • Monoliths lead to resource inefficiency. Even with optimizations, scalability is limited to a few hundred nodes and can’t be controlled dynamically to absorb load peaks without risking failures. Instead of scaling intelligently and dynamically, monoliths must be permanently provisioned for peak traffic. This makes scaling and capacity planning difficult and leaves a lot of compute power idle except on rare high-demand occasions. And when things go wrong, application servers have few built-in resilience mechanisms: a single component failure is propagated instead of contained and usually brings down the entire application.
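To make the thread-per-request limitation concrete, here is a minimal sketch in plain JDK Java (the service names and return values are invented for illustration): in the blocking style the request thread is parked for the entire duration of a slow call, while the asynchronous style returns a `CompletableFuture` immediately, freeing the thread to serve other requests.

```java
import java.util.concurrent.CompletableFuture;

// Hypothetical sketch contrasting thread-per-request with asynchronous
// composition. Not a Java EE or Lightbend API; names are ours.
public class AsyncVsBlocking {

    // Blocking style: the calling thread is held for the whole call.
    static String fetchUserBlocking(int id) {
        return "user-" + id; // stands in for a slow JDBC/HTTP call
    }

    // Non-blocking style: returns a future immediately; the calling
    // thread is free to serve other requests while work completes.
    static CompletableFuture<String> fetchUserAsync(int id) {
        return CompletableFuture.supplyAsync(() -> "user-" + id);
    }

    public static String greet(int id) {
        // Compose follow-up work without holding a request thread.
        return fetchUserAsync(id)
                .thenApply(u -> "hello, " + u)
                .join(); // join() only here to produce a demo result
    }
}
```

In a real server the `join()` would be replaced by returning the future to a non-blocking runtime, so no thread ever waits.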

Separate cloud-native operational models from application architecture and developer APIs

Java EE provides two fundamental capabilities: (1) an application architecture with supporting APIs and (2) a multi-application operational runtime. Modern cloud-native solutions separate these capabilities, greatly increasing development team flexibility and agility. Operational models are provided by orchestration solutions (e.g. Kubernetes) or by cloud platforms. Application architectures and developer APIs must enable cloud-native operations by delivering the elasticity and resilience that cloud applications require.

Streams and ‘data in motion’ need to be supported

Since its invention, the way people use the internet has fundamentally changed. A tidal wave of connected devices, sensors, and intelligent home appliances has caused demands to grow exponentially.


In 1995, less than 1% of the world population had an internet connection. Today it’s around 40%. The number of internet users increased tenfold from 1999 to 2013. The world’s billionth user logged on in 2005, and by 2010, that number had doubled to two billion. In 2018, the internet reached four billion users. In response to this rampant growth, the business requirements for modern applications have drastically changed:

  • Real-time streaming data is a first-class citizen in today’s applications. Instead of operating on data that rests in a centralized relational database (RDBMS), modern software increasingly relies on data processing in near real-time. With the shrinking demand for batch-mode processing, the ability to work with time-sensitive data presents an enormous competitive business advantage. Yet Java EE has no native support for streaming “data in motion” technologies like Akka Streams, Apache Spark, etc., and the persistence tools provided (JDBC and JPA) are synchronous and blocking, allowing only one query at a time per connection.
  • Insights and value must be harvested from data. Modern reporting and incident analysis must happen as the data streams into the application, not in retrospect. With high flexibility now a requirement rather than a “nice to have,” production systems must be equipped to resolve issues that weren’t considered, or even relevant, when their initial version went into production. With the help of message-driven systems and their built-in event-log capabilities, it is now possible not only to change the processing of incoming data quickly but also to replay previously captured datasets and extend them with new analytics or actions.
  • Non-traditional data persistence models must be used. Handling large amounts of data within distributed systems requires immutable data, and immutable deployment units surrounding services are the new paradigm for load balancing, high availability, and dynamic resource sharing. While Java EE provides JPA, which works well with classical RDBMS-based persistence, only Command Query Responsibility Segregation (CQRS) and Event Sourcing (ES) work well with immutable data structures and are beneficial for microservices that handle streaming data.
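As a simplified illustration of the Event Sourcing idea mentioned above (the event types and amounts are invented, and this is not a Lightbend API), state is derived by folding an immutable event log rather than by updating rows in place; replaying the same log always reproduces the same state.

```java
import java.util.List;

// Hedged sketch of Event Sourcing with an immutable event log.
public class AccountEvents {
    // An event records a fact that happened; it is never modified.
    record Event(String type, long amount) {}

    // Replay the full event log to rebuild current state at any time.
    static long replay(List<Event> log) {
        long balance = 0;
        for (Event e : log) {
            switch (e.type()) {
                case "Deposited" -> balance += e.amount();
                case "Withdrawn" -> balance -= e.amount();
                default -> { /* unknown events are skipped on replay */ }
            }
        }
        return balance;
    }
}
```

Because the log is append-only, new analytics can be added later and run over the complete history, which is exactly the replay capability the bullet above describes.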

The Shift Towards Real-Time Streaming Systems

To address the shortcomings of traditional monolithic Java EE applications and the heavyweight middleware and infrastructure needed to run them, developers must shift their thinking. Systems and the organizations that build them must increase flexibility, adapt to complex environments, roll out new changes quickly without rigid dependencies and coordination, scale massively at any given moment without compromising infrastructure, and, most importantly, rapidly identify, isolate, and self-heal in the face of failure at any level. The steps for achieving these goals include:

  1. Design distributed systems, ideally following Reactive principles
  2. On the path to microservices, take a lesson from Domain-Driven Design (DDD)
  3. Prioritize resilience before thinking about scaling elastically in the cloud
  4. Utilize a streaming architecture to achieve distribution, concurrency, supervision, and resilience

Design distributed systems and Reactive principles

Designers need to build systems for flexibility and resiliency, not just efficiency and robustness. This necessitates the redesign of existing Java EE applications into more flexible modules that are self-contained, autonomous, and can be scaled independently, as they are responsible for their own business context from individual features down to the relevant data. To make it easier for business leaders, IT professionals, and third-party vendors to innovate and collaborate around these new systems, a common vocabulary was established in the Reactive Manifesto to cover these requirements: Reactive systems are Responsive, Resilient, Elastic, and Message-Driven:

  • Message-driven means more than just non-blocking I/O. Reactive systems at their foundation are powered by asynchronous, non-blocking, message-driven communication. This enables supervision, isolation, and replication of failed processes.
  • Resilience goes further than fault tolerance. The ability to self-heal in an automated and predictable way is treated as part of the full service/application lifecycle and made possible by a message-driven approach to communication.
  • Elasticity means efficient, cost-conscious scalability. A message-driven foundation enables a level of indirection and loose coupling. This helps create systems that can boost performance by scaling out, as well as up, across all physical and cloud infrastructure during busy times, and lower costs by dynamically scaling in/down during slow times.
  • Responsive systems always serve customers. Reactive systems provide a consistently responsive user experience that is highly available, never fails during busy times, and isn’t susceptible to blocked processes and cascading failures.

Take a lesson from events-first Domain-Driven Design (DDD)

Rather than thinking of microservices architecture as Service-Oriented Architecture (SOA) 2.0, developers now have the Reactive Manifesto to help them apply the principles of Reactive systems to real-world domains. The requirements of microservices architecture can best be identified with the help of Domain-Driven Design (DDD), an architectural principle that recommends designing systems to reflect real-world domains, considering the elements, behavior, and interactions between business domains.


Microservices operate on principles similar to those of DDD. Each microservice owns its data and must be responsible for a specific feature or functionality, and be able to work together as a system to form an aggregation of cohesive functionality. A good rule of thumb is to gather services that change for the same reason while separating those services that can change for a different reason. This can be achieved by designing systems that:

  • Use encapsulation to improve flexibility. Microservices must encapsulate all internal implementation details so that external systems utilizing them in the cloud or on-premise never need to worry about the internals. Encapsulation reduces the complexity and enhances the flexibility of the system, making it more amenable to changes.
  • Apply isolation to encourage loose coupling and avoid cascading failures. Changes to a single microservice should have no negative impact on other services. Because synchronous communication introduces a host of interrelated dependencies, this principle aligns with the message-driven approach to distributed systems, enforcing asynchronous, non-blocking, stream-based communication between microservices. As in SOA, RESTful APIs are more suitable than Java RMI, as the latter forces a technology choice on other system components.
  • Separate domains of concern to reduce complexity. Creating microservices based on distinct functions, with zero overlap of concerns with other components, lets designers reduce the complexity of interaction between services. While it is important for each microservice to own its data, there is considerable flexibility in how that data is stored. The data may, of course, live in a traditional database, but it is also common for a microservice implementation to store data in multiple databases: for example, in a SQL database for flexible queries and also in Elasticsearch to provide more advanced search options. Another common approach is to save all data change requests in an event log and also store the data in a more queryable form in one or more databases, following the Event Sourcing and CQRS patterns mentioned above. The advantage here is that the event log can be treated as a stream, allowing consistent and resilient propagation of state changes throughout a system.

Prioritize resilience before thinking about elastic scaling in the cloud

Most applications are designed and developed for blue skies. But all software across all time has failed and will continue to fail. Today’s applications, therefore, must be designed with the inevitability of failure in mind.

With cloud-based microservices architectures, things are even more complex: these applications are composed of a large number of individual services, adding a level of complexity that touches all relevant parts of an application in order to measure availability and recover from failures.

These new requirements force designers to reconsider how they incorporate error handling and fault tolerance into applications. Modern applications must be resilient on all levels, not just a few. Reactive systems therefore place a critical focus on resilience, which enables systems to self-heal automatically and to treat routine errors and failures as expected events that are handled quickly.

The key to achieving this is message-driven service interaction, using streams, which automatically provides the core building blocks that enable systems to be resilient against failures at many different levels. In turn, these building blocks serve as a rock-solid foundation that is capable of scaling in and out elastically across all system resources.

  • Automate supervision to minimize human intervention. The goal of building resilience against failures into systems is to minimize human intervention. Supervision—the ability to identify successful or unsuccessful task completion across the entire system—is at the core of system performance, endurance, and security. Supervision based on a message-driven approach enables location transparency so that processes can run and interact on completely different cluster nodes as easily as in-process on the same VM.
  • Isolate and contain failures to enable self-healing. When isolation is in place, systems can separate different types of work based on a number of factors, like the risk of failure, performance characteristics, CPU and memory usage, etc. Failure in one isolated component won’t impact the responsiveness of the overall system and the failing component will have a chance to heal. A dedicated separate error channel allows redirection of an error rather than just throwing it back to the caller.
  • Master resilience and elasticity to achieve system responsiveness. Modern applications must be resilient at their core in order to scale and remain responsive under a variety of real-world, less than ideal conditions. The result is a consistently responsive system ready for business.
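The supervision idea described above can be sketched with plain Java (the `Task` interface and the restart policy are assumptions for illustration, not the Akka supervision API): a supervisor retries a failing component a bounded number of times and escalates on a separate channel instead of letting the failure propagate to the caller.

```java
// Illustrative supervision loop: contain a failure, restart the
// component, and escalate only when restarts are exhausted.
public class Supervisor {
    interface Task { String run() throws Exception; }

    // Run the task, restarting it on failure up to maxRestarts times.
    static String supervise(Task task, int maxRestarts) {
        for (int attempt = 0; ; attempt++) {
            try {
                return task.run();
            } catch (Exception failure) {
                if (attempt >= maxRestarts) {
                    // Dedicated error channel: report, don't rethrow.
                    return "escalated: " + failure.getMessage();
                }
                // Otherwise restart the component in a clean state.
            }
        }
    }
}
```

The key point is that the caller never sees the intermediate failures; they are handled (or escalated deliberately) inside the supervising component, which is what allows self-healing without human intervention.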

Utilize a streaming architecture to achieve distribution, concurrency, supervision, and resilience

Traditional Java EE monolithic applications rely on technologies and architectural approaches that conflict with the properties needed to create resilient and scalable systems. This is seen most prominently in the use of a central database and distributed transactions to handle all distribution, concurrency, and supervision concerns. These techniques violate isolation by coupling together all the systems that use the database and coordinate transactions, preventing resilience and scalability and leading to a lack of responsiveness.

A Reactive, streaming architecture is the major booster that provides distribution, concurrency, and supervision without the use of a central database and distributed transactions, allowing isolation between services in order to achieve resilience and scalability. Streaming architectures supervise operations and processes at the stream level, ensuring progression of operations through consumption and distribution of streams.
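For a concrete, if simplified, picture of stream-level processing, the JDK itself ships Reactive Streams interfaces in `java.util.concurrent.Flow`. The sketch below (class and method names are ours) shows a subscriber that signals demand explicitly by pulling one element at a time, and that receives failures on a dedicated `onError` channel rather than as an exception thrown back at the caller.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;

// Sketch of stream consumption with JDK Reactive Streams (Flow API).
public class StreamSketch {
    static List<String> consume(List<String> events) {
        List<String> received = new ArrayList<>();
        CountDownLatch done = new CountDownLatch(1);
        try (SubmissionPublisher<String> publisher = new SubmissionPublisher<>()) {
            publisher.subscribe(new Flow.Subscriber<String>() {
                private Flow.Subscription subscription;
                public void onSubscribe(Flow.Subscription s) {
                    subscription = s;
                    s.request(1); // backpressure: pull one element at a time
                }
                public void onNext(String item) {
                    received.add(item);
                    subscription.request(1); // signal demand for the next one
                }
                public void onError(Throwable t) { done.countDown(); } // error channel
                public void onComplete() { done.countDown(); }
            });
            events.forEach(publisher::submit);
        } // closing the publisher signals onComplete downstream
        try { done.await(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        return received;
    }
}
```

Libraries such as Akka Streams build on the same publish/subscribe-with-demand model, adding operators, supervision strategies, and distribution on top.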

Distributed, Reactive Systems Unlock Higher Cloud ROI

Akka has consistently allowed us to cut 80% of infrastructure, or increase overall application performance by 5x, when compared to the traditional systems we replaced.

Akara Sucharitakul, Principal MTS at PayPal

To address the business challenges of modernization, many of the most admired brands around the globe are transforming their businesses with Lightbend, engaging billions of users every day through software that is disrupting their markets. Lightbend Platform provides scalable, high-performance microservices frameworks and streaming engines for building data-centric systems that are optimized to run on cloud-native infrastructure. Supporting both Java and Scala, Lightbend Platform includes the Akka message-driven runtime, Lagom microservice framework, Play web framework, and Scala programming language. With Lightbend, development teams can deliver highly responsive user experiences backed by a resilient, message-driven application stack.

This enables product teams to:

  • Focus on the business logic, not low-level protocols. Not only do developers need to take advantage of multiple cores on a single machine, but at a certain point they must also utilize clusters of machines. Distributed by default, Akka, Lagom, and Play provide managed concurrency out of the box so that teams can focus on the system’s business logic rather than manually wiring together complex, low-level protocols.
  • Eliminate bottlenecks and single points of failure. Reactive applications are difficult to build within thread-based frameworks because there is no model for distribution/distributed communication—Reactive concepts unify distributed and local communication (concurrency) into a single programming model with the same semantics (where local is just an optimization), eliminating the limitations of shared mutable state, threads, and locks. When designed incorrectly, system performance and availability suffer huge losses. Akka, Play, and Lagom employ asynchronous, non-blocking communication with a secure supervision model. This allows components to easily share work across all infrastructure resources, resulting in highly resilient, elastically scalable systems.
  • Realize true ROI from investing in cloud infrastructure. Shared mutable state also makes it difficult, though not impossible, to scale up and out. Ensuring thread-safety is complicated, and performance penalties associated with over-engineering for thread safety are severe. The lightweight component models in Play and Lagom are ideal for cloud and hybrid-cloud deployments, keeping infrastructure costs under control while maintaining a high level of responsiveness to serve customers.
  • Focus on what matters to your business. Lightbend tools and frameworks make it simple and natural to write distributed applications. They promote a decoupled architecture, which allows logical divisions of responsibility within a team and results in systems that are easier to reason about and to understand. For example, Verizon found that using Lightbend technology made their developers 20-40% more productive than when they were using WebLogic.

Enhancing customer engagement with data-driven insights

For many enterprises, using data-driven insights to deepen customer engagement over web, mobile and IoT applications is a focal point of digital transformation. Consequently, architectures are shifting from batch to streaming.

  • Real-time streaming and microservices architectures are unifying. The demands for availability, scalability, and resilience are forcing fast data architectures to become like microservices architectures. Conversely, successful organizations building microservices find that their data needs grow with their organization. Hence, a unification is happening between data and microservice architectures that the Lightbend Platform is uniquely capable of serving.
  • Case Study: Delivering 30-second personalized offers at 100x traffic peaks. At online gaming leader William Hill, personalized offers may only be valid for thirty seconds and must perform perfectly when traffic regularly spikes by 100x.

Unleashing innovation to protect or capture markets

Many digital transformation initiatives at large enterprises are sparked either by The Innovator’s Dilemma, in which a market leader’s successes and capabilities have actually become obstacles to seizing the next wave of innovation, or by the Reinventors, in which successful companies use a key asset to capture a new market.

  • Empowering developers to innovate. Lightbend technologies make developers feel empowered by making things that used to be very hard quite easy and straightforward, allowing them to attempt things they didn’t dare to before. Reactive applications embrace the reality of unplanned errors and adopt a pragmatic “Let It Crash” philosophy, using supervision and self-healing to ensure that impacted components are reset to a stable state and restarted upon failure.
  • Case Study: Real-time streaming for billions of sensors. As a recognized industry innovator, HPE is paving the path for faster, smarter data center infrastructure solutions by adding near real-time insights from billions of sensors around the globe to the InfoSight predictive analytics platform.

Improving agility and time to value

In a world where business models face constant disruption, your digital transformation initiative is likely driving you to embrace speed to remain competitive. However, traditional application monoliths running on Java EE application servers were not designed with development agility in mind. With no simple development model to support modern systems, the traditional compile-build-deploy cycle for every service cripples productivity.

  • Work autonomously, deliver continuously. The asynchronous, message-driven foundation of Lightbend Platform supports the design of Reactive systems, which reduce dependencies between components and enable feature teams to work autonomously and deliver continuously, accelerating time to value by 2x to 3x.
  • Case Study: Reducing build complexity by 40%. MoneySuperMarket Group (MSM), the holding company behind three of the UK’s most popular comparison shopping sites, is propagating new features 700% faster, dramatically accelerating the time to revenue.

Reducing compute costs while scaling elastically

Nearly every digital transformation initiative includes a cloud strategy. And, nearly every industry analyst will tell you, if you want to take advantage of the cloud, you can’t lift and shift. Your applications need to be architected for the cloud. In fact, Gartner is recommending you design every new application to be cloud-native, even if you plan to run it on-premise.

  • Reduce cloud infrastructure and hardware expense. Using asynchronous message passing, Lightbend Platform more densely utilizes commodity hardware compared with traditional systems bloated by locked threads. This helps businesses scale up and out effortlessly on multi-core and cloud computing architectures. Lightbend Platform handles bursty traffic with ease, without requiring hardware over-provisioning, so you can reap the financial benefits of elasticity. The added resilience and cost savings are huge.
  • Case study: Saving 50% on infrastructure with no downtime. After experiencing a horrific — and very public — downtime, Walmart Canada modernized its e-commerce platform, shaving 50% off infrastructure costs, and achieving unprecedented resilience.

Increasing developer happiness and productivity

With your digital transformation, a tremendous amount of development lies ahead. If your developers are hamstrung by old tools, it will be nearly impossible to keep your project on track and your top talent engaged.

  • Making it easier, and faster, to build distributed systems. Because the actor model is at the heart of our application development platform, it relieves developers of explicit locking and thread management, making it easier to write correct concurrent, parallel, and distributed systems.
  • Case Study: Boosting deployment frequency by 400%. A modern Reactive development model that emphasizes proper service isolation has allowed developers at Norwegian Cruise Lines to continually roll out new features and bug fixes and deploy 400% more frequently.
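To illustrate the actor-model point above without pulling in Akka itself, here is a toy actor built from JDK primitives (the class and method names are ours, not Akka's): all messages land in a single-threaded mailbox, so the actor's mutable state is only ever touched by one thread and needs no explicit locks.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Minimal actor-style component: state guarded by a mailbox, not locks.
public class CounterActor {
    private long count = 0; // mutable state, touched by one thread only
    private final ExecutorService mailbox = Executors.newSingleThreadExecutor();

    // tell(): fire-and-forget message send; never blocks the sender.
    public void tell(long increment) {
        mailbox.execute(() -> count += increment);
    }

    // Drain the mailbox and return the final state (demo only).
    public long shutdownAndGet() {
        mailbox.shutdown();
        try {
            mailbox.awaitTermination(5, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return count;
    }
}
```

Many senders can call `tell` concurrently, yet `count` needs no synchronization because messages are processed one at a time; real actor runtimes add supervision, location transparency, and clustering on top of this basic discipline.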

The Days Of Java EE Monoliths Are Ending

After almost two decades of building Java EE monoliths that are slow to evolve, complicated to release, and expensive to maintain, modern enterprises are looking to new system architectures for running their business.

The centralized Java EE middleware approach became ill-suited to the always-on, real-time nature of cloud computing. To achieve the full business benefits offered by the cloud, today’s systems must be lean, flexible, and Reactive. Lightbend helps developers create applications that are responsive, resilient, elastic, and message-driven. Built on a foundation of Domain-Driven Design and based on services rather than plain objects, Lightbend provides the perfect architecture for creating powerful, adaptable applications that thrive in the cloud.