It's the year 2015, so unless you've been living under a rock for the last decade, you've probably heard that servers and platforms need to go asynchronous in order to scale. But really, how deeply did you dive into the reasons why this need arises?
This talk aims to explain the various reasons and techniques that can be (and often are) used in developing high-performance web applications - from the kernel depths to the high-level abstractions that all contribute to such designs.
We'll start with the lowest level of them all - the network transports we all use and how they impact latency in our systems. Then we will move on to operating systems' socket selector implementation details and the now legendary C10K problem, to see how implementations were forced to change in order to survive the ever-rising number of concurrent connections. Next we'll dive into processor and thread utilisation effects and how parallel programming - using either message-passing or stream-processing style libraries - fits into the grand picture of pursuing the most stable and lowest latency characteristics we could dream of.
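The selector story above can be sketched with the JDK's own non-blocking I/O API: a single java.nio.channels.Selector multiplexes readiness events for many registered channels, which is the mechanism that let servers escape the thread-per-connection model. A minimal, self-contained illustration (using an in-process Pipe instead of real sockets, so it runs anywhere):

```java
import java.nio.ByteBuffer;
import java.nio.channels.Pipe;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;

public class SelectorDemo {
    public static void main(String[] args) throws Exception {
        Selector selector = Selector.open();
        Pipe pipe = Pipe.open();
        pipe.source().configureBlocking(false);
        pipe.source().register(selector, SelectionKey.OP_READ);

        // Nothing written yet: selectNow() reports zero ready channels.
        System.out.println("ready before write: " + selector.selectNow());

        pipe.sink().write(ByteBuffer.wrap("hello".getBytes()));

        // One blocking select() call now reports readiness for any number
        // of registered channels - the core idea behind the C10K answer.
        int ready = selector.select();
        System.out.println("ready after write: " + ready);

        ByteBuffer buf = ByteBuffer.allocate(16);
        int n = pipe.source().read(buf);
        System.out.println("read " + n + " bytes");
        selector.close();
    }
}
```

With thousands of connections, the same single loop replaces thousands of blocked reader threads; the kernel-level implementations behind `select()` (poll, epoll, kqueue) are exactly what the abstract refers to.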
by Viktor Klang
Stream processing, to quote Mugatu, "is so hot right now".
In this presentation we'll explore fast data streaming using Akka Streams, an implementation of Reactive Streams, and how to design robust transformation pipelines—with built-in flow control—that can take advantage of multicore hardware and operate across networks.
We'll discuss possible pitfalls and how to avoid them, as well as explore how we can define immutable pieces of processing logic as data we can reuse via composition—a veritable smorgasbord of stream transformations that transparently take advantage of multicore hardware when executed.
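Running Akka Streams itself requires the Akka libraries, but the Reactive Streams contract it implements is mirrored by the JDK's java.util.concurrent.Flow interfaces (Java 9+). A minimal sketch of the demand-driven flow control the abstract mentions, using only the standard library:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;

public class BackpressureDemo {
    public static void main(String[] args) throws Exception {
        List<Integer> received = new ArrayList<>();
        CountDownLatch done = new CountDownLatch(1);

        Flow.Subscriber<Integer> subscriber = new Flow.Subscriber<>() {
            private Flow.Subscription subscription;
            public void onSubscribe(Flow.Subscription s) {
                subscription = s;
                s.request(1);             // demand-driven: pull one element at a time
            }
            public void onNext(Integer item) {
                received.add(item);
                subscription.request(1);  // signal readiness for the next element
            }
            public void onError(Throwable t) { done.countDown(); }
            public void onComplete() { done.countDown(); }
        };

        try (SubmissionPublisher<Integer> publisher = new SubmissionPublisher<>()) {
            publisher.subscribe(subscriber);
            for (int i = 1; i <= 5; i++) publisher.submit(i);
        } // close() completes the stream once submitted items are delivered

        done.await();
        System.out.println("received: " + received);
    }
}
```

The subscriber never receives more than it has requested - that request/onNext handshake is the flow control that lets a slow consumer throttle a fast producer, in-process or over a network boundary.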
by Jonas Bonér
It doesn’t matter how beautiful, loosely coupled, scalable, highly concurrent, non-blocking, responsive and performant your application is—if it isn't running, then it's 100% useless. Without resilience, nothing else matters.
Most developers understand what the word resilience means, at least superficially, but way too many lack a deeper understanding of what it really means in the context of the system that they are working on now. I find it really sad to see, since understanding and managing failure is more important today than ever. Outages are incredibly costly—for many definitions of cost—and can sometimes take down whole businesses.
In this talk we will explore the essence of resilience. What does it really mean? What are its mechanics and characterizing traits? How do other sciences and industries manage it, and what can we learn from that? We will see that everything hints at the same conclusion; that failure is inevitable and needs to be embraced, and that resilience is by design.
Until recently, concurrency in Java meant: java.util.concurrent and threads.
Threads were originally envisioned as "lightweight processes" - starting a new process for concurrent operations meant too much overhead, and posed the problem of inter-process communication. Threads were supposed to be light and remove both disadvantages - less resource consumption for creation and scheduling, and shared memory.
Today it seems this model has reached its limits. The context switch between threads is not a good match for modern processor architectures, resource needs are still too high for fine-grained concurrency, and shared mutable state is a curse, not a blessing, leading to race conditions, locks, contention. To quote Oracle JVM architect John Rose: "Threads are passé". We will explore different approaches to concurrency below the thread level, and have a look at their advantages and disadvantages. Namely we will look at Quasar Fibers, Clojure Agents, vert.x Verticles and Akka Actors.
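The race-condition point above is easy to demonstrate in a few lines: two threads performing an unsynchronized read-modify-write on shared state can lose updates, while the same counter held in an AtomicInteger stays correct. A small sketch:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class SharedStateDemo {
    static int plain = 0;                          // unsynchronized shared state
    static final AtomicInteger atomic = new AtomicInteger();

    public static void main(String[] args) throws Exception {
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) {
                plain++;                  // read-modify-write race: updates can be lost
                atomic.incrementAndGet(); // atomic update, always correct
            }
        };
        Thread t1 = new Thread(work), t2 = new Thread(work);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println("plain  counter: " + plain);        // often less than 200000
        System.out.println("atomic counter: " + atomic.get()); // always 200000
    }
}
```

The approaches the talk covers - fibers, agents, verticles, actors - all attack this same problem from a different angle: instead of guarding shared memory, they confine state and communicate by message passing or managed mutation.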