
Lightbend AMA with Our Experts Podcast Ep. 08 with Jamie Allen

I talked to our Lightbend chief educator, Jamie Allen, for the newest edition of the Lightbend Podcast. He taught me about Microservices, Fast Data and why it's important for a Director to still code. 

Listen on SoundCloud

 

Tonya Rae Moore:    Hey, and welcome to Lightbend AMA podcast with our experts number eight. My name is Tonya Rae Moore, and I am the community and field programs manager for Lightbend. Today, I am going to be talking to our Lightbend educator and the only Lightbender who has the official Good Housekeeping seal of approval, trademarked. Hi, Jamie!

Jamie Allen:    Hello.

Tonya Rae Moore:    Hi! This is Jamie Allen, and he is going to start off by telling us a little bit about his background and what led him to Lightbend.

Jamie Allen:    Ah, goodness. I graduated from college in 1993, and I started off in the consulting world, working for Price Waterhouse. And, you know, I worked in a whole bunch of different platforms and languages and stuff, and I found that I just really enjoyed doing work that involved custom application development. And that was not a good time to be involved in custom application development.

    That was a time when people were getting very involved with SAP, with the R3 rollout, PeopleSoft, you name it. All these packages were suddenly becoming very prevalent. Oracle. Especially in places like Price Waterhouse where you could easily make partner and make six figures by being a developer who focused on specifically these kinds of technologies. 

    I found myself being routed into this data warehousing kind of thing, which was a very immature world. I mean, if you think in terms of people trying to make sense of large amounts of data, at least at that time -- back then, we're talking maybe a gigabyte -- you know, trying to make sense of that data in that world just wasn't for me. 

    So I ended up leaving and going off and doing navigational systems for boats and then going back to consulting and working in the financial sector and working with healthcare and all this kind of stuff. By the end of the dotcom boom, I was pretty much working on my own as an independent. In New York City, really enjoying myself. But then everything came melting down. And I thought, hey, I'll take the summer off. I'm just going to kick it for a while. 

    It turned out it wasn't so easy to find those fat jobs anymore. I went back to my boating stuff. I started a company with a friend, and that company's still around and doing well. Got married, had a couple kids, moved to Philadelphia. Started working for a company called Chariot Solutions there because I just kind of wanted to lay low and make decent money and program. And while I was there, somebody at a client site told me I had to learn Scala and told me that we were going to do Akka-based development in 2009. 

   I didn't know it, had no idea it was going on, wasn't given any training, had to figure it out. That didn't go so well for me. It took a little while for me to internalize everything about the language and this paradigm shift to asynchronous application development. And I found I really enjoyed it. That my mind took to the paradigm pretty well. And not everything you do in programming is like that. 

    For some people, they really understand how to compose functions or something like that, or they really understand how to organize things in an object-oriented inheritance kind of way. For me, the whole message passing and the idea of asynchronous application development just made sense. And so I started doing more of it and started trying to get the rest of the world to do more of it. And, you know, that worked in some places, it didn't work in others. 

    I was very fortunate that in 2012 -- wow, four years ago, almost to the day -- Typesafe, now Lightbend, approached me and asked me if I'd be interested in coming onboard as a consultant. And I was so excited at the prospect. At that point, Typesafe was a year old. And anybody who was in the Scala community really -- not anybody -- a lot of people in the Scala community really wanted to work for them. 

    And I was just honored to even be considered. And I ended up taking the job and going out and doing work for Juniper Networks, who was one of our early investors, helping them leverage Scala and Akka. I can't say exactly how. But then, you know, I started getting more responsibilities. Soon, I was the head of consulting. Then I became the head of consulting and training. And now I'm in charge of the global rollout of both. And I still like to do the technical things. I'm very lucky that I still have the ability to. So . . . 

Tonya Rae Moore:    Yeah, I was going to ask, do you still code, Jamie?

Jamie Allen:    Not so much, but I do hack things up every now and again, because the hardest thing -- I know this is going to sound crazy, but we just rolled out Lagom, and there's no way for me to have any idea how that thing is going to work unless I sit there and I start futzing around with it. I have to. That's the nature of -- people are going to come to me and say I need your expertise in building stuff with Lagom. Well, okay, I have to have some, right? So that's the sort of stuff I typically do.

Tonya Rae Moore:    So you've had kind of a path at Lightbend then. You've had a couple of different hats to wear. What are you officially right now?

Jamie Allen:    Right now, I'm the senior director of global services. And you know how paths are in companies. They can change, so I don't know where I'm going to be heading next.

Tonya Rae Moore:    I understand that, as someone who just had a complete shift in her career path at Lightbend. Tell me a little -- so that means that you're talking to a lot of our customers, basically, right? That's a fancy title for "I talk to customers"?

Jamie Allen:    Pretty much. I spend a lot of time, especially, understanding what their problems are and then trying to figure out how we're going to approach them.

Tonya Rae Moore:    So what are some of those top conversations and top problems that you're seeing right now?

Jamie Allen:    A lot of it has to do with resilience. A lot of the companies out there are trying to figure out how they're going to build applications that stay up in the face of being deployed across a large footprint. So the idea of just building something and putting it out on a server and then maybe putting it out on another server, putting it out on another server, that's going away, right? People are realizing that that's not a scalable or resilient way to deploy systems.

    You can put out there multiple instances of a monolith, but if only one part of that monolith is getting overwhelmed, well, then you have to deploy a whole other monolith and a whole other monolith, when, you know, really, you just wanted to scale that one little component. And we're becoming bigger proponents in the world of what people call microservices, but I think that's a terrible term. I think that instead we should be thinking in terms of not size but isolation.

    We should be focused instead on how are we building services that are completely isolated from every other service. So we've decoupled so many different aspects of our services. We don't have a client between one and another such that, you know, the rules and dependencies of the other service are being enforced on this other service, right? And on top of that, we also don't want to have dependencies shared at all between them. 

    Like in our backend, we don't want to have a util or a miscellaneous or something like that, a kind of jar that gets shared between them. And now whenever you go to upgrade some common dependency, you have to upgrade both systems. It's just a world of pain when you end up with that kind of spaghetti. 

    And then even on the data layer, how we shouldn’t have a database that is shared between services, because if you want to change the schema in some kind of meaningful way, now you have to update both of your services, and that could be extremely painful.
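
To make that idea of isolation concrete, here's a minimal, purely illustrative Scala sketch. The service names, API, and in-memory "store" are hypothetical, not from any Lightbend product; the point is simply that each service owns its own data and the only thing shared between them is the message contract.

```scala
// Published contract of the inventory service -- the only shared artifact.
final case class StockLevel(sku: String, available: Int)

trait InventoryApi {
  def stockFor(sku: String): Option[StockLevel]
}

// Inventory service: owns its own store and schema; nothing here leaks out.
class InventoryService extends InventoryApi {
  private var stock: Map[String, Int] = Map("widget" -> 12) // private store, never shared
  override def stockFor(sku: String): Option[StockLevel] =
    stock.get(sku).map(qty => StockLevel(sku, qty))
}

// Order-entry service: depends only on the InventoryApi contract,
// not on the inventory service's internals, jars, or database.
class OrderEntryService(inventory: InventoryApi) {
  def canFulfil(sku: String, qty: Int): Boolean =
    inventory.stockFor(sku).exists(_.available >= qty)
}

object IsolationSketch extends App {
  val orders = new OrderEntryService(new InventoryService)
  println(orders.canFulfil("widget", 3)) // true
}
```

In a deployed system the `InventoryApi` call would of course cross a network boundary (HTTP, message passing, and so on), but the dependency picture is the same: upgrade the inventory service's internals or its schema and the order-entry service doesn't have to change.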

Tonya Rae Moore:    I saw your tweet the other day that said how you think microservices should be called isolated services. So I think that that's probably how you're voting in 2016, right? Isolated services for president?

Jamie Allen:    Well, it seems like a better option than what we have available to us. That's for sure.

Tonya Rae Moore:    Ooh, #political.

Jamie Allen:    Yeah.

Tonya Rae Moore:    I mean, you've obviously been around for a little while. Why do you feel that microservices are becoming so relevant right now? What's been the push?

Jamie Allen:    I think the push has really been that it is difficult to manage the monolith at scale. There is no free lunch here. When you talk about the idea of putting all these individual, isolated services out there, there is a cost at the operations standpoint that you're not just managing one big footprint and putting it across a whole bunch of different servers. 

    You're instead figuring out, all right, I need three instances for my inventory service, I need five instances for my order entry, I need six -- not six -- seven for my accounts payable. Whatever. By the way, rule of thumb, always use odd numbers of instances, don't use even. But then you have scalable independence, where you can say, you know, if my inventory service is the one, because I only put it on three, that is really struggling to keep up with the load, well, then I can amp that up to five.

    And it has no impact on the other services other than reducing their latency because suddenly they have so much more ability to get information from the inventory service. But the other great benefits to this are that your teams can scale independently. You can have a group of people who work on the inventory service, a group of people who work on the order entry service, and they don't have dependencies on each other aside from their APIs.

    So they can work independently. And that allows your team to scale and go off to do other things. And not have the impact of, okay, now we've got to wrap up this whole big monolith that we're going to deploy all at once. Everybody's got to be done by this date. And this is not going to work. 

    We've seen this not work. And it's funny how many of the things we've been doing for the last 20 years that are antipatterns to the way we should scale our teams, scale our deployments, scale our application development are still prevalent in our best practices and need to be weeded out.

Tonya Rae Moore:    That's interesting to me because last week I was talking to Markus Eisele, our newest Java developer advocate, and he was saying much the same thing. And I'm learning so much about how we definitely had to walk before we could run, but now we can fly. So let's take all this walking out of it. Like, let's just weed it out and move on to the next thing.

Jamie Allen:    Exactly. But there are people out there who are still advocating we should be on our hands and knees crawling across the ground. Stop it.

Tonya Rae Moore:    That's the second tip from Jamie Allen, everybody. Stop. So how are microservices relating to fast data, then, for you?

Jamie Allen:    It's really about consumption in that regard. If I've got mounds of data coming at me at all times, and I'm going to find ways to immediately capture that and start performing some sort of analytical computation on it, transform that data from just being all this data coming at me from all these different devices and all these different clients and everything like that, transform it into information that can then be consumed, well, I've got to make those consumable. 

    So, you know, what we typically -- the way we typically architect these kinds of fast data systems is get that data, harness it, provide interfaces through which other services can consume the information that's been derived from it, and those all represent individual services -- the services asking for the data can be their own microservices, not only the ones exposing that data, right?

    Because you want to segment that data, then, once it's been transformed. Such that, you know, once I derive some information, I may need to provide it to people who need to get it in this format, and I need to provide it to these people over here in a different format and structure. So you're separating those concerns.
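
As a rough sketch of that "capture, transform, expose" shape, here's a small Akka Streams example (assuming Akka 2.6+ on the classpath). The `Reading` and `Alert` types and the in-memory source are hypothetical stand-ins for a real device feed such as Kafka, and the console sink stands in for whatever interface downstream services would consume.

```scala
import akka.Done
import akka.actor.ActorSystem
import akka.stream.scaladsl.{Sink, Source}
import scala.concurrent.Future

object FastDataSketch extends App {
  // Akka 2.6+: an ActorSystem in implicit scope also provides the stream materializer.
  implicit val system: ActorSystem = ActorSystem("fast-data")
  import system.dispatcher

  // Hypothetical raw data arriving from many devices.
  final case class Reading(deviceId: String, celsius: Double)
  // Derived information that other services would consume.
  final case class Alert(deviceId: String, message: String)

  // Stand-in for an unbounded device feed (in a real system: Kafka, MQTT, ...).
  val readings = Source(List(
    Reading("sensor-1", 21.5),
    Reading("sensor-2", 98.3),
    Reading("sensor-3", 19.9)
  ))

  // Transformation stage: turn raw data into consumable information.
  val done: Future[Done] =
    readings
      .filter(_.celsius > 90.0)
      .map(r => Alert(r.deviceId, f"overheating at ${r.celsius}%.1f C"))
      .runWith(Sink.foreach(println)) // stand-in for exposing the derived data

  done.onComplete(_ => system.terminate())
}
```

Each consumer that needs the derived data in a different format or structure would get its own sink or its own downstream service fed from the same transformation, which is the separation of concerns Jamie is describing.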

Tonya Rae Moore:    Okay. So why do you feel that the monolithic data store is becoming an antipattern in working with fast data?

Jamie Allen:    Well, because so much data is coming at you at any given point in time that, you know, you cannot optimize a single store for both reads and writes effectively. And maybe this is a bad way to say it. Maybe it's better to say the monolithic relational data store is becoming an antipattern. Because you see this with DBAs, and they've been -- these poor DBAs have been working their tails off for so long trying to optimize this single store for both reads and writes, and that's an impossible task. 

    It means that over time I'm sitting there trying to say, well, I need to get these writes faster because I need to be able to put the stuff in the database, but then I've got all these read requirements that are -- because of the way the data is structured to make writes faster -- harder to perform, right? So they're always at odds with one another. 

    DBAs have done amazing jobs trying to deal with this over the years, but in this fast data world where so much data is coming at them, there is just no way a single relational data store is ever going to be able to provide both of those capabilities. It becomes the biggest logjam in any architecture. 

    So instead you've seen more and more people moving into what we call the CQRS model, the command query responsibility segregation. And CQRS espouses the idea that you should have different stores for your write concerns and for your read concerns. And this means that you have to think in terms of the price you're paying. I'm going to receive a whole bunch of stuff coming at me, and I can immediately write that out to something like Kafka, right? 

    And then once I've done so, I need to ingest it through some transformation pipeline, which is going to make it available on the read side through another store, maybe something like a Cassandra or Riak or -- I tend to favor Dynamo stores because I'm a big fan of the semantics involved with them. And maybe if I want key value pairs, then I use Riak, or if I want columnar data, I use Cassandra. And both scale really well. But it's that transformation pipeline.

    How am I going to take the data that was coming into Kafka and quickly put it into some sort of readable view, right? Before, the monolithic relational data store was actually doing that for us because the DBAs were putting together things such as views and indexes and writing all these complex joins -- but really we were just exporting the concept of eventual consistency to another paradigm.

     We were saying, all right, well, there's a time latency involved between when the database update occurs and when the index is updated, right? Or when the insert occurs and the index is updated. That was never taken into account. We just assumed that it was automatic. But it wasn't. And now we're saying we're going to make it an external concern to the data store.

    We're going to make it so that we can control that and maybe even make it tunable how much latency there is in between, depending on what our business requirements are.
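
Here's a deliberately tiny CQRS sketch along those lines, with in-memory stand-ins where a real system would use something like Kafka for the event log and Cassandra or Riak for the read-side view. All of the names and types are illustrative; the point is only the separation of the write path from the read path, with the projection in between as the piece whose latency you can now see and tune.

```scala
import scala.collection.mutable

object CqrsSketch extends App {

  // Write side: commands become immutable events appended to a log.
  sealed trait Event
  final case class OrderPlaced(orderId: String, amount: BigDecimal) extends Event

  val eventLog = mutable.ArrayBuffer.empty[Event] // stand-in for a Kafka topic

  def placeOrder(orderId: String, amount: BigDecimal): Unit =
    eventLog += OrderPlaced(orderId, amount)       // append-only, optimized for writes

  // Read side: a projection folds events into a query-optimized view
  // (stand-in for a Cassandra/Riak table). How often you update this view
  // is the tunable latency between the two sides.
  def totalsByOrder: Map[String, BigDecimal] =
    eventLog
      .collect { case OrderPlaced(id, amt) => id -> amt }
      .groupBy(_._1)
      .map { case (id, pairs) => id -> pairs.map(_._2).sum }

  placeOrder("o-1", BigDecimal("42.50"))
  placeOrder("o-2", BigDecimal("10.00"))
  placeOrder("o-1", BigDecimal("7.25"))

  println(totalsByOrder) // e.g. Map(o-1 -> 49.75, o-2 -> 10.00)
}
```

Pushing the write side out to Kafka and the projection into a streaming job gives the same shape at scale, with the write store and the read store each optimized for its own job rather than one relational database straining to do both.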

Tonya Rae Moore:    Jamie, I can tell that you are our in-house educator for a reason. Every time I hear you talk about anything, I learn so much. I do. I always enjoy learning from you. I appreciate you taking some time today.

Jamie Allen:    Thank you.

Tonya Rae Moore:    Were you at the engineering meeting a couple weeks ago?

Jamie Allen:    I was, but only briefly. I had to leave Tuesday night to fly to Riga.

Tonya Rae Moore:    Ah, DevRiga, yeah, of course. Was there anything cool going on at the engineering meeting that you heard, that you can let us in on? Or any shout outs?

Jamie Allen:    Yeah, I think the engineering meeting was largely focused on Lagom because, as a framework, it had so many -- not so many, but it had quite a few dependencies on other systems that were important, such as Akka, Akka Cluster, ConductR, our tool for orchestrating multiple instances of an application across multiple nodes. 

    So you can say you want three instances of something out there on your footprint, and it's always going to make sure three of them are up there. And if one of the nodes goes down, it automatically puts an instance up over in the remaining footprint. So all of these are part of our views of how systems can be deployed in an elastic and resilient way. And so with Lagom representing that fundamental implementation that describes our viewpoint, everybody was very much focused on, you know, all the things that we wanted to do with Lagom [as it was reaching] --

Tonya Rae Moore:    I'm sorry to interrupt. I know I wasn't getting any replies to emails that week.

Jamie Allen:    I know, it was difficult, right?

Tonya Rae Moore:    Well, thank you so much, Jamie. I know that you are one of our busiest men, and I appreciate you taking some time today to drop a little knowledge for some of our listeners.

Jamie Allen:    Thank you, yeah. All the people with Typesafe are extremely busy, that's for sure -- I'm sorry, Lightbend -- are extremely busy, but, I mean, the thing that makes us so great is we feel like we're doing such a great thing to change the world of programming and move it in a direction that we think is fundamentally important to writing resilient and scalable systems.

Tonya Rae Moore:    I've got to say, it makes a difference just across the board. Everybody who I work with, we're just all so excited and just so amazed by what we're doing. It's a really great feeling to work for this company.

Jamie Allen:    It is cool.

Tonya Rae Moore:    All right, Jamie. Check out Typesafe -- see, now you've got me doing it -- check out Lightbend.com for more information about Lagom and Reactive Platform. You can tweet at us! You can tweet @jamie_allen for Jamie, @tonyaraemoore for me, and @Lightbend for Lightbend. You can also use the #Lightbendpodcast hashtag for feedback and suggestions for improvements, because this is always a work in progress. Thanks to everyone who listened!

 
