Earley AI Podcast - Episode 7: AI, ML, and Customer Experience with Paul Lasserre

From Nuclear Submarines to Predictive Routing - How Applied AI Is Transforming Customer Experience End to End

Guest: Paul Lasserre, Global Segment Lead for Applied AI Solutions at Amazon Web Services (AWS)

Hosts: Seth Earley, CEO at Earley Information Science

       Chris Featherstone, Sr. Director of AI/Data Product/Program Management at Salesforce

Published on: February 1, 2022


In this episode, Seth Earley and Chris Featherstone sit down with Paul Lasserre, Global Segment Lead for Applied AI Solutions at Amazon Web Services, who traces a remarkable career arc: from French naval officer doing counter-piracy work off the coast of Africa and serving on nuclear submarines, through co-founding a wine recommendation startup in the Bay Area, to building Genesys's AI group from scratch to over 100 people, and finally to leading applied AI strategy across industries at AWS. The conversation covers why submarines turned out to be a perfect introduction to machine learning, how to sell disruptive AI ideas internally when product owners insist they are already doing it, and why rules-based systems still outperform ML in the right contexts. It also explores the shift from reactive to proactive customer service through predictive engagement, why contact centers should transcribe every call before taking any action, and why the most dangerous thing a company can do is sprint in front of the AI train, trying to build commoditized capabilities instead of focusing on what will actually differentiate the business.


Key Takeaways:

  • Nuclear submarines introduced Paul to machine learning naturally - determining the position, speed, and course of nearby vessels through passive listening, and classifying what type of vessel it is, are both problems that are almost intractable with explicit rules but become tractable with supervised learning approaches.
  • Internal selling of AI innovation requires running parallel proof-of-concept experiments rather than winning debates - as Jeff Bezos put it, two parallel efforts are better than zero, and once the evidence is clear, people naturally rally around the winner.
  • Rules-based systems still outperform machine learning in the right contexts - routing a Spanish speaker to a Japanese agent is a failure no black-box optimizer should be trusted to avoid, and explicit escalation triggers require human-defined rules, not pattern matching.
  • The winning teams in applied machine learning are not rooms full of software developers but combinations of data scientists and subject matter experts who may not even code - the collaboration between people who understand the technology and people who understand the domain is what produces reliable models.
  • Proactive customer service means detecting anomalous behavior in the customer journey - someone who stops filling out a loan application mid-form - and intervening with exactly the right action through exactly the right channel at exactly the right moment, rather than flooding every customer with unsolicited chat boxes.
  • You cannot manage what you cannot measure: before taking any action on contact center improvement, transcribe the entire month of customer communications, put them on a dashboard, and identify the top three reasons people are calling - then go into a room, analyze it, and plan from evidence.
  • Do not sprint in front of the train - companies that try to outperform AWS, Google, or other hyperscalers on commoditized ML capabilities like transcription, translation, and OCR are wasting engineering talent that should be focused on the differentiating capabilities only that company can build.


Insightful Quotes:

"Customer support, customer experience is answering questions and fixing issues that tend to be the same, recurring very often at a given organization. So it's a great machine learning problem. It's a great pattern recognition and pattern matching problem." - Paul Lasserre

"You can't manage it if you can't measure it. Start by opening the black box - transcribe the entire month of communications with your customers. Transcribe it, analyze it, put it on the dashboard and look at the top three issues people are calling you for before you take any action. Go in a room, analyze it, look at it, understand it, and then you can think about the actions that will diminish this level of issues." - Paul Lasserre

"Don't sprint in front of the train. You can be a large company in the short term on this particular problem, but you will probably not win in the long run. That battle is not meaningful for you. Pick your battle, decide what you want to be known for, leverage the work done by the entire industry, and build on top of it." - Paul Lasserre

Tune in to hear Paul describe how he convinced a wine-skeptical company to fund a Netflix-of-wine recommendation engine in 2012, why predictive routing in a contact center is not just about NPS but about a configurable reward function that balances resolution rate, handle time, customer retention, and upsell potential simultaneously, and why Andrew Ng told him the exact same thing about talented students wanting to build everything from scratch when they should be focused on what they want to be known for.


Contact Paul:

Thanks to our sponsors:


Podcast Transcript: Applied AI, ML, and Customer Experience - From the Navy to AWS and the Future of Proactive Customer Service

Transcript introduction

This transcript captures a conversation between Seth Earley, Chris Featherstone, and Paul Lasserre covering how submarine navigation introduced Paul to machine learning, how to build internal momentum for AI innovation when product owners are defensive, where rules-based systems still beat ML, the mechanics of predictive routing and predictive engagement in contact centers, the AWS Contact Center Intelligence platform, and why companies should stop trying to outbuild hyperscalers on commoditized capabilities and instead focus their talent on what actually differentiates their business.

Transcript

Seth Earley: Good morning, good afternoon, good evening, depending upon your time zone. Welcome to today's podcast. My name is Seth.

Chris Featherstone: And I'm Chris Featherstone. It's good to be with you again, Seth, as always.

Seth Earley: Great to see you. And I understand you had a short week.

Chris Featherstone: Yep - we get well-being days at Salesforce where I work, and today is one of my mental health days that I get once a month.

Seth Earley: And you decided to spend it with us?

Chris Featherstone: Well, it's you and it's also the guest, right? This is the perfect segue into serenity now or goosefraba - whatever the thing is that gets you nice, calm, and centered.

Seth Earley: I try to get a little bit of mindfulness and meditation every day and it makes a huge difference. Before we start, I do want to thank our sponsors: the Marketing AI Institute - check them out, they have some wonderful courses and a conference - Simpler Media, which you might know from their website CMSWire, and of course Earley Information Science.

Today's guest started his career as a French naval officer specializing in counter-piracy work, which has nothing to do with AI - or maybe it will. Maybe he will tell us what that has to do with AI. Today he is the Global Segment Lead for Applied AI Solutions at Amazon Web Services. Please welcome Paul Lasserre.

Paul Lasserre: Thank you. Hi, everyone. Thanks for having me today.

Chris Featherstone: Paul is one of these rare one-of-a-kind talents, but he's also a good friend and I'm excited to have him on. I think the cool part as we get into this podcast is really understanding how you got into it. There's this notion that AI has some mystical elements and properties to it. And it really doesn't. If you just take a simple, straightforward approach, you'll find that the outcomes you're driving toward can help answer a lot of those questions - but also open a thousand more. My good friend Paul Lasserre and I were partners in crime at AWS.

Seth Earley: So Paul, can you tell us a little bit about your background? I understand you were on a nuclear submarine.

Paul Lasserre: Yeah, in the French Navy. And once again, thanks for having me. Super excited. So cool to spend some more time with Chris again.

To come back to your initial question Seth - what is the link between AI and chasing pirates off the coast of Africa? As much as I like to connect the dots backward on this one, I can't really say there was any direct relationship. The closest I can relate my Navy experience to what I've been doing for the past eight or nine years in AI is probably submarines, as you mentioned, where I got the intuition that...

So if you're not familiar with submarines, when you're underwater, there are basically two things that keep you busy. The first is to figure out what's going on around you. You're blind - you're underwater, just listening, it's all passive. So try to mask your eyes, go to the middle of a crossroads, and try to infer the position, speed, and course of all vessels around you. It's a very difficult mathematical problem. The second is classifying: is it a Russian submarine? A whale? A French submarine? What type of vessel?

And the same thing - mathematically it's pretty difficult if you try to program it with explicit rules. But if you approach this problem in a machine learning way and apply some basic supervised learning methods, you tremendously simplify these equations. That was my first moment of: wow, this new thing is very powerful for these types of problems.
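The supervised-learning framing Paul describes can be illustrated with a toy classifier over passive acoustic features. Everything here is invented for illustration (feature names, values, and vessel labels); a real system would learn from large volumes of labeled sonar data, but the point is the same: a few labeled examples replace an intractable tangle of explicit rules.

```python
# Toy 1-nearest-neighbor classifier: label a sound source by the closest
# labeled example. All features and values below are hypothetical.

def nearest_neighbor(sample, labeled_examples):
    """Classify by the nearest labeled example (squared Euclidean distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(labeled_examples, key=lambda ex: dist(sample, ex[0]))[1]

# (dominant_frequency_hz, blade_rate_hz) -> vessel type, all made up
examples = [
    ((60.0, 5.0), "submarine"),
    ((120.0, 9.0), "cargo ship"),
    ((300.0, 0.0), "whale"),
]

label = nearest_neighbor((65.0, 5.5), examples)  # closest to the submarine example
```

Adding a new vessel type means adding labeled examples, not rewriting rules, which is the simplification Paul is pointing at.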

And I used what I understood then to pitch in my business school application. I wanted to combine that with my passion for wine as a good Frenchman. So that's what I did during school - I started a little company with a good friend of mine who was finishing his PhD in deep learning in 2012, 2013. That was pretty early on, not as hot as today. We built a system that could help online wine retailers create a Netflix of wine - better wine recommendations based on people's preferences. The company was called Preferences. That's how I made my transition into AI.

Chris Featherstone: You know, it's like you don't need to be impressed by people who make a career in AI - often it has to do with just how things come together. In your case it was totally by accident.

So how did you get to Genesys? You started some of the deep AI approaches to all of this, especially in customer service - looking at it from the customer experience lens.

Paul Lasserre: At Preferences - the wine thing - we were thrilled, but we could not make enough money to sustain ourselves in the Bay Area. After school we decided not to pursue the project. I was in the Bay Area, married, a kid and a second on the way. I thought: going into wine for a living might not be the best idea. A passion is meant to spend money, not to make money.

So I tried to identify industries where I could apply what I know. Talking with the person who would become my boss at Genesys, I thought customer support and customer experience sounded like a fantastic industry. It's huge. It's not as shiny as wine, so you might not have the same competition there. And at the end of the day, if you think about what customer experience and customer support is all about - it's answering questions and fixing issues that tend to be the same, recurring very often at a given organization. So it's a great machine learning problem. It's a great pattern recognition and pattern matching problem. That's how I got into customer experience.

Once again, 2014-2015 was early - AI and ML were on the CMO's deck as one of the 20 futuristic trends. It was early for many of our customers and investors. So I waited a year or two before it became obvious that the contact center would be definitely changed by this technology. That's when I got the opportunity to start an AI group from scratch with my good friends in R&D, which grew to more than 100 people in a couple of years. We acquired a few companies and started some very exciting partnerships including with AWS, Google, and others.

Chris Featherstone: Your time at Genesys built a lot of the foundations that we took later - I was certainly grateful to have someone accelerating what I was trying to do. In terms of those baseline foundations at Genesys - you built up a team. What was it like to take new concepts and ideas and shop them around to the different product owners and try to get traction? Internal selling is a little different from the proposition to customers.

Paul Lasserre: Good question. It's a double sale - we need to sell the owners and leadership of the company on betting with us, and we need to go and convince all of the product owners. On the latter, we were not initially successful. And I am sure you have seen that at Cisco and other places, Chris - you come in, you see the revolution that is about to happen, you pitch it to people, and some tell you: we are already doing this, there is nothing new. And in fact, they were not.

We could not make it work with a decentralized approach. We ended up building something quite centralized - all of the people on the same team: product managers, engineers, and pre-sales under the same roof. We built critical mass and launched the first AI product before the capability became ubiquitous and decentralized across all the products. It's hard to start decentralized; you get there only after convincing all the owners.

To convince the leadership of the company when it was still early and not as obvious as maybe today - we got some discreet, under-the-radar POCs going with large customers, starting with something we called predictive matching, which became predictive routing. That means you use machine learning to better route the calls and chats that go to customer service representatives, based on the outcome you want to optimize instead of rules. Like getting your NPS to go higher, or controlling the average time a conversation lasts. These are things machine learning systems can do very well. We did some POCs with great successes, and that was the hard evidence to get us started. Then we acquired Altocloud in Ireland and grew from success to success.

Seth Earley: I want to pick up on that thread of going to people with this vision and having them say: "yeah, we're already doing that" - and yet they weren't. I've seen that a lot. Tell me more about the delta between what people thought they were doing versus what your vision was. How do you get to that level of clarity where you can say: no, you are not actually doing this thing we're talking about?

Paul Lasserre: Yeah, and it really comes down to people, relationships, and human nature. There is no one valid answer for all scenarios. I've seen people come along quickly with just concrete evidence and logical demonstration. And I've also seen people take an irrational path, protecting their baby - sometimes it's just emotional. I've been like that in some cases too, so I totally get it.

In cases where you can't get people along with just hard evidence, I fall back on what Jeff Bezos said at Amazon: when two teams are competing on the same project, "two is better than zero." You basically let the things run in parallel and accept a little bit of resource waste. And at one point it becomes so obvious that one is winning that everyone just rallies around the winner.

Seth Earley: So it's running those experiments, funding those POCs, trying different approaches in parallel. There's some experimentation that may be a little bit inefficient, but that way you can actually see how things play out. And if you are doing all that stuff and you're not getting the results, then you're not really doing that stuff. What are your results? What can you measure?

There is an interesting thing when it comes to measurement - you talked about improving NPS and controlling the length of a conversation. Many times those are in opposition. Can you talk a little bit about conflicting objectives in the contact center?

Paul Lasserre: Yeah, that's a good question. In this particular example of routing - some people managing routing strategies for thousands of employees will tell you it's sometimes more art than science, because there are so many parameters and you need to tune it like a musical instrument. And that's where you see the power of human intelligence, by the way. Because if you let a black box optimize for some outcome, every now and then a Spanish speaker will talk to a Japanese agent. There are aspects where there is really no point having a deep learning system - where explicit rules can do the job. Language routing is a good example. And keywords that trigger an escalation - there are plenty of examples where you really want to control the rules.

Number two - how do you control the parameters? It's great if you have one extra NPS point on average, but if all of your conversations are now ten times longer, your CFO might not like it.

What we do is optimize a reward function to define the weight and the limits you want to put on each outcome. The outcomes can be classic contact center metrics like first call resolution, average handle time, number of transfers - and increasingly, more sales and marketing outcomes: NPS, customer retention, upsell, cross-sell. And you can have a nice mix, depending on the queue, with the trade-offs you want for each set of outcomes. That is where good human judgment comes into play.
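The configurable reward function Paul describes can be sketched as a weighted sum over predicted outcomes with hard limits layered on top. This is a minimal illustration, not the Genesys or AWS implementation; the metric names, weights, and caps are all hypothetical.

```python
# Illustrative weighted reward function for scoring a candidate
# agent/customer pairing. Metric names, weights, and limits are made up.

def routing_reward(outcomes, weights, limits):
    """Score a candidate pairing.

    outcomes: predicted metrics for the pairing
    weights:  relative importance per metric (negative = penalize)
    limits:   hard caps; violating one disqualifies the pairing outright
    """
    for metric, cap in limits.items():
        if outcomes.get(metric, 0) > cap:
            return float("-inf")  # e.g. a language mismatch or runaway handle time
    return sum(weights[m] * outcomes.get(m, 0) for m in weights)

# A queue tuned for resolution and NPS, with handle time as a mild penalty:
weights = {"first_call_resolution": 1.0, "nps_delta": 0.5, "handle_time_sec": -0.001}
limits = {"handle_time_sec": 900}  # never accept a >15-minute expected handle time

score = routing_reward(
    {"first_call_resolution": 0.8, "handle_time_sec": 420, "nps_delta": 0.3},
    weights, limits,
)
```

Per-queue trade-offs then become a configuration question - different weights for a retention queue versus a sales queue - which is exactly where the human judgment Paul mentions comes in.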

Seth Earley: Human judgment. Yes. And I like what you said about the fact that sometimes a rules-based system is OK. In one Harvard Business Review article on workplace AI, the argument was that less data is actually going to be better in some situations - in the context of a rules-based or expert-based system, because there is still value in expert-based systems, in rule-based systems, in explicit knowledge capture. You can't go all the way to the end of the spectrum and say it's machine learning and data, data, data.

Paul Lasserre: Yeah, totally agree. There is a bit of hype around AI and people want to have as many AI systems in production as possible. But at the end of the day, you want to apply these technologies for amounts of data where a human mind cannot make sense of the patterns, cannot really understand the logic and connect the dots. That's where the power of a machine learning system that scales almost infinitely makes a huge difference. But with small datasets, why would you want to do that?

Seth Earley: And in cognitive systems - I always say "cognitive AI" is a bit of a misnomer because the systems don't think, but what they do is reduce the cognitive load on the human. I want to talk about the last mile of cognitive AI. You're going to have a general language model that will interpret common terminology. But when it comes to a highly specialized life sciences or technology company, there's specialized nomenclature and specialized knowledge that still needs to be captured and codified. You can't have a general language model apply to that. Can you talk about how that works for AI solutions in contact centers and the explicit knowledge capture piece?

Paul Lasserre: Yeah, and I don't think that's specific to contact centers. The winning association is often machine plus human. It's true in chess and it's true in many industries. We had a conference this year with some great customer success stories around what we call intelligent document processing - the ability to process large volumes of documents semi-automatically. Do most of the job with a machine, have a suggestion given to an expert who is able - for COVID documentation for example - to understand the machine's assessment and quickly make a call based on their medical knowledge, but with the process accelerated tremendously.

In contact centers, it starts with tagging the data. That's what Chris and I were doing for the solution we launched at AWS called Contact Center Intelligence. You use primitives like transcription, processing, and translation - general capabilities. But proper names, medicine names, industry-specific vocabulary - these cannot be trained by default in a general system. And that's where custom language models, custom vocabulary, the ability to add the specifics of your business on top of a general model, is quite powerful.

For that you need people - people who have the expertise, who tag these different semantic elements, create the system. And at the end of the day, often you have a human in the loop, with a confidence level, and you need someone with the expertise to make a call.

Chris Featherstone: There was a video I saw not long ago comparing AI and deep learning to a Rube Goldberg machine - you push the domino and it pushes a ball and that ball pushes a stick. But the key point was that at every step, someone had to be there shepherding it. Are you driving the right outcome? Is there bias? Is the reinforcement learning set up correctly? A lot of folks have this misconception that it's a submit-and-forget hands-off situation and you get this amazing result. It's actually the opposite - it's more hands-on than ever because the data is right or wrong or the wrong shape or format, and you have to normalize all of that.

Paul Lasserre: Exactly. And it shows in the people. With classic software you have a bench of software developers in a room writing great code. But the best teams in the era of machine learning are made of data scientists and subject matter experts - sometimes the subject matter experts don't even code. The collaboration between people who understand the technology and people who understand the specifics of the domain is usually the winning team. You need people who have spent years in a field, seen edge cases, who can rewire the model the right way.

Chris Featherstone: There are very few true AI experts, and the ones actually writing the algorithms will say themselves: this is my best scientific theory, and here is what I intended, and the results are always going to vary. My hat's off to you, Paul, because you were doing early AI work for speech recognition without things like open-source BERT models. We were dealing with a lot of these pieces trying to create a better customer experience. Speech recognition alone has been around a long time. Some people say "I already do that" - but it's like driving a car. Maybe you have a Model T Ford with a top speed of 25 miles an hour. You drive a car, technically. But if you want to drive to visit a friend in California, it's going to be a rough go.

So Paul - going back to your time at Genesys. What was your proudest moment there, and why did you leave? What was the opportunity at AWS?

Paul Lasserre: When I see, from the AWS perspective, the success Genesys still has in machine learning - they are one of our largest users of ML products - it makes me very proud. We went a long way, and it was very fun to start it from zero: design the first PowerPoint, go and do some runs convincing people, start and fail and start differently.

I would say what we did and what is still a work in progress in many organizations is not only coming up with new AI products but then, once you have two, three, four, five different applications or products, creating a new platform - an ecosystem of AI and ML products where they all reinforce one another. For instance: predictive routing where you optimize who talks to whom based on outcomes. Now imagine also optimizing the scheduling of your agents, knowing the average characteristics of the thousand callers on a given day at a given time, the forecast of calls, and the attributes of your agents. Now you map two machine learning systems that each pursue the same outcomes with different boundaries. And you do that with the next system and the next system, and your entire enterprise works together with loosely correlated applications all working against the same goals. That's something we started to do at Genesys. I see it as a trend in the market and it makes me very proud.

Seth Earley: And you're making really great points about the synergies between different components and primitives - assembling Lego building blocks. And then you're seeing lots of incremental improvements throughout the enterprise, because the same things that can answer questions for a call center agent can answer questions for an HR representative, an engineer, a salesperson, a marketer. The same ideas can be applied everywhere in the organization. We speed up the information metabolism of the enterprise - getting everybody faster answers to their questions.

Before we carry on, I want to introduce again Paul Lasserre, Global Segment Lead for Applied AI Solutions at Amazon Web Services. And once again thank our sponsors: the Marketing AI Institute, Earley Information Science, and Simpler Media.

One question I have is about proactive versus reactive customer service. Usually a call center is: you have a problem, you call me, and now I react to that problem. Talk about how the evolution of AI and ML has assisted that evolution toward proactive.

Paul Lasserre: Yeah. And I will just quickly address Chris's question about moving from Genesys to AWS. One of the reasons was that I had become very, very specialized in contact centers. As much as I love contact centers, I did not see myself necessarily building a career exclusively there. When the opportunity at AWS came, I had the chance to apply what I learned at Genesys across a bunch of different industries and technology domains - from media to business intelligence to many enterprise systems families. But to come back to contact centers and proactive versus reactive -

For years, for decades, enterprises and organizations have waited for people to have a problem before reaching out and talking to their customers - usually unhappy customers at that point. And you can't blame organizations for that, because how would you do it differently? Would you want to pay people to just monitor every single customer?

Seth Earley: Call them up and say - do you have any problems now? Is everything OK?

Paul Lasserre: It was just not possible. And now in this world of a treasure trove of data and the ability to have systems understand the boundaries of what a good customer journey looks like, and when someone is going outside of what that good journey is supposed to be - that's when you reach out. And so many organizations today understand that the technology allows a completely different way to talk to customers.

In contact centers, there was the Altocloud acquisition that Genesys made in 2017-2018, which specialized in predictive engagement - which is exactly what we are talking about. You just have a system monitoring what is going on online. If you start an application for a loan online, fill out part of a questionnaire, and then stop - it is likely that something wrong is happening. So don't wait for that person to look up the phone number and call you. First, assess probabilistically whether it's a real issue or just someone who stepped away for coffee. And second, use a second model to assess the best intervention - who is likely to bring this person back? Should it be an offer, assistance, a call, a chat, or just an article? And you can apply this logic across the entire world of customer support.
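The two-step logic Paul outlines - first score the likelihood of genuine abandonment, then pick the lightest intervention likely to help - could look roughly like this. The thresholds, features, and channel names are invented; a production system would use trained models on real journey data rather than hand-written scoring.

```python
# Hypothetical sketch of predictive engagement: score abandonment risk,
# then intervene only when the risk clears a threshold.

def abandonment_risk(event):
    """Toy stand-in for model #1: probability the journey has really stalled.
    A real system would use a trained classifier over journey features."""
    risk = 0.0
    if event["idle_minutes"] > 10:
        risk += 0.5
    if event["fields_completed"] / event["fields_total"] > 0.5:
        risk += 0.3  # invested customers who stall are likelier to need help
    if event["revisits"] > 1:
        risk += 0.2
    return min(risk, 1.0)

def choose_intervention(event, risk, threshold=0.6):
    """Toy stand-in for model #2: pick the lightest action likely to help."""
    if risk < threshold:
        return None  # probably just stepped away for coffee; do nothing
    if event["on_pricing_step"]:
        return "offer"        # e.g. a rate quote or discount
    if event["error_seen"]:
        return "live_chat"    # a human can untangle the error
    return "help_article"     # lowest-friction nudge

event = {"idle_minutes": 12, "fields_completed": 6, "fields_total": 10,
         "revisits": 2, "on_pricing_step": False, "error_seen": True}
risk = abandonment_risk(event)
action = choose_intervention(event, risk)
```

The `None` branch is the important one: it encodes Seth's point that proactivity without a confidence gate degenerates into the unsolicited chat box everyone closes.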

Seth Earley: It's so interesting - you mentioned the right intervention. Because what people really hate when they walk into a store is someone immediately coming up and saying: can I help you? No, just looking. It feels like an intrusion. When I go to websites, the first thing that pops up is a bot saying "how can I help you?" and I want to exit immediately. That proactivity can step over the line and be intrusive.

Paul Lasserre: And you have different ways to act on it. Take Amazon - and I say this just as a customer of Amazon.com - I am always impressed by the proactive nature of Amazon's customer support. It's not that someone is calling you or putting a chat box in your face. But Amazon is super data-driven - we understand what is going on, we can analyze where customer journeys sometimes go left or right, and we act accordingly, with the right help at the time and place where you are most likely to need it.

Proactive can mean simply changing the routes that customers are taking so they are less likely to call you. Now, you will always have people who want to call you, who want to chat, who want to reach you on the channel of their choice on the device of their choice when they want - with their rules. You want to be there for them. There is no shortcut. And that's the beauty of automating what you can - you free up your people for the human touch in what really matters.

Seth Earley: And the critical piece is understanding that journey and understanding where it can go wrong and anticipating it - looking for the anomalous behaviors, the ones that say: wait a minute, they should have done this, they didn't. Why did that happen? If you don't understand that, you can't anticipate it.

Paul Lasserre: That's the key. You can't manage it if you can't measure it. What Chris and I always tell customers and partners on this topic is: just start by opening the black box. Transcribe the entire month of communications with your customers. Transcribe it, analyze it, put it on a dashboard, and look at the top three issues people are calling you for before taking any action. Go into a room, analyze it, look at it, understand it - and then you can think about the actions that will diminish this level of issues.
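Once the transcripts exist, the "top three issues" step Paul describes reduces to tagging and counting. A minimal sketch, with purely illustrative keyword-based tagging - a real pipeline would feed a transcription service's output into a topic or intent classifier:

```python
from collections import Counter

# Hypothetical keyword-to-reason mapping, for illustration only.
REASON_KEYWORDS = {
    "billing": ["invoice", "charge", "refund"],
    "outage": ["down", "not working", "outage"],
    "password": ["password", "login", "locked out"],
}

def tag_reasons(transcripts):
    """Count contact reasons across a month of transcribed calls.
    Each call counts at most once per reason."""
    counts = Counter()
    for text in transcripts:
        lowered = text.lower()
        for reason, keywords in REASON_KEYWORDS.items():
            if any(k in lowered for k in keywords):
                counts[reason] += 1
    return counts

transcripts = [
    "I was charged twice on my invoice",
    "The service is down again",
    "I'm locked out and need a password reset",
    "Why is there an extra charge this month?",
]

top_three = tag_reasons(transcripts).most_common(3)
```

The output of `most_common(3)` is exactly the evidence Paul wants on the dashboard before anyone plans an action.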

Chris Featherstone: Define irony - because how many opportunities are missed by companies gathering customer sentiment off surveys? Which is interesting because to your point on proactive, it can be overbearing and cause a negative experience. And if they are not reactive at the right timing, that also causes a negative experience.

So: you're calling in, you're upset about something, I can make the conversation worse or better - and then I'm going to ask you to take a survey at the end. We know less than 1% of people even want to take a survey. And the only ones who actually do are the ones who are emotionally escalated - people who are happy won't take it. So now you have all these false negatives in your results that skew your NPS and CSAT scores. It makes no sense at all - as opposed to trying to get markers and scores of the conversation at the beginning, in the middle, and at the end, and actually deriving: did they take the optimal journey? Did that optimal journey yield the result they were looking for - which was solving their problem?

The thing that works is: take your voice conversations, analyze them - crawl, walk, run - just start with the recorded calls you already have and understand what is going on. Then maybe get into real-time next-best-action where instead of proactiveness being about always trying to make sure you're OK, you create automated connection points dictated by the customer's own preferences and channel choices.

Seth Earley: And the other piece is: reach out to me when you say you will. There are a lot of process issues and basic blocking and tackling. I'm looking forward to telling my Verizon wireless story - they were going to call me back in three separate situations and never did. Doing what you say you're going to do, understanding the process, understanding those signals - but also just doing the basic blocking and tackling is so important. You might be looking for these hidden insights and hidden signals, but there's a lot of explicit stuff you just aren't doing.

Chris Featherstone: What's more frustrating than having an issue and being given a window like "between 8pm and 8am I'll get back to you"? Anyway - let's get back to it. You mentioned the opportunity of moving to AWS was really the chance to look at so many different AI and ML problems across industries. Give folks listening an idea of what that scope looks like. We always seem to default to customer service. What else do you get to work on at AWS?

Paul Lasserre: Great question. There is work we're doing to extend what we did in contact centers into fulfillment - some great work with companies like Pega and Workato on the fulfillment side, building systems that listen to you, extract insight, and can automate fulfillment so you don't sit waiting for a company that promised three times to call you back. It's all part of the same process and the same journey.

Outside of customer experience, a few areas where we've seen a lot of success with customers, analysts, and partners of AWS in the past year or two:

I mentioned United Airlines, who came on stage at re:Invent in Vegas in December, talking about how they developed, in almost no time, an application using primitives for text recognition and NLP to check very quickly whether a traveler's vaccination form and COVID test were compliant for arrivals from different countries. On average that check used to take about seven minutes per traveler for agents at the gate - a nightmare for airlines. We applied the same logic to insurance claim processing and bank loan processing - automating a process that usually takes a lot of people and training to handle the variety of possible forms.
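The pattern here - OCR the form, then apply simple NLP to the extracted text to make a compliance decision - can be sketched once text extraction has happened. In practice a managed service such as Amazon Textract would produce the text; the required phrases, dose rule, and function name below are invented purely for illustration.

```python
import re

# Hypothetical compliance check on text already extracted by OCR.
# The phrases and dose rule are illustrative, not any real airline policy.
REQUIRED_PHRASES = ["covid-19", "negative"]  # test result must be negative
DOSE_PATTERN = re.compile(r"dose\s*(\d+)\s*of\s*(\d+)")

def is_travel_ready(extracted_text: str) -> bool:
    """Return True if the OCR'd form shows a negative test result and a
    completed vaccination series (dose N of N)."""
    text = extracted_text.lower()
    if not all(phrase in text for phrase in REQUIRED_PHRASES):
        return False
    match = DOSE_PATTERN.search(text)
    return bool(match) and match.group(1) == match.group(2)
```

The point of the sketch is the division of labor: the hard, ML-heavy part (reading messy scanned forms) comes from an off-the-shelf primitive, and the business rule on top stays simple enough to audit.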

We also - and Chris was involved, and I saw that one of your guests was Adam Thurston, an awesome colleague at AWS - we developed a solution with close to 20 well-known companies in media, entertainment, and education. The same concept as in contact centers: open the black box. If you have an archive and you have data, just analyze it, understand it. So the next time you search for something, or want to do a special edition of your last 15 years of magazine content as was the case for one of our customers - instead of hiring people to screen hours of content, you can very quickly create thumbnails semi-automatically, translate and localize content in games in no time, and monetize your content and videos. Especially in an era where cookies are disappearing, optimizing advertising based on content rather than based on who is watching it is also a super exciting theme we work on.

And lastly, at re:Invent in December we announced something called AI for Data Analytics. The thinking is: one of the reasons many organizations today struggle with machine learning is because they cannot find the talent. Data scientists are a scarce resource. They tend to cluster at large technology companies and hot startups. Many organizations would like to do more with machine learning but just cannot find the people.

What if you could empower good data people - people who are subject matter experts, know how to manipulate data, build a dashboard, find insights, but do not code and are afraid of a command line interface? The idea is: meet these people where they are, work with the tools they are already using today, so they can move from descriptive analytics to predictive and prescriptive analytics without having to learn new skills, with no need to code and no need to create models from scratch. Those are some of the most interesting machine learning applications we see, where customers are looking for turnkey solutions that vendors can make easy to install and put in place for the business.

Seth Earley: But they still need to understand their processes. You can't automate a mess. Back to those fundamentals - you need to know what your outcome is going to be, what you're trying to accomplish, what your business problem is, how you're going to measure it.

Paul Lasserre: Exactly. It always starts with that.

Chris Featherstone: What's next for you, Paul? I was so grateful to have you helping me drive a lot of the Contact Center Intelligence work - and congratulations, by the way, on getting that on the main stage at the big AWS customer conference. Paul and I took a very competitive and cooperative approach to all the customer service and contact center work. It started with just the two of us driving the whole program, and then it ballooned, got bigger, and perpetuated past us - which was the goal, though there were parts of the organization that didn't love that.

Seth Earley: You were annoying people.

Chris Featherstone: Well, you know you're creating something disruptive when you start disrupting what other people are doing. We were able to do some phenomenal stuff. And then we moved into the media side - taking these building block primitives and applying them across industries - there is really no one-size-fits-all, but there are building block services you can put together to create an accelerator that can be tailored and customized. So - what's next for you, my friend? What are you working on now that you can share without blowing up compliance?

Paul Lasserre: We don't communicate on roadmaps, as you know, Chris, so I can't share what hasn't been announced. But I can tell you: think about all of the hot use cases in machine learning in the world of enterprise - from industrial AI to enterprise search to content moderation. These are all hot topics where we have the capabilities at AWS.

What really excites me today is going back to our discussion about "I'm already doing it." I'm seeing a trend in the market around companies wanting to develop their own machine learning systems in-house. And the truth is: a small company that is very specialized in a given industry is likely never going to be known for, say, transcription. So my message to customers and partners is often: pick your battle. If a machine learning system is not going to differentiate your business, do something else. Focus your engineers and data science talent on what is core to your business, what you want to be known for.

I was fortunate to have a discussion with Andrew Ng a couple of years ago where I was sharing some of these challenges with engineers. And he told me it's exactly the same for him - he works with some of the most talented students, who could do it all themselves, and he constantly has to get them focused on the thing they want to be known for.

The way I phrase it is: don't sprint in front of the train. Of course a large company might be able to beat AWS or Google in the short term on a particular problem. But you will probably not win in the long run, and that battle isn't meaningful to begin with - it's not relevant for your business. Pick your battle, decide what you want to be known for, and leverage the work done by the entire industry and build on top of it. Most people get that today and apply that logic - and that's why we see so much fast innovation in AI.

Seth Earley: That's great. I love that. Don't sprint in front of the train - you can win in the short term, but you won't survive in the long term.

Chris Featherstone: You know, a phrase we used at Amazon a lot was: don't break into jail.

Seth Earley: Well, listen, we have come to the end of our time today. It really has been a tremendous pleasure speaking with you, Paul. I do want to again remind people of our sponsors: Simpler Media, which is CMSWire, Earley Information Science, and the Marketing AI Institute. And again, Chris - wonderful having you as our partner in crime. And Paul - fantastic having you on. We'd love to have you on again. Tremendous insights. All right guys, thank you so much. Have a wonderful weekend and we will talk to you soon.

Paul Lasserre: Thanks for having me. It was a lot of fun. Bye, everyone.

Chris Featherstone: Take care.

Meet the Author
Earley Information Science Team

We're passionate about managing data, content, and organizational knowledge. For 25 years, we've supported business outcomes by making information findable, usable, and valuable.