Earley AI Podcast - Episode 21: AI and the Future of Work - Why Domain Ontologies Are the Hidden Foundation of Employee Experience with Dan Turchin

Impacting a Billion Lives at Work - How Augmented Intelligence, Clean Data, and Human-Centered Design Transform Employee Service

 

Guest: Dan Turchin, CEO and Founder of PeopleReign

Hosts: Seth Earley, CEO at Earley Information Science

Chris Featherstone, Sr. Director of AI/Data Product/Program Management at Salesforce

Published on: November 11, 2022

 

In this episode, Seth Earley and Chris Featherstone speak with Dan Turchin, serial entrepreneur and CEO of PeopleReign, whose personal mission is to positively impact one billion lives at work by using AI and natural language processing to give employees time back from mundane tasks. Dan argues that too many technology leaders focus on the how and miss the why - and that two thirds of the work required to deploy a functional AI virtual agent is really just data management. He walks through PeopleReign's three-step deployment framework, explains how a domain ontology transforms keyword matching into genuine conversational intelligence, shares a cautionary tale about a healthcare company whose AI could not find "Peloton" because the word had never been labeled, and makes the case that the human experience - not the cloud architecture diagram - should always sit at the center of any technology design.

 

Key Takeaways:

  • The term "augmented intelligence" better describes what AI should do for employees - using technology to make humans better at being human, not to replace them.
  • Any job that is dull, dirty, or dangerous is a strong candidate for autonomous intervention; the net new jobs AI creates will require uniquely human traits like empathy, creativity, and judgment.
  • Work will change more in the next thirty years than it has in the previous three hundred - leaders have a responsibility to be intentional now about ensuring that productivity dividend benefits people, not just bottom lines.
  • Two thirds of the work required to deploy an autonomous virtual agent is data management - aggregating where answers live and assessing the hygiene of that data before a single line of AI code is configured.
  • A domain ontology transforms natural language processing from keyword matching into contextual understanding - without it, asking about "airport" returns results about flights when the employee meant the Apple Wi-Fi product AirPort.
  • PeopleReign never commits to a project unless they can demonstrate value within thirty days - starting with one business problem and one success metric is what catalyzes organizational creativity and builds a roadmap with integrity.
  • Technology architecture diagrams that put a cloud in the center instead of a human are symptomatic of a deeper flaw - technology should always be mapped to the human experience, not the other way around.

 

Insightful Quotes:

"Too often in the technology field we focus on the how and we miss the why. For me, my twenty-five year journey has always been about the why, and more importantly, about the who. I'm on a personal mission to impact a billion lives at work." - Dan Turchin

"I contend that work will change more in the next thirty years than it has in the previous three hundred. Now is the time to be thinking about what that new definition of work means for everyone." - Dan Turchin

"I can't tell you how many technology architecture diagrams I've seen that place some version of a cloud in the middle with features orbiting around it. I want to vomit every time I see one of those, because we're missing an opportunity to put the human in the center of that technology ecosystem." - Dan Turchin

Tune in to hear Dan Turchin explain why the most important AI deployment question is never about the technology - it is about the human impact, the data hygiene, and the thirty-day business outcome that proves the concept and opens the door to everything that follows.

 



Podcast Transcript: Augmented Intelligence, Domain Ontologies, and Why 80 Percent of the Work Is Always the Human Problem

Transcript introduction

This transcript captures a conversation between Seth Earley, Chris Featherstone, and Dan Turchin about the future of work, the foundational role of domain ontologies in employee service automation, and PeopleReign's discipline of leading with the human why before ever proposing a technology solution. Dan shares his journey from building a pager-based field service app in the 1990s to training a neural net on one billion IT incidents, explains his three-step deployment process, and illustrates with real customer stories - including a university network outage and a European insurance company that started with one IT use case and eventually expanded to seventy-five departments across twenty-five countries.

Transcript

Seth Earley: Good morning, good afternoon, good evening, wherever your time zone is. Welcome to today's podcast. I'm Seth Earley.

Chris Featherstone: And I'm Chris Featherstone.

Seth Earley: Today our guest is passionate about the future of work - building great teams that build great products and solve hard problems and change lives. He is the CEO of PeopleReign, the company using AI and natural language processing to make technology work for humans. Joining us from San Francisco, please welcome Dan Turchin. Thanks for joining us today, Dan.

Dan Turchin: Great to be here.

Seth Earley: Dan, you had me on your podcast and we covered a lot of the same thinking around AI, virtual assistants, information management, and ontologies. But before we get into that, maybe you could talk a little bit about your journey to where you are today.

Dan Turchin: I'll start by saying we've published over a hundred and fifty episodes of "AI and the Future of Work," one of the most downloaded podcasts on the topic. Of those hundred and fifty-plus episodes, there's only one where I got to geek out about one of my favorite topics - domain ontologies and information architecture. Seth, you and I were like two kids in a candy shop. I'd encourage all your listeners to go listen to that conversation. You put on a master class.

But to your question - too often in the technology field we focus on the how and we miss the why. For me, my twenty-five year journey has always been about the why, and more importantly, about the who. I'm on a personal mission to impact a billion lives at work. I started my seventh company, PeopleReign, applying the principles of AI and machine learning to improve the employee experience and make lives better - not with artificial intelligence but with augmented intelligence, using technology to make humans better at being human.

It started back in the nineties with a company I founded called Aeroprise. It was solving a similar problem to what PeopleReign solves today, but for field technicians. Back then they didn't get credit for doing the work until they got back to a desk. They'd print out a paper ticket and file a report when they returned to headquarters to get credit for the end user they were supporting. Often they were less efficient because the paperwork consumed time that could have been spent in the field helping more employees. My co-founders and I looked at that as a broken process and decided - before there was a smartphone, when there was just the dumb pager - that if you could take the right attributes of a trouble ticket with you on your pager and update it from your pager, you could improve those people's lives. That company ended up being acquired by BMC Software to improve their systems management tools.

Since then, through a series of innovations, we got to the point a few years ago where we could really apply AI and learning at scale. At PeopleReign, we've built what I'd call the world's definitive taxonomy of employee service. We trained a neural net on about a billion historical IT incidents and HR cases. You can use NLP sitting on top of that neural net, sitting on top of what we call a domain ontology, and have a very fluid conversation in twenty-seven languages with an employee, and almost every time autonomously diagnose and resolve their issue. What that means is that we can give an employee back about an hour a week - to do less of what they hate, which is waiting on hold and being treated like a number, and more of what they love, which is creating value, being creative, being empathetic, exercising rational thinking. Twenty-five years later, we've made the human experience less robotic by introducing AI and machine learning. That's why I was put on this Earth - to help the next billion employees feel like the best versions of humans using augmented intelligence.

Seth Earley: That's analogous to the fears people had at the turn of the twentieth century when steam shovels arrived. People worried about ditch diggers being out of work - but people didn't like digging ditches. What became possible with that equipment was building skyscrapers and highways and cities. The challenge, of course, is that there will be disruption. Routine jobs, jobs requiring less expertise, still employ many people. What happens when we automate more of those? What about self-driving trucks and the millions of people who make their living driving?

Dan Turchin: Disruption is real. The World Economic Forum says AI will create a net new fifty-eight million jobs in the next five years. The Qualtrics Research Institute did a study showing eighty-nine percent of employees polled said they need to either reskill or upskill to be relevant in the next decade. I contend that any job that is one of the three D's - dull, dirty, or dangerous - is a really good candidate for autonomous intervention.

The reason I'm so optimistic is that the net new jobs being created are going to be in fields that uniquely require human traits. Think about the fact that seventy-five percent of baby boomers will rely on digital assistants or voice assistants in the home to order their medications in the next three years. That creates an entirely new field of people who will educate that aging population about how to configure and use these tools - analogous to how, twenty-five years ago, print journalists feared what the web would do to their careers. What actually emerged were massive fields like web development, SEO, and search engine management that paid much higher than print journalism ever did. We've learned over the arc of history that you never want to be on the wrong side of innovation. I'm more optimistic than ever about the future of humans and jobs because automation is rapidly replacing dull, dirty, and dangerous work with roles that take advantage of innately human skills.

Dan Turchin: As a society, we need to get comfortable with the dialogue about what it means when automation gives every citizen back a day a week. We will be just as productive in four days as we are in five. That's not a threat - it's a promise, and it's within the next decade. The question for leaders and technologists is: how can we ensure that extra twenty-five percent of productive time goes to improve life and improve society - whether that's pursuing a hobby, being a better parent, or tackling things like global poverty or the climate crisis?

Seth Earley: The risk is that organizations respond to that productivity dividend by cutting twenty percent of the workforce and keeping everyone working five days instead. That would deepen the divide between who gets the benefit and who pays the price. Leadership needs to be intentional about this.

Dan Turchin: I contend that work will change more in the next thirty years than it has in the previous three hundred. Now is the time to think about what that new definition of work means for everyone - whether you're a teacher who will be augmented by learning technology, a police officer who can identify the location of a gunshot faster, or an insurance claims adjuster whose actuarial process will be automated. Those careers won't necessarily disappear, but what it means to do them will change, and it's incumbent on leaders in every profession to embrace these innovative technologies and figure out how to use them to make everyone in the field better rather than exercise fear about what might happen. We're a good fifty years conservatively away from artificial general intelligence that can do essentially any human task. Right now narrow intelligence is having its golden age - let's not fear-monger. Let's embrace the capabilities and use them as an opportunity to rethink what it means to be the best version of a human.

Chris Featherstone: There are still people who consider human-in-the-loop as almost an afterthought in AI. My background is in language and speech, and for certain applications like closed captioning for people with hearing impairments, you need a hundred percent accuracy. Eighty-five percent accuracy in ASR is phenomenal in our world, but it is an abominable experience when fifteen out of every hundred words are wrong. You still need a human in the loop to validate what the machines are doing.

Dan Turchin: Those who engage in fear-mongering about bots taking over really do a disservice. The ASR example you gave is a great one - eighty-five percent accuracy is pretty good, but fifteen wrong words per hundred is an abominable experience for closed captioning. And the current state of AI is actually pretty poor. The best AI today is, on the scale of a human life cycle, maybe an infant. It's a long way from the conceptual leap of Skynet. What AI is really good at is predicting the next event by learning from historical data. When you strip it down to its foundations, it's really applying statistics to data at scale. We're many significant scientific breakthroughs away from anything that truly resembles human intelligence. So rather than fear-mongering, let's focus on using it to improve prosthetics for amputees, to predict where a drought might occur, to prevent famine - real problems it's genuinely well-suited to solve today.

Seth Earley: Let me shift this to ontologies and to how organizations can actually adopt tools like yours. What are the prerequisites to success? How far does an out-of-the-box solution get an organization, and what else do they need to do - their own specific knowledge, content, data, processes, corporate DNA? Maybe define ontologies from your perspective for the audience.

Dan Turchin: All AI is a data problem, and the role ontologies play is fundamental to solving that data problem for employee service automation. To use natural language processing to understand an arbitrary intent expressed in natural language, you need a structured vocabulary that defines how that service is delivered.

For example - an employee asking about an "airport" may have a question about the place you fly a plane from, or they might be asking about AirPort, the Wi-Fi product from Apple. If the context of their request involves Wi-Fi, an outage, or connectivity, it's less likely they're referring to the airplane. Having an ontology - we refer to it as a knowledge graph - that links these concepts makes it really easy for an autonomous virtual agent to understand in context how to go about resolving the issue. Without that structured vocabulary, you're essentially doing Google search: matching a query to keywords and returning documents that may or may not have anything to do with the actual request. Insert a domain ontology behind that request, and all of a sudden you're conversing with an intelligent virtual agent. That's how having a mature domain ontology and information architecture is essential to delivering employee service.
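Dan's AirPort example can be sketched as a tiny knowledge graph: each ambiguous surface form maps to candidate concepts, each concept is linked to related context terms, and the agent scores candidates by how well they overlap the words in the utterance. All of the terms, concept names, and data below are illustrative assumptions, not PeopleReign's actual ontology.

```python
# Toy ontology: ambiguous term -> candidate concepts, each linked to
# related context terms. Everything here is hypothetical.
ONTOLOGY = {
    "airport": [
        {"concept": "travel.airport", "related": {"flight", "plane", "gate", "booking"}},
        {"concept": "apple.airport_wifi", "related": {"wifi", "outage", "connectivity", "router"}},
    ],
}

def disambiguate(term: str, utterance: str) -> str:
    """Pick the concept whose linked terms best overlap the utterance context."""
    tokens = set(utterance.lower().split())
    candidates = ONTOLOGY.get(term.lower(), [])
    if not candidates:
        return "unknown"
    # Score each candidate by how many of its related terms appear in context.
    best = max(candidates, key=lambda c: len(c["related"] & tokens))
    return best["concept"]

print(disambiguate("airport", "my airport keeps dropping the wifi connectivity"))
# -> apple.airport_wifi
print(disambiguate("airport", "which airport does my flight leave from"))
# -> travel.airport
```

The same query term resolves to different concepts depending on context - exactly the step that separates keyword matching from conversational understanding.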

Seth Earley: My philosophy is that organizations still need to get their knowledge and data house in order. When you go into an engagement, what are the preconditions for success? What do they need to be working on today to make these things work?

Dan Turchin: Interestingly, the deployment process at PeopleReign involves three steps, and two out of three - literally two thirds of the prep work to deploy an autonomous virtual agent - has to do with data management. It's a little ironic, because we all think about just the end result - the Siri you talk to, or the Alexa. But in fact, getting to that level of conversational fluidity in an enterprise requires two things first.

The first is data aggregation - a significant exercise in figuring out where all the answers might live. The second is assessing the hygiene of that data. Every enterprise has significant gaps: data is inconsistent and incomplete. Those first two steps are what we call an AI data analysis and a configuration workshop, and a hundred percent of those two steps are just assessing the state of the data. That then feeds the third step: actually configuring the virtual agent. Once your data hygiene is at a point where it's usable, it's relatively easy to bolt the agent onto a user experience - but only once you've made sure it's capable of learning from high-quality data.
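A minimal sketch of the kind of hygiene checks such an assessment might run over a knowledge base - the field names ("title", "body", "updated_days_ago") and the staleness threshold are assumptions for illustration, not PeopleReign's actual process:

```python
from collections import Counter

def assess_hygiene(articles, stale_after_days=365):
    """Flag incomplete, stale, and duplicated knowledge-base articles."""
    report = {"missing_body": 0, "stale": 0, "duplicate_titles": 0}
    titles = Counter(a.get("title", "").strip().lower() for a in articles)
    for a in articles:
        if not a.get("body", "").strip():
            report["missing_body"] += 1          # incomplete: no answer text
        if a.get("updated_days_ago", 0) > stale_after_days:
            report["stale"] += 1                 # likely out of date
    # Duplicates: every extra copy of an identical title counts once.
    report["duplicate_titles"] = sum(n - 1 for n in titles.values() if n > 1)
    return report

kb = [
    {"title": "Reset VPN", "body": "Open the client...", "updated_days_ago": 30},
    {"title": "Reset VPN", "body": "", "updated_days_ago": 900},
]
print(assess_hygiene(kb))
# -> {'missing_body': 1, 'stale': 1, 'duplicate_titles': 1}
```

Running checks like these across aggregated sources is what makes the gaps visible before any virtual-agent configuration begins.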

Chris Featherstone: We had a situation with a healthcare organization - a reseller of healthcare goods - that was trying to surface products including Peloton bikes. Every time someone asked for a Peloton, the AI didn't bring back the right responses. When we went in and looked at the data, the ASR was returning "palette on" instead of "Peloton" because the word had never been trained. As soon as we labeled it correctly, they went to a hundred percent accuracy on that term. It was such a simple fix, but the AI was completely missing all the utterances where people were asking for something like a Peloton bike - because the word itself was unrecognized. A little bit of data labeling goes a long way.
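One simple way to apply the labeling fix Chris describes is a post-ASR normalization pass that maps known mis-transcriptions back to the canonical term. The phrase table below is hypothetical; a production system would more likely add such terms to the ASR engine's custom vocabulary rather than patch the transcript afterward.

```python
# Known ASR mis-hearings mapped to their labeled canonical term
# (illustrative entries only).
CANONICAL_TERMS = {
    "palette on": "Peloton",
    "pellet on": "Peloton",
}

def normalize_transcript(text: str) -> str:
    """Replace known ASR mis-hearings with their labeled canonical term."""
    lowered = text.lower()
    for wrong, right in CANONICAL_TERMS.items():
        lowered = lowered.replace(wrong, right)
    return lowered

print(normalize_transcript("Do you carry the palette on bike?"))
# -> do you carry the Peloton bike?
```

A lookup table this small is exactly the "little bit of data labeling" that recovered every Peloton utterance in Chris's story.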

Dan Turchin: And how do you get the business to understand that? Because often you're not selling to a deep technical crowd - you're selling to HR or a business leader. The way we approach it is by starting with the why.

Just before we started recording this podcast, I was doing a workshop with a university on the East Coast. The problem they described is that when faculty members experience a technology outage during a lecture, it impacts the learning experience. Students who pay a lot of money to be educated by world-class professors don't expect a network outage to impede their learning. So eighty percent of our conversation was focused on: why does that matter? How often does it happen? What is the impact on the educational institution's mission? It's less about why the network went down or how many engineers it takes to diagnose it. It's really about what is the human experience, what happens to the students, what does the faculty member do?

Only once we fully diagnose why it matters and understand the impact - in this case on the learning experience - can we step back and think about what a technology solution might look like: automating the monitoring of network health, using AI to look for patterns in data flows that might indicate when the network is going to go down. But eighty percent of the time is spent understanding the business problem and the human impact. Because then, and only then, can you understand where the data lives, what its current state of hygiene is, what the AI model needs to look like, and how to use humans to label data to augment the solution's accuracy.

Seth Earley: I imagine when you begin with a customer, you're identifying scenarios and use cases first, and using those to focus remediation efforts. Too many organizations, when presented with the large-scale problem of getting their data house in order, see it as too broad, too big, too difficult. But that misses the point - you're looking for a very specific intervention. Not trying to replace a human or an entire process, but understanding where the gaps in a process are and where you can have a targeted intervention. How many use cases do you typically begin with?

Dan Turchin: When you initially speak with a customer, they'll have a dozen different problems that seem like good candidates for AI and machine learning. When you start whiteboarding, the list grows without bound. We feel it's our job as the experts to narrow that list down. It's addition by subtraction. We will never commit to a project unless we can commit to delivering value within thirty days. That means picking not just the right business problem to start with, but also understanding the KPIs - the success metrics the customer will use to measure success.

At PeopleReign we have four applications for three different personas, and they can all be deployed independently: a virtual agent for the employee, two applications that assist the case worker or agent, and one that does predictive analytics for the service owner. We don't come in with assumptions about which application to deploy for which persona in which order - we listen. In the case of that university I mentioned, the recommendation is actually to start with predictive analytics. The virtual agent is often the shiny object, but it's not always the right tool. In this case, the business problem is to proactively remediate network issues - having a professor pause a lecture to ask a virtual agent to troubleshoot network latency is the last thing you want. What you do want is for someone in IT to get proactively notified four hours before any performance degradation. One business problem. Solvable within thirty days. Then and only then do we propose a technical solution.

Seth Earley: You're bounding the problem, bounding the domain, bounding the knowledge, and making it achievable and demonstrable in terms of return. And then over time these things grow. Use cases are testable - you can build evidence: can we support this task? That goes into role definition, job definition. The more you support those use cases, the more sophisticated the capabilities become. It sounds like this becomes a discipline the organization needs to build maturity around.

Dan Turchin: Absolutely - and further to that point, it's a learning process for everyone. A good example is a European insurance organization that has been using PeopleReign for a long time. Their initial vision was simply to automate the IT trouble ticketing process. They started by assisting live agents - when the phone rings, the agent gets a screen pop showing what to do next, reducing research time and decreasing downtime for the employee. Fast forward a few years, and now there are seventy-five different departments within this insurance organization spanning twenty-five countries using the platform. But none of us - not us or them - knew what the art of the possible was until we saw that first use case working. Once they shared the success they were having in IT, facilities had a similar need, and finance, and legal, and sales operations. A mushroom cloud of use cases emerged that all relied on the same underlying technology. Until we understood what was possible, it was hard to get beyond that first use case. Keeping it time-bounded at thirty days with one success metric is almost always the way to catalyze organizational creativity and build a roadmap with any integrity.

Chris Featherstone: I cannot tell you how many organizations I've run into where they want one self-service assistant or bot to rule them all. And it's a pipe dream. You're seeing it from your own perspective, not from the global, departmental, organizational perspective. What you need is a network of agents that can be built off the same structure - but with different data points - and that can share context across scenarios. We're just now getting into the realm of being able to hold context between different functional bots and hand off a coherent picture.

Dan Turchin: I can't tell you how many technology architecture diagrams I've seen that place some version of a cloud in the middle with a bunch of technology features orbiting around it. I want to vomit every time I see one of those, because we're missing an opportunity to put the human in the center of that technology ecosystem. And that's not just a foundational flaw in architecture diagrams - it's a foundational flaw in how we think about the value of technology. We should always think about the human experience first. If we do that, it becomes easy to take the technology and map it to the business process and the human experience, rather than what we've historically done: taking the technology and telling the user to map their process to what the technology can do. That's what I want my epitaph to be.

Seth Earley: We're getting to the end of our time today. It's been a real pleasure, Dan. What's on your agenda these days?

Dan Turchin: Yes, I am a bit of an adrenaline junkie. But I think the fusion of work and life makes us the best versions of humans we can be - that desire to be our best and compete at the highest level. I coach myself that if I can make myself the best version of myself, I don't have to worry about anyone else. And I firmly believe I was put on this planet to help make a billion employees have a better experience at work. I take no greater pride than building teams and building products that change lives by applying the power of technology. If we think the right way about the human experience, we can almost always take the fiction out of the science and use it to make all of us better.

As leaders we have a responsibility to educate everyone out there about the potential for AI and machine learning to make life better. If there's one thing your listeners take away: be optimistic. Be proud. Be a leader. Think about ways we can help other humans with the power of these technologies.

Seth Earley: It's a responsibility for sure. Thank you so much, Dan. It was a pleasure.

Chris Featherstone: Thanks for the inspiring conversation. Great to meet you, my friend.

Dan Turchin: Thanks for having me, guys.

Meet the Author
Earley Information Science Team

We're passionate about managing data, content, and organizational knowledge. For 25 years, we've supported business outcomes by making information findable, usable, and valuable.