Earley AI Podcast - Episode 1: Omnichannel Virtual Assistants with Chris Featherstone

Why Every Chatbot and Virtual Assistant Is an Information Retrieval Problem - and What to Build First

Guest: Chris Featherstone, Business Development Lead, AI and Speech Language Services at AWS

Host: Seth Earley, CEO at Earley Information Science

Published on: September 24, 2021


In this inaugural episode of the Earley AI Podcast, Seth Earley is joined by Chris Featherstone - Business Development Lead for AI and Speech Language Services at AWS - for a wide-ranging conversation on omnichannel virtual assistants. Seth and Chris met at a conference on call center innovation, where Seth's opening question about the role of ontologies and taxonomy in AI systems stopped Chris short: everyone else jumped straight to solutions, but Seth was asking about foundations. That shared instinct for foundation-first thinking shapes everything in this discussion. They cover the information retrieval continuum from basic search to intelligent virtual assistants, why voice and text channels have fundamentally different design requirements, how to identify where to start, what skills are genuinely needed (and which are commonly misapplied), and why the organizations building knowledge foundations now will be the ones with competitive advantages when virtual assistant capabilities reach maturity.

 

Key Takeaways:

  • Every chatbot and virtual assistant is fundamentally an information retrieval problem - the sophistication of the interface does not change the underlying requirement for well-structured, intentionally curated knowledge that the system can actually retrieve and act on.
  • The information retrieval continuum runs from basic search through knowledge portals, virtual agents, and intelligent virtual assistants, and the same knowledge architecture investments that improve search also improve every more sophisticated application built on top of it.
  • True omnichannel means seamless continuity of context across devices, channels, and handoffs - including the critical moment when a bot escalates to a human agent who must receive the full conversation history, not start from scratch and force the customer to repeat everything.
  • Voice and text channels require fundamentally different design approaches: voice enables long-form sentiment analysis, interruption detection, tone inference, and real-time keyword triggering for automated actions, while text loses all of that nonverbal signal and must compensate through other behavioral data.
  • The biggest skills gap in virtual assistant implementations is not technical - it is the misassignment of data scientists and business analysts to conversation design work that requires a different kind of thinking: understanding customer psychology, cultural and demographic context, and what the user is actually trying to accomplish, not just what data is available.
  • Knowledge management for customer service and virtual assistant development run as parallel work streams in most organizations, when in fact they must be integrated: bots are only as good as the knowledge they can retrieve, and AI projects that pull funding away from knowledge foundation investment are undermining themselves.
  • You cannot outsource your competitive advantage: organizations that treat their virtual assistant and knowledge capabilities as vendor-managed black boxes will find themselves blindsided by competitors who own those capabilities internally - and by the time the urgency becomes visible, it may already be too late to catch up.

 

Insightful Quotes:

"A chatbot is simply a mechanism to retrieve information. Just like any other mechanism, it requires certain data sources, some mechanism for interaction, and increasing levels of sophistication in information architecture. The 300-page document doesn't help - the hundred search results returned don't necessarily help. People want an answer. That is what these systems need to provide, and the only way they can do that is if the knowledge has been structured to make it possible." - Seth Earley

"The difference between a reactive system and a true virtual assistant is the difference between looking things up and actually thinking. I can have an environment that is super good and has really rich data - but the minute it doesn't have that information and it fails, what does it do? It escalates to a human, because humans are proactive and can free-think for what you need even when they don't have the context. Moving toward truly proactive systems means layering behavioral science and behavioral data on top of the information retrieval foundation - and that is a hell of a lot harder to design." - Chris Featherstone

"You can't outsource your competitive advantage. This cannot be a black box. These are skills that need to be built internally - and in the next five years, companies are going to be caught flat-footed. We had a publisher who found a competitor was beating them to market on textbook sales by six months. The competitor was using a component repository. The team had been trying to get that project funded for three years, kept getting shut down, the CEO finally approved it - and it was already too late. They had lost the business." - Seth Earley

Tune in to hear Seth describe building the Ozzie Business Insurance Expert - a search application the client insisted on putting an avatar on top of, which Seth thought was a bad idea at the time and now considers brilliant - and why he believes building internal helper bots to assist agents before deploying anything customer-facing is one of the most underutilized strategies for getting virtual assistant implementations right the first time.

 

Watch on YouTube

 

 

Podcast Transcript: Omnichannel Virtual Assistants - Foundations, Voice vs. Text Design, and the Knowledge Architecture That Makes It All Work

Transcript introduction

This transcript captures the inaugural Earley AI Podcast session between Seth Earley and Chris Featherstone covering the full landscape of omnichannel virtual assistant design - from the information retrieval continuum that underlies all of these systems, to the unique challenges of voice versus text channels, to the organizational skills, governance, and knowledge architecture investments that determine whether these initiatives succeed or stall.

Transcript

Seth Earley: Good afternoon, good morning, good evening - depending upon your timezone. Welcome to our webinar. We're going to be talking about omnichannel virtual assistants and how we need to think about those versus other mechanisms for communication - chat and virtual assistants that can cut across both voice and text.

My name is Seth Earley and I am founder and CEO of Earley Information Science. We are a professional services firm that works in the area of information management - making information more findable, more usable, more valuable. We do projects across multiple industries to improve things like e-commerce performance, support digital transformations, replatforming, and customer experience enhancements. I've also written a book called The AI-Powered Enterprise, which won the Axiom silver award for books on AI in business. For the first five or so people who reach out, I'd be happy to send a copy - just send me your mailing address.

I'm joined by Chris Featherstone. Chris, do you want to give a thumbnail of your background?

Chris Featherstone: Thanks for the opportunity. I'm Chris Featherstone and I lead business development for our AI and speech language services at AWS, specifically around speech recognition. I've had the opportunity to work with a number of customers across all industry types, centered around how to maximize speech recognition and utilize AI services to pull really rich data out of spoken conversations - both in short form, like intent-based interactions, and in long-form unstructured conversation. Thanks Seth, I appreciate the opportunity to speak with you on this topic.

Seth Earley: What AWS has done is they've done a lot of the heavy lifting in terms of the heavy-duty mathematics and machine learning to provide platforms that can support people at multiple levels - you can be a data scientist or software engineer who goes in and tweaks algorithms, or you can be a business user and actually use it to configure solutions for business problems. There's a whole range of things we'll cover in future sessions.

Some of you may have seen Ritesh Jugon was originally scheduled to be one of our panelists but he had a family emergency. So Chris and I are going to carry the conversation today.

Here's what we want to cover: the information retrieval continuum and thinking about virtual assistants and chatbots on that continuum. I always think that a chatbot is simply a channel and a mechanism to access information - we still need to have that information in a good structure. We'll talk about what we mean by omnichannel, the differences between voice and text channels, some of the nuances around voice-to-text and text-to-voice, content classification and analytics, and how you can provide contextual triggers for human agents by monitoring a conversation and surfacing content that helps the agent in real time. And then thinking about when you're getting started, what do you need to look at in terms of use cases, where should you start, and what are the obstacles to going to production and really scaling these things.

Seth Earley: Let me start with the information retrieval continuum. A chatbot is simply a mechanism to retrieve information. Just like any other mechanism, it requires certain data sources, it requires some mechanism for interaction, it requires increasing levels of sophistication in information architecture, different nuances around the experience, and the enabling technology.

When you think about search as a starting point - you can apply it to any text source, you don't need metadata, everything's better with metadata but you don't have to have it. It's a search box and a list of documents. When you start moving up the continuum toward more sophisticated information retrieval, you can actually use the same knowledge architecture to support these other classes of application. Knowledge portals are getting more rigorous in terms of defining the structures of the data, defining use cases, integrating multiple sources. There's a case study in my book about Applied Materials - they had about 14 different data sources that needed to be integrated through a knowledge portal.

Then you start looking at virtual agents and intelligent systems. There's no hard and fast definition between them, but what you're now doing is recognizing language and intent to provide an answer - not just to provide a list of documents. People don't want 100 documents where one of them is a 300-page policy. They want an answer. We've done this for insurance companies where you have very large policy documents but you just need to surface a specific piece of information for a specific question.

And then the most sophisticated applications - the intelligent virtual assistants - where you're able to have a conversational interaction with greater levels of context depth, information sources, and the ability to switch context as necessary. We built the Ozzie Business Insurance Expert many years ago. It was a search application - and the client wanted to put an avatar on top of it. I didn't think that was a good idea at the time. In retrospect, it was brilliant, because people would say "good morning, how are you today?" to the avatar. But again - it was a retrieval mechanism first.

What are your thoughts on this continuum, Chris?

Chris Featherstone: I think the discussion warrants some context about our history, because the key was that you and I attended a conference together, and it was fun because we just struck up a conversation about the topic being discussed - I think it was centered around call centers and how to provide innovation. The person presenting was trying to apply old methods of thinking to next-generation approaches, and that's just not going to work. After I introduced myself, I remember the first question out of your mouth was: how important do you believe that ontologies and taxonomy and classifications are for the data structure?

And I was like - I think it took me aback for a minute. Because everyone we talk to, including customers and partners, automatically jumps to the solution space as opposed to the foundation space. And the key point was: unbelievable, Seth, thank you for actually asking the right questions. That got us to even the point of having this discussion.

I think what you should be seeing from an external perspective is: where do I fit within this matrix of the information retrieval continuum? And I could fit at each location - maybe some areas are further along, maybe not as mature - but where do I fit, and what can I do to break these foundations down? That context, from asking that first foundational question, was the most intriguing thing for me. This is a huge problem - people don't know where to start, and yet we've been at this for decades. We're still providing information agents and chatbots out in the market that just are not hitting the mark, because they're bringing back way too much information, or not the right information, or missing the intent altogether, or not understanding the conversation.

Seth Earley: Absolutely. Most of the time when I ask those questions, people just don't know how to answer it - or they think the AI is going to solve the problem for them. They're going to train their ML using some source of information but they're very fuzzy about it.

The point is that it needs to be very intentional and very structured. You need the knowledge, you need the answers, you need the details. Machine learning and AI can do it in certain instances if you have large amounts of data - it's like machine translation, if you have enough examples you can translate questions to answers. But that's not necessarily reliable if you're in a regulated industry or if you don't have that volume of data. So you still need to be intentional and have human judgment around the foundation.

And what's important is to imagine a future where we will be talking to these virtual assistants day after day, as naturally as we interact with colleagues. My book's first chapter talks about Alan Perkins and how his day starts with interacting with virtual assistants - his own virtual assistant that knows his preferences, will read and negotiate on his behalf, interacting with other virtual agents at other organizations. He's checking his banking portfolio, checking parts availability, arranging travel - all by voice. One day we will get there. These things are really bad today, but they will get there. The organizations doing the right foundational work now will be the ones that can take advantage of that. For others who are not - it's going to be existential.

Seth Earley: Omnichannel is really looking across devices and channels. You're trying to maintain context even if you're switching devices - starting a process on your phone, finishing on a website, talking to an agent, then shifting back to an app. It's important for virtual assistants to provide consistent messaging, to be independent of channels, and to be able to escalate to a human agent seamlessly.

Messaging should be part of your content operations and publishing so that you can publish once and consume anywhere. I see a lot of fragmentation in AI initiatives, where people are building separate groups to do AI content - and that's just not the way to think about it. You should be looking across your information ecosystem, at all the content you're publishing, and standardizing, normalizing, and componentizing it.

Here's what that looks like in practice: I was trying to buy a product from a home goods retailer - something to clear algae from my water feature. I got to the website, couldn't find it, started a chat. The chat bot couldn't help me and escalated me to a human. But the human had none of the chat history. They couldn't see what I was doing, couldn't see what was in my cart. I had to copy and paste product information multiple times. I had to give the agent my problem over again from scratch. He had no context. That was a really, really bad experience - and that is exactly what we want to avoid. We want the context and all relevant information communicated seamlessly to whoever picks up the interaction.
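The seamless escalation Seth describes comes down to handing the human agent a structured context payload instead of an empty screen. Here is a minimal sketch of what that payload might look like; all class, field, and function names here are illustrative assumptions, not any specific vendor's API.

```python
from dataclasses import dataclass, field, asdict
from typing import List

# Hypothetical structures: the point is that escalation carries the full
# interaction context (transcript, cart, browsing trail) to the agent.

@dataclass
class Turn:
    speaker: str   # "customer" or "bot"
    text: str

@dataclass
class EscalationPayload:
    customer_id: str
    channel: str                                   # "chat", "voice", "app", ...
    cart_items: List[str] = field(default_factory=list)
    pages_viewed: List[str] = field(default_factory=list)
    transcript: List[Turn] = field(default_factory=list)

def escalate_to_agent(payload: EscalationPayload) -> dict:
    """Serialize everything the human agent needs to pick up mid-conversation."""
    return asdict(payload)

payload = EscalationPayload(
    customer_id="C-1042",
    channel="chat",
    cart_items=["pond-algae-treatment-16oz"],
    pages_viewed=["/water-features/maintenance"],
    transcript=[
        Turn("customer", "I need something to clear algae from my water feature."),
        Turn("bot", "I'm sorry, I couldn't find a match. Connecting you to an agent."),
    ],
)
handoff = escalate_to_agent(payload)
```

With a payload like this, the agent picking up the chat sees the cart, the pages visited, and every prior turn - the customer never repeats themselves.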

Chris Featherstone: You're outlining the core issue with the difference between reactive and proactive systems. When we look at it from the perspective of just providing information - that's 100% reactive. I can have an environment that's super good and has really rich data, but it's only as good as the information it has, and what it's indexed, and what it knows. I can ask it in 50,000 different ways and it's going to bring back hopefully good results because it has reinforcement learning. But the minute it doesn't have that information and it fails - what does it do? It escalates to a human, because humans are proactive and can free-think for what you need even when they don't have the context.

The difference between having a simple chatbot and a true virtual assistant is that reactive to proactive nature. If I want to just provide information reactively, I'm just duplicating what's on the website. If I want a proactive experience - one that uses thinking, gives suggestions, helps the user through the process - that's a true intelligent digital system. And it's a hell of a lot harder to design, because you need data, context, additional data sources, behavioral information. And behavioral information isn't necessarily collected through text. So how do you collect the behaviors of somebody working through these interactions? Culturally and demographically we have different needs through different channels. My behaviors are going to be different as a digital immigrant versus a digital native.

We're moving from a reactive scenario that's purely information-providing toward something that layers behavioral science and behavioral data on top, to get to a truly proactive type of assistant.

Seth Earley: We need to monitor the digital body language of the user - what are they actually doing across different channels and devices - and model the use cases, scenarios, and actual tasks, and see how well we can support those. Organizations should be building libraries of use cases that become the source of truth for all of these systems. They can come from job descriptions, from customer interactions, internally and externally. And you can't do these things seamlessly externally if you have lots of friction internally. We need to improve internal processes first - those disconnects are where a lot of the problems originate.

A question came in: what business skills do we need to develop or hire internally to evolve with increasing functionality? There's a lot packed into that.

Chris Featherstone: I always work backwards. What is the end goal? And I would say the end goal is never the end state - it should be an ever-learning, ever-evolving process. So what's the first hypothesis? That first hypothesis should be centered around: here's what this thing needs to be able to do first - give it the description, the FAQs - then give it out to everybody to poke holes in. Part of that is looking for additional data cues about what we didn't think to put in, so we get a holistic perspective of what needs to be prioritized first to accomplish that first hypothesis.

In terms of skills - it's thinking backwards, but also understanding where the data is and what data is not there. And then getting into the operational deployment types of things: how to not shoot ourselves in the foot by trying to do too much too fast.

The biggest mismatch I see in skills is: we take folks who are actually looking for the data, finding the data, and coming up with data responses, and we naturally take them over and apply them toward what the conversational experience should be. And they don't think that way. I studied psychology and human factors - what we didn't call UX at the time. Now we're getting into the derivatives of user experience, behavioral analytics, fit and finish. But you almost need to get people who can truly understand the goals of what this needs to be, and design the conversation in that way. That's really hard because you can't take a binary thinker - a business analyst - and say "design my conversation." I need somebody who can get in and really think from the perspective of: when I interact with my customers, what is it that I'm actually trying to solve? How can I speak to them in a way that helps them solve that, knowing there are demographic and cultural needs as well as compliance requirements?

What should happen with this is indicative of the overall goal - mapping back to who the key personas are, what their needs are, getting almost a psychological profile of who you're going to interact with, and designing the conversation centered around that. If you let data scientists and data researchers design your conversation, you'll miss 100% of what matters most.

Seth Earley: I agree completely. And I would add that we need very strong skills in information architecture and content operations, because content and knowledge drive these things. In a regulated environment I can't just turn an algorithm loose on lots of information and see what it comes up with. We have to be very clear and specific about what those instructions are, and that requires intentionally curated training data - which is content, which is knowledge, which is the information the customer needs. That's the only way the bot will understand: it doesn't get it out of thin air. Expertise and skills around knowledge processes, information architecture, metadata - combined with those conversational skills. We have to have the back-end system structured correctly, we have the information in the right format, and then we design that retrieval mechanism.

Whether it's proactive - you're listening to a conversation and you surface content for the human agent - or reactive, or monitoring sentiment: "this customer seems really upset, let me cue the agent to use this approach to de-escalate." There's a lot that can be done in being more proactive.

The other part of preparing people for the long journey is: as you said, Chris, it's never done. What organizations are missing right now is the governance and structure around change management - the authority and mechanisms for changing things. Previously too much sprawl has led to knowledge fragmentation, not knowledge consolidation. These are large-scale information management problems.

Chris Featherstone: Of the companies you consult with, how many do you find that actually have an ontology or taxonomy for their data?

Seth Earley: It really depends on how you define it. They have something, but it's usually fragmented. There's a lot of taxonomy out there but no formal structure that puts these into an ontology. For those unfamiliar with the term: we build mechanisms to organize information - taxonomies are hierarchies with parent-child and whole-part relationships. There are also controlled vocabularies and thesauri, and these help us manage terminology. When we have multiple sets of those, we don't build one grand galactic uber taxonomy - we build multiple taxonomies for products, services, solutions, customer types, document types. When we relate them - "here is the solution to this problem," "here are the risks in this region" - we're building knowledge relationships between concepts. That's an ontology. It gives you much greater control and visibility across all your data, and with knowledge graphs it provides a comprehensive picture. But organizations are very early in that learning curve. A lot of these are science projects not producing ROI. What you really need to think about is: how do we apply our organizing principles to solving a specific problem - to personalization, to all the other things we want to do - not just building the ontology for the sake of having a knowledge graph.
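Seth's distinction between taxonomies and an ontology can be sketched in a few lines of code: several small parent-child hierarchies, plus typed relationships that cross between them. All terms below are made-up examples under that assumption, not a real client model.

```python
# Two separate taxonomies: plain parent-child hierarchies.
product_taxonomy = {
    "Insurance": ["Business Insurance", "Personal Insurance"],
    "Business Insurance": ["General Liability", "Workers Compensation"],
}
problem_taxonomy = {
    "Risk": ["Workplace Injury", "Property Damage"],
}

# The ontology layer: typed relationships between concepts in different
# taxonomies ("here is the solution to this problem").
relations = [
    ("Workers Compensation", "addresses", "Workplace Injury"),
    ("General Liability", "addresses", "Property Damage"),
]

def solutions_for(problem: str) -> list:
    """Traverse the cross-taxonomy relations: which products address this problem?"""
    return [s for (s, rel, p) in relations if rel == "addresses" and p == problem]

print(solutions_for("Workplace Injury"))  # ['Workers Compensation']
```

The taxonomies alone only organize; the relationships are what let a system answer "what solves this problem?" - the knowledge-graph behavior Seth describes.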

Seth Earley: This is actually on the Gartner Hype Cycle, and what's really interesting - and frustrating - is that knowledge management for customer service is in the trough of disillusionment. Why? We have not seen companies make significant investment. It's one of those things that makes me nervous, because what an obvious way to get ROI: build knowledge for your call center. There are metrics showing 50% reduction in time per incident, improvements in first-call resolution, savings of millions of dollars. Yet a lot of the AI projects and bot projects are taking funding and resources away from the basics - from investment in knowledge architecture. That money should be invested in the foundation, and then building on that foundation is where you get the value. Virtual customer assistants are also in the trough of disillusionment.

Chris Featherstone: And folks who have invested a lot of dollars into knowledge management without a goal - it's one thing to collect all the information, but if the overall goal is just to gather data: great, now what? What we also see here is that knowledge management for customer service and virtual customer assistants appear to be parallel work streams. They shouldn't be. They're missing the glue that needs to come together.

Seth Earley: Let me talk about voice versus text and how they're different. Interactions and workflows are interesting because voice is very different. We have regional, cultural, and generational gaps to support. We lose a lot of nonverbal communication when we go to text, and there are ways of dealing with that.

Chris Featherstone: The last question and the first question - nonverbal communication and demographics - are really, really functionally tied together. The only thing I can see non-verbally through text is if you capitalized something. Even emojis - I can't quantify or qualify what even a standard emoji means across all the subcultural innuendos that people highlight in a textual conversation. This gets into pulling out the demographic piece and understanding how younger generations will communicate versus digital immigrants.

The nonverbal piece is super important especially when you're talking about an actual audio conversation, where I can get a ton of information out of what's not being said - the sentiment, the interruptions, speaking over someone. That can actually set the ability to score a call correctly, so I can understand what my next action needs to be - automated or manual. If I can detect that sentiment is super poor on this call, what automated alerts do I send? Do I have a supervisor barge in? What does that look like in terms of the downstream effects on brand and loyalty?

Let me also distinguish the two flavors of speech recognition: long-form and short-form. Long-form speech recognition is non-structured conversation - like what you and I are doing now. We can pull out with high accuracy the speech from each speaker separately, get it into text, look at sentiment, look at keywords - and then use natural language processing to understand key phrases. Short-form conversations are intent-based: they're looking for keywords to drive a reaction or an action. The interesting thing is when you have a helper bot listening to a call center conversation - that scenario combines both. The long-form conversation is happening between the agent and customer while short-form keyword detection can trigger automated actions in other systems in real time. It's the mixing of both, and understanding which ASR frameworks to apply for which part of the conversation.
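The mixed long-form/short-form pattern Chris describes can be sketched simply: long-form transcription streams utterances in continuously, while a short-form keyword layer scans each utterance and fires automated actions. The trigger keywords and action names below are illustrative assumptions, not any specific ASR product's API.

```python
# Short-form layer: keywords mapped to automated actions. In a real system
# this would be an intent model, not substring matching.
TRIGGERS = {
    "refund": "open_refund_workflow",
    "cancel": "open_retention_script",
    "supervisor": "alert_supervisor",
}

def detect_actions(utterance: str) -> list:
    """Scan one transcribed utterance for trigger keywords."""
    text = utterance.lower()
    return [action for kw, action in TRIGGERS.items() if kw in text]

# Long-form transcript arrives utterance by utterance, separated by speaker.
stream = [
    ("customer", "I was double charged and I want a refund."),
    ("agent", "Let me look into that for you."),
]
for speaker, utterance in stream:
    for action in detect_actions(utterance):
        print(f"{speaker}: trigger -> {action}")  # customer: trigger -> open_refund_workflow
```

The long-form pass still produces the full transcript for sentiment and key-phrase analysis; the short-form pass is what makes the helper bot act in real time.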

Seth Earley: The real-time coaching scenario - a helper bot that listens in on the conversation and says "wait a minute, they're talking about this product, let me get that ready for the agent." Or "this is getting heated, what can I prompt the agent with?" Prompting the agent during those conversations, or escalating during those conversations, is a really critical piece of an omnichannel virtual assistant framework.

I'm a big believer in building internal bots to help agents before you do anything customer-facing. You can do customer-facing things that are very transactional and straightforward. But you can learn a lot by building bots that help your agents - you can train agents how to use them, you can learn what types of questions are coming up, what content is missing, and evolve the content from there. That's how you use real interaction data to improve the foundation before it's fully exposed to customers.

Chris Featherstone: Now we're getting into a scenario where bots are specific to the workload, the approach, the job, or the type of data. And we have the ability to not only capture the long-form conversation but to utilize keywords in short-form to drive automated or manual actions. When we say "Sharon" in a conversation with our digital assistant, that keyword highlights the intent, and the intent framework is to build out a schedule for a particular time - things like that. So it's also a network of activity-based bots, job-based bots, each using the right type of data.

Seth Earley: There will be different types of specialized bots - concierge bots, helper bots - and the information has to be managed across all of those systems. That's a critical challenge. Workflows can be very different - when people are having conversations, they switch topic mid-stream, and if we're doing this with a virtual system we have to be able to handle that.

Let me share one more framework: the spectrum from structured, defined processes to ambiguous, subjective advice. On the easier end we have things that are very structured - RPA and transaction support. On the more complex end we have subjective advice - robo advisors, complex domain knowledge. We can think of this as task-and-dialogue complexity versus domain complexity. For life sciences troubleshooting a PCR machine - that's a complex domain and a complex task. We can start on the continuum with simple processes in simple domains: where's my package. Then move up toward "I have a really complex domain, even if it's a transaction." The point: don't start with a high-complexity domain plus a high-complexity task. Cut your teeth on things that are straightforward and easily modeled. The MD Anderson Watson project failed at $78 million - that's what happens when you try to boil the ocean.

Chris Featherstone: The easiest way to start is: for each section of that complexity spectrum, ask what the desired first-state needs to be, work backwards, put together a description and the FAQs for it, and get your stakeholders around it. What I can guarantee will happen is two things: first, you'll have a well-vetted understanding of your minimum lovable product for that first piece. Second, you'll get stakeholder buy-in and investment quickly, because before you make that investment, if you understand what that first iteration needs to look like, you're going to know if it's going to be successful. Then go find the data to put it together, and it will help you prioritize the roadmap from there. And then start - because it's going to be a network of bots, not a one-size-fits-all solution. But you can put them into a hierarchy, bridge context back and forth between them, and have some that are specific to particular work streams and personas.

Seth Earley: What do you instrument to measure value? What process are you going to impact? Pick a high-value process where you can see measurable results, and cherry-pick that use case. If you can show ROI on a narrow set of use cases, that's where you get support and funding. If you can't show that, organizations won't track the metrics - they're too distributed. Part of what we do is helping organizations build metrics-driven change management and governance.

One of the things that has to happen is you have to consolidate your efforts. Lots of experiments, lots of platforms - great. But what's really working well? Can you start centralizing expertise while decentralizing execution at the right level, managing the initiatives in some centralized way so you're not reinventing the wheel and further fragmenting things? There are millions of dollars spent on AI content when that shouldn't even be a category - it should be content, with AI as one consumer of that content.

When we think about scaling, you need a bot factory approach. What that means is: leverage the investments you've already made. Repurpose and standardize your assets so they can be consumed across channels. Standardize many of the bots' design elements and extract them into an ontology. We actually have a patent in the works to make bots more scalable - to reuse elements, and to update multiple downstream bots when you make a change to a product, service, or conversation. Portability is important: you need to refactor content and reuse it in different systems. You can't get locked into a single vendor. You have to be able to substitute new best-of-breed components as the technology evolves.
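
One way to picture the bot-factory idea - one change propagating to every downstream bot - is to have bots reference shared ontology entries instead of hard-coding facts. This is a toy sketch under that assumption; the `ontology` dictionary, `FaqBot` class, and return-window example are all hypothetical, not the patented mechanism described above.

```python
# Hypothetical sketch: bots reference shared ontology entries rather
# than hard-coding product facts, so one update reaches every channel.
ontology = {
    "return_window": "30 days",
}

class FaqBot:
    def __init__(self, name, templates):
        self.name = name
        self.templates = templates  # answer templates reference ontology keys

    def answer(self, question_key):
        # The ontology is read at answer time, so updates propagate
        # without redeploying or editing any individual bot.
        return self.templates[question_key].format(**ontology)

web_bot = FaqBot("web", {"returns": "You can return items within {return_window}."})
voice_bot = FaqBot("voice", {"returns": "Our return window is {return_window}."})

ontology["return_window"] = "60 days"  # one change to the shared source...
print(web_bot.answer("returns"))       # ...reflected on the web channel
print(voice_bot.answer("returns"))     # ...and on the voice channel
```

The same separation is what enables portability: the knowledge lives in the ontology, so swapping a vendor's bot runtime does not mean rewriting the content.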

Seth Earley: You can also use machine learning and AI to refactor the content. We talked about componentizing - breaking content up into pieces. People think that's an enormous effort, but you can use machine learning algorithms to do it: to tag the content and organize it, so you can standardize it, componentize it, and structure it for multiple downstream systems. That's the way to think about knowledge architecture, and about knowledge at scale.
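
The componentize-then-tag pipeline Seth mentions can be sketched minimally. A real pipeline would use a trained classifier or embeddings to assign tags; the keyword lookup below is a deliberately naive stand-in, and the `TAG_KEYWORDS` vocabulary and sample document are invented for illustration.

```python
# Hypothetical sketch: split content into components, then auto-tag
# each one. A keyword lookup stands in for a real ML classifier.
import re

TAG_KEYWORDS = {
    "billing": {"invoice", "refund", "payment"},
    "shipping": {"package", "delivery", "tracking"},
}

def componentize(document):
    """Split a document into paragraph-level components."""
    return [p.strip() for p in document.split("\n\n") if p.strip()]

def tag(component):
    """Assign tags whose keyword sets overlap the component's words."""
    words = set(re.findall(r"[a-z]+", component.lower()))
    return sorted(t for t, kws in TAG_KEYWORDS.items() if words & kws)

doc = ("A refund is issued to the original payment method.\n\n"
       "Tracking numbers are emailed when the package ships.")

for comp in componentize(doc):
    print(tag(comp), comp)
```

Once content carries tags like these, the same components can feed search, a knowledge portal, or a virtual assistant - which is the point of the information retrieval continuum.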

Seth Earley: In closing - one of the things that has to happen is that organizations need to own these capabilities. I had a customer who was outsourcing this to a vendor, and I said: you can't do this, you need to own this capability. This cannot be a black box. This is going to be a source of competitive advantage. You can't outsource your competitive advantage.

In the next five years, companies are going to be caught flat-footed. There will be competitors with fantastic virtual assistants providing better customer service, lower costs, greater accessibility, and efficiencies that are hard to match from behind. We had a publisher who found that a competitor was beating them to market on textbook sales by six months - a very significant loss of business. The competitor was using a component repository - breaking content into components so an editor could get quick answers and assemble products faster. The publisher's team had been trying to get that content componentization project funded for three years and kept getting shut down. The CEO finally approved it. It was already too late - they had lost the business.

That is the type of thing that will sneak up on companies doing digital transformations without regard for knowledge processes. Knowledge is the missing ingredient in digital transformation. Customers are being sent down the wrong path by the belief that this can all be handled algorithmically, by technology alone. You need to understand the fundamentals, build the foundation, and own the capability.

Chris Featherstone: This is the first of a bunch of different sessions that we're putting together. Looking forward to working with you on the series, Seth. Have a good week, everyone.

Seth Earley: Thank you Chris, thank you to Sharon for all the magic behind the scenes, and thank you to everyone listening. We'll see you next time.

Meet the Author
Earley Information Science Team

We're passionate about managing data, content, and organizational knowledge. For 25 years, we've supported business outcomes by making information findable, usable, and valuable.