Knowledge Engineering, Knowledge Management and AI Assistants

AI assistants are a form of cognitive AI, and they go by many different names; the terms used to identify them are described below. The role of AI assistants is to support customers or employees as they carry out tasks. In some cases, for repeatable actions with unambiguous outcomes, a chatbot or AI assistant can fully automate interactions. In other cases, when the answer requires judgment and human expertise, the AI assistant can surface the necessary information to help a human solve a problem – whether in a customer self-service app or for a customer service agent supporting an end-user task. These virtual job assistants are designed to help customer support staff be more efficient, accomplishing tasks more quickly and accurately with consistent outcomes.

What are the Different Types of AI Assistants?

Here is a partial list of the terms used to describe AI assistants:

  • Virtual assistant
  • Intelligent assistant
  • Intelligent virtual assistant
  • AI assistant
  • AI virtual assistant
  • Cognitive assistant
  • Shopping assistant
  • Digital assistant
  • Virtual customer assistant
  • Voice assistant
  • Personal AI assistant
  • Personal assistant

The bot family includes the following types of cognitive AI. They differ from AI assistants in that they have a very specific and limited role – that is, they are focused on a specific knowledge domain (such as medical information or manufacturing solutions) or a narrow task (such as configuring products, retrieving vacation policy information, or supporting a loan application).

  • Shopping bot
  • Chatbot
  • Helper bot
  • Configure Price Quote (CPQ) bot
  • Knowledge retrieval bot
  • Agent assist bot
  • Troubleshooting bot
  • FAQ bot
  • Service bot

Conversational cognitive AI software is another type of AI assistant and the most challenging to develop. It includes:

  • Conversational AI
  • Conversational commerce
  • Conversational assistants

This type of AI assistant attempts to support the user in multi-turn interactions using voice recognition and natural language processing. Multi-turn interactions allow the customer to continue a dialog beyond the initial task, or to respond in a variety of sequences rather than being confined to a linear path. The dialogs and tasks can be more complex, with the user guided through specific steps or asked to provide more information, all within the flow of a conversation.

The user issues a voice command using natural language and the system responds to the trigger with data or content. Some vendors focus specifically on conversations, which may feel more natural to users, but these can quickly degenerate if the conversation becomes too complex. In that case, there should always be a mechanism to handle those failures (such as escalating to a human service agent). In some cases, there is little difference between conversational AI and conversational commerce other than the specific application (commerce is a specific use for conversational AI).

What Kind of Jobs Do AI Assistants Perform?

Additional examples are often defined by their functionality; for example, medical transcription bots or airline reservation assistant bots. They may also include retail assistants that help a user navigate through a series of interactions, such as updating a credit card and then completing a transaction. A specialized AI virtual assistant uses natural language dictionaries to understand a voice command in a particular field that contains domain-specific terminology (such as medical terminology). Speech recognition converts voice to text for processing by the virtual assistant so that a natural language query can retrieve the correct information. Amazon Echo and Amazon Alexa are trained on different dictionaries of terminology to understand specialized functionality. For example, characters in a TV show would not be in a standard natural language dictionary; a specialized Alexa “skill” would understand those characters because the app was trained on those names.
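As a rough illustration of this kind of specialization, the sketch below maps domain shorthand to canonical terminology before an utterance is handed to intent classification. The DOMAIN_TERMS dictionary and the normalize() helper are invented for this example and are not part of any vendor's product.

```python
import re

# A minimal sketch of domain-term normalization, assuming a hand-built
# synonym dictionary (the terms below are illustrative, not taken from any
# particular vendor's medical lexicon).
DOMAIN_TERMS = {
    r"\bafib\b": "atrial fibrillation",
    r"\bheart attack\b": "myocardial infarction",
}

def normalize(utterance: str) -> str:
    """Map domain shorthand to canonical terminology before the text
    is passed on to intent classification."""
    text = utterance.lower()
    for pattern, canonical in DOMAIN_TERMS.items():
        text = re.sub(pattern, canonical, text)
    return text

print(normalize("History shows AFib and a prior heart attack"))
# history shows atrial fibrillation and a prior myocardial infarction
```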

This same specialization is required for businesses that have unique offerings or technical content that is not commonly part of typical natural language processing. An AI model can respond to unique customer queries only if it has been trained on those queries or on similar ones.

How Does Knowledge Management Relate to AI Assistants?

Regardless of what they are called, all these assistants are information access mechanisms. A chatbot is a channel to data, content, and knowledge. This is where knowledge management comes in. AI assistants, like new employees, need to be taught the answers. This is what is referred to as “training data.” It’s important to remember that the same information needed to train humans is needed to train an AI assistant.

The information to power the cognitive AI functionality has to be stored somewhere – either within the chatbot platform or in a stand-alone knowledge base. The advantage of having this information outside the chatbot platform is that it can be repurposed in different ways – powering downstream applications like marketing automation, product support sites, and channel partners. Some chatbot approaches require that the knowledge be built into the chatbot platform, which is not ideal since it leads to further fragmentation of knowledge rather than consolidation of that knowledge for reuse.

Many different mechanisms and technologies can be leveraged in a cognitive AI and AI assistant environment. However, at the simplest level, they consist of two things: a mechanism for understanding a user input (called an utterance) and a way of providing an output (a response). They rely on natural language processing (NLP) to interpret utterances and, in the case of voice assistants, on speech recognition, which uses AI technology to convert speech to text.
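To make the input/output framing concrete, here is a bare-bones sketch of that loop: an utterance comes in, a matcher guesses an intent, and a canned response goes out. The intent names, example phrases, and responses are all invented, and the naive word-overlap matcher stands in for the NLP model a real platform would use.

```python
# A bare-bones sketch of the two-part structure described above: interpret an
# utterance (input) and return a response (output). All names and phrases are
# illustrative assumptions.
INTENT_EXAMPLES = {
    "reset_password": ["reset my password", "forgot my password", "cannot log in"],
    "check_order": ["where is my order", "track my package", "order status"],
}

RESPONSES = {
    "reset_password": "You can reset your password under account settings.",
    "check_order": "Please share your order number and I'll look it up.",
    None: "I'm not sure I understood. Let me connect you with an agent.",
}

def interpret(utterance: str):
    """Naive matcher: return the intent whose example phrases share the most
    words with the utterance (a real system would use an NLP model)."""
    words = set(utterance.lower().split())
    best_intent, best_overlap = None, 0
    for intent, examples in INTENT_EXAMPLES.items():
        overlap = max(len(words & set(example.split())) for example in examples)
        if overlap > best_overlap:
            best_intent, best_overlap = intent, overlap
    return best_intent

def respond(utterance: str) -> str:
    return RESPONSES[interpret(utterance)]

print(respond("I forgot my password again"))   # reset_password response
print(respond("where is my package"))          # check_order response
```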

Thinking in terms of inputs and outputs may sound overly simplistic, but it helps explain how these applications need to be configured, deployed, and maintained. Virtual assistants are increasingly used to handle routine customer queries, leaving more complex tasks for human agents. An AI-powered virtual assistant can offload these routine tasks, reducing the cost of customer service while improving responsiveness.

What is an Utterance?

The first concept to familiarize yourself with is that of an utterance – this is the input. Human speech is varied and can be ambiguous. An utterance is the text that a user types into the chatbot interface or, in the case of a voice assistant, the words they speak to express what they need. Voice is translated into text using speech recognition machine learning algorithms, so only the interpretation of the text needs to be considered. Because users use different wording to express the same need, those variations in utterances need to be classified into a common request called an intent. An AI personal assistant will function only if it is programmed to interpret the intent. Analytics can reveal when intents are not recognized so that new training data can improve utterance interpretation.

This interpretation of utterances is the first area of application for machine learning in AI assistants. Machine learning takes variations in the phrasing of questions as input and produces an output that says, in effect, “These ten different ways of saying something really mean this.” The utterances are classified to an intent. The more variations in phrasing the algorithm sees, the more likely it is to recognize a new variation it has not seen before. This is because the phrases live in a vector space where similar phrases are classified as the same intent if they are near one another (in mathematical terms). This is the first opportunity to use “training data” – the more examples of phrase variations that inform the classification algorithm, the more accurate it will be. The AI assistant then has a better chance of retrieving the right information and delivering it to the user.
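A rough sketch of that vector-space classification is shown below, using scikit-learn's TF-IDF vectors and cosine similarity as a stand-in for the embedding model a production platform would use. The training phrases, intent labels, and similarity threshold are illustrative assumptions.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Invented training data: each phrasing variation is labeled with an intent.
training_phrases = [
    ("book_table", "reserve a table for tonight"),
    ("book_table", "make a dinner reservation"),
    ("book_table", "I want to book a table"),
    ("opening_hours", "what time do you open"),
    ("opening_hours", "when are you open on weekends"),
]

labels = [intent for intent, _ in training_phrases]
texts = [text for _, text in training_phrases]

# Turn the phrases into vectors; similar phrases end up near one another.
vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(texts)

def classify(utterance: str, threshold: float = 0.2):
    """Return the intent of the nearest training phrase, or None if nothing
    is close enough -- the failure case that should route to a human."""
    scores = cosine_similarity(vectorizer.transform([utterance]), matrix)[0]
    best = scores.argmax()
    return labels[best] if scores[best] >= threshold else None

print(classify("can I reserve a table for two"))   # book_table
print(classify("when do you open on saturdays"))   # opening_hours
print(classify("tell me a joke"))                  # None -> escalate to a human
```

The more phrase variations added to training_phrases, the more likely a new, unseen wording lands near the right intent.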

What is an Intent?

Intents can contain multiple details that allow a more specific request to be interpreted by the system. I might want a place to eat, and a chatbot could return a list of 50 restaurants, but if I want a Japanese restaurant that is within a mile of my location and is moderately priced, providing those details within the knowledge base will allow the chatbot or AI assistant to return a more specific recommendation. Those details are additional metadata and are referred to as “slots” – they are variables that need to be filled in so users can complete their intended actions. The chatbot needs those variables to make a better recommendation, just as it does when it needs to complete a task. If a chatbot is making a reservation for the user, it needs to know the number of people and the time.
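The sketch below models that restaurant intent as a simple data structure: each slot is a field, and the assistant can ask about whichever slots are still empty. The class and field names are invented for illustration; real platforms define intents and slots in their own configuration formats.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class FindRestaurantIntent:
    """An intent with three slots, mirroring the example above."""
    cuisine: Optional[str] = None            # e.g. "japanese"
    max_distance_miles: Optional[float] = None
    price_range: Optional[str] = None        # e.g. "moderate"

    def missing_slots(self) -> list:
        """Slots the assistant still needs to ask the user about."""
        return [name for name, value in self.__dict__.items() if value is None]

# The user has said what cuisine they want and how far they will travel,
# but not their budget, so the assistant should prompt for price_range.
intent = FindRestaurantIntent(cuisine="japanese", max_distance_miles=1.0)
print(intent.missing_slots())   # ['price_range']
```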

Once the user’s goal has been classified to a recognized intent, the artificial intelligence system needs to provide an answer. In this way, we can think of chatbots as information retrieval mechanisms – just as search is an information retrieval mechanism. In fact, “slots” can be thought of as metadata for retrieval, just as facets such as color, size, brand, and price are used on retail websites. Extracting the details from an utterance helps ensure that the intent is as specific as it needs to be for the chatbot to retrieve an answer, rather than just a long list of documents. The retrieval can be thought of as a very specific search. To make search work effectively, knowledge and content need to be curated and correctly structured.
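Continuing the restaurant example, the following sketch treats the filled slots as retrieval facets over a tiny in-memory knowledge base. The records and field names are invented; a production system would issue this query against a search index or knowledge platform instead.

```python
# A toy "knowledge base" with the metadata needed for faceted retrieval.
RESTAURANTS = [
    {"name": "Sakura House",  "cuisine": "japanese", "distance_miles": 0.6, "price": "moderate"},
    {"name": "Tokyo Express", "cuisine": "japanese", "distance_miles": 2.3, "price": "budget"},
    {"name": "Bella Roma",    "cuisine": "italian",  "distance_miles": 0.4, "price": "moderate"},
]

def retrieve(cuisine: str, max_distance_miles: float, price: str) -> list:
    """Filter the knowledge base on the slot values, exactly as facets
    narrow results on a retail site."""
    return [
        r for r in RESTAURANTS
        if r["cuisine"] == cuisine
        and r["distance_miles"] <= max_distance_miles
        and r["price"] == price
    ]

print(retrieve("japanese", 1.0, "moderate"))   # [{'name': 'Sakura House', ...}]
```

Without the structured metadata on each record, the same query could only return a long, unranked list for the user to sift through.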

How Does Knowledge Management Make Retrieval (Search) Work?

Search works best when the content being searched is well optimized for search. Consider getting web pages to rank on Google. That does not happen without a great deal of effort; an entire industry of tools and consultancies is devoted to optimizing content for search. The same thing needs to happen to our knowledge and content so that a chatbot or virtual assistant can correctly retrieve it.

This process is the realm of knowledge management and its artificial-intelligence-focused cousin, knowledge engineering. Knowledge engineering is also referred to as “symbolic AI.” In the early days of AI, a great deal of work went into knowledge representation so that researchers could build expert systems that could produce answers such as the diagnosis of an illness. This rule-based approach went out of fashion as statistical machine learning approaches became more effective with increased computing power, improved algorithms, and the enormous amounts of data available from the cloud and from the ecosystem of connected technologies throughout the world. However, the need for correctly structured and curated knowledge has never gone away. If the knowledge is not available, a bot will not be able to create it from nothing. We have to teach chatbots how to answer questions, and that is done using a knowledge source – a knowledge base.

AI assistants of all types are experiencing rapid growth, because they aid customers and employees alike in completing their tasks quickly and accurately. However, matching the type of AI with the target task requires considerable thought and planning. Is a chatbot with a well-defined path the best match, or is the interaction more complex, requiring conversational AI and multi-turn options? Do the likely users prefer using a keyboard or voice recognition, or should both be offered? No matter what the choice, the AI assistant will not be able to find the answers if the information is not well organized and well structured. Humans and AI assistants both need to retrieve information in order to provide answers to customers. A knowledge base with the right content, structure, and metadata allows precise retrieval of information. Combined with the appropriate virtual assistant, this knowledge will allow companies to automate many customer interactions, improving customer satisfaction and increasing revenue.

The data science behind smart devices such as a smart speaker, or a platform like Google Home (where multiple smart devices are connected with the Google Assistant as the interface), is the enabler of enhanced functionality, but customers are wary due to privacy concerns. An AI-powered virtual assistant will be the interface to many smart devices. However, when a customer mentions a topic and coincidentally receives adverts relevant to that topic, their concerns over privacy will be magnified. Providing controls such as a privacy preference center may allay those fears, but it will take time for people to become acclimated.

Ready to get started building an AI Assistant? Contact us to learn how we can help.


Meet the Author
Seth Earley

Seth Earley is the Founder & CEO of Earley Information Science and the author of the award-winning book The AI-Powered Enterprise: Harness the Power of Ontologies to Make Your Business Smarter, Faster, and More Profitable. He is an expert with more than 20 years of experience in knowledge strategy, data and information architecture, search-based applications, and information findability solutions. He has worked with a diverse roster of Fortune 1000 companies, helping them achieve higher levels of operating performance.