
Chatbot Best Practices - Webinar Overflow Questions Answered

Written by Seth Earley | Mar 22, 2018

WATCH: Knowledge Engineering, Knowledge Management, and Chatbots

This webinar resulted in several great questions that we were unable to get to during the event. Here are answers to those questions.

When will B2B chatbots become mainstream? When will B2C chatbots become mainstream?

Chatbots will evolve over the next few years and will certainly play an important role in the overall customer experience for both B2B and B2C organizations. Chatbots are a channel to information and form another mechanism for interaction that allows easy access to unambiguous information. When I say “unambiguous” I am referring to clear answers that are not subject to interpretation. Completing a task such as a transaction (making a reservation or completing a purchase) can be clearly mapped out as a series of steps that a bot can walk the user through. These types of tasks will be more commonly handled through bots and virtual assistants so that users don’t have to call a support center or sift through pages of help content. As the industry matures, bot capabilities will increasingly handle edge conditions, ambiguous questions and more complex tasks.
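
To make “clearly mapped out” concrete, here is a minimal sketch (in Python, with slot names I made up for illustration, not taken from any particular framework) of a reservation treated as a fixed sequence of steps a bot walks the user through:

```python
# A reservation as a fixed sequence of slots (all names are illustrative).
from typing import Optional

RESERVATION_SLOTS = ["date", "time", "party_size", "name"]
PROMPTS = {
    "date": "What date would you like?",
    "time": "What time works for you?",
    "party_size": "How many people will be joining?",
    "name": "What name should the reservation be under?",
}

def next_prompt(filled: dict) -> Optional[str]:
    """Return the next question to ask, or None once the task is complete."""
    for slot in RESERVATION_SLOTS:
        if slot not in filled:
            return PROMPTS[slot]
    return None  # every slot is filled; confirm and complete the booking

# The user has supplied a date, so the bot asks about time next.
print(next_prompt({"date": "2018-04-01"}))  # -> What time works for you?
```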

The key to successful deployment will be understanding the end-to-end use cases, mapping out user interactions and setting user expectations. As organizations traverse the learning curve, chatbots will become very common and users will come to expect them as one of a selection of communication avenues. 2018 will see more organizations experiment with and deploy bots, with B2C leading and B2B companies focusing on other website capabilities before tackling bots.

What would be the primary design considerations for designing for a chatbot as opposed to a speech/voice interface like Alexa/Google Home?

When designing a conversational interface, voice interactions and text interactions are very different. For voice, the user is essentially blind; for text, effectively deaf. It is easy to scroll back to a prior step in a process or look at a list of choices when using text. In voice, the user can only keep track of so many choices, and the conversational context needs to flow naturally and logically. We all know the frustration of phone menus that have too many choices (and usually not the one we want). It is possible to provide visual cues using rich text – in fact, some chat interfaces can include images and structures similar to those of apps. Whether designing for voice or text, it is important to give the user hints about the things they can ask and a way to get to help instructions. It is also important to allow for escalation to a human whenever the user gets stuck.
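
As a rough illustration of that difference, here is a sketch (illustrative code of my own, not any particular framework) of channel-aware prompting that caps spoken choices and always leaves an exit to a human:

```python
# Voice users hear only a few choices at a time; text users can see,
# and scroll back to, a longer menu. Names here are illustrative.
MAX_VOICE_CHOICES = 3

def render_prompt(options, channel):
    if channel == "voice":
        # Keep the spoken list short and always offer an exit to a human.
        spoken = ", ".join(options[:MAX_VOICE_CHOICES])
        return f'You can say: {spoken}. Or say "agent" to talk to a person.'
    # Text can show everything; the user can review the list at leisure.
    numbered = "\n".join(f"{i + 1}. {opt}" for i, opt in enumerate(options))
    return f"{numbered}\nType 'agent' at any time to reach a person."
```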

I love a text-only chatbot interface, which also allows for uniform deployment across platforms (SMS, Facebook, Telegram, etc.). What would the considerations be when introducing menus, graphics and other non-text aids to enhance content?

The UX considerations are similar to those for small-footprint applications where screen real estate is limited. Simpler is better. The user interface should be carefully aligned with target use cases and tested against those use cases. Graphics need to have a purpose – the user should not be overwhelmed with information or distracted from the task at hand. But users can be prompted with choices, guided in their interactions and given options to expand beyond their initial task if the context is appropriate for doing so (for example, a recommendation for related products to suit their need, or options for upgrading their selection).
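
One common pattern for keeping deployment uniform is a single canonical message structure that channel adapters degrade gracefully. A rough sketch, with field names that are my own invention rather than any platform’s actual API:

```python
# A channel-agnostic rich message: adapters translate it into SMS text,
# Facebook quick replies, Telegram keyboards, and so on.
rich_message = {
    "text": "Your order has shipped. Anything else I can help with?",
    "quick_replies": ["Track package", "Return an item", "Talk to an agent"],
    "image_url": None,  # set only when an image serves the task at hand
}

def to_sms(message):
    # SMS has no buttons, so choices degrade to a numbered text list.
    lines = [message["text"]]
    lines += [f"{i + 1}) {label}" for i, label in enumerate(message["quick_replies"])]
    return "\n".join(lines)

print(to_sms(rich_message))
```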

When building the NLU model, how does a bottom-up, defined linguistic model approach compare with a machine learning AI model?

Understanding a user’s utterance (the text they enter into a chat or the question they ask in voice response) is the fundamental challenge in bot development. There are a number of ways to approach this challenge. One way is to capture examples of how a question can be expressed, either through logs of prior interactions or through crowdsourcing of phrase variations. Those examples are used as training data for a machine learning algorithm that, in essence, classifies phrases according to the user’s intent. When a new example is encountered, the algorithm can process the phrase and, if the model is complete enough, determine the user’s objective. For example, “I forgot my password”, “I can’t log in” and “My account is locked” all require steps to confirm the user’s identity and reset their password. If a new phrase such as “My log in isn’t working” is encountered, that phrase will be classified as having the same intent. This approach requires large sets of training data, with the amount depending on the complexity of the utterances.
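
As a toy illustration of this approach, here is a sketch using scikit-learn. The training phrases are the examples above plus a second, made-up intent; a production bot would need far more data per intent:

```python
# Toy intent classifier: TF-IDF features plus a linear model, trained on
# a handful of example phrases per intent.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

training_phrases = [
    "I forgot my password", "I can't log in", "My account is locked",
    "Where is my order", "Track my shipment", "When will my package arrive",
]
intents = [
    "reset_password", "reset_password", "reset_password",
    "order_status", "order_status", "order_status",
]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(training_phrases, intents)

# A phrasing the model has never seen; if the model generalizes, it lands
# on the reset_password intent.
print(model.predict(["My log in isn't working"])[0])
```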

Another approach (the bottom-up model, as you describe it) is to analyze the user’s utterance by deconstructing grammatical structures (such as identifying subject/object relationships) and using ontologies that contain language relationships (such as synonyms and conceptually related terms) to understand the user’s objective or intent. The challenge lies in the complexity of the utterance. After understanding what the user is asking for, the system has to retrieve that information. If the topic domain is broad and the task complex, a wide range of potential responses needs to be captured and tagged in a rich knowledge base with a content model that allows for tagging across multiple dimensions. In this type of scenario, a bottom-up semantic deconstruction may make it easier to extract entities (facets, dimensions or attributes) and then pass those entities back to the retrieval engine. (I say “may” because the approach is highly use-case dependent.)

A machine learning approach can still allow for entity extraction (also called “slotting”), which allows a generalized representation of an intent to carry many nuances and details. For example, identifying “food” as a variable and developing a vocabulary of the various foods that a chatbot will recognize (along with synonyms and term variations) allows the bot to recognize phrase variations that actually add detail to the intent, rather than classifying them all as a generic intent.
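
Here is a toy sketch of slotting with a hand-built food vocabulary. The vocabulary, its entries, and the function name are all illustrative inventions:

```python
# Toy entity extraction ("slotting"): a hand-built vocabulary of foods
# with synonyms and term variations.
FOOD_VOCABULARY = {
    "burger": {"burger", "hamburger", "cheeseburger"},
    "pizza": {"pizza", "margherita", "pepperoni pie"},
    "sushi": {"sushi", "sashimi", "maki"},
}

def extract_food_slot(utterance):
    """Return the canonical food entity mentioned in the utterance, if any."""
    text = utterance.lower()
    for canonical, variants in FOOD_VOCABULARY.items():
        if any(variant in text for variant in variants):
            return canonical
    return None

# The generic intent stays "order_food"; the slot carries the detail.
print(extract_food_slot("I'd like to order a cheeseburger"))  # -> burger
```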

Any insights on using chatbots as ITR for live agent chat? Analogous to IVR for voice?

Using bots for interactive text response (ITR) is an excellent way to improve live agent productivity. It can be analogous to interactive voice response (IVR) in vetting a request and directing the chat to an agent with the correct specialization and level of expertise. A higher-value customer might also be routed by a bot to a concierge type of service rather than a standard support agent. But the more exciting, higher-value approach is to use the bot to support the support rep: the bot interprets the intent of a query and presents a candidate response, and the agent either accepts that response or corrects/modifies the intent classification and/or the response. Correctly configured, this approach can improve agent productivity by hundreds of percent while improving the accuracy of the bot’s intent classification and knowledge base retrieval.
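
A rough sketch of that agent-assist loop, with all class, function and field names hypothetical:

```python
# The bot proposes a candidate response; the agent accepts it or corrects
# the intent, and corrections become labeled training examples.
class AgentAssist:
    def __init__(self, knowledge_base, classify_intent):
        self.knowledge_base = knowledge_base    # intent -> approved response
        self.classify_intent = classify_intent  # utterance -> intent label
        self.training_log = []                  # (utterance, intent) pairs

    def suggest(self, utterance):
        """Classify the utterance and propose a candidate response."""
        intent = self.classify_intent(utterance)
        return intent, self.knowledge_base.get(intent, "(no suggestion)")

    def record_correction(self, utterance, corrected_intent):
        # Agent corrections double as fresh training data, so intent
        # classification accuracy improves with use.
        self.training_log.append((utterance, corrected_intent))
```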

What is the difference between ontology and taxonomy?

An ontology consists of multiple taxonomies and all of the relationships between them. For example, there may be a taxonomy of products and one of applications. An associative relationship can be “applications for a product,” which can enable a bot to make cross-sell and solution recommendations.
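
A toy illustration in Python (the taxonomies and relationship are made up): two small taxonomies plus one associative relationship across them already form a minimal ontology a bot could traverse.

```python
# Two small taxonomies, each a hierarchy over a single dimension.
product_taxonomy = {
    "Power Tools": ["Drill", "Circular Saw"],
    "Hand Tools": ["Hammer", "Screwdriver"],
}
application_taxonomy = {
    "Woodworking": ["Cabinet Making", "Framing"],
}

# The associative relationship "applications for a product" links them.
applications_for_product = {
    "Drill": ["Cabinet Making", "Framing"],
    "Circular Saw": ["Framing"],
}

def recommend_products(application):
    """Cross-sell: find products associated with a given application."""
    return [p for p, apps in applications_for_product.items() if application in apps]

print(recommend_products("Framing"))  # -> ['Drill', 'Circular Saw']
```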

What about using schema.org for inline semantic tagging of web content?

Schema.org is an excellent source of standard metadata structures for tagging micro content and providing inline semantic tagging that identifies specific entities and data elements on web pages. Standards allow for efficiencies and interoperability; differentiated metadata structures can create competitive advantage through a unique user experience.
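
For example, a product page might carry schema.org Product markup as JSON-LD. Here is a sketch that generates such a payload; the property names come from schema.org/Product and schema.org/Offer, while the product values are placeholders:

```python
import json

# Schema.org JSON-LD for inline semantic tagging of a product page.
product_markup = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Example Cordless Drill",
    "sku": "EX-1234",
    "offers": {
        "@type": "Offer",
        "price": "129.00",
        "priceCurrency": "USD",
    },
}

# Embedded in the page as <script type="application/ld+json">...</script>,
# this identifies the entity and its attributes for crawlers and bots alike.
print(json.dumps(product_markup, indent=2))
```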

What technologies are people using to build these chatbots? 

For advanced bots, Microsoft LUIS (Language Understanding Intelligent Service) and the Microsoft Bot Framework are integrated platform services worth investigating, as are API.ai (now Dialogflow by Google), Facebook’s Wit.ai, and some of the Watson modules. A good overview of the simpler drag-and-drop bot-building tools can be found at https://www.webdesignerdepot.com/2017/03/a-beginners-guide-to-designing-conversational-interfaces/

Can ontology development be automated or does it require human intelligence?

The short answer is that ontology development requires human judgement applied to outputs created by automated text analysis tools. For common tasks, a standard can be leveraged. (Some bot frameworks and platforms have prebuilt domain models and high-level ontologies; these need to be augmented with the products, services, tasks, processes and specialized terminology of the enterprise.)

Do qualitative data analysis tools such as NVivo and MAXQDA play a role in developing ontologies for an organization?

While I have not worked with these technologies, from limited research on this class of software it appears they would offer some utility if they are tools you are already fluent in. Ontologies have to be based on use cases and data. These types of tools could help capture, codify and analyze text and user requirements, and therefore inform ontology development.

Are there times when a labeled property graph works better than an RDF graph?

That is a question that requires a deep discussion of use cases and technical details. For an in-depth discussion of LPG versus RDF, see https://neo4j.com/blog/rdf-triple-store-vs-labeled-property-graph-difference/
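
At the risk of oversimplifying, here is a toy contrast in plain Python (no graph database; identifiers are made up) of what each model lets you say about a relationship:

```python
# A labeled property graph edge can carry attributes directly.
lpg_edge = {
    "from": "alice",
    "to": "acme",
    "label": "WORKS_FOR",
    "properties": {"since": 2015},  # the relationship itself holds data
}

# Plain RDF expresses everything as subject-predicate-object triples, so
# annotating a relationship needs an intermediate node (or reification).
rdf_triples = [
    ("ex:alice", "ex:hasEmployment", "ex:employment1"),
    ("ex:employment1", "ex:employer", "ex:acme"),
    ("ex:employment1", "ex:since", "2015"),
]
```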
