Expert Insights | Earley Information Science

The Coming Chatbot Craze (and What You Need to Do About It)

Written by Seth Earley | Sep 21, 2016 4:00:00 AM

At the Opus Research (@opusresearch) Intelligent Virtual Assistant Conference in San Francisco this week, I heard from vendors, analysts, gurus, and customers about how the world will be transformed by artificial intelligence, agents, machine learning, intelligent virtual assistants – collectively known as “the Bots.”  One of the more entertaining sessions consisted of a panel of venture capitalists, including former Evernote CEO Phil Libin (@plibin), Joshua Kauffman of Wisdom (@joshuakauffman), and Sarah Guo of Greylock Partners (@saranormous).  One of Phil’s comments related to how people think AI will kill humans.  He said that is a logical assumption because the AI will be really smart and humans are #$$holes.

The VCs observed that every pitch coming across their desks these days includes “AI” or “Chatbot.”  According to one estimate, there will be over 100 million chatbots in the marketplace very quickly. (The exact timeline is vague, but “very quickly” by Internet standards is likely within a year.) 

Enormous sums are being invested in this nascent field, and the big tech firms are clearly betting on this as the next wave.  Bots really will be the next wave, but as with any wave, its evolution will be marked by hype, unrealistic expectations, failures, and wasted resources.  That is the nature of the technology beast.  It is, however, possible to identify times when AI vendors are making unrealistic promises – simply watch for when their lips are moving.

OK, that is an old joke, but we need to be very circumspect when it comes to vendor claims and leading-edge technologies.  Yes, someone needs to be the trailblazer, and trailblazers sometimes get the best land.  They are also the ones who get eaten by grizzlies and take arrows in their backs.  There is a set of risk/reward considerations.  With that caveat in mind, here are eleven takeaways from the conference.

  1. If your industry is ripe for chatbots, immediate action is essential.  Call centers and customer support are obvious candidates.  The leading call center vendors are beginning to offer human augmentation (rather than replacement), using intelligent assistants to support their call center representatives, and this is a viable approach. Consider what this means.  We don’t need a lot of technology in order to “augment” a human.  Many times good old knowledge engineering does the trick (a well-curated, well-structured knowledge base with tuned search).  Add a sprinkling of natural language processing, and call center rep efficiency can increase dramatically.
  2. AI and chatbot vendors are missing key pieces. The most glaring gap is in the ability of current products to manage content and data.  Many of these systems have administrative functions for configuring scripts and responses to questions. In some cases, they are very rudimentary with no structure; they simply provide long lists of questions and responses.  This approach might be fine for 50, 100, or 500 of the most typical use cases, but will break down under thousands or tens of thousands.  Many of the speakers discussed the need “to get your data and content house in order” and “the hard part is having the knowledge.” Really now.  Hmm… No kidding.  OF COURSE THE HARD PART IS THE KNOWLEDGE!!!  It’s all about the knowledge. Knowledge powers the chatbot.  How do you get that knowledge into the system and manage it?  I heard over and over again that chatbots are all about “having the right information at the right time for the right person.”  Does that sound familiar to anyone who has been in the knowledge management field for, I don’t know, the last 5, 10, 15, 20 or 25 years?  It is the same problem.
  3. Chatbots are a channel and need to connect to sources of intelligence.  This was my big takeaway. Chatbots are a channel. An interface.  We still need the data, and we still need the content and knowledge.  If you don’t have that, you can’t have a bot.  (Just take your messy information and go home.  You cannot play with bots today.)  Just as IVR (interactive voice response – the telephone menus that many of us hate) is a voice interface to information, bots are a text interface.  But additional pieces are integrated to make this a richer environment.  For example, machine learning can correct spelling mistakes and make otherwise unintelligible text understandable. One vendor had a list of spelling error variations as part of chat dialog entries (to interpret the user’s query).  This is a brute-force method that does not leverage the machine intelligence of spell correction.  Rather than trying to interpret spelling errors out of context, the system should first correct the text with machine learning and then interpret the corrected terms.  (Have you noticed that your iPhone will correct an error to one word and then change that word based on the next word entered?  That functionality was lost in the vendor tool I looked at.)  The point is that the AI does not always operate at the level of the bot itself but in an underlying source of knowledge. Many chatbots are configured to interact with other systems (or humans) and direct user queries to them.
  4. Machine learning is not “one size fits all.” Machine learning can improve its responses over time, but requires a very large set of data to derive the knowledge.  Tobias Goebel, Director of Emerging Technology for Aspect Software, a contact center and workforce optimization solutions provider, suggested that deriving knowledge indirectly is a poor approach in some cases, such as for a banking customer, because the answers need to be precise from the start.  "You cannot have an AI guessing and providing the wrong answers – especially when the downside means angry customers or regulatory challenges. It is possible to use entity extraction or auto-categorization to help to organize content, but a structured and curated knowledge source is a better approach for training certain applications," according to Goebel.
  5. "Chatbots are like employees; they won’t know all of the answers on day one."  This interesting insight came from Joe Gagnon, Aspect’s Chief Customer and Strategy Officer, during a panel discussion.  People have very high expectations about the technology – that they can turn it on and it will perform with a high degree of functionality.  New employees at your neighborhood electronics store don’t know everything on day one.  They know the basics.  And they know when (and where) to get help.  Chatbots are the same. Even if they know only a few things, that will help. If a chatbot can help a customer with a few routine tasks – like password reset, balance inquiry, routine reorders, or simple commands – it is moving things in the right direction and freeing someone up from a boring, routine, costly task.  Alexa turns my lights on and off, adjusts heat, sets timers, and orders an Uber if I need a ride.
  6. Humans need to be trained.  I have more limitations around using Alexa than Alexa does.  I don’t know the vocabulary and syntax that she uses.  I don’t know her 5,000 skills. (Ha – I am anthropomorphizing already.  I know Alexa is not a “her.”) A skill is a set of terms and commands that Alexa “understands” and that tell her to do something. Well, she doesn’t truly understand, but she can act on a command.  One could imagine that over time, Alexa will be able to respond correctly even if I use the wrong syntax. The learning algorithm will improve and allow me to ask things in different ways than the original skill allowed, but it is a bit of chicken and egg (AI and human?).  I need to know what to ask first in order to provide the input for Alexa to learn from.  If I don’t know where to begin, then the skills are not used or improved.
  7. Chatbots and humans need the same things to learn from.  Chatbots need content and answers.  These are in the form of machine-readable text but can also include images and sounds.  Humans also need these things – structured knowledge bases, FAQs, e-learning courses, troubleshooting guides.  Imagine trying to train a bot without this information.  What would the bot consume to troubleshoot your laptop’s connection to your broadband?  How would that information be parsed, processed, and summarized so that the bot could give you a clear response?  It is better to give the bot, just as we would give the human, a troubleshooting guide to learn from. The content is targeted to the task.  That benefits humans and machines.
  8. Build capabilities incrementally.  If you are trying to develop a chatbot, you need to develop content first. Before your company is sophisticated enough to develop advanced chatbot capabilities, it should be augmenting human knowledge with relevant, accessible information.  Remember that humans need the same content that your chatbot will need.  Improve content and knowledge quality through content architecture and curation.  That will have a short-term payoff and prepare you for the longer term.  These initiatives have a long horizon.
  9. Beware vendor promises. One vendor I spoke with said the price they charged for standing up a chatbot was $5K. I asked, “That’s all? Including content?”  Well, not all of the content, he said; “that assumes you have the knowledge base.”  Abby, the award-winning intelligent agent that EIS developed for Allstate, cost about $500K, but was still very cost effective, as it led to a very significant reduction in call center traffic.  Other vendors have confirmed that support chatbots can cost between $200K and $1mm.  Many vendors have overly complex tools that attempt to do more than is practical.  I know of organizations that have spent $2–$3mm and got very little for the effort.  No matter what vendor you speak with, insist on talking to customers, and test the applications in the wild as they are deployed.  Try to break them.  Bring in an expert in machine intelligence to speak with the data scientists, and determine where the AI algorithms function and how learning actually takes place.  Ask about scaling content to hundreds, thousands, or tens of thousands of pieces of information.  How will that be managed?  What formats need to be ingested?  How does the system leverage your unique processes, data standards, and terminology?  How are training sets developed?
  10. Manage expectations. Developing artificial intelligence applications is difficult, and experts have been working on it for many years. Do not expect to be able to develop a full solution immediately.  There are numerous technical challenges that relate to “training” – giving the system the information it needs to function correctly.  This takes time. Your chatbot application should provide access to routine information to begin with. It should have a mechanism to hand off to a human and degrade gracefully when it reaches its limits.  Plan on developing the application incrementally, and don’t promise leadership (or employees) miracles.  I have seen vendor promises make their way into internal proposals (including the vendor’s nonsensical made-up terminology) that sounded too good to be true.  Whenever things sound too good to be true…  Overpromising will lead to disappointment and disillusionment and could be career-limiting.
  11. Be prepared for a science project.  These initiatives require some degree of research and experimentation. Vendor claims are difficult to validate without a proof of concept (PoC).  Many of the AI and intelligent virtual assistant (IVA) vendors are small and incapable of offering free PoCs.  The large players want significant skin in the game before they offer their time.  In any industry, innovation is required to stay competitive and survive long term.  Innovation entails trying a lot of things and failing at them before achieving success.  Scott Bair, Innovation Lead for Nationwide Insurance (https://www.linkedin.com/in/scott-bair-84a2258), said, “90% of the things I try will fail.  But one will be a billion-dollar business.” His failures along the way were investments in the eventual success.  His approach is to use IVAs to help scale the successes by developing a chatbot to handle simple questions, which reduces the need to set up a new call center to handle his company’s increasing volume of calls for new products.
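
To make the scaling problem in point 2 concrete, here is a minimal Python sketch – every question, answer, topic, and product name in it is invented for illustration – contrasting a flat list of question/response pairs with a curated knowledge base whose entries carry taxonomy metadata, so retrieval can filter by facet instead of scanning thousands of unstructured pairs:

```python
from dataclasses import dataclass, field

# Hypothetical sketch: why a flat Q&A list breaks down while a curated,
# faceted knowledge base scales.

# The flat approach: every variant is one more row to scan and maintain.
FLAT_FAQ = [
    ("how do i reset my password", "Visit the account page and click Reset."),
    ("password reset", "Visit the account page and click Reset."),  # duplicate
    # ...thousands more unstructured pairs
]

@dataclass
class KBEntry:
    """One curated answer, tagged with taxonomy terms for retrieval."""
    question: str
    answer: str
    topic: str                        # curated taxonomy term, e.g. "account"
    products: list = field(default_factory=list)

KB = [
    KBEntry("How do I reset my password?",
            "Visit the account page and click Reset.",
            topic="account", products=["web", "mobile"]),
    KBEntry("How do I check my balance?",
            "Open the app and tap Balance.",
            topic="billing", products=["mobile"]),
]

def lookup(topic: str, product: str) -> list:
    """Filter by facets first; a real system would then rank within the slice."""
    return [e for e in KB if e.topic == topic and product in e.products]

print([e.answer for e in lookup("billing", "mobile")])
```

The flat list forces a scan of everything on every query and invites duplicates; the faceted version narrows the candidate set before any matching happens, which is what lets curation keep up as the content grows.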
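
The spell-correction contrast in point 3 can be sketched in a few lines of Python. This is an illustrative toy, not any vendor’s implementation; the vocabulary, variant table, and intent names are all made up:

```python
import difflib

# Brute force: every misspelling must be enumerated by hand, and any
# variant not on the list falls through.
VARIANT_TABLE = {
    "pasword": "password_reset",
    "passwrd": "password_reset",
    "pssword": "password_reset",  # ...and "paswordd" is still missed
}

# Correct-then-match: one canonical vocabulary; misspellings are handled
# generically by fuzzy matching before intent lookup.
VOCABULARY = ["password", "balance", "reorder"]
INTENTS = {
    "password": "password_reset",
    "balance": "balance_inquiry",
    "reorder": "routine_reorder",
}

def interpret(word: str):
    """Correct the term first, then interpret the corrected term."""
    match = difflib.get_close_matches(word.lower(), VOCABULARY, n=1, cutoff=0.6)
    return INTENTS[match[0]] if match else None

print(interpret("paswordd"))  # corrected to "password", no enumeration needed
```

The variant table grows without bound as new misspellings appear; the correct-then-match version handles unseen errors because the correction step is generic and only the vocabulary is curated.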
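
Points 5 and 10 together suggest a simple design pattern: handle a short list of routine intents on day one, and hand everything else off to a human rather than guessing. A minimal sketch in Python, with hypothetical intent names and canned responses:

```python
# A bot that knows a few routine tasks and degrades gracefully at its limits.
ROUTINE_HANDLERS = {
    "password_reset": lambda: "I've emailed you a reset link.",
    "balance_inquiry": lambda: "Your current balance is shown in the app.",
    "routine_reorder": lambda: "I've placed your usual order.",
}

def respond(intent: str) -> str:
    handler = ROUTINE_HANDLERS.get(intent)
    if handler:
        return handler()
    # Graceful degradation: admit the limit and hand off rather than guess.
    return "I can't help with that yet - connecting you to a person."

print(respond("password_reset"))
print(respond("mortgage_advice"))
```

New capabilities are added by extending the handler table incrementally, which matches the "new employee" framing: the bot starts with the basics and always knows where to get help.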

IVAs and chatbots of various types are going to be an increasingly large part of every information ecosystem.  Sarah Guo of Greylock (@saranormous) wrote a recent blog post about the drivers of the chatbot craze, which she attributes to several converging trends (https://news.greylock.com/the-conversational-economy-whats-causing-the-bot-craze-4dd8f1b44ba1#.nnxpw7ne3).  Lauren Kunze of Pandorabots (@laurenkunze) said she had many more inquiries in the past quarter from enterprises interested in standing up chatbot programs than in prior years, with organizations now saying, “We have executive sponsorship, a budget, and a team assembled.” Understanding the value and limitations of a chatbot approach needs to be high on the innovation agenda and part of every enterprise digital transformation roadmap.

The information ecosystem will become more and more crowded, and organizations will need new ways to deal with information overload.  Chatbots will eventually be part of that solution but not before being an even bigger part of the problem. 

To learn more about the approaches Earley Information Science uses to enable chatbots, check out these assets:

[Webinar] Training AI-driven Support Bots to Deliver the Next Generation of Customer Experience

[White Paper] Making Intelligent Virtual Assistants a Reality

[Article] There is no AI without IA

[Article] How AI-Driven Search Could Bring Us Closer to the Intelligent Workplace