Artificial Intelligence - What Really Works for the Enterprise and How to Get There

I was just given an Amazon Echo as a gift and was very excited to connect it and set it up. For those less familiar with the Echo, it is Amazon's cloud-based intelligent agent, which interacts through voice recognition and performs useful tasks in response to voice commands. The hardware is a cylinder about 9 inches tall and 3 inches in diameter. The "wake up" phrase, and therefore the name of the agent, is "Alexa." After setting it up, I walked through some of the features: asking about the weather, setting a timer, listening to a radio station, asking about specific facts (such as capitals of states or countries, measures, and math problems), asking about local movies, getting an NPR "flash briefing," and a few other interesting functions.

These functions all worked well, but it quickly became apparent that Alexa is not robust when the user goes off script or makes a request with even slight phrase variations. For example, I asked Alexa how large the Echo hardware was, trying several variations. "How large are you?" and "What are your dimensions?" elicited: "Sorry, I didn't understand the question I heard." (Of course, Alexa has no dimensions, but the Echo does.) When I asked "What are the physical dimensions of the Echo?" Alexa still could not understand. When I asked "How tall is the Echo?" Alexa finally was able to tell me. I also had success with "How large is the Echo?" and "What are the dimensions of the Echo?" As it turned out, simply adding the term "physical" to "dimensions" caused the algorithm to fail.

I was also able to connect the Echo to my Hue wireless LED lights as well as my Nest learning thermostat. Pretty cool. However, I had to use precise language to make these functions work. I made lots of mistakes but eventually learned how to control my home automation with my voice. In this case, "machine learning" (the technology behind many intelligent agents) turned out to be a human learning to talk to the machine rather than the machine learning to interpret the user.

These emerging intelligent agents have many limitations. That does not diminish their value, and the improvement in quality will accelerate. Alexa can already be programmed with "skills," which are specific use cases built around specialized domains of knowledge. They range from the whimsical (a Magic Eight Ball, cricket trivia games, self-help and daily meditations) to the more utilitarian Capital One Hands-Free Banking.

The concept of "skills" points to the vast problems of terminology, meaning, and interaction. Intelligent assistants are programmed with specific users and use cases in mind, and the same will be true of enterprise uses for intelligent assistants: the more carefully designed and scoped, the more valuable the application. Artificial intelligence, which includes applications like Alexa, is a class of application that requires a defined set of use cases and an information architecture. You can't have artificial intelligence without information architecture. (No AI without IA.)

What’s real about artificial intelligence?

News about artificial intelligence shows up practically daily in the popular media: how it will change our world and every aspect of work and our daily lives. There is no doubt that this is happening, and it is happening on multiple levels. Some of the applications are glitzy, some are sexy, and many more are under the radar. Yet artificial intelligence has made only limited, specialized inroads into most organizations. In some cases, these are high-end, costly "science projects" in which management is convinced it must spend vast sums of money with specialized vendors that work in secrecy and promise black-box algorithms that will allow the company to dominate its competitors and the marketplace. In other cases, organizations are promised tools that "define their own algorithms" to personalize interactions with users and use machine learning to determine what to offer whom, when, and under what circumstances. Speak with any of the leading analyst firms before buying into such claims; more often than not, the hype is outpacing the reality. Well-funded startups with large marketing budgets, professional sales teams selling to the C-suite, and competent technical staffs will eventually figure out their place in the market, but only after spending large amounts of client and VC money experimenting and "pivoting" (the venture term for making mistakes with large amounts of VC and client money) until they solve some customer problems and find a way to repeat what is very often a services-based solution.

Far more organizations are trying to deal with "we can't find our stuff" or "our customers can't find the right stuff" problems than are trying to develop sophisticated applications. These problems span knowledge base organization, call centers and support centers, customer self-service, product search, product content, and overall web content management. Some artificial intelligence vendors claim to solve this class of problem, too; however, looking under the hood, we see a lot of work that is not necessarily artificial intelligence as it is promoted, but is actually good information architecture practice combined with certain types of learning algorithms, metrics-based processes, and natural language interfaces.

[White Paper] Making Intelligent Virtual Assistants a Reality

[Article] There is no AI without IA

The Relationship of Artificial Intelligence to Information Architecture

Artificial intelligence encompasses a class of applications that allow for easier interaction with computers and also allow computers to take on more of the types of problems that were typically in the realm of human cognition. Every artificial intelligence program interfaces with information, and the better that information is structured, the more effective the program is. A "corpus" of information contains the answers that the program is attempting to process and interpret. Structuring that information for retrieval is referred to as "knowledge engineering," and the resulting structures are called "knowledge representation."

Knowledge representation consists of taxonomies, controlled vocabularies, thesaurus structures, and all of the relationships between terms and concepts. These elements collectively make up the "ontology." An ontology represents a domain of knowledge and provides the information architecture structures and mechanisms for accessing and retrieving answers in specific contexts. Ontologies can also capture "common sense" knowledge of objects, processes, materials, actions, events, and myriad other classes of real-world logical relationships. In this way, an ontology forms a foundation for computer reasoning even when the answer to a question is not explicitly contained in the corpus: the answer can be inferred from the facts, terms, and relationships in the ontology. In a practical sense, this makes a system more user friendly and forgiving when the user makes requests using phrase variations, and more capable when encountering use cases that were not completely defined when the system was developed. In effect, the system can "reason" and make logical deductions.
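To make the idea of inference over an ontology concrete, here is a deliberately minimal sketch (the facts, names, and representation are invented for illustration and bear no resemblance to any production knowledge graph): facts are stored as subject-relation-object triples, and a transitive walk over "is_a" relationships lets the system deduce answers that are never stated explicitly in the corpus.

```python
# Hypothetical ontology as subject-relation-object triples.
# None of these facts or names come from a real system.
facts = {
    ("Echo", "is_a", "smart speaker"),
    ("smart speaker", "is_a", "voice-controlled device"),
    ("voice-controlled device", "is_a", "computer"),
    ("Echo", "height", "9 inches"),
}

def is_a(entity, category):
    """Infer class membership by following 'is_a' links transitively."""
    parents = {o for (s, r, o) in facts if s == entity and r == "is_a"}
    return category in parents or any(is_a(p, category) for p in parents)

# "Is the Echo a computer?" is never stated directly,
# but it follows from the chain of is_a relationships.
print(is_a("Echo", "computer"))  # True
```

Real ontology languages (OWL, SKOS) and reasoners are far richer, but the principle is the same: relationships in the knowledge representation do work that the raw corpus cannot.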

Product Data, Customer Data and Ontologies

Organizations already have ontologies in the form of master data, customer attributes, product catalogs, dictionaries, glossaries, structured taxonomies, metadata, transaction systems and other sources of information.  These need to be integrated, structured, managed, and normalized to be most useful. They then become the knowledge architecture for internal and customer-facing applications, ecommerce systems, marketing technologies, content management tools, social media listening applications, and customer engagement platforms.
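The integration and normalization step can be pictured with a small sketch (the source terms and the synonym mapping below are invented): terminology scattered across a product catalog and a support glossary is folded into one normalized vocabulary, with variants gathered under a preferred term.

```python
# Hypothetical term lists from two existing enterprise sources.
catalog_terms = {"Notebook PC", "LCD Monitor"}
glossary_terms = {"laptop", "notebook pc", "monitor"}

def normalize(term):
    # Normalize case and whitespace so variants can be compared.
    return term.strip().lower()

# An assumed editorial decision: "laptop" is a synonym of "notebook pc".
synonyms = {"laptop": "notebook pc"}

# Preferred term -> set of source variants gathered across systems.
vocabulary = {}
for term in catalog_terms | glossary_terms:
    key = synonyms.get(normalize(term), normalize(term))
    vocabulary.setdefault(key, set()).add(term)

print(sorted(vocabulary))  # ['lcd monitor', 'monitor', 'notebook pc']
```

Note that "monitor" and "lcd monitor" remain separate entries: automated normalization only gets you part of the way, and human curation decides which terms actually refer to the same concept.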

Adding artificial intelligence and machine intelligence tools is a natural evolution of the technology ecosystem and, for most enterprises, will begin as an extension of customer-facing information systems such as search and content personalization mechanisms. At its core, search is a recommendation engine: it processes user intent signals (the search term or phrase itself, but also the user's context, such as where they came from and any information we may have about their preferences) and presents a "recommended" result. Personalization tools make content and product recommendations. (Content can take the form of promotions, offers, sales, next best actions, products for cross-sell and upsell, answers to questions, and so on.)
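The "search as recommendation engine" framing can be sketched in a few lines (the catalog, tags, scoring weights, and preference signal are all invented for illustration; real engines use learned ranking models): candidate items are scored against both the query terms and whatever user-context signals are available.

```python
# Minimal sketch of search as a recommendation engine.
# Scoring combines two intent signals: query-term overlap and
# a boost for items matching known user preferences.
def recommend(query, user_context, catalog):
    def score(item):
        overlap = len(set(query.lower().split()) & set(item["tags"]))
        boost = sum(1 for p in user_context.get("preferences", [])
                    if p in item["tags"])
        return overlap + 0.5 * boost  # weights are arbitrary here
    return sorted(catalog, key=score, reverse=True)

catalog = [
    {"name": "4K TV",      "tags": ["tv", "video", "4k"]},
    {"name": "Soundbar",   "tags": ["audio", "tv", "speaker"]},
    {"name": "Headphones", "tags": ["audio", "portable"]},
]
user = {"preferences": ["audio"]}

# The query and the user context together act as intent signals.
print(recommend("tv speaker", user, catalog)[0]["name"])  # Soundbar
```

The same query from a user with different preference signals can rank the results differently, which is exactly the behavior personalization tools generalize.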

Artificial Intelligence Driven Search

In the case of artificial-intelligence-driven search and user experience, the results are specific answers, instead of lists of documents, or more appropriate selections of products that suit the user's needs. The interface is more conversational and more forgiving of ambiguous requests and variations in phrases and terms than typical approaches.

Artificial-intelligence-driven search leverages the context of users, their characteristics, and where they are in their journey. This is why artificial intelligence and machine learning are behind personalization technologies and also make use of big data sources, transactional information, social graph data, real-time user behaviors on websites and mobile devices, and other signals that the system processes in order to infer user intent.

To learn more about the data architecture approaches that support artificial intelligence, see the following additional content from Earley Information Science:

[Webinar] Training AI-driven Support Bots to Deliver the Next Generation of Customer Experience

[White Paper] Making Intelligent Virtual Assistants a Reality

[Article] There is no AI without IA

[Blog] The Coming Chatbot Craze (and what you need to do about it)

Seth Earley
Seth Earley is the Founder & CEO of Earley Information Science and the author of the award-winning book The AI-Powered Enterprise: Harness the Power of Ontologies to Make Your Business Smarter, Faster, and More Profitable. An expert with 20+ years of experience in knowledge strategy, data and information architecture, search-based applications, and information findability solutions, he has worked with a diverse roster of Fortune 1000 companies, helping them achieve higher levels of operating performance.
