
    While predictions are difficult to make, especially about the future, there is no doubt that technical capabilities in the field of making computers more responsive to humans are accelerating and producing powerful new applications. 

    Different terms are used to describe these types of systems.  AI generally refers to a system that emulates human approaches to information processing in order to provide answers or solve problems.  Intelligent agents are algorithms that interpret requests and provide responses in a narrower domain or for specific tasks, and produce unique results to questions that have not been pre-programmed.  Cognitive Computing is a broad term that incorporates many AI capabilities and integrates a variety of mechanisms to allow for continuous learning and improvement. 

    As is true with many applications, we lose sight of AI capabilities once they become ubiquitous and embedded.  In fact, many applications that are taken for granted these days contain AI – everything from speech recognition to the machine vision used by robotic systems and, though still in development, self-driving cars.  Many everyday applications were developed in AI laboratories but are no longer considered AI: the computer mouse, financial trading systems, aircraft simulators, computer-assisted design, and email spam detection were all once considered AI[1].  Stephen Gold, Worldwide VP of Marketing at IBM, goes so far as to say that almost all of the core technologies that are part of Watson have been around for many years[2]. 

    Human needs are at the core of cognitive computing and AI

    At the core of AI applications is the need to translate human needs and intent into something that the computer can provide or respond to.  Think of this as the ultimate in usability.  In some cases the function or capability is sifting through more data than a human can handle and making sense out of that data to provide an answer.  Cognitive Computing is a newer description of software that enables more powerful capabilities including those where the system is able to:

    • Understand natural language
    • Interpret and vary responses based on context
    • Personalize the results or response based on multiple signals
    • Learn and improve based on experience
    • Detect patterns in large data sets
    • Optimize results through iteration based on complex parameters
    • Understand and respond to intent
    • Predict results and actions
    • Operate autonomously  
    • Deal with unique situations
    • Conduct logical inference
    • Disambiguate queries
    • Understand word sense
    • Apply judgement and expertise
    • Process large volumes of data
    • Deal with uncertainty
    • Handle dirty data
    • Apply fuzzy logic
    • Combine multiple data sets to produce unique results
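    Several of the capabilities listed – handling dirty data, applying fuzzy logic, dealing with uncertainty – can be illustrated with fuzzy string matching. Here is a minimal sketch using Python’s standard difflib module; the canonical term list is a hypothetical stand-in for a curated master data source:

```python
import difflib

# Hypothetical list of canonical terms; a real system would draw these
# from a curated master data source or taxonomy.
CANONICAL_TERMS = ["acetaminophen", "ibuprofen", "amoxicillin"]

def normalize(dirty_value, threshold=0.7):
    """Map a misspelled or inconsistent value to its canonical form.

    Returns None when nothing is similar enough -- a simple stand-in
    for the "deal with uncertainty" behavior listed above.
    """
    matches = difflib.get_close_matches(
        dirty_value.strip().lower(), CANONICAL_TERMS, n=1, cutoff=threshold
    )
    return matches[0] if matches else None

print(normalize("Acetominophen"))  # close misspelling maps to "acetaminophen"
print(normalize("aspirin"))        # no close match: returns None
```

The threshold is the fuzzy part: lowering it tolerates dirtier input at the cost of more false matches, which is exactly the kind of tuning decision these systems push onto implementers.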

    Susan Feldman of cognitive computing consultancy Synthexis describes the following characteristics of Cognitive Computing:

    • Meaning based
    • Probabilistic 
    • Iterative and conversational
    • Interactive
    • Contextual
    • Learns and adapts based on interactions, new information, users
    • Big data knowledgebase, multiple sources and formats
    • Analytics
    • Highly integrated:  Search, BI, analytics, visualization, voting algorithms, categorization, statistics, machine learning, NLP, inferencing, content management, voice recognition, etc.

    These are very ambitious and comprehensive lists.  These capabilities are in place in a number of commercial applications, albeit with limited practical implementation.  Algorithms to achieve these outcomes function best within narrow domains and contexts.  General purpose AI (like that in current science fiction) is a very long way away. 

    Thousands of vendors entering the space

    Hundreds, if not thousands, of solution providers are springing up in this space, which will make the job of the CIO and CMO even more difficult and complex.  A recent blog post by investor Shivon Zilis describes a landscape of technologies and classifies them according to her own taxonomy: Core Technologies, Rethinking the Enterprise, Rethinking Industries, Rethinking Humans/HCI, and Supporting Technologies (http://www.shivonzilis.com/machineintelligence).  This landscape is bound to garner as much attention as the Lumascape or Chiefmartec landscapes in their respective focus areas. 

    Shivon analyzed over 2,500 companies (more than the 2,000 marketing technology companies in Chiefmartec’s latest landscape) in order to settle on the couple of hundred represented in her landscape.  As one reads through her blog post, it is difficult to see meaningful and tangible enterprise applications.  The statement “business models tend to take a while to form, so they need more funding for longer period of time to get them there” suggests that a clear value proposition for the enterprise is not quite there at the moment.  The conclusion is that “they’re coming”: “DeepMind blew people away by beating video games. Vicarious took on CAPTCHA”.  Playing video games and beating the “visual Turing test” are significant achievements.  For those businesses that compete on playing video games and passing Turing tests, these technologies are must-haves. 

    Practical applications require significant investment

    Of course there are many practical applications of AI, Intelligent Agents and Cognitive Computing however these are applications that are developed over a period of months and years with millions of dollars of development funding. There are very few that can be considered “out of the box” and even that term is misleading.  Many of the software packages require extensive configuration, curated content, training sets of data and ongoing tuning and evolution. 

    Regardless of what they are called – agents, intelligence, machine learning, AI – and despite my tongue-in-cheek statement above, beating CAPTCHA and learning to play video games are significant technical achievements that are foundational to many powerful applications that will change multiple industries.  For organizations trying to evaluate these tools today, there is still a great deal of arm waving and research-intensive application development.  A quick review of the websites of many Cognitive Computing and AI startups reveals a great deal of market speak and motherhood-and-apple-pie claims – the types of statements and assertions that everyone wants but that are difficult to demonstrate in a meaningful way.  The benefits are described as “improving decision making”, “making data more accessible”, “helping managers answer questions”, and my favorite, “automatically formulating hypotheses”.  These are ambiguous claims that require a great deal of faith to get behind.  Venture funds are supporting companies that are going to market with lots of promise but without the bottom-line, clear-cut, unambiguous, hard-hitting results that CIOs and CMOs need to see in order to dedicate scarce funds and organizational resources. 

    Two things are true: 1. These are real applications (though they are not as far along as vendors claim), and 2. They will change your business, no matter what that business is.  Though this landscape is confusing and it is difficult to separate the real from the snake oil, there are practical steps that organizations can take in order to prepare for the inevitable market shifts that will force all organizations to embrace cognitive computing.

    Déjà vu all over again

    Cognitive computing can be considered search and retrieval on steroids.  A question or request is a query that the system needs to respond to.  To understand what makes these applications practical, listen to Mike Rhodin, Senior Vice President of the Watson Program[3].  He states that these tools are different.  When asked “what is Watson?” he responds, “The best way to think about it is that Jeopardy is a demonstration of a new class of application.  It understands natural language, it can generate hypotheses and it learns.  These new systems are information based as opposed to program based.”

    He goes on to say that you need to start by “thinking about the problem you are trying to solve, the information that may be necessary to solve that problem, where you are going to find that information, how you are going to curate it, how you are going to put it into a system, how you are going to train the information - once you have that done, then you write the app.  So it’s a very different kind of model.”

    Wow.  Let me repeat that.  Wow.

    Content curation and structure

    Identifying the problem, locating the information, curating the content, and structuring it to put it into a system are at the core of the problem that the knowledge and information management community has been trying to solve for years.  Yes, Watson is a new, powerful tool in the toolkit.  But it does not solve the problem out of the box.  In fact, most AI and cognitive computing systems require significant levels of configuration, tuning and content processing to be effective.  Another example of this is from the Wellpoint Watson implementation[4]:

    “Watson isn't simple or inexpensive. While Bigham wouldn't disclose WellPoint's financial arrangement with IBM, the process of training Watson for use by the insurer includes reviewing the wording on every medical policy with IBM engineers, who define keywords to help Watson draw relationships between data.”

    “The nursing staff together with IBM engineers must keep feeding cases to Watson until it gets it. Teaching Watson about nasal surgery, for example, means going through policies and inputting definitions specific to the nose and conditions that affect it. Test cases then need to be created with all of the variations of what could happen and fed to Watson.”
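    The feed-cases-until-it-gets-it process in the Wellpoint account amounts to an iterative train-and-evaluate loop. The following is a minimal sketch of that loop, using a toy keyword-overlap classifier and entirely hypothetical case data – an illustration of the workflow, not of Watson’s actual internals:

```python
# Toy train-until-correct loop: evaluate, feed failures back in, repeat.
# All case texts, labels, and function names are hypothetical.

def classify(case_text, training_set):
    """Label a case by its best keyword overlap with labeled examples."""
    words = set(case_text.lower().split())
    best_label, best_score = None, -1
    for example, label in training_set:
        score = len(words & set(example.lower().split()))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

def train_until_correct(training_set, test_cases, max_rounds=10):
    """Feed failing test cases back into the training set until all pass."""
    for _ in range(max_rounds):
        failures = [(text, label) for text, label in test_cases
                    if classify(text, training_set) != label]
        if not failures:
            return training_set  # the system "gets it"
        training_set = training_set + failures
    return training_set

seed = [("nasal septum deviation surgery", "approve")]
cases = [("nasal polyp removal", "approve"),
         ("cosmetic nose reshaping", "deny")]
trained = train_until_correct(seed, cases)
print(classify("cosmetic reshaping of nose", trained))  # generalizes to "deny"
```

Even in this toy form, the cost structure the article describes is visible: the value comes almost entirely from the human-curated cases, not from the algorithm.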

    Organizations will need to build these capabilities; however, many of the fundamentals are the same ones that support basic search and retrieval.  The starting point for this new realm is to develop proof-of-concept and proof-of-technology pilots that leverage the fundamentals of content curation and corpus creation.  A PoC will identify the success factors and the gaps in current processes and data.  One way to approach this is through development of search-driven intelligent agent technology. 

    Building an intelligent agent for information retrieval

    At the core of an intelligent agent is the retrieval of information.  Retrieval might be in the form of a simple search where the result is a set of documents, or the result could be finer grained, providing a specific answer to a question.  There are a variety of search-driven applications that could be classified as a form of intelligent agent.  What makes a search-driven intelligent agent?  The degree of sophistication of the algorithms used to process the user’s input, the mechanisms used to retrieve information, and the ways of surfacing that information in context to the user, including the use of text-to-speech and avatar interfaces to guide the user.  

    These can range from question answering systems for narrow tasks, like filling out a form, to more sophisticated and complex approaches that understand context and interpret ambiguous language in order to guide the user through a task that requires judgement.  There are a number of approaches to developing information access mechanisms that can be placed on a continuum of sophistication.  Key components of these systems include:

    Search drives the interaction:

    • Search is “front and center” for content delivery
    • Approaches for measuring customer interactions with content
    • Mechanisms to allow content rating and feedback
    • Search analytics and feedback to indicate high value and missing content  
    • A “sense and respond” business process for info dev

    Metrics driven governance:

    • Content developed to address a known need
    • Content metrics that define whether it meets needs
    • Identification of needs, delivery of content, tracking of impact, and course correction mechanisms

    Answer based content:

    • Focus on supporting customer performance of the high-value tasks that support value drivers
    • Component content provides answers to questions and multi-channel delivery

    Ongoing quality management:  

    • Optimize for search, retrieval, and compact presentation
    • Content and interactions that are clear, concise, relevant
    • Structures are compliant with metadata and style guidelines
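    The components above can be reduced to a simple skeleton: match the query against a curated corpus of answer-focused content, return the best match, and treat low-confidence queries as content gaps for the “sense and respond” process. The corpus, scoring function, and threshold below are hypothetical illustrations, not a production retrieval engine:

```python
import math
from collections import Counter

# Hypothetical curated corpus of answer-focused content components.
CORPUS = {
    "reset-password": "To reset your password open account settings and choose reset password",
    "return-policy": "Products may be returned within thirty days with a receipt",
    "shipping-time": "Standard shipping takes five to seven business days",
}

def tf_vector(text):
    """Bag-of-words term frequencies."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two term-frequency vectors."""
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def answer(query, min_score=0.2):
    """Return (component_id, answer_text), or a fallback when confidence is low.

    The fallback is the "sense and respond" signal: a low-scoring query
    flags missing content for the info-dev process.
    """
    q = tf_vector(query)
    best_id, best_score = None, 0.0
    for cid, text in CORPUS.items():
        score = cosine(q, tf_vector(text))
        if score > best_score:
            best_id, best_score = cid, score
    if best_score < min_score:
        return None, "No confident answer; logging query as a content gap."
    return best_id, CORPUS[best_id]

print(answer("how do I reset my password"))
```

Note that the governance loop in the lists above lives in the fallback branch: every query that falls below the threshold is exactly the “missing content” signal that search analytics should surface to content developers.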

    Types of organizations embracing Cognitive Computing

    There seem to be several tiers of organizations developing capabilities in the marketplace.  The first is the type of large digital enterprise that understands the importance of these technologies and has the capital and resources to invest in capabilities.  Think Google, Microsoft, Amazon and the other technology giants.  They are investing in machine learning and analytics in applications that comprise their core business.  At a recent Big Data Innovation Summit in Boston, the head data scientist at Uber described several fascinating products in development that leverage machine learning and predictive analytics.  Amazon’s recommendation engines are based on pattern recognition through analysis of large volumes of data. 
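    The pattern recognition behind “customers who bought X also bought Y” recommendations can be sketched with simple item co-occurrence counts over purchase data. The order data and item names below are hypothetical, and real engines use far richer models:

```python
from collections import Counter, defaultdict
from itertools import combinations

# Hypothetical purchase histories; each inner list is one customer's order.
ORDERS = [
    ["laptop", "mouse", "laptop-bag"],
    ["laptop", "mouse"],
    ["phone", "phone-case"],
    ["laptop", "laptop-bag"],
]

def build_cooccurrence(orders):
    """Count how often each pair of items appears in the same order."""
    co = defaultdict(Counter)
    for order in orders:
        for a, b in combinations(sorted(set(order)), 2):
            co[a][b] += 1
            co[b][a] += 1
    return co

def recommend(item, co, n=2):
    """'Customers who bought X also bought ...' from co-occurrence counts."""
    return [other for other, _ in co[item].most_common(n)]

co = build_cooccurrence(ORDERS)
print(recommend("laptop", co))  # mouse and laptop-bag co-occur with laptop
```

The point of the sketch is the article’s: the “intelligence” is pattern detection over large volumes of behavioral data, and its quality scales with the volume and cleanliness of that data.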

    Another tier of organization is the large enterprise for whom data is important but where many of the advanced applications of machine learning and cognitive computing are not primary (at least not yet) to their businesses.  They do have the resources to invest in these emerging areas and understand that their businesses will benefit and remain competitive through application of these technologies.  One example is the reinsurance company Swiss Re.  Swiss Re’s Riccardo Baron, Big Data & Smart Analytics Lead Americas VP, revealed that the company has engaged in over 100 pilots in diverse information areas and that there are several that have demonstrated clear value for the company and their customers. 

    The third tier is that of organizations that don’t have the resources, the interest, or perhaps the understanding of how to apply these tools to their business models.  There may be significant impacts on these companies as the market develops around them with more “proven” solutions. 

    For any of these organizations it is possible to solve today’s problems using today’s proven technology.  Cognitive Computing does not have to be academic or require millions of dollars.  Intelligent agents can deliver real value in a very short timeframe.  A bonus is that they allow the organization to start down the path of more sophisticated, complex and powerful applications that will truly be game changing.  Take the first step today and begin the conversation. 

    Seth Earley
    Seth Earley is the Founder & CEO of Earley Information Science and the author of the award-winning book The AI-Powered Enterprise: Harness the Power of Ontologies to Make Your Business Smarter, Faster, and More Profitable. An expert with 20+ years of experience in Knowledge Strategy, Data and Information Architecture, Search-based Applications and Information Findability solutions, he has worked with a diverse roster of Fortune 1000 companies, helping them to achieve higher levels of operating performance.
