A Primer on Cognitive Computing

This article originally appeared in KMWorld Magazine.

According to technology publisher Tech Target[i], cognitive computing is “the simulation of human thought processes in a computerized model. Cognitive computing involves self-learning systems that use data mining, pattern recognition and natural language processing to mimic the way the human brain works.”

Other definitions refer to “computer systems modeled after the human brain.”[ii]  IBM’s well-known foray into the space centered on Watson, which competed against human champions on Jeopardy, and won. Watson’s expected applications include medicine, finance, and a range of consumer-facing applications.

The definition put forward by an industry consortium[iii] suggests that cognitive computing “addresses complex situations that are characterized by ambiguity and uncertainty,” and that cognitive systems learn from experience and understand users’ context and intent.

This Cognitive Computing stuff sounds pretty good.  You may be asking yourself, “Where can I get me some of that?”

The potential offered by new approaches at the intersection of several technology developments (cloud computing, processing power, machine learning and others) is tremendous.  Each of these is at a different point in the hype cycle, with a great deal of confusion and inflated vendor claims.  This confluence of innovations will evolve and mature over the next decade and will create a very different way of interacting with our technologies, the environment, and each other.  The question is how to make intelligent decisions today and prepare the enterprise for the developments that are coming.

By understanding some of the components of a cognitive computing approach, one can determine what aspects are most relevant, what deserves deeper investigation, and what is impractical or unfeasible with today’s approaches and tools.  Below are three essential components of a cognitive computing system:

  1. A way of interpreting input: A cognitive computing system needs to answer a question or provide a result based on a signal.  That signal might be a search term or phrase, a query asked in everyday, conversational language (“natural language”), or a response to an action of some sort – perhaps hitting a help button or purchasing a particular product.

The first thing a system needs to do is understand the context of the signal.  In the case of Siri, one contextual signal is location, another is speed of motion.  Each of those contexts will allow the system to narrow the potential responses to those that are more appropriate.  Cognitive computing systems need to start someplace – they need to “know” or expect something about the user to interpret the signal.  A cognitive computing system built as a shopping assistant “knows” the context of the shopper.  One such system built to optimize marketing offers “knows” the parameters of the offer and the audience that the offer will be presented to.  The more contextual clues that can be derived, defined or implied, the easier it will be to narrow the appropriate types of information to be returned.  At one level, cognitive computing can be considered a type of advanced search or information retrieval mechanism. 
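The idea of using contextual clues to narrow candidate responses can be sketched in a few lines. This is a hypothetical illustration, not any vendor's implementation; the data, keys, and function names are invented for the example.

```python
# Minimal sketch of context-based signal interpretation (hypothetical data).
def interpret(signal, context, responses):
    """Narrow candidate responses using contextual clues, then rank by overlap."""
    candidates = responses
    for key, value in context.items():
        # keep responses that match this context clue (or don't care about it)
        narrowed = [r for r in candidates if r.get(key) in (value, None)]
        if narrowed:  # only narrow when the context actually discriminates
            candidates = narrowed
    # rank the remaining candidates by keyword overlap with the signal
    words = set(signal.lower().split())
    return max(candidates, key=lambda r: len(words & set(r["text"].lower().split())))

responses = [
    {"text": "Nearest coffee shop is two blocks ahead", "location": "driving"},
    {"text": "Here are coffee shops you can walk to", "location": "walking"},
]
best = interpret("find coffee", {"location": "walking"}, responses)
print(best["text"])  # -> Here are coffee shops you can walk to
```

The point of the sketch is that each contextual clue (here, just location) prunes the answer space before any ranking happens, which is exactly why more context makes the retrieval problem easier.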

  2. A body of information that supports the decision: The purpose of cognitive computing is to help humans make choices and solve problems.  This is not done in a vacuum.  The system does not make up the answer. Though some might argue that this is the ultimate goal, even synthesis of new knowledge is based on foundational knowledge.

The “corpus” or domain of information is a key component of a cognitive computing system.  The more effectively that information is curated, the better the result.  In some ways, this is knowledge management 101 or even content management 101.  It means that knowledge structures are important, that taxonomies and metadata are required, and some form of information hygiene is required.  (I can hear the collective disappointment – I am sure there are many people who thought that this new way of doing things would obviate the old fashioned approaches of capturing and managing knowledge assets.)

Creating and maintaining information structures is not very exciting or interesting to many people.  But the fact is, many applications and environments require highly vetted, curated information sources as a foundation for cognitive computing applications.  Question-answering systems, intelligent agents and field service expert applications come to mind.  High-value knowledge and information can be made more accessible and useable through cognitive computing systems; however, the quality of that core knowledge is essential to the success of the application.  Judith Hurwitz, Marcia Kaufman, and Adrian Bowles’ excellent book Cognitive Computing and Big Data Analytics devotes a chapter to representing knowledge in taxonomies and ontologies and states “to create a cognitive system, there needs to be organizational structures for the content,” which “provide meaning to unstructured content.”  
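To make the "taxonomies and metadata" point concrete, here is a minimal sketch of how a broader/narrower taxonomy gives structure (and therefore meaning) to unstructured content. The taxonomy terms and content are invented for illustration.

```python
# Hypothetical sketch: a tiny taxonomy providing structure for unstructured content.
taxonomy = {  # child term -> parent ("broader") term
    "laptop": "computer",
    "computer": "electronics",
    "smartphone": "electronics",
}

def broader_terms(term):
    """Walk up the taxonomy to collect all ancestors of a term."""
    ancestors = []
    while term in taxonomy:
        term = taxonomy[term]
        ancestors.append(term)
    return ancestors

def tag(text):
    """Attach taxonomy-derived metadata to a piece of content."""
    found = [t for t in taxonomy if t in text.lower()]
    expanded = set(found)
    for t in found:
        expanded.update(broader_terms(t))  # enrich with broader terms
    return sorted(expanded)

print(tag("Review of a new laptop"))  # -> ['computer', 'electronics', 'laptop']
```

The payoff is in the last line: a document that only ever says "laptop" becomes retrievable under "computer" and "electronics" as well, which is the kind of information hygiene a cognitive system depends on.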

IBM’s Watson ingested many structured and semi-structured repositories of information: dictionaries, thesauri, news articles and databases, taxonomies, and ontologies such as DBpedia, Wikipedia, and WordNet.[iv] These sources provided the information needed to respond to questions – they formed the corpus of information that Watson drew upon.

An article in KMWorld suggested that “by synthesizing the complex and unique customer context, and comparing it to similar past scenarios in real time, the system can help identify reliably the best customer actions to take, such as best resolution, best product, best follow-up, etc.”[v] This ability requires that the system somehow model ‘customers,’ ‘context,’ and ‘scenarios’ as well as ‘products’ and ‘resolutions.’  This initial modeling requires a non-trivial investment of time and expertise to build the foundational elements from which the system can then synthesize responses.  Each of these actions requires content modeling and metadata structures, use cases, and a customer engagement strategy and approach.

  3. A way of processing the signal against the body of information: This component is where elements such as machine learning come into play.  In fact, machine learning has long been applied to categorization and classification, text analytics and processing, and search index creation.  The processing might take the form of a query/matching algorithm, or it may entail other mechanisms to interpret the query: transforming it, disambiguating it, deriving syntax, determining word sense (does “stock” mean a financial instrument or farm livestock?), deducing logical relationships, or otherwise parsing and processing the signal against the body of information.
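The word-sense problem mentioned above can be illustrated with a deliberately simple approach: score each candidate sense by how many of its characteristic context words appear in the query. The sense inventories below are invented for the example; real systems use far richer models.

```python
# Hypothetical sketch: disambiguating word sense by context-word overlap.
senses = {
    "stock:finance": {"market", "shares", "price", "dividend", "exchange"},
    "stock:livestock": {"farm", "cattle", "graze", "herd", "feed"},
}

def disambiguate(query):
    """Pick the sense whose characteristic words best overlap the query."""
    words = set(query.lower().split())
    return max(senses, key=lambda s: len(senses[s] & words))

print(disambiguate("stock price on the exchange"))     # -> stock:finance
print(disambiguate("feed for the stock on the farm"))  # -> stock:livestock
```

Crude as it is, this captures the core move: the signal is interpreted against surrounding context before it is ever matched against the corpus.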

Machine learning has many incarnations – from various types of supervised learning approaches where a known sample or result is used to teach the system what to look for, to classes of unsupervised approaches where the system is simply asked to find patterns and outliers and even combinations of these approaches at different stages of the process.   (An unsupervised learning approach could find hidden structures and then the output could be applied as a “training set” to another source of data). 
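The parenthetical idea above (unsupervised structure-finding whose output then serves as labels) can be sketched end to end on toy one-dimensional data. Everything here is illustrative: a tiny k-means finds two clusters without labels, and a nearest-centroid rule then acts as the "trained" classifier.

```python
# Hypothetical sketch: unsupervised clustering output used as a training signal.
def kmeans_1d(values, k=2, iters=10):
    """Tiny k-means on 1-D data (the unsupervised step)."""
    centroids = [min(values), max(values)]  # simple initialization
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for v in values:
            # assign each value to its nearest centroid
            groups[min(range(k), key=lambda i: abs(v - centroids[i]))].append(v)
        # recompute centroids as group means (keep old centroid if group empty)
        centroids = [sum(g) / len(g) if g else centroids[i]
                     for i, g in enumerate(groups)]
    return centroids

def label(value, centroids):
    """Nearest-centroid 'classifier' built from the discovered clusters."""
    return min(range(len(centroids)), key=lambda i: abs(value - centroids[i]))

data = [1.0, 1.2, 0.9, 9.8, 10.1, 10.3]  # no labels provided
centroids = kmeans_1d(data)              # unsupervised: find hidden structure
print(label(0.8, centroids), label(9.9, centroids))  # -> 0 1
```

The discovered centroids play the role of the "training set" handed to the downstream step: new values are classified against structure the system found on its own.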

The key here is to iteratively improve the system’s performance over time by approximating an output and using that as an input for the next round of processing.  In some cases, incorrect answers (as judged by a human or another data source) might be input for the next time the system encounters the problem or question.  Systems can also optimize over time to meet a target state or condition, such as providing the most efficient operating parameters for a piece of industrial equipment or maximizing sales to a particular customer base through multiple offers. 
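The feedback loop described here can be reduced to a very small sketch: wrong answers (as judged by a human) lower a learned score, so the system answers differently the next time it sees the same question. The question, answers, and scoring scheme are all hypothetical.

```python
# Hypothetical sketch: human feedback improving later answers.
from collections import defaultdict

scores = defaultdict(float)  # (question, answer) -> learned score

def answer(question, candidates):
    """Return the candidate with the best score learned so far."""
    return max(candidates, key=lambda a: scores[(question, a)])

def feedback(question, given, correct):
    """Reward confirmed answers, penalize wrong ones, for next time."""
    scores[(question, given)] += 1.0 if given == correct else -1.0

q = "how do I reset my password?"
options = ["Open settings > security", "Contact support"]
first = answer(q, options)                 # no feedback yet: arbitrary pick
feedback(q, first, correct="Contact support")
print(answer(q, options))                  # -> Contact support
```

Each pass through the loop uses the previous output (and its human judgment) as input to the next round, which is the iterative improvement described above.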

These three components of cognitive computing mechanisms can be broken down into endless combinations of technologies and algorithms.  Cognitive computing systems have additional characteristics that have been left out of this model for simplicity; however, I would suggest that most of them fall into one of these broad classes of functionality.  These classes are broad and encompassing – akin to describing buildings as things that contain walls, doors, and a roof.  That definition applies to the Mall of America and to a shack on an island.  The range of cognitive computing systems is as diverse as this analogy implies.  Yet the three components form a good starting point for understanding cognitive computing, because essentially every cognitive computing system will need to include them, and each one requires answering key questions about the organization’s customer strategy, business processes, and knowledge and information systems.

There are two main take-aways from this discussion:

1. Cognitive computing will increasingly be part of our world and will be subsumed into every system and process just as smart phones and the Internet are part of our world today. 

2. Organizations need to put certain foundational elements in place in order to remain competitive as transformational technologies upend and disrupt the marketplace.

To prepare for cognitive computing, organizations should do the following:

  1. Assess areas of opportunity in client-facing processes (customer support, customer service, marketing automation and ecommerce).
  2. Continue to manage and curate knowledge and data (foundational governance and data onboarding will be key capabilities moving forward).
  3. Understand and build on your organization’s maturity in data science and analytics (this does not necessarily mean hiring a team of data scientists, but means being intentional about enabling critical functions with analytic capabilities).
  4. Investigate and experiment with technologies in key competitive areas that will differentiate your products and services in the evolving marketplace (use envisioning sessions to get a shared understanding of the future state of the industry and organizational capabilities).
  5. Invest in educating the organization in foundational technologies and processes (knowledge management is not going away or being superseded in the immediate future – these technologies will build on core knowledge capabilities and processes).

Beware of vendor claims such as “our system develops all the algorithms,” “you don’t need to organize any content or worry about data quality,” “our software emulates the human brain,” “it’s based on our proprietary algorithm – you don’t need to tune it,” or “we develop and test all of the hypotheses – you don’t need any special expertise to use it.”  I have heard each of these claims, and they are reasonable only in very narrow use cases.  There is no magic here – cognitive computing requires that we design systems with the customers’ needs and tasks in mind, and support them with upstream internal processes.  Cognitive computing is a tool that will allow for amazing new capabilities.  Getting there will still require the blocking and tackling of data, content, and knowledge processes – though with new tools and improved outcomes.

Every major technology enterprise is investing in this area in one way or another, and many are already gaining advantages and improving the ways in which they conduct business.  Cognitive computing will change the business landscape. With the speed of adoption and technology evolution, it will likely happen faster than many might expect.  Which is all the more reason to get your knowledge house in order.   


Earley Information Science Team

We're passionate about enterprise data and love discussing industry knowledge, best practices, and insights.
