
Cognitive Search – The Next Generation of Information Access

Written by Seth Earley | Jun 21, 2016 4:00:00 AM

You no doubt have heard about cognitive computing and seen the inroads that AI is making in daily life as one technology company after another introduces its version of a virtual assistant. If you are a long-time professional working in the information and knowledge space, you may well be dismayed by the attention and funding that vendors and new entrants are gaining. After hearing promises of instant answers, computers that understand the user’s intent, and machine learning algorithms that get better just by operating in your environment, you perhaps have realized that what sounded too good to be true was just that.

As happens with any change in approach or thinking, the possibilities precede the practicalities. Yes, there is such a thing as a learning algorithm, and computers are doing a better job of understanding unstructured information. They seemingly make judgments based on ambiguous inputs. But the simple truth is that core information and knowledge management principles have not been superseded by these super-smart technologies.

Cognitive computing depends on having the right data and structuring it so that the algorithms can operate effectively on it. Whether the users are employees carrying out analyses of internal content, or visitors to a website that is facilitating their search for a product to purchase, information needs to be available and organized in order to produce the best results.

We've heard numerous stories from clients about vendors that overpromise: virtual assistant projects that proved valueless, large investments in semantic science experiments that yielded little business benefit, and a cognitive search implementation by a large, well-known technology company that failed after two years and a $3 million investment. These are just some of the examples, and there will be many more to come. But these are natural growing pains of a new technology, and progress will be made as the understanding of its true capabilities increases along with skill in implementing it. In fact, cognitive computing is already causing knowledge management (KM) to experience a rebirth, or, in the words of one analyst, “Cognitive computing is KM’s grand makeover.”

Valid principles, flawed execution

Just because early incarnations of a technology fail does not mean the approach is fundamentally flawed. Even when the execution falls short, the core principles may still be valid. The problem lies in overpromising and a lack of understanding of the success factors.

To succeed with cognitive computing, organizations must assess what is realistic at this stage of the technology’s maturity and lay the proper groundwork for the implementation.

While knowledge management has lost some adherents after enduring its own cycles of failure, it has also gained new support after success, evolution, and maturation.  Now, it is a known entity in many corporate environments. That does not mean it has been implemented effectively across the board, but organizations no longer view it as a mysterious thing.

KM is entering a new era, buoyed by the attention that machine intelligence is gaining in the C-suite. Organizations that have developed some maturity in the content and knowledge process domain are experiencing a shorter learning curve and reduced time to value by leveraging their institutional knowledge about collaboration, knowledge capture, curation, and stewardship. The same evolution can be expected for cognitive computing, and for its contribution to knowledge management. Cognitive computing will be a strong driver for the next generation of information access solutions, both for internal enterprise content and for customer-centric site search. The main takeaway is that the core principles of knowledge management are foundational to cognitive computing and that maturity and lessons learned in KM are transferable to these emerging approaches.

Search as a recommendation engine

Add some new approaches to the mix and we no longer have just “search on steroids”--we now have cognitive computing or its more glamorous cousin, artificial intelligence (AI). These approaches bring some new capabilities and produce many new possibilities. At a fundamental level, these efforts are now resulting in recommendation engines of varying degrees of complexity. Such engines can take sparse signals from a few key terms, interpret them, and then make a recommendation. That outcome can be achieved through a simple keyword match or through newer algorithms that integrate additional signals into their determination of an appropriate result. The recommendation can be simple, such as a restaurant that serves a particular cuisine in a given price range and with a particular rating, or complex, such as interpreting spoken language and returning a textual transcription of the audio signal. The recommendation can be a product, a web page, a spelling correction, an action, and so on.
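To make the idea concrete, here is a minimal sketch in Python of a recommendation that blends a simple keyword match with additional signals such as rating and price fit. The data, weights, and function names are illustrative assumptions, not any vendor’s implementation.

# Hypothetical catalog; in practice these signals would come from real data sources.
restaurants = [
    {"name": "Trattoria Roma", "cuisine": "italian", "rating": 4.6, "price": 2},
    {"name": "Pasta Palace", "cuisine": "italian", "rating": 3.9, "price": 3},
    {"name": "Sushi Garden", "cuisine": "japanese", "rating": 4.8, "price": 3},
]

def recommend(query_cuisine, max_price, candidates=restaurants):
    # Require a simple keyword match on cuisine, then let additional
    # signals (rating, price fit) adjust the ranking.
    scored = []
    for r in candidates:
        if r["cuisine"] != query_cuisine:
            continue
        score = r["rating"]              # signal: rating
        if r["price"] <= max_price:      # signal: fits the requested price range
            score += 1.0
        scored.append((score, r["name"]))
    return [name for _, name in sorted(scored, reverse=True)]

print(recommend("italian", max_price=2))   # -> ['Trattoria Roma', 'Pasta Palace']

Even in this toy version, the extra signals change the ordering that a keyword match alone would produce, which is the essential move a recommendation engine makes.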

Search as a conversation

The recommendation can be considered in the form of a conversation. The user says, “Give me some information about ‘purification.’” If the user enters that term into Google, the results reflect a variety of possible interpretations: a definition of the term, a health application, a scientific application, and the topic of water purification. If the same term is searched on a scientific site such as Thermo Fisher, it still has many interpretations, including protein purification, DNA purification, purification supplies for various applications, purification equipment, or purification solutions from the various brands that the company distributes. Even to a scientist, the term will have different meanings, and the search engine asks for clarification by presenting choices, just as a human would when asked a broad or ambiguous question. Search is a conversation because it is an iterative process--the result may not be exactly right the first time, nor can it be expected to be in many circumstances.
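The clarification step can be sketched in a few lines of Python. The sense inventory and wording below are invented for illustration; they are not drawn from Google’s or Thermo Fisher’s actual systems.

# Hypothetical sense inventory for an ambiguous query term.
SENSES = {
    "purification": [
        "protein purification",
        "DNA purification",
        "water purification",
        "purification equipment",
    ],
}

def respond(query):
    senses = SENSES.get(query.lower(), [])
    if len(senses) > 1:
        # Ambiguous: present choices instead of guessing, as a human expert would.
        return f"'{query}' can mean several things. Did you mean: {'; '.join(senses)}?"
    return f"Showing results for '{query}'."

print(respond("purification"))

The point is not the lookup table but the behavior: when the signal is ambiguous, the system keeps the conversation going rather than committing to a single guess.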

Understanding and engagement

Imagine speaking with an expert about a particular problem that you are trying to solve.  Before recommending a product or solution, the expert would want to know what you have tried before, what your ultimate goal is, and where you are in the process now.  There would be an exchange of information through questions and answers – a dialog.  The more you tell the expert, the better they can advise you. If they already knew that you needed to purchase more supplies for an experiment (in the scientific scenario), then you could perhaps say “I need more supplies for my protein purification experiment,” and by looking at your prior order the expert might recommend the same product you had purchased previously.

This conversation takes place in plain English, of course, and search is evolving to emulate such an interaction. In an actual conversation with a person, it is intuitively clear that context is necessary, and whether the user stays engaged depends on the response. If the expert started talking about water filters and that was not the topic of interest, the person making the inquiry would quickly end the conversation.

Search interactions are the same – if the result seems to provide some meaningful answers, the user will continue to refine the inquiry and provide more clues. Chat-based search and product recommendations simulate human interactions, prompting users to explain more of their needs. Even without such an interface, search can be more engaging when more signals, such as customer data, are provided to assist in understanding the user’s intent. Cognitive search processes these signals and interprets the context of the query in order to continue to engage with users and ultimately convert them to customers.
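As a rough sketch of that idea, the following Python fragment shows how a conversational search session might accumulate signals across turns and use known customer data, such as a prior order, to shape the answer. The class, data, and wording are assumptions made for this example only.

class SearchSession:
    # Hypothetical sketch: each turn of the "conversation" adds signals to the
    # session context, and known customer data (a prior order) shapes the answer.
    def __init__(self, customer_history=None):
        self.context = {}                      # signals gathered so far
        self.history = customer_history or []  # e.g., prior orders

    def refine(self, **signals):
        self.context.update(signals)
        return self.recommend()

    def recommend(self):
        topic = self.context.get("topic")
        if topic and any(topic in item for item in self.history):
            return f"Reorder the {topic} kit you purchased previously?"
        if topic:
            return f"Here are the top results for {topic}. What is your end goal?"
        return "What are you working on?"

session = SearchSession(customer_history=["protein purification kit"])
print(session.refine(topic="protein purification"))

Each refinement narrows the interpretation, and the more context the session carries, the closer the answer gets to what an expert would offer.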

The heart of cognitive computing

At the heart of all of this are meaningful content, clean, well-structured product data, and integrated customer data from various touchpoints. New technologies are emerging that will more precisely target results by processing additional signals from users. These include agent-based search, semantic search, and cognitive search, all of which leverage data about user behaviors and preferences, contextualized with an understanding of use cases, user tasks, solutions, and processes. The result is a system that “understands” more of the user’s intent and provides a specific answer rather than a list of results.
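As a simple illustration of why that structure matters, compare a product described only as free text with the same product represented as a taxonomy-backed record that an algorithm can filter reliably. The product, fields, and values below are hypothetical.

# The same (hypothetical) product, first as unstructured text an algorithm
# must guess at, then as a taxonomy-backed record it can filter reliably.
unstructured = "HiYield Mini Kit - fast spin-column protein prep, 50 preps per box"

structured = {
    "sku": "HY-1050",   # illustrative identifier
    "name": "HiYield Mini Kit",
    "taxonomy": ["Life Sciences", "Purification", "Protein Purification", "Kits"],
    "attributes": {"format": "spin column", "preps_per_box": 50},
}

def matches(product, category):
    # With structure in place, matching a disambiguated intent is a simple filter.
    return category in product["taxonomy"]

print(matches(structured, "Protein Purification"))   # True

Once the intent has been disambiguated (protein purification rather than water purification), the structured record makes the final match trivial; the unstructured description leaves the algorithm to infer it.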

These approaches all depend on structured data sources, combined with the ability to interpret unstructured signals. Without product data and curated content, they will not live up to the promise and certainly not the hype.  Organizations need to continue on the path to consistent organizing principles, product and content taxonomies, and curated, high-quality data, which has been the basis of KM all these years.  The robots are coming, but they still can’t create order from chaos – not yet, anyway. 

For a deeper dive into how we use information architecture as the foundation for digital transformation, read our whitepaper: "Knowledge is Power: Context-Driven Digital Transformation."