This article originally appeared in the November/December 2015 issue of KMWorld Magazine.
Personalization is one incarnation of a longstanding objective that embodies the essential elements of knowledge management—the right information in the right context at the right time. From the customer’s or user’s perspective, it is “personalized.” What does personalized mean? Personalization uses information about time, place, goal, task, objective, intent, frame of reference and other contextual cues to get me what I need when I need it. Yet the way personalization is generally implemented leaves a great deal to be desired.
Many analysts feel that personalization approaches are limited by internal organizational constraints as well as by fundamental approaches to modeling the customer experience that do not effectively leverage location, past purchase, preference or demographic data. Making assumptions about what customers want based on that data can also be dangerous. Personalization rules might say, “since you are a woman, you must want women’s clothing.” However, women shop for men’s clothing as well. Models are always approximations of reality, and models of human behavior are more challenging than models of the physical world, because people don’t behave in predictable ways.
Some organizations are trying to solve the problem with an approach called “machine learning.” Machine learning algorithms make a change to a system, measure how that change affects behavior and adjust based on the outcome to continually optimize performance. Machine learning is also used to find patterns, detect trends and cluster or categorize information. Theoretically, machine learning could provide a user with more personalized choices by processing a variety of information sources, including the user’s past behavior or demographic data.
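That change-observe-adjust loop can be sketched in a few lines. The following is a minimal illustration, not any vendor’s implementation: an epsilon-greedy learner chooses among hypothetical page layouts, observes simulated user clicks, and gradually favors whichever variant performs best. The variant names and click-through rates are invented for the example.

```python
import random

VARIANTS = ["layout_a", "layout_b", "layout_c"]
# Hidden "true" click rates, used only to simulate user behavior.
TRUE_RATES = {"layout_a": 0.05, "layout_b": 0.12, "layout_c": 0.08}

def choose(stats, epsilon=0.1):
    """Epsilon-greedy: usually exploit the best-known variant, sometimes explore."""
    if random.random() < epsilon:
        return random.choice(VARIANTS)
    return max(VARIANTS, key=lambda v: stats[v]["clicks"] / max(stats[v]["shows"], 1))

def simulate(rounds=5000, seed=42):
    random.seed(seed)
    stats = {v: {"shows": 0, "clicks": 0} for v in VARIANTS}
    for _ in range(rounds):
        v = choose(stats)                      # make a change to the system
        stats[v]["shows"] += 1
        if random.random() < TRUE_RATES[v]:    # observe how it affected behavior
            stats[v]["clicks"] += 1            # adjust: this variant looks better now
    return stats

stats = simulate()
best = max(VARIANTS, key=lambda v: stats[v]["shows"])
```

Over enough rounds, the learner shifts most of its traffic toward the strongest variant, which is the “continue to optimize performance” behavior the paragraph above describes.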
An emerging and related field of interest is that of cognitive computing—a catchall term to describe computers that interpret and “understand” human communications and respond with appropriate information. Apple’s Siri is an example of that class of software. IBM’s Watson computer famously beat human champions at “Jeopardy!” IBM defines the space as “systems [that] learn and interact naturally with people to extend what either humans or machine could do on their own. They help human experts make better decisions by penetrating the complexity of big data.” The field had long been called “artificial intelligence” and has been the dream of computer scientists since the 1950s. In the 1970s and 1980s, some of the early attempts at artificial intelligence were applied to building layout rules for electronic typesetting and publishing software.
How do these three concepts relate to one another? Personalization allows for presentation of a specific piece of information or action based on various “signals.” Those signals could be in the form of a single word entered into a search engine, a series of past purchases and current behavior on a website, social graph data or any of hundreds of attributes that can describe a user.
Machine learning looks for patterns in data and then produces an outcome—anything from a similar collection of data (for example, if the system was categorizing documents), to outliers and anomalies (in an application for fraud detection), to suggestions or recommendations (a search engine “recommends” a piece of content based on similar purchases).
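The recommendation case can be made concrete with a deliberately simplified sketch: count how often items are purchased together, then suggest the most frequent companions of a given item. The product names and purchase baskets below are invented sample data, not drawn from any real system.

```python
from collections import Counter
from itertools import combinations

# Invented past-purchase baskets (one set of items per order).
baskets = [
    {"camera", "tripod", "sd_card"},
    {"camera", "sd_card"},
    {"tripod", "camera_bag"},
    {"camera", "camera_bag", "sd_card"},
]

# Count every co-purchase pair in both directions.
co_counts = Counter()
for basket in baskets:
    for a, b in combinations(sorted(basket), 2):
        co_counts[(a, b)] += 1
        co_counts[(b, a)] += 1

def recommend(item, k=2):
    """Top-k items most frequently bought alongside `item`."""
    scores = Counter({b: n for (a, b), n in co_counts.items() if a == item})
    return [other for other, _ in scores.most_common(k)]

recommend("camera")  # "sd_card" ranks first: it appears in three camera baskets
```

Real recommenders use far richer signals, but the pattern is the same: find regularities in past data, then produce a suggestion as the outcome.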
The more signals we gather that are specific to an individual or group, and the more we use those signals when producing the result, the more personalized the output—whether a search result, a product recommendation or a suggested action or next step in a complex process. Cognitive computing essentially combines those and other approaches (such as interpreting natural language) to interpret the user request, integrate and process the appropriate data sources, and produce a result that helps the user complete a task or achieve a goal. Think of cognitive computing as the ultimate user experience: the system understands who I am and what I want and gives me the result with minimum work on my part.
Computers that think? Some think so …
The above discussion notwithstanding, the term cognitive computing is a misnomer, because cognition describes the act of thinking, and computers neither think nor understand in the literal sense. What then does the term mean? A recent Wall Street Journal article, entitled “It’s Time to Take Artificial Intelligence Seriously” (wsj.com/articles/its-time-to-take-artificial-intelligence-seriously-1408922644), described an intelligent agent that was able to schedule a meeting among various parties by responding to e-mail messages and interpreting the text responses from each individual in order to find a time that worked for all.
The experience was described as if the reporter were working with an assistant. The program provided responses and asked for alternative times and clarifications to arrive at the meeting time and date. The concept of an “intelligent agent” works well when the domain, tasks and use cases are very narrow. In fact, numerous chat bots act as intelligent agents for customer service requests and are used to augment call centers and customer help requests. So cognitive computing can emulate human thinking if the domain is narrow enough.
Rules-based, brute force approaches can be time-intensive
Applications such as those leverage use cases, rules engines and various search algorithms to render results. They perform most effectively when the domain is narrow, tasks are well defined and procedural, and the user’s context and situation can be inferred from who they are and what they are doing. Mobile devices can indicate whether the user is walking or driving, for example. In that situation, only the use cases that pertain to driving rather than being at home need be applied to the question being asked. That narrows the context and makes answering the query easier, because the scope of the query is less open-ended and therefore less ambiguous and less open to broad interpretation. Modeling those scenarios and use cases, however, is an arduous and labor-intensive process.
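The kind of context narrowing described above can be sketched as a small rules table: the same words resolve to different actions depending on whether the device reports the user is driving or at home. The contexts, keywords and intent names here are hypothetical, and a production rules engine would be far richer.

```python
# Hypothetical (context, keyword) -> intent rules table.
RULES = {
    ("driving", "food"): "find_drive_through_nearby",
    ("home", "food"): "show_delivery_options",
    ("driving", "call"): "voice_dial_handsfree",
    ("home", "call"): "open_contact_list",
}

def interpret(utterance, context):
    """Resolve the first known keyword in the utterance within the given context."""
    for word in utterance.lower().split():
        intent = RULES.get((context, word))
        if intent:
            return intent
    # No context-specific rule applies: the query stays open-ended and ambiguous.
    return "fallback_web_search"

interpret("I want food", "driving")  # resolves to a driving-specific intent
```

Narrowing by context shrinks the rules table that must be searched, which is exactly why the modeling effort pays off only when the domain stays small.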
Training set development requires content and expertise
The well-known IBM Watson system is considered an example of cognitive computing. In Watson’s case, the domain was very broad, making the challenge that much more complex. In fact, narrowing the domain while making the answers more specific, nuanced and technically detailed has been Watson’s challenge. Watson needs to be trained with content and by people with expertise. The content must contain the answers, and training with real-world questions can take months. Configuring Watson requires that the knowledge domain be contextualized for particular types of users asking particular types of questions. Personalization and contextualization are implied in the process—details of the user’s tasks and needs are incorporated into content and question/answer training sets.
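To illustrate in principle what a question/answer training set looks like, the toy sketch below pairs sample questions with the documents expected to answer them, then scores a naive word-overlap retriever against those pairs. This is not how Watson itself works; the corpus, questions and scoring method are invented for illustration, and the point is only that the content must contain the answers and that expert-built question/answer pairs are what training is measured against.

```python
# Invented corpus: the content that "must contain the answers."
corpus = {
    "doc1": "restart the router to clear a stuck dhcp lease",
    "doc2": "update firmware before changing wireless channel settings",
}

# Expert-authored training pairs: (real-world question, expected answer document).
training_pairs = [
    ("how do i fix a stuck dhcp lease", "doc1"),
    ("when should i update firmware", "doc2"),
]

def retrieve(question):
    """Naive retriever: return the document sharing the most words with the question."""
    q_words = set(question.split())
    return max(corpus, key=lambda d: len(q_words & set(corpus[d].split())))

def accuracy(pairs):
    """Fraction of training questions for which the expected document is retrieved."""
    hits = sum(1 for q, expected in pairs if retrieve(q) == expected)
    return hits / len(pairs)
```

Training then becomes an iterative loop: run the question set, inspect the misses, and refine the content and pairs—the months-long, expertise-heavy process the paragraph above describes.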
Cognitive computing, machine learning and a personalized customer experience all have a common foundation based on data and analytics. New approaches are allowing computers to more effectively cater to the needs of humans by understanding requests, anticipating needs, providing information and improving performance over time. Multiple technologies are being leveraged in new ways that are speeding up the pace of innovation.
Every organization and industry will be affected by those developments, and people will find themselves served in ways that were not conceivable just a few years ago. There is no magic at the core of those tools and approaches—they still require creative human input, judgment and expertise to produce results. The best way to leverage those developments is to consider the continuum of what the technology is capable of and to address a focused set of business and technical problems in a practical, provable way.