Knowledge management has had a bad rap. Since its introduction in the early 90s, it has cycled through periods of popularity, and in some of those cycles it has been significantly devalued. That, at least, has been the story of KM's online incarnation.
Knowledge has been passed on for centuries through written words and apprenticeships; formal teaching and training; and cultural experience and folk teachings. Knowledge management as a digital endeavor started with early collaboration tools: listservs, online discussions, communities, bulletin boards, and the like, as well as their corporate "groupware" cousins such as Lotus Notes and SharePoint.
The idea of connecting people and their knowledge and expertise was conceived long before that, though. In 1968, the “Mother of All Demos” demonstrated personal computing concepts that were later elaborated by Xerox PARC and commercialized by Apple. Now knowledge management is experiencing something of a revival, as its value in enabling AI is being increasingly recognized.
“We Need to Get More Information Online”
This was the problem statement that one customer of mine articulated back in the early 90s when I started my company. And he was right. Having the world’s knowledge widely accessible was still a long way from reality, but the explosion of content and information on corporate intranets and on the web became overwhelming and demanded a mechanism for easy and comprehensive access.
IBM and Lotus came out with Discovery Server, which indexed content to enable information retrieval and expertise location. That tool evolved into IBM OmniFind, and its DNA was ultimately used in Watson.
How AI Depends on Knowledge
It makes sense that a suite of cognitive technologies such as Watson began with knowledge management. The phrase "cognitive computing" brings to mind a computer with a mind, but that metaphor is misleading: computers, no matter how sophisticated, do not think; they help support human cognition.
Cognitive AI reduces the "cognitive load" on the human. It helps us process information and make decisions with less mental work by surfacing information as we need it. The "right information at the right time for the right person" has been the mantra of knowledge management for years, and it is today's goal of personalization and recommendation algorithms.
Everything that an organization does is based on knowledge flows. Organizations take in knowledge and information and produce knowledge and information. Products are materials plus knowledge. The device that I am working on and the magical device in my pocket are made of sand, metal, and oil, all very cleverly arranged. Products developed today, no matter what the industry, are more knowledge-intensive and use less physical material than products manufactured even a few years ago.
Why AI and ML Will NOT Come to the Rescue
For many years, I have been preaching the need for knowledge structures to support artificial intelligence. In an article written several years back, I made the statement that artificial intelligence requires an intentional approach to information architecture. The premise is: “There’s No AI without IA” (There’s no artificial intelligence without information architecture).
I came to this conclusion after extensively researching applications that were supposedly "AI-powered," especially those in the cognitive realm, such as the bots and virtual assistants being sold as the answer to call center and customer service automation. In each case, I would ask vendors how their tools worked, how they were trained, and how new functionality was developed and deployed.
The answers ranged from "that's proprietary" to jargon-filled nonsense about how their algorithms simply "learned from all of the data" without human intervention. "You don't even need to define the problem it needs to solve," stated one particularly bold huckster. I was shown admin interfaces containing lists of question/answer pairs, padded with phrase variations and misspellings so the bot could identify intent (which is not how intent classification works), lists that would be impossible to manage and maintain. But my favorite answer was "oh, well, the customer has a knowledge base"... Really? The customer has a knowledge base? That particular flavor of non-answer simply assumes away the problem. No, the knowledge source is exactly the problem that needs to be solved.
AI is fundamentally about classification. Algorithms classify signals to separate them from noise. Image recognition classifies these images as cats and those as dogs. These x-rays are classified as either cancer or not cancer; auto parts are either good quality or defective. Cognitive assistants first classify a phrase variation as an intent (a signal meaning a particular thing) and then use that signal to retrieve a matching piece of information. Content is classified as the right response to that signal.
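The classify-then-retrieve pattern described above can be sketched in a few lines. This is a minimal illustration, not a production intent classifier; the intents, phrase variations, and responses are hypothetical, and the similarity measure is a simple bag-of-words cosine rather than a trained model.

```python
from collections import Counter
import math

# Hypothetical training data: phrase variations labeled with an intent,
# plus a knowledge-base response mapped to each intent.
TRAINING_PHRASES = {
    "reset_password": ["reset my password", "forgot password", "cannot log in"],
    "return_item": ["return a product", "send item back", "refund request"],
}
RESPONSES = {
    "reset_password": "Use the 'Forgot password' link on the sign-in page.",
    "return_item": "Start a return from your order history within 30 days.",
}

def bag_of_words(text):
    """Represent text as word counts (the 'signal' to classify)."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two bag-of-words vectors."""
    num = sum(a[w] * b[w] for w in set(a) & set(b))
    den = math.sqrt(sum(v * v for v in a.values())) * \
          math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def classify_intent(utterance):
    """Classify an utterance as the intent whose phrases it most resembles."""
    vec = bag_of_words(utterance)
    best_intent, best_score = None, 0.0
    for intent, phrases in TRAINING_PHRASES.items():
        for phrase in phrases:
            score = cosine(vec, bag_of_words(phrase))
            if score > best_score:
                best_intent, best_score = intent, score
    return best_intent

def answer(utterance):
    """Retrieve the content classified as the right response to the signal."""
    intent = classify_intent(utterance)
    return RESPONSES.get(intent, "Sorry, I don't have an answer for that.")
```

The two-step structure is the point: the phrase is first classified as an intent, and only then is content retrieved for that intent. Everything downstream depends on the quality of those intents and responses, which is precisely the knowledge-architecture work the vendors hand-waved away.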
However, AI cannot judge the value of a piece of content. It cannot fix content in the "knowledge base" that does not solve a problem. If the information is missing, AI cannot fill it in. AI can help improve, curate, and tag content through semi-automated indexing, but we first need the architecture and reference data: the terms and concepts that are important to the organization. These take the form of the taxonomies and ontologies that provide the knowledge scaffolding of the enterprise.
AI needs structure — it needs to understand the soul of the business. That soul is the ontology that contains the problems, solutions, roles, processes, questions, answers, topics, content types, customer types, product categories, equipment types, attributes, regions, skill areas, research areas, and more — every concept that the business uses in its operations. All of those have to be defined and mapped if AI is to function.
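To make the idea of an ontology concrete, here is a minimal sketch of one as a data structure: concepts connected by typed relationships, with a walk up the "broader" links of a taxonomy. The concept names and relation types below are hypothetical examples, not a standard vocabulary.

```python
from collections import defaultdict

class Ontology:
    """A toy ontology: concepts (strings) linked by named relations."""

    def __init__(self):
        # relation -> subject -> set of objects,
        # e.g. "broader" -> "laptops" -> {"hardware"}
        self.relations = defaultdict(lambda: defaultdict(set))

    def add(self, subject, relation, obj):
        self.relations[relation][subject].add(obj)

    def broader_chain(self, concept):
        """Walk 'broader' links from a concept to the top of the taxonomy."""
        chain = []
        current = concept
        while self.relations["broader"].get(current):
            current = next(iter(self.relations["broader"][current]))
            chain.append(current)
        return chain

# Hypothetical slice of an enterprise ontology: product categories,
# processes, and roles, all mapped to one another.
onto = Ontology()
onto.add("laptops", "broader", "hardware")
onto.add("hardware", "broader", "products")
onto.add("battery-replacement", "applies-to", "laptops")
onto.add("field-service-rep", "handles", "battery-replacement")
```

Even this toy version shows why the mapping work matters: a bot can only route a battery question to the right procedure and role if someone has defined "battery-replacement," "laptops," and "field-service-rep" and connected them. Real implementations typically use standards such as SKOS or OWL rather than a hand-rolled structure.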
The Knowledge Problem
Since cognitive assistants seek to answer questions and provide information to support specific tasks, where does the information come from? The same things that are needed to train a human are needed to train a virtual assistant. An FAQ bot needs the FAQs. A troubleshooting bot needs trouble codes and procedures. Knowledge creation is uniquely human and comes from the creative application of experience and expertise to a problem.
When engineers come up with a unique product design, they need to define features and functions, installation procedures and guides, and the support content that customers and field service or call center reps need to do their jobs. There is a great deal of captured, codified knowledge in the enterprise in the form of processes, procedures, workflows, system designs, software, reference materials, training content, presentations, white papers, methodologies, templates, exemplars, and other high-quality, highly valuable knowledge assets that comprise the competitive advantage of the organization.
The enterprise competes on knowledge. This includes knowledge that is part of people's jobs and that they learn over a period of years: knowledge of customer needs, how to communicate with customers, and what resonates with different audiences.
Over time, organizations learn how best to serve customers, how to differentiate from the competition, and how to help customers choose products, use them, get the most from them, fix them or bring them in for service, and maintain, upgrade, and replace them. There is an enormous body of knowledge, from how to procure parts or ingredients to the best ways to manufacture, handle, or transport finished goods. The physical supply chain is inextricably linked to a knowledge and information supply chain.
So how do people find products, services, and solutions? In the pre-digital days, the knowledge they needed came directly from other people or written materials. In our digital world, most people are well versed in products and options because information is so readily available to them. There is still a role for humans, but not to provide the basic education that they once did.
There is a looming knowledge crisis in the corporate world because human expertise is becoming scarcer. In many cases, it takes years of on-the-job experience to master an area. Not only do people have less patience for this, but the nature of work is changing: fewer people entering the workforce want to spend ten or twenty years in the same industry.
Fortunately, more human expertise is being designed into applications and products so they are easier to service (or replace if the knowledge of manufacturing efficiency is taken far enough) and easier to operate. However, to do this successfully, subject matter expertise needs to be made explicit by capturing experience before people retire or leave for another organization or industry. More and more service is being enabled through digital channels because those technologies, unlike human expertise, scale up very well. But again, where does the knowledge come from? It has to be captured and structured in a way that allows the digital machinery of the organization to serve it up in the correct context for users to accomplish a particular objective.
Large service manuals have to be broken up and chunked to provide direct access to the information needed to answer a question. Years ago, when I was working with a medical insurance provider, claims processors found it very difficult to locate exactly what they needed; claims policy information was buried in 300-page documents. When people need an answer, they don't want to search through a hundred search results and then open one that is itself a 300-page document. They just want the answer. That is why breaking content up into pieces is so useful for a human. And the same structure that allows a human to answer a question will allow a bot to answer a question. That is the beauty of having information well organized and searchable.
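The chunking step itself is mechanical once the document has recognizable structure. Here is a minimal sketch that splits a manual into retrievable pieces at numbered section headings; the heading pattern and the sample text are illustrative assumptions, since real manuals vary widely in format.

```python
import re

# Assumes sections begin with numbered headings like "1 Overview" or
# "1.1 Eligibility" on their own line; real manuals may need richer rules.
HEADING = re.compile(r"^\d+(\.\d+)*\s+.+$", re.MULTILINE)

def chunk_manual(text):
    """Split a manual into {heading, body} chunks at numbered headings."""
    matches = list(HEADING.finditer(text))
    chunks = []
    for i, m in enumerate(matches):
        start = m.end()
        end = matches[i + 1].start() if i + 1 < len(matches) else len(text)
        chunks.append({"heading": m.group().strip(),
                       "body": text[start:end].strip()})
    return chunks

# Hypothetical fragment standing in for a 300-page policy manual.
manual = """\
1 Claims Overview
General rules for submitting a claim.
1.1 Eligibility
Who may file a claim and when.
2 Appeals
How to appeal a denied claim.
"""
chunks = chunk_manual(manual)
```

Each chunk can then be indexed and retrieved on its own, so a searcher (or a bot) lands on the few sentences that answer the question rather than on the whole document.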
In addition, chunked knowledge is also needed for personalization. Messaging pieces can be recombined and offered to different audiences with slight variations. Machine learning algorithms can further fine-tune the variants for particular audiences and contexts.
One large tech organization serves up four million knowledge objects per day across channels, sites, and contexts. Those components enable functionality in every aspect of marketing, service, support, e-commerce, product development, and the internal processes that enable those experiences. At that scale, other signals about the user allow automated processes to fine-tune the exact answer to put into someone’s hands. Industry, role, equipment owned, configurations, technical expertise, and other weak signals can be correlated with knowledge usage and help to prioritize exactly the right information for others in the same or similar circumstance.
AI Content? No, Content…
I recently heard of one enterprise that was creating a group for "AI-ready content." While a worthy objective, it raises the question: why a new group? Why new content? AI content, that is, content for bots and virtual assistants, needs to be carefully aligned with a use case. It needs to serve a specific purpose and help a user with their task.
Hold on: shouldn't all content be written with a purpose and a user in mind? Yes! I once worked with a government agency that had 20 content writers in a group covering a particular area of Medicare. I held up a document and asked: What is this? Who is it for? Why should they read it? What value does it have? No one could answer me. No one could tell me what that group did from the customer's perspective. That is why we need a more focused approach to content: not an AI content group, but a content group with purpose.
The point is that content for AI is simply content, and mature content operations are necessary to build training material for virtual assistants. The same discipline will make information easier to use for everyone, solving problems today while preparing for the future.
This article originally appeared on CustomerThink.com.