How AI Supports Knowledge Workers

This article originally appeared on CMSWire.

Some knowledge workers fear they are next in line to be made obsolete by artificial intelligence (AI). 

Automation, which transformed manufacturing and blue-collar jobs, is beginning to encroach on areas previously considered strictly within the realm of human intelligence and judgment: advisory services at financial services firms, fraud identification, disease diagnosis, editing and structured article creation, and language translation.

Job Destruction vs. Job Displacement 

When first introduced, sophisticated tax software appeared to threaten the jobs of income tax preparers. But it turned out to be effective only for relatively uncomplicated returns. According to the Washington Post, tax preparers — most using intelligent software to do their work — completed the same number of returns in 2011 as they did a decade earlier.

Automation's impact is mixed: jobs change rather than disappear. That outcome, however, assumes people will continually upgrade their skills, learn how to work with intelligent software, and embrace change rather than be intimidated by it.

For example, though typesetters are now a rarity, demand is high for graphic designers. The labor replaced by AI (and yes, typesetting was considered AI at the time because it used algorithms to determine layouts) lowered costs and shifted human activity to more creative tasks.

The point is this: AI is already in the workplace and will only grow in prevalence as technologies evolve.

Feeling AI's Impact in the Workplace

Making Sense of Unstructured Data and Content

The most obvious area where AI technologies will have an impact is in processing and making sense of unstructured data and content. Structured data from financial and transactional systems has been the target of business intelligence (BI) and data warehousing efforts for years.

The newest ways of dealing with structured data incorporate unstructured content to provide the “why” with the “what” (What happened? “Sales increased in New England for product line A.” Why did it happen? “Here are the market research results along with the promotional plans to explain the increase”). They also leverage unstructured data signals from voice of the customer, sentiment analysis and clickstream “electronic body language.”

Machine Learning Identifies Patterns and Surfaces Outliers 

Humans cannot keep up with the amount of data currently being produced. Clustering can identify major themes in unstructured content, and unsupervised learning algorithms can surface outliers and flag them for humans to review.
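The flag-the-outliers step can be sketched in a few lines. This toy example uses a robust median-based score rather than any particular product's method, and the daily counts are invented for illustration:

```python
import statistics

def flag_outliers(values, threshold=3.5):
    """Flag values far from the median, using the median absolute
    deviation (MAD), which is robust to the outliers themselves."""
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values)
    if mad == 0:
        return []  # no spread: nothing stands out
    return [v for v in values if 0.6745 * abs(v - med) / mad > threshold]

daily_counts = [100, 102, 98, 101, 99, 100, 500]  # one anomalous day
print(flag_outliers(daily_counts))  # [500]
```

A human then reviews only the flagged items instead of scanning every record, which is the division of labor the paragraph above describes.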

However, these tools don’t know what to look for or what to flag without a framework, which is why they need to be part of an overall analytics program. Once a person defines the rules and the patterns of interventions, the interventions can be automated.

These algorithms incorporate human judgment. This process takes time because humans first need to identify the scenarios, then develop metrics associated with the scenarios, then conceptualize the interventions. After all this is done, machines can test hypotheses and make optimization decisions.  

Machine Learning and Customer Offers

Imagine a company is testing customer promotions. The AI program ingredients include the promotion variations or components that can be combined, a historical set of data (even if the promotions are new, past behaviors can provide a baseline), a target population on which to test the promotion, and metrics that indicate success.   
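One common way to automate that test-and-optimize loop is a multi-armed bandit. Below is a minimal epsilon-greedy sketch; the variant names and success counts are invented, and a production system would track far more metrics:

```python
import random

def epsilon_greedy_pick(stats, epsilon=0.1):
    """Pick a promotion variant: usually the best performer so far,
    occasionally a random one so every variant keeps getting tested.
    `stats` maps variant name -> (successes, trials)."""
    if random.random() < epsilon:
        return random.choice(list(stats))
    return max(stats, key=lambda v: stats[v][0] / stats[v][1] if stats[v][1] else 0.0)

# Hypothetical historical baseline for three promotion variants.
stats = {"10%-off": (30, 400), "free-shipping": (55, 400), "bundle": (20, 400)}
print(epsilon_greedy_pick(stats, epsilon=0.0))  # free-shipping
```

The historical data supplies the baseline rates, the target population supplies new trials, and the success metric updates `stats` after each exposure.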

Rather than displacing the marketer, AI demands new skillsets. Marketers must raise their game by designing campaigns more creatively with additional variables to test. The process becomes more sophisticated. 

An excellent article by Ankesh Anupam, senior consultant at Wipro Digital, discussed how machine learning can identify more nuanced and dynamic customer personas in the multiple contexts of their journeys. Marketers still need to come up with content, offers and products for all of these dynamically generated multi-variant contexts. Part of the solution is processing data sources as input signals to machine learning. The other part is determining what will satisfy customers throughout those journeys and choosing the correct products to offer.

AI-Driven Search

AI can also support more effective and personalized search. Search, after all, is a recommendation engine. AI-driven search reads signals from a broad range of sources — beyond the search term and document metadata — to personalize results.

Microsoft Delve is an excellent example of AI-driven search that analyzes the information that people create and read, and who they interact with in order to surface important content. Delve works by using the Office Graph — a metadata schema that is composed of an ontology with terms and concepts connected by actions or adjectives. The Office Graph uses sophisticated machine learning techniques to connect people to the relevant content, conversations and people around them.

Office Graph = Enterprise Knowledge Graph

A relationship between a person and a project also signals a relationship with other members of the team. The searcher may also find documents that team members are reading to be important. Machine learning mechanisms in Delve look for those relationships and weight search results according to the Office Graph relationships (this is really an enterprise knowledge graph akin to Google’s knowledge graph: a set of content relationships based on metadata).

The system uses metadata for the documents — either present or derived — in the weightings and as facets upon which to further filter results. In fact, metadata models and controlled vocabularies are even more important because those structures form the foundation for the ontology. When auto-categorizers operate on documents, they operate best with a “white list” of preferred terminology. 
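The graph-based weighting idea can be illustrated with a toy example. This is a generic sketch of boosting results by shared-project relationships, not how Delve itself is implemented; the people, projects and documents are invented:

```python
# Toy knowledge graph: (subject, relation, object) triples.
EDGES = [
    ("ana", "works_on", "project-x"),
    ("ben", "works_on", "project-x"),
    ("ben", "authored", "spec.docx"),
    ("ana", "read", "roadmap.pptx"),
]

def related_people(person):
    """People who share a project with `person`."""
    projects = {o for s, r, o in EDGES if s == person and r == "works_on"}
    return {s for s, r, o in EDGES
            if r == "works_on" and o in projects and s != person}

def boost(person, doc):
    """Boost a document's score when the searcher's collaborators
    authored or read it; 1.0 means no boost."""
    team = related_people(person)
    return 1.0 + sum(0.5 for s, r, o in EDGES
                     if o == doc and s in team and r in ("authored", "read"))

# When "ana" searches, spec.docx rises because teammate ben authored it.
print(boost("ana", "spec.docx"))  # 1.5
```

The same traversal generalizes: each relationship type in the graph becomes a signal with its own weight in the ranking function.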

Robot Advisors

We're seeing the introduction of intelligent virtual assistants, which support users in complex tasks or answer help desk questions via chat. Content is at the heart of these tools — they draw from a curated, tagged corpus of content as their knowledge base. In fact, many cognitive computing applications rely on curated content, requiring businesses to create training sets to support specific tasks, such as customer service. 

AI tools can also process large amounts of unstructured content and mine that content for data and product relations. For example, chat logs and transcripts of recorded conversations contain customer questions, variations on the questions, and answers that the AI engine can ingest and use to support knowledge extraction.  
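A toy sketch of that knowledge extraction step, assuming a simple "Customer:"/"Agent:" turn-taking transcript format (real chat logs are messier and need far more robust parsing):

```python
def qa_pairs(transcript_lines):
    """Pair each customer question with the agent reply that follows it."""
    pairs, question = [], None
    for line in transcript_lines:
        if line.startswith("Customer:") and line.rstrip().endswith("?"):
            question = line[len("Customer:"):].strip()
        elif line.startswith("Agent:") and question:
            pairs.append((question, line[len("Agent:"):].strip()))
            question = None
    return pairs

log = [
    "Customer: How do I reset my password?",
    "Agent: Use the 'Forgot password' link on the sign-in page.",
    "Customer: Thanks!",
]
print(qa_pairs(log))
```

Pairs mined this way become candidate entries for the assistant's curated knowledge base, which humans then review and tag.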

Human intervention is still required to program these tools, and to develop use cases that the system will support — the systems are not yet able to self-create.  

Data Extraction Through Pattern Recognition 

AI tools can also recognize and extract patterns in text and feed them into old-fashioned faceted search. For example, product spec sheets contain technical information that a user may wish to search and filter on, but structured search cannot leverage data locked up in a PDF or on a page of text.

AI tools can process text and look for patterns and clues that indicate data. For example, the tool might spot variations on a value for “speed,” including RPM, R.P.M., revolutions per minute, shaft rotation, synchronous revolutions, speed of motor shaft, etc. An adaptive pattern recognition tool can extract that information despite the varying forms.  
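A small illustration of the idea: a single hypothetical regular expression covering a few of the "speed" variants mentioned above. A real pattern library would cover many more forms and units:

```python
import re

# Hypothetical pattern for rotational speed in free text; covers a few
# of the common spellings, not the full range a production library would.
SPEED = re.compile(
    r"(\d[\d,]*)\s*(?:RPM|R\.P\.M\.|revolutions per minute)",
    re.IGNORECASE,
)

def extract_speeds(text):
    """Return the numeric speed values found in free text."""
    return [int(m.group(1).replace(",", "")) for m in SPEED.finditer(text)]

sheet = "Rated at 1,750 RPM; max shaft speed 3600 revolutions per minute."
print(extract_speeds(sheet))  # [1750, 3600]
```

Each extracted value can then populate a structured field ("speed"), which becomes a facet users can filter on.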

Businesses can create libraries of patterns to support specific use cases for data extraction. This “docs to data” approach provides value to e-commerce providers that don’t have all of the data necessary to properly merchandise their products. While not a terribly exciting application of AI, it is very practical.

Taking Steps Toward AI 

AI is an evolution. It is already embedded in the software we use every day and take for granted.  

Think of some forms of AI — like the virtual assistant — as search on steroids. The governance elements, architectural structures and content processes that make search more effective will make AI more effective. Though approaches like deep learning can work on messier data, AI is most effective when data inputs are clean and of high quality. So any AI initiative must begin with data quality.

AI is here and will continue to change how we interact with computers. Organizations should prepare now by laying a foundation for AI as it matures in the enterprise.

Earley Information Science Team
We're passionate about enterprise data and love discussing industry knowledge, best practices, and insights.
