
How AI Projects Fail, and 3 Key Ingredients for Success

Over the years, many definitions of AI have emerged, and a lot of them are misleading--particularly the ones that describe AI as the creation of intelligent machines that work and react like humans. A better way to look at AI is to think of it simply as software that learns to recognize and react to patterns in a way that appears to emulate human reactions. AI works best when it is automating specific, repetitive tasks or is helping humans do their jobs better by reducing their “cognitive load.” Embracing this view of AI will help you avoid the trap of setting unrealistic expectations about what AI can do, which is a major cause of failure for AI initiatives.

Align your AI project goals with business goals

Unrealistic expectations are just one example of misalignment with business goals. Often, projects are too broad and lack a clear definition of success. In some cases, executives in a department want to “use AI” without really specifying what they want to use it for. Finally, it might be surprising to hear, but the processes organizations use are not always well understood within the organization itself. Departments may understand their own portion, but a comprehensive view of the information flow often has not been articulated. This matters because you can’t automate what you don’t understand.

Build a reference data architecture

The next obstacle is data management. An AI application needs to see many examples of a pattern before it can do the task it is being assigned, which is generally to detect other, similar patterns. When a company is piloting an AI program, the data is usually carefully curated and “clean”: the examples generally will not have missing values or confusing outliers, and the data is not coming in from a variety of systems with different structures and organizing principles. In real-world applications, the data will not be so constrained. In addition, an enterprise-wide reference architecture may not exist, which means you can’t correctly label the output. If you are looking for patterns that indicate fraud, for example, how do you structure the results so that they go to the right person?
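To make this concrete, here is a minimal sketch of the kind of validation layer that production data needs but carefully curated pilot data rarely exercises. The schema fields, types, and thresholds are hypothetical, invented purely for illustration; the point is that records from multiple source systems get checked against a shared reference schema, and problem records get routed to a person rather than silently fed to the model.

```python
# Minimal sketch of a validation layer between source systems and a model.
# The schema fields, types, and thresholds here are hypothetical.

REFERENCE_SCHEMA = {
    "transaction_id": str,
    "amount": float,
    "account_id": str,
}

AMOUNT_OUTLIER_LIMIT = 1_000_000.0  # flag implausible values rather than ingesting them

def validate_record(record: dict) -> list:
    """Return a list of problems; an empty list means the record is usable."""
    problems = []
    for field, expected_type in REFERENCE_SCHEMA.items():
        value = record.get(field)
        if value is None:
            problems.append(f"missing field: {field}")
        elif not isinstance(value, expected_type):
            problems.append(f"wrong type for {field}: {type(value).__name__}")
    amount = record.get("amount")
    if isinstance(amount, float) and amount > AMOUNT_OUTLIER_LIMIT:
        problems.append(f"outlier amount: {amount}")
    return problems

# Pilot data tends to pass cleanly; production feeds from many systems do not.
records = [
    {"transaction_id": "t1", "amount": 120.50, "account_id": "a9"},       # clean
    {"transaction_id": "t2", "amount": None, "account_id": "a3"},         # missing value
    {"transaction_id": "t3", "amount": 5_000_000.0, "account_id": "a7"},  # outlier
]

for record in records:
    problems = validate_record(record)
    if problems:
        print(record["transaction_id"], "-> route to a data steward:", problems)
    else:
        print(record["transaction_id"], "-> pass to the model")
```

The routing step at the end is where the reference architecture earns its keep: without an agreed structure for the output, there is no “right person” to send flagged results to.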

Avoid change management fatigue with useful communication

A third roadblock is the lack of governance, which is related to reference architecture, because the reference architecture needs to be managed by stakeholders with a common understanding of its value. Part and parcel of developing a governance process is having adequate socialization and change management for the new approach. Change management is a concept that has been around for decades, and one executive expressed to me (rather wearily) that organizations have “change management fatigue.” But communication is vital--people don’t trust what they don’t understand. They have to know what’s inside the black box in order to be motivated to do their part.

Governance includes metrics that indicate whether a given initiative has been successful. Both the baseline and the subsequent performance outcome need to be measured. It is great if a company has defined its idea of success, but if it never verifies that the definition was met, all the effort that went into automating the task ends up seeming futile, and executives may be quick to call the initiative a failure and end it. Define the task narrowly, and measure it precisely.
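As a hypothetical illustration of “define narrowly, measure precisely” (the metric names and numbers below are invented, not drawn from any client engagement), the same narrow metrics can be captured before automation and again after deployment, so the comparison is explicit rather than anecdotal:

```python
# Hypothetical illustration: compare the same narrow metrics before and after rollout.
baseline = {"cases_reviewed_per_day": 40.0, "error_rate": 0.12}         # measured pre-rollout
post_deployment = {"cases_reviewed_per_day": 65.0, "error_rate": 0.07}  # measured post-rollout

for metric, before in baseline.items():
    after = post_deployment[metric]
    change = (after - before) / before * 100
    print(f"{metric}: {before} -> {after} ({change:+.1f}%)")
```

The arithmetic is trivial; the discipline is not. If the baseline was never captured, there is nothing to put in the first dictionary, and the initiative’s value becomes a matter of opinion.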

Keep communication lines open after rollout

Once the AI solution is in production, socialization should not stop; users should always be in the loop for evaluating and modifying the system. Two examples with contrasting outcomes illustrate this point. One was a large entertainment venue that wanted to know how visitors were engaging on their website. The second was a company that wanted to identify risks of internal cybersecurity breaches. In both cases, the goal was to detect underlying patterns of behavior. And in both cases, the solution was delivered and the customers got good value. However, in the case of the company that was addressing security issues, the client interacted almost daily with questions, comments, and adjustments. They ended up with a fully operationalized system. In the case of the entertainment company, no such adjustments were made, and at the end of three months, users were asking questions like, “Where is the dashboard?”

In short, AI development is an iterative process. People need ongoing reminders of why they are putting in the effort to maintain data quality, be consistent in their data governance, and modify the reference architecture to accommodate new products and processes. In addition, the cycle of experimenting, testing, rebuilding, and measuring should continue. Experimentation is about whether a company can build the right model: does it understand what assets are available and how those assets can be integrated? Operationalization is about taking a successful model and delivering it into the hands of decision makers. You know it’s working when business decisions are being made based on metrics coming out of the AI application.

Set and adhere to centralized standards

Centralized standards are what allow for decentralized decision making. The reference architecture needs to be consistent across departments so that data can flow properly and be integrated no matter what the source is. People need to be incentivized to use the systems, align them with business goals, measure the results, and incorporate them into their existing jobs. Conversely, the lack of centralized standards limits what can be done in terms of building practical, scalable, feasible, and cost-effective AI systems. These investments need to be made in order to have measurable business outcomes.

Case study - how one little thing can ruin everything

Something to keep in mind is that it’s not necessarily the big things that can tank an AI application. In one case, a company was trying to identify highly qualified leads to join their loyalty program. They had access to a lot of data, including transactional data, products their customers looked at, and other digital actions. The company wanted a way of automatically scoring the leads and prioritizing which ones the sales organization should spend their high-touch time on.

The solution was very successful--it identified, with very high certainty, 30% more potential members by matching behaviors with those of people who had in fact become members. However, about a year later, the company made a very small change in the definition of a qualified lead. Suddenly the model no longer worked: it began flagging people who did not match the new definition. The people who changed the definition did not connect with the technical team to adjust the model, and although the change seemed minor, it undermined the functionality of the system.
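A minimal sketch of how this kind of failure works, with invented criteria standing in for the client’s actual ones: the model’s learned behavior is frozen at training time, while the business definition quietly moves out from under it.

```python
# Hypothetical illustration of label-definition drift in a lead-scoring model.
# Old definition at training time: "qualified" means >= 3 transactions.
# New business definition: >= 3 transactions AND a visit to the loyalty page.

def model_predicts_qualified(lead: dict) -> bool:
    # Behavior learned from historical labels, frozen at training time.
    return lead["transactions"] >= 3

def business_definition_qualified(lead: dict) -> bool:
    # Definition changed later, without retraining the model.
    return lead["transactions"] >= 3 and lead["visited_loyalty_page"]

leads = [
    {"id": "lead-1", "transactions": 5, "visited_loyalty_page": True},
    {"id": "lead-2", "transactions": 4, "visited_loyalty_page": False},
]

for lead in leads:
    if model_predicts_qualified(lead) != business_definition_qualified(lead):
        # These mismatches accumulate silently until users notice "wrong" flags.
        print(lead["id"], "flagged by the model but unqualified under the new definition")
```

Nothing in the model broke; its training labels simply no longer mean what the business means. That is why definition changes need to flow through governance to the technical team.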

Gotta think big picture

Most companies are in pilot stages with AI. Early adopters are moving from pilots into production, but for the most part, companies are experimenting. Since organizations are increasingly driven by data, being able to use that data effectively to automate specific tasks will become progressively more important. Without the foundational principles in place for organizing and managing the data, however, the outcome will not be the one that’s expected. Yet getting funding for these seemingly mundane tasks is often a challenge.

Many times when we are working with organizations to improve their data management, it is hard to convince C-level staff that there will be an ROI on information architecture. A chief data officer at a leading bank said to his colleagues, “We can do this, but it will be expensive and will take five years. We can fix the problem, but don’t ask for an ROI.” A better alternative is to say, “This technology will support processes that will support business objectives. Achieving those objectives will get us where we need to be as a business.” Decision makers need to see the linkage. It’s important to measure the impact on a task that has been automated, because that’s a piece of the puzzle, but it takes time to have an impact on overall business performance and to demonstrate it in a convincing way.

Align, focus, socialize

So start out by aligning your AI project with overall business objectives, but pick a narrow, focused task to start with, and manage expectations carefully. Second, focus strongly on data. The AI will rely on data, and it needs to be clean and well organized, with a well-articulated reference architecture in place. Be aware of the difference between the data you use for the pilot and the data you use when you scale up. Finally, make socialization a central component of your plan, so that change is managed before the pilot project is even off the ground. Stakeholders need a full and detailed understanding of the intent of the AI project, and they need to believe in the process that will be used to carry it out. With these elements in place, your company will avoid the major pitfalls of launching an AI initiative.

Our team of information architecture experts is ready to help your team launch a successful AI initiative. Give us a shout to set up time to talk about how we can help.

Seth Earley
Seth Earley is the Founder & CEO of Earley Information Science and the author of the award-winning book The AI-Powered Enterprise: Harness the Power of Ontologies to Make Your Business Smarter, Faster, and More Profitable. He is an expert with more than 20 years of experience in Knowledge Strategy, Data and Information Architecture, Search-based Applications, and Information Findability solutions, and has worked with a diverse roster of Fortune 1000 companies, helping them achieve higher levels of operating performance.
