Over the years, many definitions of AI have emerged, and many of them are misleading--particularly the ones that describe AI as the creation of intelligent machines that work and react like humans. A better way to look at AI is to think of it simply as software that learns to recognize and react to patterns in a way that appears to emulate human reactions. AI works best when it is automating specific, repetitive tasks or helping humans do their jobs better by reducing their “cognitive load.” Embracing this view of AI will help you avoid the trap of setting unrealistic expectations about what AI can do, which is a major cause of failure for AI initiatives.
Align your AI project goals with business goals
Unrealistic expectations are just one example of misalignment with business goals. Often, projects are too broad, and lack a clear definition of success. In some cases, executives in a department want to “use AI” without really specifying what they want to use it for. Finally, it might be surprising to hear, but the processes that organizations use are not always well understood within the organization. Departments may understand their own portion, but a comprehensive view of the information flow often has not been articulated. This is important, because you can’t automate what you don’t understand.
Build a reference data architecture
The next obstacle is data management. An AI application needs to see many examples of a pattern before it can do the task it is being assigned, which is generally to detect other similar patterns. When a company is piloting an AI program, the data is usually very carefully curated and “clean.” The examples generally will not have missing data or confusing outliers. In addition, the data is not coming in from a variety of different systems that have different structures and organizing principles. In real-world applications, the data will not be as constrained. Moreover, an enterprise-wide reference architecture may not exist, which means you can’t correctly label the output. If you are looking for patterns that indicate fraud, for example, then how do you structure the results so that they go to the right person?
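As an illustration, the gap between curated pilot data and messy production data can be surfaced with a simple quality check before any records reach the model. This is only a sketch: the field names, records, and outlier threshold are hypothetical, and a real pipeline would profile many more dimensions.

```python
from statistics import median

def quality_report(records, required_fields, numeric_field, k=5.0):
    """Count missing required fields and flag extreme values in one numeric
    field using median absolute deviation (robust to the outliers themselves).
    Field names and the threshold k are illustrative assumptions."""
    missing = {f: sum(1 for r in records if r.get(f) is None)
               for f in required_fields}
    values = [r[numeric_field] for r in records if r.get(numeric_field) is not None]
    med = median(values)
    mad = median(abs(v - med) for v in values) or 1.0  # guard against MAD == 0
    outliers = [v for v in values if abs(v - med) > k * mad]
    return {"missing": missing, "outliers": outliers}

# Curated pilot data: complete records, no surprises
pilot = [{"amount": 10.0, "customer_id": 1},
         {"amount": 12.0, "customer_id": 2},
         {"amount": 11.0, "customer_id": 3}]

# Production-like data: a missing ID and an extreme value creep in
production = pilot + [{"amount": 10000.0, "customer_id": 4},
                      {"amount": 9.0, "customer_id": None}]

print(quality_report(pilot, ["amount", "customer_id"], "amount"))
print(quality_report(production, ["amount", "customer_id"], "amount"))
```

Run against the pilot batch, the report comes back clean; run against the production-like batch, it immediately surfaces the missing identifier and the extreme value--exactly the surprises that a carefully curated pilot hides.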
Avoid change management fatigue with useful communication
A third roadblock is the lack of governance, which is related to reference architecture, because the reference architecture needs to be managed by stakeholders with a common understanding of its value. Part and parcel of developing a governance process is having adequate socialization and change management for the new approach. Change management is a concept that has been around for decades, and one executive expressed to me (rather wearily) that organizations have “change management fatigue.” But communication is vital--people don’t trust what they don’t understand. They have to know what’s inside the black box in order to be motivated to do their part.
Governance includes metrics that indicate whether a given initiative has been successful. Both the baseline and the subsequent performance outcome need to be measured. If a company has defined its idea of success, that is great, but if it doesn’t verify the results, then all the effort that went into automating the task ends up seeming futile, and executives may be quick to call it a failure and end the initiative. Define the task narrowly, and measure it precisely.
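The point about baselines can be reduced to a very small check: no "after" number means anything without a "before" number. A hedged sketch, where the metric, the figures, and the 10% success threshold are all invented for illustration:

```python
def evaluate_initiative(baseline, outcome, min_improvement=0.10):
    """Compare post-deployment performance against a measured baseline.
    The 10% success threshold is a hypothetical example, not a standard."""
    improvement = (outcome - baseline) / baseline
    return {"improvement": round(improvement, 3),
            "success": improvement >= min_improvement}

# e.g. fraud cases caught per 1,000 transactions, before vs. after automation
result = evaluate_initiative(baseline=4.0, outcome=5.2)
print(result)
```

The calculation is trivial; the governance work is agreeing on the metric, capturing the baseline before rollout, and committing to a threshold before anyone is tempted to move it.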
Keep communication lines open after rollout
Once the AI solution is in production, socialization should not stop; users should always be in the loop for evaluating and modifying the system. Two examples with contrasting outcomes illustrate this point. One was a large entertainment venue that wanted to know how visitors were engaging on their website. The second was a company that wanted to identify risks of internal cybersecurity breaches. In both cases, the goal was to detect underlying patterns of behavior. And in both cases, the solution was delivered and the customers got good value. However, in the case of the company that was addressing security issues, the client interacted almost daily with questions, comments, and adjustments. They ended up with a fully operationalized system. In the case of the entertainment company, no such adjustments were made, and at the end of three months, users were asking questions like, “Where is the dashboard?”
In short, AI development is an iterative process. People need ongoing reminders of why they are putting in the effort to maintain data quality, be consistent in their data governance, and modify the reference architecture to accommodate new products and processes. In addition, the process of experimenting, testing, rebuilding, and measuring should continue. Experimentation is about whether a company can build the right model: does it understand what assets are available and how those assets can be integrated? Operationalization is about taking a successful model and delivering it into the hands of decision makers. You know it’s working when business decisions are being made based on metrics coming out of the AI application.
Set and adhere to centralized standards
Centralized standards are what allow for decentralized decision making. The reference architecture needs to be consistent across departments in order for the data to flow properly and be integrated no matter what the source is. People need to be incentivized to use the systems, align them with business goals, measure the results, and incorporate them into their existing jobs. Conversely, the lack of centralized standards limits what can be done in terms of having practical, scalable, feasible, and cost-effective AI systems. These investments need to be made in order to have measurable business outcomes.
Case study: how one little thing can ruin everything
Something to keep in mind is that it’s not necessarily the big things that can tank an AI application. In one case, a company was trying to identify highly qualified leads to join their loyalty program. They had access to a lot of data, including transactional data, products their customers looked at, and other digital actions. The company wanted a way of automatically scoring the leads and prioritizing which ones the sales organization should spend their high-touch time on.
The solution was very successful--it identified with very high certainty 30% more potential members by matching behaviors with those who had in fact become members. However, about a year later, they made a very small change in the definition of a qualified lead. Suddenly, the model did not work, and began flagging people who did not match their new definition. The people who changed the definition did not connect with the technical people to adjust the model, and although the change seemed minor, it undermined the functionality of the system.
Gotta think big picture
Most companies are in pilot stages with AI. Early adopters are moving from pilots into production, but for the most part, companies are experimenting. However, since organizations are increasingly driven by data, being able to use it effectively to automate specific tasks will become progressively more important. Without the foundational principles in place for organizing and managing the data, the outcome will not be the one that’s expected. Yet getting funding for these seemingly mundane tasks is often a challenge.
Many times when we are working with organizations to improve their data management, it is hard to convince C-level staff that there will be an ROI on information architecture. A chief data officer at a leading bank said to his colleagues, “We can do this but it will be expensive and will take five years. We can fix the problem but don’t ask for an ROI.” A better alternative is to say, “This technology will support processes that will support business objectives. Achieving those objectives will get us where we need to be as a business.” Decision makers need to see the linkage. It’s important to measure the impact on a task that has been automated because that’s a piece of the puzzle but it takes time to have an impact on overall business performance and to demonstrate it in a convincing way.
Align, focus, socialize
So start by aligning your AI project with overall business objectives, but pick a narrow, focused task to begin with, and manage expectations carefully. Second, focus strongly on data. The AI will rely on data, and it needs to be clean and well organized, with a well-articulated reference architecture in place. Be aware of the difference between the data you use for a pilot and the data you use when you scale up. Finally, make socialization a central component of your plan, so that change is managed before the pilot project is even off the ground. Stakeholders need a full and detailed understanding of the intent of the AI project and must believe in the process that will be used to carry it out. With these elements in place, your company will avoid the major pitfalls of launching an AI initiative.
Our team of information architecture experts is ready to help your team launch a successful AI initiative. Give us a shout to set up time to talk about how we can help.