Expert Insights | Earley Information Science

Is Your Organization Ready To Take On An AI Project?

Written by Seth Earley | May 18, 2020 4:00:00 AM

Quite a few companies are actively experimenting with AI projects, but not many are in full production. Successful AI projects must have certain supporting processes and data in place, yet many organizations do not understand the dependencies associated with their AI projects. Often, they attempt to substitute technology for a true understanding of how their processes and data operate together and how they affect the functioning of the AI initiative. It is easy to get drawn into a sales process (what the vendor wants you to buy) and leave behind the procurement process (what your company requires).

The mechanics of implementing a technology can divert attention from the desired outcome—what you wanted to accomplish when you decided to initiate an AI project in the first place. Focusing too much on technology also leads to overlooking the process of establishing a team that will ensure that all the interdependent parts of the project mesh correctly. Without foundational elements such as clearly defined processes and well-organized data, more advanced steps such as developing chatbots or recommendation engines will not be feasible.

Assess Your Place in the Maturity Model

Understanding your company’s level of maturity with respect to readiness for AI projects will go a long way toward helping you develop a viable path for the project. Our maturity model reflects five different stages of maturity for nine functional areas, which include customer data management, product information management, and governance. Using content management as an example, the stages are as follows:

  1. Unpredictable – At this level, content is scattered throughout the enterprise and not managed in any systematic way.
  2. Aware – Content is stable but maintenance is labor-intensive. Content is siloed departmentally, and re-use of content is low.
  3. Competent – Content is managed through a comprehensive lifecycle, and rudimentary sharing takes place; audiences are broadly segmented.
  4. Synchronized – Content can be repurposed across applications and channels, and reporting of content effectiveness can be accomplished.
  5. Choreographed – At the highest level, automation-supported production allows for meaningful personalization, and accountability for content effectiveness is possible.

Although the different functional areas do not all have to be at the same level, they cannot be too disparate either. It is not possible to reach Stage 5 for overall performance if content management is at Stage 1. So take the time to assess your company’s maturity level.
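To make the disparity point concrete, here is a minimal sketch of how a maturity self-assessment might be tallied. The functional areas, the scores, and the two-stage gap threshold are all hypothetical illustrations, not part of the Earley maturity model itself; the point is simply that overall readiness is capped by the weakest area.

```python
# Hypothetical maturity scores (1 = Unpredictable ... 5 = Choreographed)
# for a few of the nine functional areas; the values are illustrative only.
maturity = {
    "content_management": 2,
    "customer_data_management": 3,
    "product_information_management": 3,
    "governance": 2,
}

def readiness_summary(scores, max_spread=2):
    """Summarize maturity and flag areas lagging too far behind the leaders.

    The max_spread threshold is an assumption for illustration: areas more
    than two stages behind the highest-scoring area are flagged as laggards.
    """
    lowest, highest = min(scores.values()), max(scores.values())
    laggards = sorted(a for a, s in scores.items() if highest - s > max_spread)
    return {"floor": lowest, "spread": highest - lowest, "laggards": laggards}

print(readiness_summary(maturity))
# The "floor" is the stage your overall performance cannot exceed until
# the weakest functional areas are brought up.
```

A summary like this makes the assessment actionable: the lagging areas become the first items on the improvement roadmap.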

Gaining New Competencies

If any of the enabling competencies are not present in your company to the extent that they need to be, you have several options. One is to develop in-house skills, and another is to outsource the activities. For many small to mid-sized companies, the best approach is a combination of the two, but it is important to know which ones to keep in house. Typically, the ones an organization should not outsource are its core competencies.

Do not outsource core competencies.

Understanding your customers is a core competency and should not be relinquished or delegated to an outside organization. This understanding is essential to personalization, which some would argue is not a core competency, but in practice it almost always is, or should be. Building a digital experience based on your company’s mental model needs to be a core competency, and if you are outsourcing that, you are missing the boat on any digital transformation that involves AI.

Aspects of an AI project that can be outsourced are developing pilots, building and tuning algorithms, and training people how to use or maintain the AI. If the information architecture is not sufficiently robust, additional support there can be helpful. Building taxonomies specific to your organization, and establishing a governance program to sustain good practices, are also supporting activities that can help your organization move up the learning curve. However, there should be a roadmap for those functions to be moved in house on a permanent basis.

Building the Team

The AI team often consists of people with job titles such as data scientist, data analyst, and data engineer—people who are scarce and command high salaries. These individuals all have key abilities: data scientists understand statistics; data analysts build visualizations and dashboards; and data engineers provide software skills. Rather than getting bogged down in job titles, though, it is better to think about the hats people wear and who needs to be on the bus to get the project rolling.

Some of the key functions in an AI project are user adoption and user experience; change management and storytelling; machine learning and research; and privacy and security. Think less about the titles and more about the functions. The AI team can operate in a centralized way, where all the people with AI skills are in a single team, or in a diffused way, where most of the AI team is centralized but the data analysts are in different departments. The latter is the most common model for smaller organizations. In a third model, the specialized model, people with data scientist skills might need to be a part of each departmental team.

But Why Are You Initiating an AI Project?

One of the signs of organizational readiness for an AI project is the existence of a valid reason for doing it. Given the challenges of taking this journey, it is important for companies to think through why they are choosing to launch an AI project. Sometimes companies just feel they should be venturing into this realm, but serious consideration should be given to all the options. Is AI the only way to accomplish the goal? The best way? The ideal application for AI is a narrow, focused, repetitive task that can be described by patterns of data to which the AI can match new data. Make sure your requirement is something that an AI solution can actually do.

Assuming you have a clear vision for the AI solution, make sure you can measure whether you have achieved the desired outcome. For example, if the goal is to improve customer satisfaction, you need to know the current level of satisfaction. What are the sources of dissatisfaction? Knowing that will help you select a strategy for improving it. Do you want a 10% improvement in customer satisfaction, or does it need to double? Either way, having metrics from both before and after the implementation is the only way you will know whether the initiative has done what your company hoped for.
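The before-and-after comparison can be sketched in a few lines. The satisfaction figures below (e.g., CSAT-style scores) and the 10% target are hypothetical numbers for illustration; the mechanics of the check are the point.

```python
# Illustrative before/after metric check; the figures are hypothetical.
def improvement(baseline, current):
    """Relative improvement over the pre-implementation baseline."""
    return (current - baseline) / baseline

baseline_csat = 3.6   # measured before the AI initiative launched
current_csat = 4.1    # measured after implementation

gain = improvement(baseline_csat, current_csat)
target = 0.10  # a hypothetical 10% improvement goal

print(f"Improvement: {gain:.1%}, target met: {gain >= target}")
```

The essential discipline is capturing the baseline before the project starts; without it, there is no denominator, and no way to claim the initiative worked.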

Always Ask AI Vendors These Questions

When you are selecting an AI software solution, start by asking the vendor how they define AI. See whether they give a puzzling, vague, or general answer, or a thoughtful, understandable one. We like this definition:

AI is software that can learn patterns from a variety of training data sets and learn to recognize complex patterns in a human-like way. Typical tasks for an AI are to understand natural language and categorize complex data sets.

Ask the vendor how their model gets trained. Often in the demos, they are using highly curated data that will perform very well. This data is unlikely to be comparable to the data you want to use in your application. Also, ask how often they update the model based on changing circumstances such as new products your company introduces. Don’t hesitate to ask about the qualifications of the implementation team. And of course ask what sort of ROI their customers are achieving.

Ready with Governance

In order to be ready for an AI project, an organization must have, or be willing to build, the structures required for effective governance. These structures include working groups, overarching councils, subject matter groups, and others. Their charters address what each group does, to whom it reports, and what it produces. Together, these elements constitute a governance program.

The groups enable centralized data management, including consistent metadata, and they establish procedures for intentional sharing of best practices. They help build the right use cases and train users in what they need to operationalize the project. In addition, they enable the development of metrics and ensure compliance with standards. Without such governing structures, projects cannot be scaled up after the pilots are completed, and inconsistencies in the user experience emerge, among other problems.

Determining organizational readiness for AI entails an array of activities, and the best place to start is a maturity assessment. Other factors such as the ability to build a team, measure outcomes, and develop a governance program are key indicators of organizational readiness.

Our team of data scientists is ready to help you develop pilots, build taxonomies, design governance programs, and create roadmaps. Give us a shout to set up a time to talk about getting started.