Shiny Things and Bad Data: The Two Biggest Challenges Facing CIOs and CTOs

When I speak to CIOs and CTOs, they are grappling with two things:

  • Staying up to date with new capabilities that the business either wants or needs, and
  • Building the foundational data capabilities to enable digital agility

“Squirrel!”

One of the characters in Pixar’s 2009 animated hit Up was a dog named Dug whose attention would be instantly hijacked by the sight of a squirrel. CIOs and business leaders have plenty of squirrels competing for their attention these days. Change is happening at a faster rate; agile, born-digital competitors are threatening to upend longstanding business models; and digital capabilities increasingly depend less on the tools themselves and more on rethinking the business process and the customer value proposition. When business leaders come to IT with an ask, the things they want are not always aligned with what they really need. Shiny new technologies such as cognitive computing, AI, bots, the Internet of Things (IoT), personalization, and machine learning have intrinsic conceptual appeal, but novelty should not be confused with business value. Nevertheless, management may read an article about technologies that are not fully baked, or that are not cost effective at the present level of industry maturity, and think “we need this.” Or they hear a vendor pitch that, though not an outright lie, promotes “aspirational functionality” that is not realistic at present (OK, they are lying).

Perhaps that is a little unfair. Many new products are coming into a marketplace that is continually evolving. New marketing and customer engagement technologies number in the thousands. Data integration, normalization, and management technologies are proliferating almost as quickly. New software-as-a-service stacks for every department and function are causing further fragmentation of processes, content, and data. Some initiatives that are theoretically possible may not yet be practical. Much of the functionality of cognitive computing and virtual assistant technology is still far from being cost effective or scalable. But organizations do need to get outside their comfort zone and try some of these new things – sometimes knowing they are not yet fully baked. The challenge is in sorting out what is ready and practical from what is still in the experimental, science-project stage, and knowing what to do with each.

What exactly does that mean?

I attended a presentation at an organization that was investing an enormous amount of money in a phased approach to personalization. A slide in the strategy deck for the leadership team said:

  • Phase one – limited personalization implemented
  • Phase two – personalization with demographic and interest-specific content
  • Phase three – dynamic machine learning-based personalization
  • Phase four – predictive personalization enabled by machine learning
  • Phase five – AI-driven personalization with advanced prescriptive algorithms

First of all, these terms meant very little in terms of practical approaches.  They were jargon-filled and buzzword-compliant. Leadership always has to ask the question “so what?”  What does that mean?  What is “limited personalization”?  What will it look like?  What will it mean for the user?  For the departments that have to support the capability? 

After the vendor describes functionality in tangible, specific terms with actual examples, the next question is “How is this next phase different?”  What demographics are going to be meaningful?  How will we message these different targets with different information?  What interests are useful for personalization? 

If those answers make sense, terrific.  If they don’t, it’s time to pause and look for what’s real and what’s noise.  In the talk that I attended, the team was unable to describe what the rest of the phases meant.  What is the difference between dynamic machine learning and predictive machine learning?  Is the distinction meaningful?  Are the differences academic and technical or are there practical implications? 

How much do you need to understand?

Part of any investigation is exploring what you don’t yet fully understand. But there has to be a foundational explanation that makes sense, with a range of options to explore and questions to answer. If that basic understanding is not there, and this is a pure exploration of something completely new, then go into the project knowing that.

In this situation, when the design team began to develop a content architecture to support personalization even at a basic level, the company had no idea how its various audiences differed from one another or what content to use to personalize the message. The message was really the same across the different audiences. The company lacked the underlying infrastructure and strategy to develop personalized content, let alone build “advanced predictive, IoT-enabled smart machine learning dynamic AI-enabled prescriptive…” and so on. It was nonsense. But no one asked the hard questions, and no one would say that the emperor (the technology vendor) had no clothes.

Here is the question that always needs to be asked: “So what?” What difference will this (approach, tool, technology, methodology, term, even data element) make? Why do we need it? What will it do for us, the user, our customer, the process, the system, the model, whatever element is being affected? When users ask to capture data for a process, the question is “What will you do with it?”

What does the user really need?

At the end of the day, all of this comes down to use cases and user scenarios – a day in the life of the user. What are the things that they need to do to solve their problem or achieve their goal? It is very easy to lose sight of the day-to-day tasks and activities of the people whom our technology is serving.

I once worked with a large Medicare administrative contractor – one of the insurance processing organizations that handle Medicare claims.  Entire departments of people were churning out hundreds of documents about claims regulations and processes each week.  When we asked these people who they were creating the content for and what the purpose was, no one knew.  Further investigation revealed that much of the content was never even read or downloaded by anyone! 

Do you really need 600 applications to distribute products?

Technology and business processes have become so complex in large enterprises that it is difficult to understand why many of them exist. They exist because they are part of someone’s job or part of a legacy environment. One distributor of retail goods used 600 applications to run its business. No one understood that ecosystem and its underlying complexity. At one financial services firm, nearly an entire department of people who managed data was eliminated. Guess what? No one complained. This is why digital transformations are such incredible opportunities to question everything and to keep asking “So what?”

What the business needs is not a new application but a questioning of fundamental assumptions about the things that are already in place and that everyone accepts as part of the process. When a new application is being considered, that question is even more important.

It’s not fun or sexy

The reason this is so difficult is that it is not fun or sexy. The answers seem obvious. The process is taken for granted. Questioning and examining the basics does not get people excited. If the process is broken, the objective is to “just fix it” – don’t tell me it’s broken or how broken it is.

Digging deeper into underlying assumptions or core processes means potential disruption and the risk of impacting short-term profits. The concept of a digital transformation has lost its meaning and become a catch-all for just about any technology initiative. The true value is in questioning the obvious and looking for the real need, instead of becoming fascinated by the shiny new thing that is making headlines at the moment.

The other challenge that increasingly stands in the way of realizing the benefits of digital transformation lies in the thoroughly unsexy area of building foundational data capabilities.

Digital Agility and the Data-driven Business Mindset

CIOs have to deal with an enormous amount of process complexity built up over years of technology cycles and business evolution. Today’s constantly connected, multi-channel, multi-device world requires end-to-end data flows throughout those processes. New digital technologies – especially those related to the customer experience – require revamped or entirely new supporting processes. Integrated digital marketing requires a very different skillset, mindset, and organizational design. A 360-degree understanding of the customer – the holy grail for many business leaders – requires that data move through digital value chains without the friction caused by brittle integrations or manual processes.

Digital transformations require a view of processes that may turn traditional value chains upside down – for instance, disrupting field service with predictive analytics that trigger preventive maintenance rather than having the service staff wait for a phone call. This entails onboarding and managing new data streams, understanding how to interpret those streams, and instrumenting the appropriate analytics to monitor various points in the value stream.
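To make that concrete, here is a minimal sketch in Python of what such a trigger might look like. Everything in it – the device IDs, the risk threshold, the work-order function – is an invented illustration under assumed names, not a reference implementation:

```python
# Illustrative sketch only: the device readings, the scoring threshold, and
# the work-order API are hypothetical stand-ins, not a specific product.

FAILURE_RISK_THRESHOLD = 0.8  # assumed cutoff, tuned by the business


def create_work_order(device_id: str, risk: float) -> None:
    # Placeholder for an integration with a field-service system.
    print(f"Preventive maintenance scheduled for {device_id} (risk={risk:.2f})")


def evaluate_devices(risk_scores: dict[str, float]) -> None:
    """Trigger preventive maintenance instead of waiting for a failure call."""
    for device_id, risk in risk_scores.items():
        if risk >= FAILURE_RISK_THRESHOLD:
            create_work_order(device_id, risk)


# Example: scores produced upstream by a predictive model on sensor streams.
evaluate_devices({"pump-017": 0.91, "pump-042": 0.35, "hvac-003": 0.88})
```

The point is not the code itself but the inversion of the value chain: the data stream initiates the service process, rather than the customer.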

Customer lifecycle management, for instance, is cross-departmental and cross-functional. It entails many processes, functional areas, departments, and applications that cut across every aspect of how the organization serves its customers. Dashboards need to tell the organization how to remediate an out-of-bounds indicator, such as high bounce rates on an ecommerce site after a product launch or large numbers of returns after a promotion. By monitoring the effectiveness of each step of the customer journey, the health of upstream systems and supporting functions can be measured continually, with remediation triggered when analytics reveal out-of-bounds values. The IT organization can provide the infrastructure and instrumentation; however, the business side needs to develop the metrics and KPIs, as well as a remediation playbook based on its knowledge of customer needs and how best to deliver the desired experience.
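A simple sketch of that instrumentation might look like the following. The KPI names, bounds, and playbook actions are assumptions invented for the example; in practice the business defines them and IT wires them up:

```python
# Illustrative sketch: KPI names, bounds, and playbook actions are assumed.
# The business owns the definitions; IT provides the instrumentation.

KPI_BOUNDS = {
    "checkout_bounce_rate": (0.0, 0.30),  # fraction of sessions
    "return_rate": (0.0, 0.08),
}

PLAYBOOK = {
    "checkout_bounce_rate": "Review landing-page content and page load times",
    "return_rate": "Audit product data accuracy for the promoted SKUs",
}


def check_kpis(observed: dict[str, float]) -> list[str]:
    """Compare observed KPIs to their bounds and return remediation steps."""
    actions = []
    for kpi, value in observed.items():
        low, high = KPI_BOUNDS[kpi]
        if not (low <= value <= high):
            actions.append(f"{kpi}={value:.2f} out of bounds -> {PLAYBOOK[kpi]}")
    return actions


# Example: metrics gathered after a product launch.
for action in check_kpis({"checkout_bounce_rate": 0.42, "return_rate": 0.05}):
    print(action)
```

The dashboard becomes useful precisely when every out-of-bounds value maps to a named action and an owner, rather than just a red light.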

Who is Responsible for Quality Data?

That experience depends on complete, consistent, high-quality data. While the CIO and the IT organization are seen as the providers of customer experience data, that is not actually where the responsibility belongs, and it is unreasonable to place it there. Just as the business owns its processes, it also has to be responsible for the data that fuels those processes as well as the data exhaust that the machinery of those processes produces.

Consider a sales organization that builds lead databases and designs sales campaigns. Would the sales leadership blame the IT organization if an inside sales team failed to enter information about the calls they made each day? Entering that information is part of the sales representative’s job. This line of thinking can be extrapolated to many parts of the organization. Unless the IT group owns an end-to-end process without involvement of the business, it cannot be held responsible for the quality of data that is essential to that process. If data quality issues arise from a flaw in an import or integration owned by IT, clearly that responsibility is IT’s.

IT needs to educate the business units about how to own their data, and should also provide the business with the correct initial design, tooling, instrumentation, and infrastructure. IT also needs to help develop the playbooks that allow business functions to optimize data-driven processes. But data quality, consistency, completeness, and provenance need to be owned and managed by the business. Getting there requires extensive cultural and process change and a significant investment to get off to the right start. (The business side will not have the skills or resources to design content and data architectures or to clean up a backlog of bad data accumulated through years of neglect and underinvestment.)
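As a small illustration of the kind of tooling IT can hand to the business, here is a sketch of a completeness audit over a hypothetical lead database. The field names and rules are invented for the example; the point is that the owning team can see its own gaps, such as the missing call logs from the sales scenario above:

```python
# Illustrative sketch: the lead-record fields and rules below are assumed.
# IT provides the checking tooling; the business owns the rules and the fixes.

REQUIRED_FIELDS = ["lead_id", "company", "contact_email", "last_call_date"]


def audit_leads(records: list[dict]) -> dict[str, int]:
    """Count completeness problems so the owning team can see and fix them."""
    issues = {field: 0 for field in REQUIRED_FIELDS}
    for record in records:
        for field in REQUIRED_FIELDS:
            if not record.get(field):  # missing or empty value
                issues[field] += 1
    return issues


# Example: two lead records, one missing the call log a rep should have entered.
leads = [
    {"lead_id": "L-100", "company": "Acme", "contact_email": "a@acme.com",
     "last_call_date": "2019-05-01"},
    {"lead_id": "L-101", "company": "Globex", "contact_email": "",
     "last_call_date": None},
]
print(audit_leads(leads))
# {'lead_id': 0, 'company': 0, 'contact_email': 1, 'last_call_date': 1}
```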

Thinking digitally

In addition to the process challenges and technical issues, getting the organization to develop a data-driven business mindset is a significant cultural challenge. This mindset begins at the design and capability development phase: it requires framing digital capabilities in terms of end-to-end processes and experiences rather than focusing on point solutions. Once capabilities are developed, the business must “think digitally” – driving decisions with data, testing products, services, and offerings iteratively, finding what does not work quickly (“failing fast”), and rapidly evolving the customer experience.
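As one concrete example of driving decisions with data, a “fail fast” decision can be as simple as a two-proportion z-test on an A/B experiment. This is a minimal sketch; the conversion numbers and the 1.96 critical value (a conventional 95% threshold) are assumptions for illustration:

```python
import math

# Illustrative sketch of "driving decisions with data": a simple
# two-proportion z-test for an A/B experiment. Sample numbers are invented.


def ab_test(conv_a: int, n_a: int, conv_b: int, n_b: int, z_crit: float = 1.96):
    """Return (z, decision) comparing conversion rates of variants A and B."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    if z > z_crit:
        return z, "B wins: roll it out"
    if z < -z_crit:
        return z, "B loses: kill it fast"
    return z, "inconclusive: keep testing"


# Example: variant B converts 7% vs. A's 5% on 2,400 sessions each.
print(ab_test(conv_a=120, n_a=2400, conv_b=168, n_b=2400))
```

The mechanics are trivial; the cultural shift is in agreeing up front what result kills an offering quickly instead of letting it linger.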

Most organizations are weighed down by legacy tools and applications that were designed and deployed without long-term integration, management, governance, and adaptability in mind. Problems were considered within a limited span of processes, usually owned by a siloed function. Multiple cycles of development with this line of thinking incurred significant technical debt and left organizations dealing with incompatible architectures and inconsistent data structures. Rapid, agile, iterative evolution of offerings, products, and processes under these circumstances is not possible. Adding new capabilities requires significant time and cost.

All of this adds up to a lumbering, complex ecosystem of tools, technologies, and processes that is difficult to modify, leading to an inability to respond to customer and market changes in a timely fashion. 

But wait, there’s hope

It is possible to remediate some of these challenges without a massive, risky “rip and replace” approach. Data virtualization, ontology-driven integration layers, semantic search and unified information access tools, and back-end robotic process automation (RPA) are all viable approaches that help organizations make up for their past digital sins. A good starting point is to inventory data sources, develop ownership policies, establish quality measures, map trust dependencies, and create consistent reference architectures. Because clean, consistent, quality data is the fuel for the organization’s applications, the role of a Chief Data Officer may prove to be a valuable catalyst for effecting the needed process and culture change.
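To illustrate the ontology-driven integration idea, here is a minimal sketch of a mapping layer that translates inconsistent source fields into one canonical vocabulary. The source systems, field names, and canonical model are hypothetical:

```python
# Illustrative sketch of an ontology-style mapping layer: the source systems,
# field names, and canonical model here are invented for the example.

CANONICAL_MAP = {
    "crm": {"cust_nm": "customer_name", "eml": "email"},
    "erp": {"CUSTOMER": "customer_name", "EMAIL_ADDR": "email"},
}


def to_canonical(source: str, record: dict) -> dict:
    """Translate one source record into the shared canonical vocabulary."""
    mapping = CANONICAL_MAP[source]
    return {mapping[key]: value for key, value in record.items() if key in mapping}


# The same customer seen through two systems resolves to one shape,
# without ripping out or replacing either system.
print(to_canonical("crm", {"cust_nm": "Acme Corp", "eml": "ops@acme.com"}))
print(to_canonical("erp", {"CUSTOMER": "Acme Corp", "EMAIL_ADDR": "ops@acme.com"}))
```

Real ontology-driven layers are far richer than a dictionary lookup, but the design choice is the same: the canonical model absorbs the inconsistency so downstream applications do not have to.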

CIOs and CTOs need to be aware of emerging technologies and their potential, but they also need to understand the foundational elements required to realize that potential, and they must not let their business stakeholders get distracted by slick vendor demos. CIOs also need to help the business understand its role in owning the data that enables the advanced capabilities that serve customers, and to help foster a mindset focused on coordinated action rather than decisions made in isolation. Getting that buy-in can go a long way toward improving the business-IT partnership.

For a look at how we use information architecture to design and build an Intelligent Virtual Assistant, download our white paper: Making Intelligent Virtual Assistants a Reality. In this paper we show how to deploy intelligent applications in your enterprise to achieve real business value.

 

Meet the author

Seth Earley

Seth Earley is the Founder & CEO of Earley Information Science and the author of the award-winning book The AI-Powered Enterprise: Harness the Power of Ontologies to Make Your Business Smarter, Faster, and More Profitable. He is an expert with more than 20 years of experience in Knowledge Strategy, Data and Information Architecture, Search-based Applications, and Information Findability solutions. He has worked with a diverse roster of Fortune 1000 companies, helping them achieve higher levels of operating performance.
