I attended a conference a while back about the future of the digital worker and the use of artificial intelligence. I came to the event with my skeptic hat firmly on my head.
The event was attended by approximately 1000 people. It was held in lower Manhattan near Wall Street, in the heart of the financial district. The staff-to-attendee ratio was quite high, and the conference was clearly a well-produced, high-quality event. The session opened with a welcome by a virtual assistant. Although it was difficult for me to hear what she was saying since I was outside the main room, I heard the laughter of the audience as the virtual assistant made some corny jokes that one might hear from a Siri or Alexa.
At one point the main speaker at the conference stated that the AI developed by his company “thinks and learns like the human brain.” I took exception to that assertion, tweeting a link to an article I had written, “Five Myths About Artificial Intelligence.” At a break, I had a conversation with the head of an innovation lab at a large financial firm about how inaccurate these assertions were. Later, this individual told me that our discussion was “an island of sanity in an ocean of preposterous claims.”
"...an island of sanity in an ocean of preposterous claims."
There is no question that we are at an inflection point in human history when it comes to AI. The capabilities that we struggle to develop – helpful chatbots and virtual assistants that can answer our routine questions – will one day be taken for granted, just like we now take for granted speech recognition or high bandwidth connectivity.
It was not long ago that I spent time training my Dragon Dictate system to recognize my words. Even after lengthy training, it proved cumbersome and difficult to use. Today, Siri recognizes even hard-to-disambiguate phrases. Similarly, it was not long ago that loading video on a “smart” phone (they were not that smart at the beginning) was an exercise in futility. Now we take fast-loading streaming video for granted.
Today, I have to configure many different tools, all using different approaches, to stitch together some semblance of a smart home. A great deal of manual work goes into automating these functions. Even after being configured, they are brittle, difficult to update or add to, and require continual updates of multiple applications.
Just as I write this, my office smart lights dimmed unexpectedly, causing me to stop writing and futz with the app to get them working correctly. When adding new lights, I discovered that an old IT vendor had removed the smart hub, requiring me to enter each bulb’s serial number into the system so the new hub could connect. Because that tedious and time-consuming job was never completed, some bulbs are connected to one system and others to another. (Kind of like deploying a new content management system and never getting around to migrating old content to the new tool.) It takes time and effort to work with new technologies.
Corporate IT systems are going through analogous growing pains. New groups are being created to manage “training content” for artificial intelligence systems – causing further fragmentation of data and content, while the organization tries to understand these new systems and approaches. Different technologies from Amazon, IBM, Google, and Apple, while able to talk to one another at the API level, use different approaches for managing and processing the content that drives functionality.
Where do you place your bets? Which systems will become the de facto standard? Organizations are struggling with how to best move through the learning curve and not be left behind this next wave of transformation.
Artificial intelligence is a broad term covering many classes of technology and their various incarnations. These technologies are already affecting the bottom line of many organizations, though in some cases their developers do not call them artificial intelligence. Semantic search, for example, leverages machine learning, advanced clustering, and categorization algorithms. These techniques enable more personalized access to information, whether for internal audiences using digital workplace tools or for external audiences receiving product recommendations and related content suggestions.
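To make the "related content suggestions" idea concrete, here is a minimal, pure-Python sketch of one technique behind it: scoring documents by TF-IDF cosine similarity. The document IDs and text are invented for illustration; production semantic search layers far more (entity extraction, taxonomies, learned embeddings) on top of this kind of baseline.

```python
import math
from collections import Counter

# A toy knowledge base; IDs and text are hypothetical.
docs = {
    "kb-101": "reset your password and recover account access",
    "kb-102": "configure single sign-on and password policies",
    "kb-203": "troubleshoot streaming video playback quality",
}

def tfidf_vectors(corpus):
    """Compute a TF-IDF weight vector for each document."""
    n = len(corpus)
    tokenized = {doc_id: text.split() for doc_id, text in corpus.items()}
    # Document frequency: in how many documents each term appears.
    df = Counter(term for tokens in tokenized.values() for term in set(tokens))
    vectors = {}
    for doc_id, tokens in tokenized.items():
        tf = Counter(tokens)
        vectors[doc_id] = {
            term: (count / len(tokens)) * math.log(n / df[term])
            for term, count in tf.items()
        }
    return vectors

def cosine(a, b):
    """Cosine similarity between two sparse term-weight vectors."""
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    norm = (math.sqrt(sum(w * w for w in a.values()))
            * math.sqrt(sum(w * w for w in b.values())))
    return dot / norm if norm else 0.0

def related(doc_id, vectors):
    """Rank the other documents by similarity to doc_id."""
    target = vectors[doc_id]
    scores = {d: cosine(target, v) for d, v in vectors.items() if d != doc_id}
    return sorted(scores, key=scores.get, reverse=True)

vectors = tfidf_vectors(docs)
print(related("kb-101", vectors))  # kb-102 ranks first: shared password vocabulary
```

Note that the common word "and" contributes nothing to the ranking: because it appears in most documents, its inverse document frequency drives its weight toward zero, which is exactly the behavior that makes TF-IDF useful without a stop-word list.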
Predictive analytics is another class of recommendation engine. Its algorithms process data signals and produce results that anticipate business risks, equipment failures, or market demands. Manufacturers are instrumenting their products to provide real-time usage and performance data that can serve multiple purposes: improving functionality, extending usable life, avoiding maintenance failures, or enhancing offerings with data-driven optimization services. Such enhancements reduce commoditization pressure, increase value, and reduce competitive encroachment on market share.
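The predictive-maintenance pattern described above can be sketched in a few lines: flag equipment whose latest sensor reading drifts outside a statistical band around its recent baseline, so service happens before failure. The readings, window size, and threshold below are invented; real systems train models on historical failure data rather than using a fixed rule.

```python
from statistics import mean, stdev

def needs_service(readings, window=5, sigma=3.0):
    """Return True if the latest reading deviates more than `sigma`
    standard deviations from the trailing window's baseline."""
    if len(readings) <= window:
        return False  # not enough history to establish a baseline
    baseline = readings[-window - 1:-1]  # the window before the latest reading
    mu, sd = mean(baseline), stdev(baseline)
    if sd == 0:
        return readings[-1] != mu  # flat baseline: any change is anomalous
    return abs(readings[-1] - mu) > sigma * sd

# Hypothetical vibration amplitudes: one sensor trending normally,
# one spiking the way a bearing might just before it fails.
normal = [1.0, 1.1, 0.9, 1.0, 1.05, 1.02]
spiking = [1.0, 1.1, 0.9, 1.0, 1.05, 4.7]
print(needs_service(normal), needs_service(spiking))  # False True
```

The business logic sits on top of a signal like this: a True result becomes a work order, a parts reorder, or an alert in a dashboard, which is where the "avoid maintenance failures" value is actually realized.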
The common element in all of these applications is the data and content, which fuel all organizational processes. They require common architectures and organizing principles to allow that data to flow to the people and processes that require it as their needs evolve.
The core requirements (the table stakes) for achieving results have not gone away, even though many would-be users have thrown lots of money at the problem. Challenging issues include:
- Managing an increasingly complex bundle of technologies and sorting out what’s real vs. what is vendor vaporware
- Getting timely access to data in disparate systems
- Orchestrating functional departments to communicate, coordinate, and accelerate progress
- Process bottlenecks and redundant manual workarounds
- Poor data quality, structure, and incompatible data formats
AI and machine learning alone cannot solve these problems, which stem from the lack of a consistent information architecture, enterprise ontologies, product taxonomies, and knowledge infrastructure. These problems have long been brushed aside, lost in the shuffle of the “next big thing,” the new upgrade, or the shiny new technology and the empty vendor promises that come with it.
So how does the CXO deal with this?
- Don’t look at AI as a panacea, promise, or “the latest toy.” Give serious consideration to the underlying processes and upstream information flows that will be required to solve the business problem at hand.
- Develop the right information architecture for AI before even thinking about spending money on tools/software.
- Build context and use case specific taxonomies and metadata models that are tied together by a unifying domain model (a big picture view of the enterprise information environment).
- Implement the appropriate foundational requirements around data quality and structure. This includes going upstream to fix data problems at their source rather than trying to remediate after the fact, an approach that can be orders of magnitude more costly.
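The last two recommendations can be combined in a small sketch: validate incoming records against a controlled taxonomy at the point of entry, so bad metadata is rejected upstream instead of cleansed downstream. The taxonomy terms, field names, and records below are hypothetical stand-ins for whatever domain model an organization actually maintains.

```python
# A tiny product taxonomy: categories and their allowed subcategories.
PRODUCT_TAXONOMY = {
    "apparel": {"outerwear", "footwear"},
    "electronics": {"audio", "wearables"},
}

REQUIRED_FIELDS = {"sku", "name", "category", "subcategory"}

def validate_record(record):
    """Return a list of problems; an empty list means the record is clean."""
    problems = [f"missing field: {f}" for f in REQUIRED_FIELDS - record.keys()]
    category = record.get("category")
    if category is not None and category not in PRODUCT_TAXONOMY:
        problems.append(f"unknown category: {category}")
    elif category is not None:
        sub = record.get("subcategory")
        if sub is not None and sub not in PRODUCT_TAXONOMY[category]:
            problems.append(f"subcategory {sub!r} not valid under {category!r}")
    return problems

good = {"sku": "A100", "name": "Trail Jacket",
        "category": "apparel", "subcategory": "outerwear"}
bad = {"sku": "A200", "name": "Earbuds",
       "category": "electronics", "subcategory": "outerwear"}
print(validate_record(good))  # []
print(validate_record(bad))   # one taxonomy violation
```

The design point is where this check runs: in the entry form, import pipeline, or API that first captures the record. Every record that passes this gate is one less record an AI system has to be trained around later.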
The lesson to learn is that you can’t take shortcuts to get to AI, no matter how good the shiny new technologies sound. You need to build on foundational principles and practices – which is not a bad idea anyway.
For a look at how we use information architecture to design and build an intelligent virtual assistant, download our white paper, Making Intelligent Virtual Assistants a Reality. In it, we show how to deploy intelligent applications in your enterprise to achieve real business value.