
The Architectural Imperative for AI-Powered E-Commerce

This article originally appeared in E-Commerce Times.

We've been promised, repeatedly, that artificial intelligence would revolutionize e-commerce. Think about it for a moment. Has it lived up to expectations for you?

If not, the problem may lie not in the technology you've deployed on your site, but in the architecture and data that support it.

Consider a few typical e-commerce scenarios that tap into artificial intelligence.

You enter an e-commerce site and start to navigate through categories. The site decides, based on who is visiting and other factors, to list high-margin categories first. Now let's add some more information: In the past you've always bought things that were on sale. The site knows this and therefore displays clearance items and the best deals first, maximizing the chances that you'll click and buy.

You enter an industrial site and search on the term "mold stripping." As in any search, the site has a choice: Show as many relevant results as possible (opting for completeness) or attempt to guess the exact right answer (opting for precision).

Does "mold stripping" refer to cleaning an injection mold, removing mold and mildew from a damp surface, or resurfacing wooden molding? A smart site takes into account all the context it can -- such as what you bought in the past and what other searches you've conducted -- and offers the best answer, whether it's lubricants, cleaning chemicals or abrasives.
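
As a concrete illustration, a site might score each interpretation of an ambiguous query against the visitor's purchase history. This is a minimal, hypothetical sketch -- the interpretations, category names, and scoring rule are all invented for illustration:

```python
# Hypothetical sketch: score candidate interpretations of an ambiguous
# query using signals from the visitor's purchase history.
from collections import Counter

# Each interpretation of "mold stripping" maps to product categories
# that would satisfy it (illustrative categories, not a real catalog).
INTERPRETATIONS = {
    "injection-mold cleaning": ["lubricants", "mold-release agents"],
    "mildew removal": ["cleaning chemicals", "biocides"],
    "molding resurfacing": ["abrasives", "wood finishes"],
}

def rank_interpretations(purchase_history):
    """Rank interpretations by how often the visitor has bought from
    each interpretation's categories; ties keep dictionary order."""
    bought = Counter(purchase_history)
    scores = {
        name: sum(bought[cat] for cat in cats)
        for name, cats in INTERPRETATIONS.items()
    }
    return sorted(scores, key=scores.get, reverse=True)

history = ["lubricants", "mold-release agents", "abrasives"]
print(rank_interpretations(history)[0])  # -> injection-mold cleaning
```

A production system would blend many more signals -- past searches, industry segment, session behavior -- but the principle is the same: context turns an ambiguous query into a ranked guess.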

You've clicked around a site and have added a few favorites in a category. The site interprets your actions as a buying signal and nudges you into a purchase.

You've added a selection of items into your shopping basket -- say chamois cloths and glass cleaner for cleaning windshields. The site knows that many other people who've made those choices also bought car wax and tire cleaner, so its shopping basket analysis algorithm suggests those items.
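
The co-occurrence logic behind such suggestions can be sketched in a few lines. This is a deliberately simplified, hypothetical example -- real basket analysis mines association rules over large order histories, and the orders below are invented:

```python
# Hypothetical sketch of shopping-basket analysis: suggest items that
# frequently co-occur with the current basket in past orders.
from collections import Counter
from itertools import chain

PAST_ORDERS = [
    {"chamois cloths", "glass cleaner", "car wax"},
    {"chamois cloths", "glass cleaner", "tire cleaner"},
    {"glass cleaner", "car wax", "tire cleaner"},
    {"motor oil", "oil filter"},
]

def suggest(basket, orders=PAST_ORDERS, top_n=2):
    """Count items appearing in past orders that overlap the current
    basket, excluding items already in the basket."""
    related = chain.from_iterable(o for o in orders if o & basket)
    counts = Counter(item for item in related if item not in basket)
    return [item for item, _ in counts.most_common(top_n)]

print(suggest({"chamois cloths", "glass cleaner"}))
```

With the invented orders above, a basket of chamois cloths and glass cleaner surfaces car wax and tire cleaner -- exactly the nudge described in the scenario.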

AI powers all of these scenarios: search, navigation, predictive offers and shopping basket analysis. However, they have more in common than that.

My analysis of numerous technology projects, both successes and failures, has shown that the effectiveness of AI-powered features like these ultimately depends on a high level of discipline about data and architecture -- and to a degree that few site managers recognize.

The Right Data and Organization Make All the Difference

The e-commerce customer experience is made up entirely of data. The quality of the underlying data determines the quality of the experience. While this sounds obvious, in practice I've observed that many organizations have immature product information processes.

When they onboard new products, they don't manage product information in an adaptable, sustainable way. The result is dirty, incomplete and inconsistent data that undermines the ability of AI to deliver the optimal experience.

The fuel for an intelligent e-commerce experience comes from two kinds of data: information related to products and customer data.

Start with products. Managing a selection of thousands or millions of products begins with the product hierarchy called a "display taxonomy." Just as products in a physical store are arranged according to a logical set of aisles and shelves featuring similar products, the products in a virtual store need to be organized according to a logical set of categories and qualities suited to the unique needs of the business' customers.

This is the product display taxonomy, and its design is just as important to an e-commerce site as the planogram of a physical store is to its ultimate shopping experience. Differentiation of that display taxonomy is one source of competitive advantage.
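
In data terms, a display taxonomy is simply a tree of categories. A minimal sketch, with invented category names, showing how each "aisle" path falls out of the structure:

```python
# Illustrative sketch: a display taxonomy as a nested category tree,
# with a helper that lists the "aisle" path to each leaf category.
TAXONOMY = {
    "Vehicle Care": {
        "Exterior": {"Wax & Polish": {}, "Glass Care": {}},
        "Interior": {"Upholstery": {}},
    },
}

def paths(tree, prefix=()):
    """Yield the full path to every leaf category."""
    for name, children in tree.items():
        here = prefix + (name,)
        if children:
            yield from paths(children, here)
        else:
            yield " > ".join(here)

for p in paths(TAXONOMY):
    print(p)
```

The design questions -- which categories exist, how deep the tree goes, what sits next to what -- are where the competitive differentiation described above actually lives; the data structure itself is the easy part.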

If you know how your customer solves their problems and can arrange products in a more effective way than competitors, you will retain their business. If they can't find what they need quickly and easily, they move on.

Product Information Management (PIM) systems hold the information about products, including their relationships. They know which products are accessories to other products and which ones usually are used together. Yet this data is effective only if the onboarding process for new products is sufficiently rigorous to always include such relationships.
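
One way to picture this requirement: relationships become first-class fields in the product record, and onboarding must fill them in. A hypothetical sketch -- the SKUs and relationship types are invented, and real PIM systems model this far more richly:

```python
# Hypothetical sketch of PIM-style product relationships captured at
# onboarding: "accessory" and "used with" links stored as SKU lists.
from dataclasses import dataclass, field

@dataclass
class Product:
    sku: str
    name: str
    accessories: list = field(default_factory=list)  # related SKUs
    used_with: list = field(default_factory=list)    # related SKUs

CATALOG = {
    "W-100": Product("W-100", "Car wax"),
    "A-200": Product("A-200", "Applicator pad", used_with=["W-100"]),
}
CATALOG["W-100"].accessories.append("A-200")

def cross_sell(sku):
    """Names of products related to the given SKU, for suggestions."""
    p = CATALOG[sku]
    return [CATALOG[s].name for s in p.accessories + p.used_with]

print(cross_sell("W-100"))  # -> ['Applicator pad']
```

If onboarding skips these fields, the cross-sell function above has nothing to return -- which is the point of the paragraph: the AI feature is only as good as the relationship data captured upstream.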

In my experience, the design of the taxonomy of data and categories in the PIM is a subtle and challenging problem that many technology managers overlook. The more fine-tuned the taxonomy to unique customer needs, the more the site can offer AI-powered suggestions that lift yields.

However, customized taxonomies often run afoul of rigid industry standards. The design of the taxonomy and the product onboarding process is therefore a delicate balance between standardized and site-specific elements.

The other side of the challenge is customer data. Personas (like "first time visitors" or "price-sensitive buyers") allow sites to make sense of the diversity of users they encounter. Designers then use those personas to make taxonomy and customer experience decisions.

Personas reflect audience attributes such as customer loyalty, impatience, or consciousness of value. Testing based on those attributes then lets designers refine the site's approach to customer types with different needs.

There's another unsuspected source of challenges in the data that supports AI in e-commerce: terminology. When serving multiple audiences, the same terminology can have multiple meanings and contexts (remember "mold stripping"?). Standardizing terminology is an essential element to making the product taxonomy and audience data usable and effective.
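
Standardization often starts with a controlled vocabulary that maps audience-specific variants to preferred terms before search or tagging runs. A deliberately tiny, hypothetical sketch:

```python
# Illustrative sketch: normalize audience-specific terms to a single
# controlled vocabulary (the mappings here are invented examples).
SYNONYMS = {
    "mould": "mold",
    "windscreen": "windshield",
    "shammy": "chamois",
}

def normalize(query):
    """Map each token to its preferred term, if one exists."""
    return " ".join(SYNONYMS.get(t, t) for t in query.lower().split())

print(normalize("Windscreen shammy"))  # -> windshield chamois
```

In practice this table is maintained as part of the content architecture, so both the product taxonomy and the search index speak the same language as every audience segment.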

How Customized Sites Powered by AI Actually Develop

Despite the automation that seems inherent in customizing a site, in my experience the design always begins with a very human, almost artisanal set of decisions. A marketing specialist who knows the target customer starts by deciding what message or part of a message is likely to resonate -- and then tests it by iterating on a set of handcrafted variations.

The specialist handcrafts the message and tries a variation, just as an artisan uses craft knowledge to create something that will engage another human. The marketer then tries other variations and learns which approaches might work and which probably will not.

Eventually, machine learning comes in, as AI-based algorithms try the likely variations and optimize their combinations based on an ongoing process of testing and continual improvement.

How to Get Your Data House in Order to Best Power AI in E-Commerce

How can you make sure your AI tools actually deliver the experience they promise? Reviewing commonalities from dozens of projects, I've observed key areas to concentrate on to ensure that the data on which AI operates actually can enable a better, higher-yield experience:

  • Build the right content architecture. This includes defining a model for metadata about products and supporting content, controlling vocabularies and terminology, and -- most importantly -- ensuring that the content architecture supports customer experience. It means creating product taxonomies designed specifically to support the tasks that site visitors most often undertake. Such an architecture must support a dynamically generated customer experience and enable cross-selling.
  • Create rigorous rules for supplier and product onboarding. Using requirements in procurement contracts, ensure that suppliers provide an inventory of available product metadata, then validate data models against merchandising requirements. That baseline data then can be enriched with unique attributes based on customer needs and preferences. Verify that product managers are getting necessary content and data to allow for site customization, and that the product data is complete and consistent. Architect product attributes in ways that are most useful for site customization and for enabling customers to make choices.
  • Audit content operations. Define a workflow for content ingestion and automated content tagging. If content for a product needs to change, make sure there is a defined workflow to change it. Architect systems to manage and respect content rights and track promotion lifecycles.
  • Manage digital assets for site customization. This includes making sure that spec sheets and, potentially, engineering drawings and brochures are available, and that assets are organized in a way that makes retrieval and reuse easier through an appropriate content architecture.
  • Refine a personalization strategy. Document buyers' needs and personas, then create tasks and objectives aligned with personalizing content for those personas. Ideally, content will be assembled dynamically based on visitor behavior.
  • Optimize omnichannel experience. Verify that store promotions are consistent with online promotions, and that customers can identify and find stock in stores based on information from sites. Test experiences that cross devices -- mobile, PC and in-store -- to determine where glitches in the experience arise.
  • Wrangle analytics to refine site effectiveness. Content performance metrics should be embedded in governance processes, and sites should continuously measure search effectiveness as well as paths on which visitors leave without purchasing.
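
Several of these checks -- particularly supplier onboarding and data completeness -- reduce to validating incoming records against required attributes before they reach the site. A minimal sketch, with an invented attribute set:

```python
# Hypothetical sketch of a product-onboarding gate: flag supplier
# records whose required attributes are missing or empty.
REQUIRED = {"sku", "name", "category", "image_url"}

def validate(record):
    """Return the set of missing or empty required attributes."""
    return {k for k in REQUIRED if not record.get(k)}

incoming = {"sku": "W-100", "name": "Car wax", "category": ""}
print(sorted(validate(incoming)))  # -> ['category', 'image_url']
```

A real onboarding pipeline would validate against a full data model negotiated in the procurement contract, but even a simple gate like this keeps dirty, incomplete records from undermining the AI features downstream.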

None of these tasks are easy. In fact, your progress on them will determine your site's level of maturity for AI readiness. Auditing your progress on these challenges -- and putting plans in place to improve them -- will go a long way toward making sure that your future site improvements effectively use the artificial intelligence advances coming down the pike.

This is where you should be concentrating many of your efforts. Adding more AI-powered modules on top of a weak and inconsistent content and data architecture will end up costing you in the long run. Think more about the data and less about the bells and whistles, and you'll be on a path to preparing your site for the technologies of the future.


Earley Information Science Team
