
The Architectural Imperative for AI-Powered E-Commerce

This article originally appeared in E-Commerce Times.

We've been promised, repeatedly, that artificial intelligence would revolutionize e-commerce. Think about it for a moment. Has it lived up to expectations for you?

If not, the problem may lie not in the technology you've deployed on your site, but in the architecture and data that support it.

Consider a few typical e-commerce scenarios that tap into artificial intelligence.

You enter an e-commerce site and start to navigate through categories. The site decides, based on who is visiting and other factors, to list high-margin categories first. Now let's add some more information: In the past you've always bought things that were on sale. The site knows this and therefore displays clearance items and the best deals first, maximizing the chances that you'll click and buy.
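To make that concrete, here is a minimal sketch of per-visitor category ranking. The attribute names (margin, on_sale, prefers_deals) are hypothetical, and a production system would learn the weights from behavior rather than hard-code them:

```python
# Minimal sketch: re-rank category listings per visitor.
# Attribute names (margin, on_sale, prefers_deals) are hypothetical.

def rank_categories(categories, visitor):
    """Order categories by a per-visitor score."""
    def score(cat):
        s = cat["margin"]  # favor high-margin categories by default
        if visitor.get("prefers_deals") and cat["on_sale"]:
            s += 1.0       # boost clearance/deal categories for bargain hunters
        return s
    return sorted(categories, key=score, reverse=True)

categories = [
    {"name": "Clearance", "margin": 0.10, "on_sale": True},
    {"name": "Power Tools", "margin": 0.35, "on_sale": False},
]
print(rank_categories(categories, {"prefers_deals": True}))
```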

You enter an industrial site and search on the term "mold stripping." As in any search, the site has a choice: Show as many relevant results as possible (opting for completeness) or attempt to guess the exact right answer (opting for precision).

Does "mold-stripping" refer to a cleaning an injection mold, removing mold and mildew from a damp surface, or resurfacing wooden molding? A smart site takes into account all the context it can -- such as what you bought in the past and what other searches you've conducted -- and offers the best answer, whether it's lubricants, cleaning chemicals or abrasives.

You've clicked around a site and have added a few favorites in a category. The site interprets your actions as a buying signal and nudges you into a purchase.

You've added a selection of items into your shopping basket -- say chamois cloths and glass cleaner for cleaning windshields. The site knows that many other people who've made those choices also bought car wax and tire cleaner, so its shopping basket analysis algorithm suggests those items.
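A minimal version of that co-occurrence logic might look like the sketch below. Real shopping basket analysis uses association-rule measures such as support, confidence, and lift over far larger transaction histories; the baskets here are illustrative:

```python
from collections import Counter
from itertools import combinations

# Co-occurrence sketch of shopping basket analysis (illustrative data).

baskets = [
    {"chamois cloth", "glass cleaner", "car wax"},
    {"chamois cloth", "glass cleaner", "tire cleaner"},
    {"glass cleaner", "car wax"},
]

pair_counts = Counter()
for basket in baskets:
    for pair in combinations(sorted(basket), 2):
        pair_counts[pair] += 1

def suggest(item, n=2):
    """Items most often bought alongside `item`."""
    related = Counter()
    for (a, b), count in pair_counts.items():
        if item == a:
            related[b] += count
        elif item == b:
            related[a] += count
    return [p for p, _ in related.most_common(n)]

print(suggest("chamois cloth"))  # e.g. ['glass cleaner', 'car wax']
```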

AI powers all of these scenarios: search, navigation, predictive offers and shopping basket analysis. However, they have more in common than that.

My analysis of numerous technology projects, both successes and failures, has shown that the effectiveness of AI-powered features like these ultimately depends on a high level of discipline about data and architecture -- and to a degree that few site managers recognize.

The Right Data and Organization Make All the Difference

The e-commerce customer experience is made up entirely of data. The quality of the underlying data determines the quality of the experience. While this sounds obvious, in practice I've observed that many organizations have immature product information processes.

When they onboard new products, they don't manage product information in an adaptable, sustainable way. The result is dirty, incomplete and inconsistent data that undermines the ability of AI to deliver the optimal experience.

The fuel for an intelligent e-commerce experience comes from two kinds of data: product information and customer data.

Start with products. Managing a selection of thousands or millions of products begins with the product hierarchy called a "display taxonomy." Just as products in a physical store are arranged according to a logical set of aisles and shelves featuring similar products, the products in a virtual store need to be organized according to a logical set of categories and qualities suited to the unique needs of the business' customers.

This is the product display taxonomy, and its design is just as important to an e-commerce site as the planogram of a physical store is to its ultimate shopping experience. Differentiation of that display taxonomy is one source of competitive advantage.
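As a toy illustration, a display taxonomy can be modeled as a simple tree whose root-to-leaf paths act like the aisles and shelves of a virtual planogram. The category names below are invented:

```python
# Toy display taxonomy as nested dicts; category names are hypothetical.
taxonomy = {
    "Vehicle Care": {
        "Exterior": ["Car Wax", "Glass Cleaner", "Chamois Cloths"],
        "Wheels & Tires": ["Tire Cleaner", "Wheel Brushes"],
    },
}

def paths(node, prefix=()):
    """Yield every aisle-to-shelf path, like a virtual planogram."""
    if isinstance(node, dict):
        for name, child in node.items():
            yield from paths(child, prefix + (name,))
    else:
        for leaf in node:
            yield prefix + (leaf,)

for p in paths(taxonomy):
    print(" > ".join(p))  # e.g. Vehicle Care > Exterior > Car Wax
```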

If you know how your customer solves their problems and can arrange products in a more effective way than competitors, you will retain their business. If they can't find what they need quickly and easily, they move on.

Product Information Management (PIM) systems hold the information about products, including their relationships. They know which products are accessories to other products and which ones usually are used together. Yet this data is effective only if the onboarding process for new products is sufficiently rigorous to always include such relationships.
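A sketch of what relationship-aware onboarding might look like follows, with invented field names. The point is that a product record missing its relationship fields is rejected rather than silently accepted:

```python
# Sketch of PIM-style product relationships; field names are illustrative.
products = {
    "drill-100": {"name": "Cordless Drill",
                  "accessories": ["bit-set-20"],
                  "used_with": ["safety-glasses-1"]},
    "bit-set-20": {"name": "20-Piece Bit Set", "accessories": [], "used_with": []},
}

def onboard(sku, record):
    """Reject products whose relationship fields were never captured."""
    for field in ("accessories", "used_with"):
        if field not in record:
            raise ValueError(f"{sku}: missing required relationship field '{field}'")
    products[sku] = record
```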

In my experience, the design of the taxonomy of data and categories in the PIM is a subtle and challenging problem that many technology managers overlook. The more finely tuned the taxonomy is to unique customer needs, the more effectively the site can offer AI-powered suggestions that lift yields.

However, customized taxonomies often run afoul of rigid industry standards. The design of the taxonomy and the product onboarding process is therefore a delicate balance between standardized and site-specific elements.

The other side of the challenge is customer data. Personas (like "first-time visitors" or "price-sensitive buyers") allow sites to make sense of the diversity of users they encounter. Designers then use those personas to make taxonomy and customer experience decisions.

Personas reflect audience attributes such as customer loyalty, impatience, or consciousness of value. Testing based on those attributes then allows designers to refine the site's approach to particular types of customers with different needs.
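As a toy illustration, persona assignment can start as simple hand-written rules before any machine learning is involved. The attributes and thresholds below are assumptions, not a recommended segmentation:

```python
# Illustrative rule-based persona assignment; thresholds are assumptions.
def assign_persona(visitor):
    if visitor["visits"] == 1:
        return "first-time visitor"
    if visitor["pct_sale_purchases"] > 0.6:
        return "price-sensitive buyer"
    if visitor["orders_per_year"] >= 12:
        return "loyal regular"
    return "general shopper"

print(assign_persona({"visits": 8, "pct_sale_purchases": 0.8, "orders_per_year": 3}))
# -> "price-sensitive buyer"
```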

There's another unsuspected source of challenges in the data that supports AI in e-commerce: terminology. When serving multiple audiences, the same terminology can have multiple meanings and contexts (remember "mold stripping"?). Standardizing terminology is an essential element of making the product taxonomy and audience data usable and effective.
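A controlled vocabulary can start as something as simple as a synonym table that maps variant terms to one canonical form, as in this sketch (the mappings are illustrative, not a real industry vocabulary):

```python
# Controlled-vocabulary sketch: map variant terms to one canonical term.
SYNONYMS = {
    "mould stripping": "mold stripping",
    "mold-stripping": "mold stripping",
    "demolding": "mold stripping",
}

def normalize(term):
    """Return the canonical form of a term, or the term itself if unmapped."""
    term = term.lower().strip()
    return SYNONYMS.get(term, term)

print(normalize("Mold-Stripping"))  # -> "mold stripping"
```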

How Customized Sites Powered by AI Actually Develop

Despite the automation that seems inherent in customizing a site, in my experience the design always begins with a very human, almost artisanal set of decisions. A marketing specialist who knows the target customer starts by deciding what message, or part of a message, is likely to resonate.

The specialist then handcrafts that message and tests variations of it, just as an artisan uses craft knowledge to create something that will engage another human. With each variation, the marketer learns which approaches might work and which probably will not.

Eventually, machine learning comes in, as AI-based algorithms try the likely variations and optimize their combinations through an ongoing process of testing and continual improvement.
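One common way to implement that optimization step is a multi-armed bandit. The epsilon-greedy sketch below mostly serves the best-performing handcrafted variant while occasionally exploring the others; the variant names and the click-based reward signal are illustrative:

```python
import random

# Epsilon-greedy sketch: machine learning layered on handcrafted variants.

variants = ["headline-A", "headline-B", "headline-C"]
stats = {v: {"shows": 0, "clicks": 0} for v in variants}

def choose(epsilon=0.1):
    """Mostly exploit the best-performing variant, occasionally explore."""
    if random.random() < epsilon or all(s["shows"] == 0 for s in stats.values()):
        return random.choice(variants)
    return max(variants, key=lambda v: stats[v]["clicks"] / max(stats[v]["shows"], 1))

def record(variant, clicked):
    """Update the reward statistics after each impression."""
    stats[variant]["shows"] += 1
    stats[variant]["clicks"] += int(clicked)
```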

How to Get Your Data House in Order to Best Power AI in E-Commerce

How can you make sure your AI tools actually deliver the experience they promise? Reviewing commonalities from dozens of projects, I've observed key areas to concentrate on to ensure that the data on which AI operates can actually enable a better, higher-yield experience:

  • Build the right content architecture. This includes defining a model for metadata about products and supporting content, establishing controlled vocabularies and terminology, and -- most importantly -- ensuring that the content architecture supports the customer experience. It means creating product taxonomies designed specifically to support the tasks that site visitors most often undertake. Such an architecture must support a dynamically generated customer experience and enable cross-selling.
  • Create rigorous rules for supplier and product onboarding. Using requirements in procurement contracts, ensure that suppliers provide an inventory of available product metadata, then validate data models against merchandising requirements. That baseline data then can be enriched with unique attributes based on customer needs and preferences. Verify that product managers are getting the content and data necessary to allow for site customization, and that the product data is complete and consistent. Architect product attributes in ways that are most useful for site customization and for enabling customers to make choices (see the validation sketch after this list).
  • Audit content operations. Define a workflow for content ingestion and automated content tagging. If content for a product needs to change, make sure there is a defined workflow to change it. Architect systems to manage and respect content rights and track promotion lifecycles.
  • Manage digital assets for site customization. This includes making sure that spec sheets and, potentially, engineering drawings and brochures are available, and that assets are organized in a way that makes retrieval and reuse easier through an appropriate content architecture.
  • Refine a personalization strategy. Document buyers' needs and personas, then create tasks and objectives aligned with personalizing content for those personas. Ideally, content will be assembled dynamically based on visitor behavior.
  • Optimize omnichannel experience. Verify that store promotions are consistent with online promotions, and that customers can identify and find stock in stores based on information from sites. Test experiences that cross devices -- mobile, PC and in-store -- to determine where glitches in the experience arise.
  • Wrangle analytics to refine site effectiveness. Content performance metrics should be embedded in governance processes, and sites should continuously measure search effectiveness as well as paths on which visitors leave without purchasing.
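As promised in the onboarding item above, here is a minimal validation sketch: supplier records are checked against a set of required attributes before any enrichment happens. The attribute names are hypothetical:

```python
# Minimal onboarding validator: check supplier records against required
# attributes before enrichment. Attribute names are hypothetical.

REQUIRED = {"sku", "name", "category", "brand", "spec_sheet_url"}

def validate(record):
    """Return the list of problems for one supplier product record."""
    problems = [f"missing attribute: {f}" for f in REQUIRED - record.keys()]
    problems += [f"empty attribute: {k}" for k, v in record.items() if v in ("", None)]
    return problems

bad = validate({"sku": "drill-100", "name": "Cordless Drill", "brand": ""})
print(bad)  # e.g. ['missing attribute: category', ..., 'empty attribute: brand']
```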

None of these tasks are easy. In fact, your progress on them will determine your site's level of maturity for AI readiness. Auditing your progress on these challenges -- and putting plans in place to improve them -- will go a long way toward making sure that your future site improvements effectively use the artificial intelligence advances coming down the pike.

This is where you should be concentrating many of your efforts. Adding more AI-powered modules on top of a weak and inconsistent content and data architecture will end up costing you in the long run. Think more about the data and less about the bells and whistles, and you'll be on a path to preparing your site for the technologies of the future.

 
