
Measuring Website Usage & Taxonomy Effectiveness

How can you use analytics to understand and optimize your taxonomy to drive high-performance eCommerce?

Three key metric sets help us understand this issue:

  • Web Analytics Metrics
  • User Testing Metrics
  • Design Metrics

How the metrics connect to business value is key. One formula with a direct line to business value is:

Traffic × Conversion Rate × Average Order Value = eCommerce Revenue (or returns).
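
To make the arithmetic concrete, here is a minimal Python sketch of the formula; the traffic, conversion, and order-value figures below are purely hypothetical.

```python
# Minimal sketch of the revenue formula with purely hypothetical numbers.
traffic = 120_000           # monthly sessions (hypothetical)
conversion_rate = 0.025     # 2.5% of visits end in a purchase (hypothetical)
average_order_value = 85.0  # dollars per order (hypothetical)

revenue = traffic * conversion_rate * average_order_value
print(f"Estimated monthly revenue: ${revenue:,.2f}")

# Because the relationship is multiplicative, a 10% lift in conversion
# rate lifts revenue by 10% as well, holding the other factors constant.
print(f"With a 10% conversion lift: ${revenue * 1.10:,.2f}")
```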

Conversion rate is often the metric of primary importance, but that doesn’t mean the others aren’t. Conversion rate can also mean different things for manufacturers and retailers: on the retail side it is typically calculated as the percentage of visitors who buy a product, while a manufacturer may count actions such as content downloads as conversions.

Web Analytics Metrics

Web analytics provide metrics that measure success based on website activity. These include traffic (how many page views), traffic per revenue (how many page views per dollar of revenue), conversion rate, navigation time, and directness. Traffic, traffic per revenue, and conversion rate are closely tied together; you need to see all of them to get an overall picture of which products are successfully moving to purchase. This is real-time data that provides a line of sight into what is working and what is not.
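
As a rough illustration of how these figures can be rolled up from an analytics export, here is a small Python sketch; the category names, fields, and numbers are hypothetical and not tied to any particular analytics product.

```python
# Hypothetical per-category rollup from a web analytics export.
categories = [
    {"category": "Power Tools", "page_views": 48_000, "orders": 1_150, "revenue": 97_800.0},
    {"category": "Hand Tools",  "page_views": 31_000, "orders":   620, "revenue": 28_400.0},
]

for row in categories:
    conversion_rate = row["orders"] / row["page_views"]
    views_per_dollar = row["page_views"] / row["revenue"]   # "traffic per revenue"
    print(f"{row['category']}: conversion {conversion_rate:.2%}, "
          f"{views_per_dollar:.2f} page views per dollar of revenue")
```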

User Testing Metrics

User testing software such as Treejack has users run through specific tasks: locating the category in a taxonomy where they would expect to find a particular product. These tests require a decent-sized pool of testers so that reliable measurements can be gathered, and there are usually 10-20 tasks per test. The key metrics gleaned from a test, both overall and task by task, are:

  • Success in selecting the right category
  • Average navigation time
  • Directness

Success rate indicates how often the tester selected the correct category for product classification.

Average navigation time tells us how long it took the tester to select the category.

Directness indicates whether the tester went straight to the category selection or if they had to look around the taxonomy, or “pogostick,” until they found a category they felt fit the product.

These metrics can be aggregated, so we can see averages for specific portions of the taxonomy or for the taxonomy overall.
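
The sketch below shows how those three rates might be aggregated from raw task results; the records are hypothetical and do not represent Treejack’s actual export format.

```python
# Hypothetical tree-test results: one record per tester per task.
# "direct" means the tester went straight to an answer without backtracking.
results = [
    {"task": "Find cordless drills", "correct": True,  "seconds": 14.2, "direct": True},
    {"task": "Find cordless drills", "correct": False, "seconds": 31.7, "direct": False},
    {"task": "Find safety gloves",   "correct": True,  "seconds": 9.8,  "direct": True},
]

success_rate = sum(r["correct"] for r in results) / len(results)
avg_nav_time = sum(r["seconds"] for r in results) / len(results)
directness   = sum(r["direct"]  for r in results) / len(results)

print(f"Success rate: {success_rate:.0%}")
print(f"Average navigation time: {avg_nav_time:.1f}s")
print(f"Directness: {directness:.0%}")
```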

Design Metrics

Design metrics are specific to a taxonomy. These measures include (among others):

  • Items per category
  • Item concentration
  • Buried items

Items per category provides insight into the balance of the taxonomy. If the number of products per category varies widely, the taxonomy might need to be rebalanced. For example, if a category contains only a few items, it may need to be combined with another category; if a category contains many products, it might need to be split into subgroups so the user does not have to scroll through as many options to find the target product. The same measures can be examined at higher levels of the taxonomy, from the L1 down to the terminal nodes. Revenue per category can provide a broader picture and help indicate whether categories need to be merged or split.
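
One simple way to surface rebalancing candidates is to flag categories whose item counts fall far outside the norm. Here is a minimal sketch, assuming you can export item counts per terminal category; the counts and thresholds are hypothetical.

```python
# Hypothetical item counts per terminal category.
item_counts = {
    "Drills": 340,
    "Drill Bits": 1_900,   # possible candidate to split into subgroups
    "Drill Chucks": 4,     # possible candidate to merge with a sibling
    "Saws": 275,
}

# Hypothetical thresholds; tune them to your own catalog.
MIN_ITEMS, MAX_ITEMS = 10, 1_000

for category, count in item_counts.items():
    if count < MIN_ITEMS:
        print(f"{category}: only {count} items (consider merging)")
    elif count > MAX_ITEMS:
        print(f"{category}: {count} items (consider splitting into subgroups)")
```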

Item concentration brings the metric down to the level of a specific L1. Measuring item concentration among an L1’s subcategories reveals the balance within that L1, using a version of the Herfindahl index (an accepted measure of market concentration among the largest firms in an industry) modified to use item concentration rather than market share. This can show us where the hierarchy is imbalanced.
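
Here is a minimal sketch of that modified index, computed over item counts for one hypothetical L1’s subcategories.

```python
# Herfindahl-style index over one L1's subcategories, using item
# counts in place of market shares. Counts are hypothetical.
subcategory_items = {"Abrasives": 1_200, "Adhesives": 300, "Tapes": 150, "Sealants": 50}

total = sum(subcategory_items.values())
concentration = sum((count / total) ** 2 for count in subcategory_items.values())

# Ranges from 1/n (perfectly balanced) to 1.0 (everything in one subcategory).
print(f"Item concentration for this L1: {concentration:.3f}")
```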

Buried items is another metric that can be used to find opportunities to improve taxonomy design. Here we are measuring the degree to which items or categories are buried in the hierarchy (for example, placed in a catch-all category called “Other”) and whether a specific category needs to be moved up or down a level in that hierarchy.
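
A sketch of one way to flag buried items, assuming each item carries the path to its terminal category; the paths and depth threshold are hypothetical.

```python
# Hypothetical item paths; depth is the number of levels from the L1
# down to the terminal category.
item_paths = {
    "Cordless Drill X100": ["Power Tools", "Drills", "Cordless Drills"],
    "Odd Adapter Z9":      ["Power Tools", "Accessories", "Other", "Misc", "Adapters"],
}

MAX_DEPTH = 4  # hypothetical threshold for "buried"

for item, path in item_paths.items():
    if len(path) > MAX_DEPTH or "Other" in path:
        print(f"{item}: buried at depth {len(path)} via {' > '.join(path)}")
```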

Consider revenue

Revenue should be taken into account in design metrics as much as possible, though permissions may be needed to gather that data. A trailing year of revenue data will be more accurate than a shorter time frame because of seasonality: some categories sell at a higher rate at different times of the year. If revenue is not available, proxies such as order counts or a tiered priority ranking of items and categories can be used. Weighting by these numbers gives high-revenue items more influence on the analytics.
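
As a sketch of how revenue (or an order-count proxy) could feed the same concentration calculation so that high-revenue areas carry more weight, here is a weighted variant; the figures are hypothetical.

```python
# The same concentration calculation, weighted by trailing-year revenue
# (order counts could stand in as a proxy). Figures are hypothetical.
subcategory_revenue = {"Abrasives": 510_000, "Adhesives": 95_000, "Tapes": 22_000, "Sealants": 8_000}

total = sum(subcategory_revenue.values())
weighted_concentration = sum((rev / total) ** 2 for rev in subcategory_revenue.values())
print(f"Revenue-weighted concentration for this L1: {weighted_concentration:.3f}")
```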

Final thoughts on measuring the effectiveness of your website

  • Start simple and focus on outliers. Look at the very low- or high-end numbers to bring balance to the taxonomy.
  • Consider market references – compare your data to competitor websites, both distributors and manufacturers.
  • Consider specialized software and home-grown solutions to get the analytics you want that may not be available in commercial software.
  • Integrate metrics with data governance programs. Create processes to manage the outliers.
  • Use revenue in addition to item counts whenever possible.
  • Tie other metrics to conversion rates whenever possible.
  • Analyze correlations between the three types of metrics (design, testing, web analytics).

Following these guidelines will help you get the most out of your analytics, and bring together insights from web analytics, user testing analytics, and design analytics.

For a deeper dive into how we use customer data models, product data models, content models, and knowledge architecture to create a framework for unified commerce, download our whitepaper: Attribute-Driven Framework for Unified Commerce.

Chantal Schweizer
Chantal Schweizer is a taxonomy professional with over 10 years of experience in product information and taxonomy. Prior to joining Earley Information Science, Chantal spent 9 years on the Product Information team at Grainger and 2 years on Schneider Electric’s PIM team, and also did earlier work in PIM consulting.
