Measuring Website Usage & Taxonomy Effectiveness

How can you use analytics to understand and optimize your taxonomy to drive high-performance eCommerce?

Three key metric sets help us understand this issue:

  • Web Analytics Metrics
  • User Testing Metrics
  • Design Metrics

How the metrics connect to business value is key. One formula with a direct line to business value is:

Traffic x Conversion Rate x Average Order Value = eCommerce Revenue or Returns.
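
For example, with purely illustrative numbers: 100,000 visits x a 2% conversion rate x an $80 average order value yields $160,000 in revenue, and improving any one of the three factors lifts the total.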

Conversion rate is often the metric of primary importance, but that doesn’t mean the others aren’t important. Conversion rate can also mean different things for manufacturers vs. retailers: on the retail side, it may be calculated as the percentage of users who buy a product, while a manufacturer may count a content download as a conversion.

Web Analytics Metrics

Web analytics provide metrics that measure success based on website activity. These include traffic (how many page views), traffic per revenue (how many page views per dollar), conversion rate, navigation time, and directness. Traffic, traffic per revenue, and conversion rate are closely tied together: you need to see all of them to understand which products are successfully moving to purchase. This is real-time data that provides a line of sight to what is working and what is not.
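
As a minimal sketch (all figures and variable names below are invented, not tied to any particular analytics platform), the derived metrics look like this:

```python
# A minimal sketch with invented figures; real numbers would come from
# your analytics platform.
page_views = 250_000      # traffic: page views in the period
orders = 3_750            # completed purchases in the period
revenue = 300_000.00      # revenue in the period

conversion_rate = orders / page_views        # share of views that convert
views_per_dollar = page_views / revenue      # "traffic per revenue"

print(f"Conversion rate:  {conversion_rate:.2%}")   # 1.50%
print(f"Views per dollar: {views_per_dollar:.2f}")  # 0.83
```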

User Testing Metrics

Tree-testing software such as Treejack asks users to run through specific tasks: locating the category in a taxonomy where they would expect to find a particular product. These tests require a reasonably sized pool of testers so that meaningful measurements can be gathered, and there are usually 10-20 tasks per test. The key metrics gleaned from this test, both overall and task by task, are:

  • Success in selecting the right category
  • Average navigation time
  • Directness

Success rate indicates how often testers selected the correct category for the product.

Average navigation time tells us how long it took the tester to select the category.

Directness indicates whether the tester went straight to the category selection or if they had to look around the taxonomy, or “pogostick,” until they found a category they felt fit the product.

These metrics can be aggregated, so we can see averages for specific portions of the taxonomy or for the taxonomy overall.
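
As a rough illustration of that aggregation (the records below are hypothetical and do not reflect Treejack’s actual export format):

```python
# Hypothetical tree-test results: one record per task attempt.
results = [
    {"task": 1, "success": True,  "seconds": 12.4, "direct": True},
    {"task": 1, "success": False, "seconds": 31.0, "direct": False},
    {"task": 2, "success": True,  "seconds": 8.9,  "direct": True},
    {"task": 2, "success": True,  "seconds": 15.2, "direct": False},
]

n = len(results)
success_rate = sum(r["success"] for r in results) / n
avg_nav_time = sum(r["seconds"] for r in results) / n
directness = sum(r["direct"] for r in results) / n

print(f"Success rate: {success_rate:.0%}")   # 75%
print(f"Avg nav time: {avg_nav_time:.1f}s")  # 16.9s
print(f"Directness:   {directness:.0%}")     # 50%
```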

Design Metrics

Design metrics are specific to a taxonomy. These measures include (among others):

  • Items per category
  • Item concentration
  • Buried items

Items per category provides insight into the balance of the taxonomy. If the number of products per category varies widely, the taxonomy might need to be rebalanced. For example, if a category contains only a few items, it may need to be combined with another category; if it contains many products, it might need to be split into subgroups so the user does not have to scroll through as many options to find the target product. The same checks can be applied at every level of the taxonomy, from L1 down to the terminal nodes. Revenue per category can provide a broader picture and indicate whether categories need to be merged or split.
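
A minimal sketch of how such an outlier check might work, assuming hypothetical category counts and illustrative thresholds:

```python
# Hypothetical item counts per category; thresholds are illustrative
# and would be tuned to your catalog.
items_per_category = {
    "Drills": 240,
    "Saw Blades": 35,
    "Laser Levels": 4,
    "Sanders": 61,
}

MERGE_BELOW = 10   # too few items: candidate to merge
SPLIT_ABOVE = 150  # too many items: candidate to split

for category, count in sorted(items_per_category.items()):
    if count < MERGE_BELOW:
        print(f"{category}: {count} items -- consider merging")
    elif count > SPLIT_ABOVE:
        print(f"{category}: {count} items -- consider splitting")
```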

Item concentration brings the metric down to the level of a specific L1. Measuring item concentration among the subcategories reveals the balance within that L1, using a version of the Herfindahl index (an accepted measure of market concentration among the largest firms in an industry) modified for item concentration rather than market share. This shows us where the hierarchy is imbalanced.
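
A sketch of the modified index, using invented subcategory counts:

```python
# Hypothetical subcategory item counts for one L1 category.
subcategory_items = {"Cordless": 480, "Corded": 60, "Accessories": 60}

total = sum(subcategory_items.values())
# Herfindahl-style index: sum of squared item shares instead of
# squared market shares.
hhi = sum((count / total) ** 2 for count in subcategory_items.values())

# Ranges from 1/k (perfectly even across k subcategories) to 1.0
# (all items in one subcategory); higher means more imbalance.
print(f"Item concentration: {hhi:.2f}")  # 0.66
```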

Buried items is another metric that can be used to find opportunities to improve taxonomy design. Here we measure the degree to which items are buried in the hierarchy (for example, placed in a catch-all category called “Other”) and whether a specific category needs to be moved up or down a level in that hierarchy.
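
One way to sketch a buried-items check, assuming hypothetical item paths and an illustrative depth threshold:

```python
# Hypothetical category paths from L1 down to the item's terminal node.
item_paths = {
    "Impact Driver Bit Set": ["Tools", "Power Tools", "Accessories", "Other"],
    "Cordless Drill": ["Tools", "Power Tools", "Drills"],
}

MAX_DEPTH = 3  # illustrative depth threshold

for item, path in item_paths.items():
    # Flag items that sit too deep or in a catch-all category.
    if len(path) > MAX_DEPTH or "Other" in path:
        print(f"Buried: {item} ({' > '.join(path)})")
```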

Consider revenue

Revenue should be taken into account in design metrics as much as possible, though permissions may be needed to gather that data. A trailing year of revenue data will be more accurate than a shorter time frame because of seasonality: some categories sell at a higher rate at different times of the year. If revenue is not available, proxies can be used, such as order counts or a tiered ranking that reflects the priority of an item or category. Weighting by these numbers brings high-revenue items up in the analysis so they have more impact on the analytics.
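
A sketch of that weighting, with invented figures (a trailing twelve months of revenue would feed the weights in practice):

```python
# Hypothetical trailing-12-month revenue used to weight categories;
# order counts could stand in when revenue is unavailable.
category_revenue = {
    "Drills": 1_200_000,
    "Saw Blades": 90_000,
    "Laser Levels": 15_000,
}

total = sum(category_revenue.values())
for category, revenue in sorted(
    category_revenue.items(), key=lambda kv: kv[1], reverse=True
):
    # Revenue share becomes the weight, so high-revenue categories
    # have more impact on the aggregated design metrics.
    share = revenue / total
    print(f"{category}: weight {share:.1%}")
```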

Final thoughts on measuring the effectiveness of your website

  • Start simple and focus on outliers. Look at the very low- or high-end numbers to bring balance to the taxonomy.
  • Consider market references – compare your data to competitor websites, both distributors and manufacturers.
  • Consider specialized software and home-grown solutions to get analytics that may not be available in commercial software.
  • Integrate metrics with data governance programs. Create processes to manage the outliers.
  • Use revenue in addition to item counts whenever possible.
  • Tie other metrics to conversion rates whenever possible.
  • Analyze correlations between the three types of metrics (design, testing, web analytics) – see the sketch after this list.
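
As one illustration of that last point, here is a sketch (with invented per-category figures) correlating a design metric with a web analytics metric:

```python
import statistics

# Hypothetical per-category join of a design metric (items per
# category) and a web analytics metric (conversion rate).
categories = {
    "Drills": (240, 0.031),
    "Saw Blades": (35, 0.052),
    "Laser Levels": (4, 0.018),
    "Sanders": (61, 0.044),
}

item_counts = [items for items, _ in categories.values()]
conversion = [rate for _, rate in categories.values()]

# statistics.correlation requires Python 3.10+.
r = statistics.correlation(item_counts, conversion)
print(f"Items-vs-conversion correlation: {r:.2f}")
```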

Following these guidelines will help you get the most out of your analytics, and bring together insights from web analytics, user testing analytics, and design analytics.

For a deeper dive into how we use customer data models, product data models, content models, and knowledge architecture to create a framework for unified commerce, download our whitepaper: Attribute-Driven Framework for Unified Commerce.
