Information managers need to be able to quantify the value of curation, because otherwise justifying its cost can be difficult. Curation is a key enabler for collaboration, data quality, accessibility, adaptability, and usability. All are great capabilities, but can we know for sure how much they are worth? The answer is yes: We can calculate the ROI on curation efforts, and in this first of three articles I'm going to show you how.
So, is a curation effort worth US$100,000 or $10,000,000? What's a reasonable annual spend? Let me show you how to determine this with confidence.
Curation consists of the intentional information management efforts that improve desired data and knowledge outcomes. For example, "search curation" improves the quality and experience of search. To measure ROI, we focus on the outcomes more than on the actions that produce them, but typical curation solutions include data modeling and cleansing, process optimization, taxonomy and IA development, tool selection and synchronization, content strategy, automation, and governance.
The Outcomes Model
Measurability requires detectable outcomes. Instead of trying to isolate and measure the value of "taxonomy," we measure the impact of "taxonomy-like efforts" (i.e. curation) on the business. We don't need to get the number exactly right: any estimate that improves on our current knowledge provides value, and because these are estimates, we can acknowledge they won't be perfect at the start and refine them over time.
Consider any business action or activity that users might need to repeat until they get it right: performing a content search, completing a design, building a document, or researching a decision. Our model describes this as a simple loop: (1) perform the action, (2) succeed or fail, (3) try again or quit. To calculate the cost of this loop we assign a cost to each step: (1) the cost of performing the action, (2) the cost of failure, and (3) the cost of quitting.
The ideal situation -- in which the action is performed once and success is immediately achieved -- incurs the smallest cost, but can still be improved by reducing the action cost (e.g. speeding it up). In a more realistic situation, some users try not just once, but two, three, or four times before they achieve their goal. Others quit without ever achieving their goals; and some users mistakenly believe they've succeeded and quit without realizing their error until later.
The Obvious Costs
In the simplest interpretation, three costs are associated with this model: the cost of performing the action, and two different failure costs. To illustrate these, let's consider the simple example of someone trying to log into an account. The performance cost is often measured in time. How long does it take someone to type their account name and password into a dialog? We can measure this. When I use challenging passwords, I need about ten seconds. If I mistype something, I'll probably try again, but each additional attempt will take longer as I type more carefully or rethink what I know.
After some number of tries, however, one of two things will happen. Either I'll abandon the task and be forced to incur the cost of trying something else (e.g. using the phone, resetting my password, using someone else's account), or I'll trust myself one too many times (false positive) and lock myself out of my account.
In this example, most of my costs are measurable as time spent or wasted. Depending on the situation, I could spend upwards of 15 minutes talking to technical support on the phone. There might also be money or social costs if I can't accomplish my goal: Paying fees or penalties, switching service providers, embarrassment, and other realized risks like lost wages or lost data.
In addition, the company whose application is asking me for a password also experiences costs. Customer retry time might not seem that important (since it's the customer, not the company, doing the retrying), but the company definitely spends money on technical support, loses profit when customers quit, and may suffer some realized risk around account theft.
You can see that this model has near-universal applicability: any action as defined, any number of repetitions, and any number of defined success and failure outcomes. So let's look at a specific example close to the hearts of information managers: search quality.
Assume that employees at Acme perform 10,000 intranet searches every business day of the year. Despite recent IT initiatives, search is still slow and clunky, and the metadata are pretty bad. It takes 10 seconds to perform a search, and often another 10 seconds to browse and interpret the results. Search quality could be better too: there is only a 60% likelihood that the first page of results provides what the user wants.
The company observes that half of its employees quit trying after a third failure, and everyone else quits by the fourth attempt. Meanwhile, the two biggest quality gotchas are version control and indexing speed. Because the search index is updated only overnight, not in real time, documents aren't searchable until the day after they're created; and old versions of a document often look a lot like current versions. So maybe 1% of the time, people grab an outdated copy when something more current actually exists, completely missing the version that hasn't yet been indexed.
How does this translate into money? Acme employees spend over 22,000 hours annually using search; at $30/hour per employee, Acme spends about $650k on search time, over and above any technology costs. Meanwhile, search fails over 6,000 times per day, and 450 times every day someone gives up in frustration. For simplicity, let's say that when an employee gives up, the company loses an additional $30: calls for help, rework, and project delays. Additionally, remember that 1% of searches retrieve the wrong document; in those cases, the $30 rework cost might quadruple to $120.
So our total annual costs now include $650k in search time, $3.4M from employees who give up, and a further $2.9M penalty from working with bad documents. In total, Acme spends $6.9M every year because of the current suboptimal search situation.
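If you want to follow the arithmetic, the sketch below reproduces these totals. The stated figures (about 22,000 hours, 6,000-plus daily failures, roughly 450 daily quits) are consistent with about 10,000 initiated searches per day and roughly 250 business days per year, so those are the inputs used here; the constant names are mine.

```python
HOURLY_RATE = 30.0          # loaded cost per employee-hour, $
SEARCHES_PER_DAY = 10_000   # initiated search tasks per business day
BUSINESS_DAYS = 250         # assumed working days per year
ATTEMPT_SECONDS = 20        # 10 s to search + 10 s to scan the results
P_SUCCESS = 0.60            # chance any one attempt finds the answer
QUIT_COST = 30.0            # extra cost per abandoned task, $
BAD_DOC_RATE = 0.01         # "successes" that return the wrong version
BAD_DOC_COST = 120.0        # quadrupled rework cost, $

# Attempts per day: everyone tries up to three times; after a third
# failure, half quit and half make a fourth (final) attempt.
a1 = SEARCHES_PER_DAY
a2 = a1 * (1 - P_SUCCESS)
a3 = a2 * (1 - P_SUCCESS)
third_failures = a3 * (1 - P_SUCCESS)
a4 = third_failures / 2
attempts_per_day = a1 + a2 + a3 + a4            # ~15,920

hours_per_year = attempts_per_day * ATTEMPT_SECONDS / 3600 * BUSINESS_DAYS
time_cost = hours_per_year * HOURLY_RATE        # ~22,100 h -> ~$660k

quits_per_day = third_failures / 2 + a4 * (1 - P_SUCCESS)   # ~448
quit_cost = quits_per_day * BUSINESS_DAYS * QUIT_COST       # ~$3.36M

successes_per_day = SEARCHES_PER_DAY - quits_per_day
bad_docs_per_day = successes_per_day * BAD_DOC_RATE         # ~96
bad_doc_cost = bad_docs_per_day * BUSINESS_DAYS * BAD_DOC_COST  # ~$2.87M

total = time_cost + quit_cost + bad_doc_cost    # ~$6.9M per year
```

The rounding in the article ($650k, $3.4M, $2.9M) is deliberate; this is an estimate, and order-of-magnitude accuracy is what matters.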
And we're not even finished.
The Hidden Costs
The negativity that is generated by failure of any kind has a cumulative effect. It can impact relationships among companies, employees, customers, and vendors. A bad user experience will cause customers to shop elsewhere, employees to take shortcuts that escalate costs and losses, and employee morale and retention to suffer. These problems can further cascade as customers and employees go to friends and social media to share their negativity and lure yet more people away.
When these problems grow sufficiently large, the company is forced to spend still more money to repair systems, processes, and its own reputation. And underneath everything there is always the actuarial cost of extreme risk realization, when a highly unlikely but catastrophic result occurs: legal action, brand destruction, and worse. These extreme situations are discussed constantly among governance and compliance professionals, and recent European privacy regulations have raised the stakes even further.
In our search example above, where 1% of searches retrieve the wrong document (a conservative estimate), it's not hard to imagine how an error might end up in a customer-facing document, which could in turn lead to client loss. Even if only 1 in 10,000 such errors adversely affects Acme's customer base or damages its reputation, remember that version control errors are happening 96 times per day. In an average year, Acme's brand will take two very big hits. Depending on the nature and fragility of Acme's business, the $6.9M spend we calculated earlier might be only a small fraction of what Acme really needs to consider.
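The "two very big hits" figure is simple expected-value arithmetic, using the 1-in-10,000 damage rate assumed above and roughly 250 business days per year:

```python
BAD_DOCS_PER_DAY = 96        # version-control errors per business day
BUSINESS_DAYS = 250          # assumed working days per year
P_REPUTATION_HIT = 1 / 10_000  # share of errors that damage the brand (assumed)

errors_per_year = BAD_DOCS_PER_DAY * BUSINESS_DAYS   # 24,000 bad retrievals
expected_hits = errors_per_year * P_REPUTATION_HIT   # ~2.4 brand hits per year
```

Even at a damage rate this tiny, sheer volume turns a rare event into an annual expectation.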
Finding the Right Curation Solution
Having estimated the cost of Acme's search experience, we can look for opportunities to invest in improvement. There are a number of great places to start:
Reduce search time from 20 seconds to 10 seconds. Savings: $330k/year
Increase search quality from 60% to 80%. Savings: $3M/year
Reduce false positives from 1% to 0.1%. Savings: $2.6M/year (not including the 90% reduction in reputation hits)
All of the above. Savings: $6M/year
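These savings figures can be sanity-checked by wrapping the same retry arithmetic in a function and comparing scenarios. As before, this sketch assumes 10,000 searches per day, 250 business days, and the half-quit-after-three-failures rule; the function name and defaults are mine.

```python
def annual_costs(p_success, attempt_seconds, bad_doc_rate,
                 searches_per_day=10_000, days=250, rate=30.0,
                 quit_cost=30.0, bad_doc_cost=120.0):
    """Annual search cost under the retry model: up to three tries,
    then half quit and half make one final attempt."""
    a1 = searches_per_day
    a2 = a1 * (1 - p_success)
    a3 = a2 * (1 - p_success)
    third_fail = a3 * (1 - p_success)
    a4 = third_fail / 2
    attempts = a1 + a2 + a3 + a4
    quits = third_fail / 2 + a4 * (1 - p_success)
    time_cost = attempts * attempt_seconds / 3600 * days * rate
    quit_total = quits * days * quit_cost
    bad_total = (searches_per_day - quits) * bad_doc_rate * days * bad_doc_cost
    return time_cost + quit_total + bad_total

baseline = annual_costs(0.60, 20, 0.01)             # ~$6.9M
faster   = baseline - annual_costs(0.60, 10, 0.01)  # halve search time: ~$330k
better   = baseline - annual_costs(0.80, 20, 0.01)  # raise quality:     ~$3M
cleaner  = baseline - annual_costs(0.60, 20, 0.001) # cut false hits:    ~$2.6M
combined = baseline - annual_costs(0.80, 10, 0.001) # all three:         ~$6M
```

Note that the combined scenario is slightly less than the sum of the individual savings, because the improvements overlap: better quality means fewer retries left to speed up.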
If a company chooses to move forward on all of the above options and saves $6M annually, investing even a fraction of that amount in improved curation could produce an excellent ROI.
So ask yourself: If you had a better idea of how much time and effort your company is wasting, would you start looking for ways to improve your information management? Give us a call. We can plug your numbers into the model and make some rational suggestions on how to start saving immediately.
Coming soon are two articles (at least!) that continue this conversation:
Calculating the costs of suboptimal product information management quality
Calculating the costs of suboptimal document management quality
Be sure to subscribe for updates so you don’t miss either.