
    Monetization - Adding Value To Data With Metadata

    Metadata gives information meaning. Metadata is therefore the basis for monetizing information, because monetization depends on understanding what that information means to a user trying to solve a problem.

    Some of the biggest challenges organizations face are about deriving meaning from data, content, and information. Data is generated by our everyday activities, whether we are on a laptop, a tablet, or a cell phone. Smartphones send off continuous streams of metadata that are collated, organized, and sold throughout an ecosystem of aggregators and resellers.

    Increasingly, the greatest value of this data comes from integrating it across different touchpoints, both physical and virtual. Because customers interact with a variety of departments and functions during their journey, multiple systems support these interactions. A customer may see an ad on TV, research a product on the web, use a mobile phone to find a nearby location, and complete the purchase in a store. Once at home, the customer may need to call a help desk or find support information online.

    Content and data supporting customer interactions will be spread across the associated applications, with each touch point requiring some model of the customer and their needs. External data about demographics may inform local advertising. The web site leverages intelligence about personas and user scenarios with its content models. Mobile devices collate geographic targeting data with user search, while in-store displays and offers align with online content, and point-of-sale systems synchronize with mobile pricing data. Help desk and support tools leverage knowledge models. A unified picture of the customer behaviors, interactions and experience requires the ability to harvest, analyze, make sense of and act on data from this complex array of venues, platforms and processes. 

    Senior data leaders need to deal with the increasing volume and number of data sources that feed core processes, as well as how to extend capabilities with new data sources. For any given process – or a journey, like the customer’s, that spans multiple processes – business leaders need to know how to leverage an increasingly sophisticated set of systems and tools, and IT leaders need to understand how to integrate and maintain those systems.

    Many organizations are not well prepared for the expected growth in complexity, volume, and velocity of data and content. This limitation will significantly impact their operations and put them at a competitive disadvantage compared with those organizations where metadata strategy is correctly integrated into the business value proposition.

    Sources of Metadata Value

    Each of the systems that support the customer experience leverages different metadata structures. The systems have different models of customers and users because of the different ways the information they serve fits into the process at hand. These models tell each system what information to collect, process, and present for each customer. An ecommerce application cares about specific aspects of the individual customer (account information, payment data, order history), whereas a help desk system cares about other aspects in the aggregate (classes of problems that users have, their level of knowledge or technical sophistication). A marketing campaign for a new product that addresses problems large numbers of customers have had with an older-generation product could be accomplished by mining both data sources.
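As a minimal sketch of this idea – all system names, field names, and records below are hypothetical, invented for illustration – the campaign amounts to joining the help desk's view of the customer back to the ecommerce view:

```python
# Hypothetical records from two systems that model the same customer differently.
ecommerce = [  # cares about accounts, payments, order history
    {"customer_id": "C100", "orders": ["widget-v1"], "lifetime_value": 740.00},
    {"customer_id": "C101", "orders": ["widget-v2"], "lifetime_value": 120.00},
]
helpdesk = [  # cares about aggregate problem classes, not payment details
    {"customer_id": "C100", "problem_class": "battery-drain", "product": "widget-v1"},
    {"customer_id": "C102", "problem_class": "battery-drain", "product": "widget-v1"},
]

# Campaign: reach customers who reported problems with the older-generation
# product, by joining help desk tickets back to ecommerce accounts.
troubled = {t["customer_id"] for t in helpdesk if t["product"] == "widget-v1"}
targets = [c for c in ecommerce if c["customer_id"] in troubled]
print([c["customer_id"] for c in targets])  # → ['C100']
```

The join only works because both systems happen to share a `customer_id`; in practice that alignment is exactly what a metadata framework has to supply.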

    Integrating multiple data sources and gleaning meaningful insights is the goal of any kind of business intelligence program.  Marketers are targeting customers by combining their internal data with external sources. They are attempting to improve their engagement with more meaningful offers that meet current or imminent needs.  This process is about anticipating what the user needs and giving them information that moves them closer to meeting that need. 

    Big Data, Small Data and Metadata

    Many organizations feel compelled to undertake Big Data initiatives but have not clearly articulated what they are trying to accomplish. Whether your initiatives require new Big Data approaches for processing large amounts of information (a distributed Hadoop or NoSQL approach) or can be managed with traditional analytic tools (“small data”), certain foundational elements need to be considered.

    To achieve business value from any kind of data, be it Big Data or “small data,” you must develop a common metadata framework that maps terms and concepts across systems, along with translations of aggregate models of behavior into specific attributes of users. What problems do these types of users have in common? How can offers be tuned to meet their needs? What content will be required, and how can we check the progress of campaigns? Where are we missing information and insight about our customers, products, and processes?
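One way to picture such a framework is a crosswalk that maps each system's local field names onto shared domain concepts. The sketch below is illustrative only – the system names, local terms, and shared concepts are all assumptions, not a prescribed schema:

```python
# Illustrative crosswalk: each system's local terms mapped to shared concepts.
CROSSWALK = {
    "ecommerce":     {"cust_no": "customer_id", "sku": "product_id"},
    "helpdesk":      {"caller_ref": "customer_id", "item_code": "product_id"},
    "web_analytics": {"visitor_id": "customer_id", "page_product": "product_id"},
}

def normalize(system: str, record: dict) -> dict:
    """Translate a system-local record into the shared domain vocabulary."""
    mapping = CROSSWALK[system]
    return {mapping.get(field, field): value for field, value in record.items()}

row = normalize("helpdesk", {"caller_ref": "C100", "item_code": "widget-v1"})
print(row)  # → {'customer_id': 'C100', 'product_id': 'widget-v1'}
```

Once every source speaks the shared vocabulary, records from any combination of systems can be aggregated and compared.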

    Big Data programs can be made concrete by developing attribute models for each system that contains needed information and then creating a set of hypotheses that can be tested through “what if” scenarios. The foundational requirement for this is the creation of a domain model that allows the attributes to be aligned, followed by the creation of the vocabularies that will be used to aggregate content and data.
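To make the hypothesis-testing step concrete, here is a sketch of a “what if” scenario run over attributes that have already been aligned to a shared domain model. The attributes, segment labels, and thresholds are invented for illustration:

```python
# Customer attributes aligned to a shared domain model; values are invented.
customers = [
    {"customer_id": "C100", "segment": "power-user", "support_tickets": 3, "ltv": 740.0},
    {"customer_id": "C101", "segment": "casual",     "support_tickets": 0, "ltv": 120.0},
    {"customer_id": "C102", "segment": "power-user", "support_tickets": 1, "ltv": 310.0},
]

def what_if(population, hypothesis):
    """Test a hypothesis by measuring how much of the population it selects."""
    hits = [c for c in population if hypothesis(c)]
    return len(hits) / len(population), hits

# Hypothesis: power users with open support friction are the best targets
# for a trade-up campaign on the next-generation product.
share, hits = what_if(
    customers,
    lambda c: c["segment"] == "power-user" and c["support_tickets"] > 0,
)
print(round(share, 2), [c["customer_id"] for c in hits])  # → 0.67 ['C100', 'C102']
```

Each hypothesis becomes a small, testable predicate over the aligned attributes, so scenarios can be compared and refined before committing to a campaign.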

    Any data project – whether Big Data or small data – requires these constructs as a foundational element. The model needs to be comprehensive, cutting across systems that don’t typically talk to one another, such as a knowledge base and a commerce application. The models need to contain the semantics of unstructured content architecture along with structured data models. Taking this step also requires broader knowledge of the customer and associated processes so that meaningful insights can be gleaned from the signals. A data scientist with no knowledge of customer behaviors and experiences will have a more challenging time than a business person armed with the correct framework for analyzing the data.

    For a look into how we use customer data models, product data models, content models, and knowledge architecture to create a framework for unified commerce, download our whitepaper: Attribute-Driven Framework for Unified Commerce.

    Seth Earley
    Seth Earley is the Founder & CEO of Earley Information Science and the author of the award-winning book The AI-Powered Enterprise: Harness the Power of Ontologies to Make Your Business Smarter, Faster, and More Profitable. He is an expert with 20+ years of experience in knowledge strategy, data and information architecture, search-based applications, and information findability solutions. He has worked with a diverse roster of Fortune 1000 companies, helping them achieve higher levels of operating performance.
