
Master Data Management: Data Quality Supports Achievement of Business Goals

Sometimes a business problem that sounds simple to solve turns out to be a major challenge. For example, getting a count of a company's customers sounds easy. But within an enterprise, different information systems may hold slightly different data for the same customer, producing duplicate records. The same customer can be counted twice, making the total number of customers incorrect. A company also needs to determine its definition of a customer and get agreement across business units and functional areas: sales may define a customer at the account level, while operations defines it at the location or operating-unit level. The same holds true for product records. A product may be listed in different ways, so analyses that rely on connecting a metric with a product may be inaccurate.
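As a minimal illustration of the counting problem (the records and the matching rule below are hypothetical and deliberately simple), a naive count across two systems overstates the customer total, while even basic normalization before deduplication gets closer to the true number:

```python
# Hypothetical customer records pulled from two different systems.
crm_records = [
    {"name": "Acme Corp.", "postal_code": "02139"},
    {"name": "Globex Inc", "postal_code": "60601"},
]
billing_records = [
    {"name": "ACME Corporation", "postal_code": "02139"},  # same customer, different spelling
    {"name": "Globex Inc.", "postal_code": "60601"},
]

def normalize(record):
    """Very simple match key: lowercase name with punctuation and common
    suffixes stripped, plus postal code. Real MDM matching is far more robust."""
    name = record["name"].lower().replace(".", "").replace(",", "")
    for suffix in (" corporation", " corp", " inc", " llc"):
        name = name.removesuffix(suffix)
    return (name.strip(), record["postal_code"])

all_records = crm_records + billing_records
naive_count = len(all_records)                            # 4 -- duplicates inflate the total
deduped_count = len({normalize(r) for r in all_records})  # 2 -- one per real customer

print(f"naive count: {naive_count}, after matching: {deduped_count}")
```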

When a company is working on a business intelligence or analytics project, data quality is often a roadblock. Analytics run on poor-quality data will produce results that are not in line with expectations or that do not pass the common-sense test. Inconsistencies make aggregating data, or analyzing it across different functional areas, nearly impossible. Millions of dollars can be spent without the project achieving the desired outcome, such as gaining insight into business operations or tailoring promotions to a particular customer segment.
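The aggregation problem can be shown with a small, hypothetical example: when the same product is listed under different names in different systems, a straightforward group-by splits its revenue across several rows, and any analysis built on those totals is misleading.

```python
from collections import defaultdict

# Hypothetical sales rows from systems that list the same product differently.
sales = [
    {"product": "Widget Pro 3000", "revenue": 1200.0},
    {"product": "WIDGET PRO-3000", "revenue": 800.0},   # same product, different listing
    {"product": "Gadget Mini",     "revenue": 500.0},
]

revenue_by_product = defaultdict(float)
for row in sales:
    revenue_by_product[row["product"]] += row["revenue"]

# The same physical product appears twice, so neither line reflects its true revenue.
for product, revenue in sorted(revenue_by_product.items()):
    print(f"{product}: {revenue:.2f}")
```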

Another issue that can impede effective analysis is that employees want to solve business problems but do not understand the data they have. Often, the perceived solution is the deployment of yet another software product. But this can create yet another set of data that may be inconsistent with other systems. Or data may be updated in one system but not in another, creating inconsistencies across the enterprise. The core issue is not the capabilities of the software so much as data governance and the quality of the data.

The solution is to ensure that there is one set of accurate data, or golden record, and the way to do that is through Master Data Management (MDM). An enterprise-wide information architecture that supports the linking of master data to enterprise data is a mechanism for understanding business concepts and translating them into data elements. Ideally, the information architecture will capture the business landscape, and the exercise of developing it shifts a company's perspective from diverse data sources to a holistic view of enterprise information.
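As one simplified illustration of the golden-record idea (the field names, source systems, and survivorship rule here are assumptions, not a prescribed design), matched records from several systems can be merged by applying a rule such as "most recently updated, non-empty value wins" per attribute:

```python
from datetime import date

# Hypothetical matched records for one customer, each tagged with its
# source system and the date the record was last updated.
matched = [
    {"source": "crm",     "updated": date(2022, 6, 1),
     "name": "Acme Corp", "email": "info@acme.example", "phone": None},
    {"source": "billing", "updated": date(2023, 1, 15),
     "name": "ACME Corporation", "email": None, "phone": "+1-617-555-0100"},
]

def build_golden_record(records, fields):
    """Simple survivorship rule: for each field, take the most recently
    updated non-empty value. Real MDM tools support per-field rules."""
    golden = {}
    by_recency = sorted(records, key=lambda r: r["updated"], reverse=True)
    for f in fields:
        golden[f] = next((r[f] for r in by_recency if r.get(f)), None)
    return golden

print(build_golden_record(matched, ["name", "email", "phone"]))
# {'name': 'ACME Corporation', 'email': 'info@acme.example', 'phone': '+1-617-555-0100'}
```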

EIS provides unique value in having an end-to-end understanding of business technology. Many companies can establish their business intelligence (BI) systems or implement MDM, but do not have a view of the information flow from its origin to the point where raw data becomes tangible, actionable information. Systems integrators attempting to assist them may understand the technology, but cannot follow the information flow needed to complete business activities.

Companies generally realize when they have an issue that needs to be solved by an MDM project. They have inconsistent data that is causing a problem, or are unable to determine whether an action on one element in the organization will affect another. For example, when a transaction takes place with one customer, what else is affected? Is a customer also a supplier, or a potential partner? The lack of clear relationships among data elements impedes achievement of business goals.

In order to answer questions like these, a robust data model that organizes the data elements and shows the relationships among them must be developed. A wireframe showing what the user experience will look like, where data is surfaced in different places, and what the search facets are is a useful tool. But ultimately, data quality is fundamental to customer experience, along with having master data and integrating it into critical business applications.
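One common way to model such relationships (a hypothetical sketch, not a recommended schema) is a party-role structure, in which a single master record can hold several roles, so "is this customer also a supplier?" becomes a simple lookup rather than a reconciliation exercise:

```python
from dataclasses import dataclass, field

@dataclass
class Party:
    """A single master record for an organization or person."""
    party_id: str
    name: str
    roles: set[str] = field(default_factory=set)  # e.g. "customer", "supplier", "partner"

# Hypothetical master data keyed by party ID.
parties = {
    "P-001": Party("P-001", "Acme Corp", {"customer", "supplier"}),
    "P-002": Party("P-002", "Globex Inc", {"customer"}),
}

def has_role(party_id: str, role: str) -> bool:
    party = parties.get(party_id)
    return party is not None and role in party.roles

# When a transaction involves P-001 as a customer, it is easy to see
# that the same party is also a supplier.
print(has_role("P-001", "supplier"))  # True
print(has_role("P-002", "partner"))   # False
```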

For a deeper understanding of the important role of taxonomy in your MDM program, read our white paper: Launching a Master Data Management Program: Eight Key Steps in the Journey

Earley Information Science Team
We're passionate about enterprise data and love discussing industry knowledge, best practices, and insights. We look forward to hearing from you! Comment below to join the conversation.
