
    In another article, I discussed my research into ChatGPT and the interesting results that it produced depending on the order in which I entered queries. In some cases, the technology seemed to learn from a prior query, in others it did not. In many cases, the results were not factually correct.

    In this article, I will delve into more detail about the value of generalized language models, whether ChatGPT can replace knowledge management, and how ChatGPT relates to knowledge management tools such as ontologies and knowledge graphs.

    Seth: When building a chatbot, will a generalized language model work for highly specialized technical content and IP or will a custom knowledge model be needed?

    ChatGPT: In general, a generalized language model may not be sufficient for building a chatbot that handles highly specialized technical content and intellectual property (IP). This is because specialized technical content and IP often require a deep understanding of specific domain-specific concepts and terminology, which may not be adequately captured by a generalized language model.

    In such cases, it may be necessary to use a custom knowledge model that has been specifically tailored to the needs of the chatbot. A custom knowledge model can be created by training a machine learning model on a dataset of domain-specific content and examples of how the chatbot should respond to different queries. This can help to ensure that the chatbot has a deep understanding of the specialized technical content and IP it needs to handle and can provide accurate and useful responses to user queries.

    Overall, the choice between a generalized language model and a custom knowledge model will depend on the specific needs and requirements of the chatbot, as well as the resources available for building and maintaining the model. In general, a custom knowledge model may be more suitable for handling highly specialized technical content and IP, while a generalized language model may be sufficient for more general-purpose chatbots.

    I could not agree more.
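    To make the idea concrete, a domain-specific "knowledge model" can start as something as simple as a retrieval layer over curated question-and-answer pairs. The sketch below is a deliberately minimal illustration of that idea, not how ChatGPT or any production system is built; all of the data and names are hypothetical, and a real system would use embeddings or a fine-tuned model rather than keyword overlap.

```python
# Minimal sketch of a domain-specific answer model: match a user query
# against curated Q&A pairs by keyword overlap. All data is hypothetical;
# a production system would use embeddings or a fine-tuned model.

DOMAIN_FAQ = {
    "how do I reset the controller firmware":
        "Hold the service button for 10 seconds, then power-cycle the unit.",
    "what is the warranty period for the x200 pump":
        "The X200 pump carries a 24-month limited warranty.",
}

def tokenize(text: str) -> set[str]:
    """Lowercase a string and split it into a set of word tokens."""
    return set(text.lower().split())

def best_answer(query: str) -> str:
    """Return the stored answer whose question shares the most words with the query."""
    scored = [(len(tokenize(query) & tokenize(q)), a) for q, a in DOMAIN_FAQ.items()]
    score, answer = max(scored)
    return answer if score > 0 else "No domain answer found."

print(best_answer("How do I reset firmware on the controller?"))
```

    Even this toy version shows why domain curation matters: the quality of the answers depends entirely on the specialized content that was put in, not on general language ability.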

    Seth: Will ChatGPT solve enterprise knowledge challenges?

    ChatGPT: ChatGPT is a variant of the GPT (Generative Pre-trained Transformer) language model developed by OpenAI. It is a machine learning model that has been trained on a large dataset of human-generated text and can generate human-like text in response to prompts. ChatGPT is not specifically designed to address enterprise knowledge challenges.

    That being said, ChatGPT or other language models like it could potentially be used as part of a larger solution for addressing knowledge management challenges within an enterprise. For example, a chatbot powered by ChatGPT or a similar language model could be used to provide quick and convenient access to information and knowledge within an organization. However, it would likely be necessary to integrate additional tools and technologies, such as a structured knowledge base or information retrieval system, to fully address enterprise knowledge challenges.

    It is important to note that ChatGPT is a machine learning model, and it is not capable of independently managing or organizing knowledge within an organization. It can only generate text based on the patterns and relationships it has learned from the data it has been trained on.

    Well said. Generalized language models will handle broad and common tasks, but specialized models (taxonomy, ontology, information architecture, knowledge base design) will still be needed for deeper corporate knowledge and information.
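    The integration pattern described above (a language model paired with a structured knowledge base or retrieval system) can be sketched in a few lines. This is a simplified illustration under stated assumptions: the knowledge base contents are invented, retrieval is naive word overlap, and the model call is left as a hypothetical stub.

```python
# Sketch of pairing retrieval with a language model: fetch the most
# relevant passage from a structured knowledge base, then hand it to
# the model as context. All data here is hypothetical.

KNOWLEDGE_BASE = [
    "The 2023 expense policy caps hotel reimbursement at $250 per night.",
    "VPN access requests are approved by the IT service desk within one day.",
]

def retrieve(query: str, docs: list[str]) -> str:
    """Return the document sharing the most words with the query."""
    q = set(query.lower().split())
    return max(docs, key=lambda d: len(q & set(d.lower().split())))

def build_prompt(query: str) -> str:
    """Combine the retrieved passage and the user question into one prompt."""
    context = retrieve(query, KNOWLEDGE_BASE)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

# In a real system the prompt would go to a model API, e.g.:
#   answer = generate(build_prompt("What is the hotel reimbursement cap?"))
print(build_prompt("What is the hotel reimbursement cap?"))
```

    The structured knowledge base, not the model, is what grounds the answer in the organization's own information; the model only phrases the response.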

    Seth: Will ChatGPT replace knowledge management?

    ChatGPT: It is unlikely that ChatGPT, or any other single tool or technology, will completely replace knowledge management as a discipline. Knowledge management is a broad field that involves a range of activities and practices designed to identify, create, represent, distribute, and enable the adoption and reuse of knowledge within an organization. These activities can include things like creating and maintaining databases, developing training materials and programs, implementing collaborative tools and platforms, and fostering a culture of knowledge sharing within the organization.

    While ChatGPT is a powerful language generation model that can generate human-like text, it is not designed to perform all of the functions of a knowledge management system. Instead, it could potentially be used as one tool among many in a knowledge management strategy, to help generate content or assist with tasks like summarization or document creation. However, it is important to keep in mind that relying on any single tool or technology to manage knowledge can be risky, and it is important to have a diverse and well-rounded knowledge management strategy that takes into account the needs and capabilities of the organization.

    It will be a tool in the toolkit, not something that will replace the KM function.

    Seth: Why are you unable to retrieve information that is easy to find through internet search?

    ChatGPT: As an AI language model, I am not able to browse the internet or access new information beyond what I was trained on. My knowledge is based on the data that was used to train me, and my responses are generated based on that knowledge. I do not have the ability to search the web or access new information as it becomes available.

    Seth: What is the role of ontology in ChatGPT?

    ChatGPT: ChatGPT is a language generation model that uses machine learning techniques to generate human-like text based on patterns and relationships in the data it was trained on. It does not have a deep understanding of the underlying concepts and relationships in the information it generates and does not attempt to represent or classify information in the same way that an ontology would.

    Instead, ChatGPT is primarily focused on generating text that is coherent and natural-sounding, based on the input provided to it. While it may be able to generate text that includes references to concepts and relationships between entities, it does not have an understanding of the meaning or significance of these concepts and relationships.

    In summary, while ontology may be a useful tool for organizing and representing knowledge in some contexts, it is not directly related to the operation of ChatGPT or other language generation models.

    This statement is interesting, since many knowledge models do include ontologies. This is a case where the answer sounds plausible but may not be correct.

    Seth: What is the role of a knowledge graph in ChatGPT?

    ChatGPT: A knowledge graph is a representation of real-world concepts and their relationships, typically in the form of a graph data structure. It is designed to provide a structured and organized way of representing and querying information about the world and is often used in artificial intelligence and natural language processing applications.

    In the context of ChatGPT, a knowledge graph may not play a direct role. ChatGPT is a language generation model that is designed to generate human-like text based on patterns and relationships in the data it was trained on. It does not have a deep understanding of the underlying concepts and relationships in the information it generates and does not use a knowledge graph to represent or classify this information.

    In summary, while a knowledge graph may be a useful tool for organizing and representing knowledge in some contexts, it is not directly related to the operation of ChatGPT or other language generation models.
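    To make the contrast concrete, a knowledge graph in its simplest form is just a set of subject-predicate-object triples that can be queried explicitly, which is exactly the kind of structured, inspectable representation a language model does not maintain. The entities and relations below are hypothetical.

```python
# Minimal sketch of a knowledge graph as subject-predicate-object
# triples, with a simple relationship query. Entities are hypothetical.

TRIPLES = [
    ("X200 Pump", "manufactured_by", "Acme Corp"),
    ("X200 Pump", "compatible_with", "V7 Controller"),
    ("V7 Controller", "manufactured_by", "Acme Corp"),
]

def objects_of(subject: str, predicate: str) -> list[str]:
    """Return all objects linked to a subject by the given predicate."""
    return [o for s, p, o in TRIPLES if s == subject and p == predicate]

print(objects_of("X200 Pump", "compatible_with"))  # → ['V7 Controller']
```

    Because every fact is an explicit triple, answers are traceable to a source assertion, something a generative model's fluent but unverifiable text cannot offer on its own.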

    The big takeaways are that tools such as ChatGPT are incredibly powerful. They provide a glimpse of what the future will bring in terms of conversational interaction with everything from our homes full of smart devices to our cars to corporate information systems. We will be living and working in an ecosystem supported by conversational access to everything. In my book, The AI-Powered Enterprise, I discuss a scenario in which a character works with conversational agents from the time he gets up (reminding him of schedules, meetings, commitments, travel), throughout his workday (finding parts for equipment, negotiating terms with a supplier), through his evening (handling household tasks like paying bills, balancing his investments, buying tickets to a show), until he goes to sleep (setting lights, room temperature, entertainment), and more. Organizations need to prepare for this future by building the knowledge and specific functionality needed to best serve their customers and compete in the marketplace.

    Generalized language models and tools like ChatGPT will level the playing field, and not using them will be a competitive disadvantage. But standardization provides only efficiency: if everyone has something, there is little competitive value. Differentiation is what leads to competitive advantage, and specialized functionality tailored to the needs of the organization (language models, taxonomies, ontologies, knowledge bases) will be the lever that distinguishes one organization from another by helping them serve their customers most effectively.

    It will not be long before bots and virtual assistants become the primary mechanism for accessing information and serving customers. The foundation is built on knowledge, and the organization that best competes with its knowledge of the market, customers, competitors, and solutions will win in the marketplace. Generalized language models are just that: generalized. They will not know the nuances of your IP. Over time, though, more of that IP will become part of the broader information landscape, and the arms race of differentiation will continue based on how well organizations know, understand, and serve their customers.

    Seth Earley
    Seth Earley is the Founder & CEO of Earley Information Science and the author of the award-winning book The AI-Powered Enterprise: Harness the Power of Ontologies to Make Your Business Smarter, Faster, and More Profitable. He is an expert with 20+ years of experience in Knowledge Strategy, Data and Information Architecture, Search-based Applications, and Information Findability solutions. He has worked with a diverse roster of Fortune 1000 companies, helping them to achieve higher levels of operating performance.
