BioIT World - How to Get the Best of Commercial and Open-Source Large Language Models While Protecting Corporate IP - Workshop Notes Processed by LLM

My colleague Chesley Chen assisted with note-taking at the workshop we delivered at BioIT World and then used ChatGPT to summarize the notes. Here is the result.

How to Get the Best of Commercial and Open-Source Large Language Models

While Protecting Corporate IP

April 15, 2024

Seth Earley, CEO, Earley Information Science

Chesley Chen, CEO, Burma Road Ventures

ChatGPT 4 synthesis of the flip chart notes, verbatim, 04.17.24, 4:15pm EDT


Act as a research associate in an information and knowledge management consulting firm. Your company just put on a workshop called "How to Get the Best of Commercial and Open-Source Large Language Models While Protecting Corporate IP." There were 2 breakout session exercises. Please take the flip chart notes and summarize them in a coherent fashion. Here are the 2 exercises and flip chart notes. Please ask enough questions in order to complete the task. If you have no questions, that's okay. [Flip chart notes were copied directly into the prompt.]

ChatGPT 4 response

To summarize the outcomes of the two exercises from your workshop titled "How to Get the Best of Commercial and Open-Source Large Language Models While Protecting Corporate IP," here is a coherent synthesis of the flip chart notes gathered:

Exercise 1: Objectives for the Workshop

Participants expressed a desire to achieve the following objectives from the workshop:

•   Understanding Capabilities and Limitations: Participants are interested in understanding what large language models (LLMs) can and cannot do. Specific interest was noted in handling structured data like tables, their potential in clinical and research applications, and their limitations in certain contexts.

•   Operational Knowledge: The need for knowledge on LLMs' data requirements, cost implications, and the management and organization of data was emphasized. This includes understanding how to integrate information and data architecture effectively.

•   Safety and Bias: There was a strong focus on the ethical aspects of using LLMs, including unbiased training, the mitigation of model hallucinations, IP protection, and the handling of sensitive data. Additionally, the importance of human judgment in deploying LLMs and building trust with these technologies was highlighted.

•   Comparative Insights: Participants showed interest in comparing outputs from different models and how to target and focus LLM applications to meet specific needs.

Exercise 2: Assumptions about LLMs

During this session, attendees shared their assumptions and beliefs about LLMs, which include:

•   Functionality and Limitations: There is a perception that LLMs are primarily linguistic tools with limited creative capabilities and are highly prompt-sensitive. Their non-deterministic nature makes validation and unification challenging.

•   Economic and Technical Constraints: Concerns were voiced about the high costs and significant computational demands (e.g., large GPU requirements) associated with training LLMs. The process is seen as time-consuming and likely expensive.

•   Utility and Impact: Despite their limitations, LLMs are viewed as beneficial tools that can democratize data access and assist in metadata translation. They are considered analogous to CPUs in their role as foundational technologies that support applications.

•   Challenges in Data and Bias: The effectiveness of LLMs is believed to be only as good as the data they are trained on, highlighting issues with data bias and the need for extensive data to improve accuracy. There was also concern about the potential misuse of LLMs and the necessity for users to be trained in prompt writing.

•   Future Expectations: Some speculative thoughts were shared about the future evolution of LLMs, including a humorous or hyperbolic suggestion about achieving consciousness within five years.

This synthesis provides a comprehensive overview of the main points of discussion and concern among the participants, highlighting their objectives and assumptions regarding the use of large language models in various applications. This should serve as a useful reference for shaping future discussions and strategies in your consulting work.

Some great takeaways and knowledge nuggets. Zero-shot prompting worked well here. We discussed most of these topics and went down a bit of a rabbit hole about whether GPTs contain knowledge. I say they do have knowledge of the world. The attendee countered with, "Is 2+2=5 knowledge?" It is a creative or conceptual representation that is factually incorrect: one number plus another number equals a third number, which is conceptually correct. That is what GPTs represent, things that are conceptually correct or likely to follow. But an output can be statistically plausible and still factually incorrect; the hallucination usually sounds reasonable, or at least possible.
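For readers unfamiliar with the term, "zero-shot" simply means the prompt states the role and task directly and pastes in the raw material, with no worked examples included. A minimal sketch of how the workshop prompt was structured is below; the function and parameter names are my own illustration, not part of any particular API:

```python
def build_zero_shot_prompt(role: str, task: str, notes: str) -> str:
    """Assemble a zero-shot prompt: role, task, and raw notes, no examples."""
    return (
        f"Act as {role}. {task}\n\n"
        f"Flip chart notes:\n{notes}\n\n"
        "Please ask clarifying questions if needed; if you have none, "
        "summarize the notes in a coherent fashion."
    )

prompt = build_zero_shot_prompt(
    role="a research associate in a knowledge management consulting firm",
    task="Summarize the flip chart notes from two breakout exercises.",
    notes="[flip chart notes pasted here]",
)
print(prompt)
```

The resulting string would then be sent to whichever model you are using; the point is that no example summaries are supplied, so the model relies entirely on the instructions and its pretraining.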

Meet the Author
Seth Earley

Seth Earley is the Founder & CEO of Earley Information Science and the author of the award-winning book The AI-Powered Enterprise: Harness the Power of Ontologies to Make Your Business Smarter, Faster, and More Profitable. He is an expert with 20+ years of experience in knowledge strategy, data and information architecture, search-based applications, and information findability solutions. He has worked with a diverse roster of Fortune 1000 companies, helping them achieve higher levels of operating performance.