Earley AI Podcast – Episode 37: Enterprise AI Strategy and Knowledge Management with Rachad Najjar

From AI Misconceptions to Strategic Action: Building Responsible Enterprise AI with Knowledge Management Foundations

 

Guest: Rachad Najjar, Knowledge Management & Organizational Learning Leader at GE Renewable Energy 

Hosts: Seth Earley, CEO at Earley Information Science 

       Chris Featherstone, Sr. Director of AI/Data Product/Program Management at Salesforce

Published on: December 8, 2023

 

 

In this episode, Seth Earley speaks with Dr. Rachad Najjar, a knowledge management and organizational learning leader at GE Renewable Energy, where he has driven integrated learning strategy since 2013. They explore common misconceptions about AI implementation—including why AI is a cultural transformation, not merely a digital one—and discuss how generative AI is reshaping knowledge management use cases such as retrieval augmented generation. Rachad shares his seven guiding principles for a successful enterprise AI strategy, covering business value, ethical AI, data quality, governance, and continuous model supervision.


Key Takeaways:

  • AI is a cultural transformation, not merely digital—employees need to see it as a skills amplifier, not a job threat.
  • Organizations must understand and map workflows before integrating AI to avoid automating broken or poorly structured processes.
  • Forty percent of AI projects are abandoned due to poor deployment planning and unclear business impact metrics.
  • Generative AI delivers the most value in knowledge use cases like customer support, search, learning, and marketing content generation.
  • AI models depend entirely on training data quality—garbage in, garbage out applies directly to every enterprise AI system.
  • Responsible AI governance requires diverse cross-functional teams including legal, supply chain, quality, and knowledge management experts.
  • Retrieval augmented generation combines extractive and generative AI to deliver accurate, contextually grounded answers from corporate knowledge bases.

 

Insightful Quotes:

  • "The success of our AI models heavily depends on the quality of the training data—quality in, quality out. AI will not improve the quality of your data. What we train the model on is what we get." - Rachad Najjar

  • "AI is a skills booster, a capacity amplifier. We should understand and focus on our people, understand those touch points in their workflows and daily tasks, and see where we can inject AI to free them up and assign more impactful work." - Rachad Najjar

  • "You can't automate what you don't understand. And you can't automate a mess—you just get an automated mess. The point is to understand that workflow process, remove the redundant and repetitive things, and have a specific intervention." - Seth Earley

Tune in to discover how to build a responsible, knowledge-first enterprise AI strategy that moves beyond hype and delivers measurable organizational impact.

 

 


 

Podcast Transcript: Enterprise AI Strategy, Knowledge Management, and Retrieval Augmented Generation

Transcript introduction

This transcript captures a conversation between Seth Earley and Dr. Rachad Najjar about common misconceptions in enterprise AI implementation, how generative AI is transforming knowledge management use cases, and the seven guiding principles organizations need to successfully integrate AI—from building a strong business case and ensuring data quality to establishing ethical governance and continuous model supervision.

Transcript

Seth Earley: Welcome to the Earley AI Podcast. My name is Seth Earley. And I'm really excited to introduce our guest today. We're gonna be talking about some of the common misconceptions about implementing artificial intelligence in the enterprise—a lot of the increase in use and misuse of AI tools and technologies. We're gonna talk about how generative AI is accelerating and impacting knowledge use cases and especially retrieval augmented generation, which is my favorite topic. And we're going to talk about ways to maximize knowledge sharing and learning opportunities.

Our guest has been at the forefront of innovation in the fields of organizational learning and knowledge management. For nearly a decade, since 2013, he's been a driving force at GE Renewable Energy, where his role involves crafting and executing an integrated learning strategy that encompasses experiential, social, and formal learning. His responsibilities extend to defining the enterprise knowledge architecture and fostering knowledge sharing communities. He's also co-author of a recent book on knowledge management and research innovation, alongside numerous scientific publications. Please welcome Rachad Najjar. Welcome to the show.

Dr Rachad Najjar: Thank you. It's my great pleasure to be on your show, to have this conversation. I'm looking forward to it. There are many, many topics that we can cover. We'll try to be really convergent and get to the main messages.

Seth Earley: Yeah, I looked over your work and it's pretty amazing. The stuff that you shared with us is just fantastic. So we'll try to dig into some of what—I'd like to start with—are some of the misconceptions. What are you seeing in terms of misconceptions around AI and its deployment? What's going on from your perspective?

Dr Rachad Najjar: Yes, there are multiple misconceptions. Maybe let's focus more on the integration strategy. And this is exactly where I was curious to know more. I asked myself, okay, so now AI has gained mainstream attention. And we see a lot of use cases—some of them are text generation, text summarization, and text translation. But what are those specific use cases that we can implement, and how can we avoid some of those misconceptions? In some cases, those misconceptions can lead to harmful results.

There was a recent Cisco study that stated 40% of AI projects are abandoned because they couldn't demonstrate real impact. So that's exactly back to your initial question. Some of the misconceptions—I'd like to start with AI, or generative AI, being a digital transformation. I hear many times: "AI is a digital transformation and it's a new technology and we need to deploy it." On the contrary, AI is a cultural transformation.

And why is this really important to understand? Because when we say AI is a digital transformation, we are inherently communicating a workforce risk: employees will think AI will replace their jobs. On the contrary, people who embrace AI are the ones who will replace people who don't. And by considering AI as a cultural transformation—which means AI is a skills booster, a capacity amplifier—we should focus on our people, understand those touch points in their workflows and daily tasks, and see where we can integrate AI to remove repetitive tasks, free up their time, and assign them more impactful work where they can add more value. So it's an extension, an amplification of our human capital in terms of skills and capabilities.

Seth Earley: I love your point about finding that intervention in the workflow—really understanding that process. I like to say, you can't automate what you don't understand. And you can't automate a mess. I guess you can automate a mess, but you just get an automated mess. The point that you're making is: understand what that workflow process is, understand that day-to-day activity, and try to remove the things that can be redundant, manual, or repetitive. And be able to have a specific intervention. I also look for what I call information leverage points—where can you have a big impact downstream when you can either get information more quickly, get more accurate information, or get information you weren't able to get before. But carry on—your point being, it's not necessarily a digital transformation in terms of technology. It's really a people transformation and a cultural transformation.

Dr Rachad Najjar: Exactly. And you mentioned something very important: we need to understand whether it's a mess or organized. And this is one of the false expectations that we put on generative AI—that we expect it to organize our mess. It's our job as practitioners, as domain experts, as specialists, to do our homework first, and then say, okay, how can we increase productivity, how can we increase our understanding of the situation? And then we can introduce the AI.

There's also some kind of operational shift and paradigm shift. We need to look at our processes and say, we are not going to deploy another incident management system, or we're not going to deploy another learning management system. What do we already have in place? And search for those processes. For example, if we are deploying some kind of career development and we are looking specifically into some kind of insights and coaching—here we should evaluate how AI or those capabilities can help us to offer more personalized and more detailed insights in the coaching activity, instead of saying, "We are going to deploy a new LMS system. We are going to revolutionize our operational processes."

Seth Earley: No, that makes a lot of sense, and what we don't need is another system. And I love when people say, "All we need is one place where everyone can go for all of their information." And it's like saying the solution to application proliferation is another application—at least 2 or 3 or 4 of them. It's not gonna get you there. You need to optimize the process and optimize your existing systems and existing tools, your existing workflows, and integrate into the fabric of the organization.

Dr Rachad Najjar: As you mentioned in some of your writing, we need the information architecture before AI—connecting or having this kind of ontology and information architecture where we understand our data schema, our data sources, how they are connected together, what is the context—and now we can say, okay, let's implement AI. This is also one of the points that I have learned from your book.

Seth Earley: Well, I'll tell you. Back in the day when I first wrote that article—"There's No AI Without IA"—there was so much noise and nonsense, just like there is today. And you know, it was gonna magically do all these things. And you don't need ontologies, you don't need taxonomies, you don't need metadata—which is far from the truth. You really do need that core architecture. Generative AI and other forms of machine learning can accelerate the expansion and evolution and upkeep of an ontology, but it can't build it. It can give you building blocks, but it can't build it, because machine-generated taxonomies and ontologies are very problematic—they look machine-generated, they don't make common sense.

But yeah—we designed recently a global knowledge architecture for an organization, and that prepares them for using generative AI. Maybe you could talk a little bit: you did this amazing study comparing a hundred generative AI technologies. Do you want to talk a little bit about where generative AI is accelerating and impacting knowledge use cases, areas, and processes?

Dr Rachad Najjar: Yes, exactly. So this research was really initiated from a basic question. I was curious to better understand where practically, and in real life, generative AI is adding value. As I mentioned earlier, in the media and mainstream and social networks, every single company is rushing to deploy AI—and especially on the forefront, customer services like conversational search, ticket management systems, and chatbots or virtual assistants.

And then I asked myself, okay, so now I am a knowledge manager and organizational learning leader—where exactly will generative AI impact my work, and what challenges will it help me overcome? So my reasoning was: first, let me lay down the knowledge life cycle, starting from the discovery of knowledge—the co-creation, the exchange—until the organization, formalization, reuse, and then all those analytics and intelligence. Once I set the life cycle of our knowledge, I went into every single phase and asked: what are the key processes or challenges we face as knowledge managers?

Once I built this evaluation grid, I went to the industry, to the vendors, to the developers, and saw how every tool in the knowledge management and learning field was implementing those AI capabilities. And to give some data and statistics: 45% of those tools are related directly to customer support services, around 11% related to search, another 10% related to learning, and another 10% related to sales and marketing.

And that's where we can understand why today there's really a division—two groups of companies. Some companies are banning AI because they are afraid of the consequences, not aware of what risks they can handle. And another group of companies who are rushing to implement AI. But my interest was exactly those key knowledge and learning processes where AI can help.

I can give three examples. The first one is related to ideation. Knowledge management is about tacit knowledge—how we can externalize our tacit knowledge. Generative AI can do idea auto-complete—the same way our phones auto-complete a message—and can expand that idea with more arguments and examples, helping us to do some kind of ideation and brainstorming activities.

Another use case related to tacit externalization that I find very impactful is related to storytelling. We normally record videos and interview experts with the objective to extract tacit knowledge. Through video capturing and ingesting AI capabilities, those videos can be automatically translated—speech to speech, to further languages—making them available at a larger, global scale. And then, more importantly, we can search within the video. When I ask my question, the AI can go and extract a 30-second segment from that video, give me the transcript, and give me the video section. So now we are facilitating the discovery of those unstructured data into more operational data that can be searched and integrated into our corpus.

Seth Earley: Right, right. No, that's a great example. And of course, in terms of search and integration into our corpus, again, we still need that reference architecture. In your writings you talked a little bit about the misuse of AI. What do you consider the misuse of AI, and can you talk more about that?

Dr Rachad Najjar: Some numbers before going into some examples. In 2019, there were around 123 incidents or misuse cases. In the first half of 2023, more than 1,000—so 10 times more, and just 6 months into 2023.

Some examples of those misuse cases: facial analysis applications claiming they can do a kind of pseudo-science—predicting a person's tendency toward criminality from the shape of their head or their skin tone. They repackage their solutions with AI capabilities and marketing, and try to convince the end user that through the power of AI you can detect criminal behavior in your citizens or clients. This is one of the most harmful misuse cases, and we really need some kind of regulation or governance to prevent this kind of application.

Another example that was really eye-opening—using deepfakes. There was someone who composed a speech, put it through deepfake techniques, synced the lips, and impersonated Morgan Freeman—making him say something he never said. Not only the face and the image but also the lip syncing. And as a propagation effect on social media, things spread very quickly. That's a very serious issue.

Seth Earley: Wow, yeah, that's a great example. It's like you can't trust what you see and hear. Now you had also done some work around 7 guiding principles for a successful enterprise AI strategy. Do you want to talk a little bit about that framework, how you arrived at that, and maybe talk about some of those principles?

Dr Rachad Najjar: Yes, great. So there were 7 principles that emerged during my evaluation of those 100 vendors. I asked myself: now we know what to do and we have some use cases and business cases defined—but how do we make it successful? How can we go to the C-level, to top management, and convince them with investment and to be part of the transformation? That's how I arrived at the 7 principles.

The first one is fundamental: we need to have a strong, measurable business case. That strategy should involve the end user and the customer, and that's where ethical AI comes in. Responsible AI is also one of the guiding principles—and this is one of the misconceptions: we transform responsible AI or ethical guidelines into a checklist, and we check the boxes. No, it will not work like that, because ethical AI is a process. Every single time we develop an application, we should go through a reevaluation and really assess our principles for that specific application. The customer should be involved not only in the deployment phase, but also in the development phase.

We should be very transparent about how the machine is making decisions. For example, if we are developing image recognition for the medical field, we should not impose the application on the doctor or medical staff. Instead, we should be transparent about how the machine is making the decision, and the doctor should have the final word on whether this evaluation or diagnosis is ethical or not. We should also involve the patient, telling them: "We are taking your personal data and embedding it into our model. Do you approve?"

Seth Earley: Absolutely. So you talk about business value, process integration, quality training, continuous supervision, powerful computing infrastructure, and AI and ML skills. What do you think is most critical beyond business value and process integration?

Dr Rachad Najjar: Let's talk about the quality training set and couple it with continuous supervision—those are interrelated. The success of our AI models heavily depends on the quality of the training data—quality in, quality out. We should not confuse this: AI will not improve the quality of your data. What we train the model on is what we get. It's as simple as this equation.

We want to make sure the training data set is diverse, representative, and free from biases. For example, if we are in a financial institution and we want to use AI for credit risk assessment—to make lending decisions—we want to ensure fair and unbiased outcomes by collecting a diverse dataset representing different demographics and financial backgrounds. The data should be carefully curated to eliminate potential biases such as gender or ethnicity.

And the story doesn't end there, because it's not enough to say: we trained our model, we have our data, and now the model will continue to work on its own. There is an inherent issue with neural networks—over time they degrade in performance. We need to continuously retrain the model with new data. So it's not enough to deploy it and leave it—we should continuously update and include new use cases and new data to our model.

Seth Earley: Right, right. Absolutely. So this leads me to the topic of governance. We have machine learning operations—MLOps—to look at the development part of this. But there's a bigger picture. When you mentioned customer service being the largest application—customer service is many times owned by the call center. And many times they don't have the expertise to model the knowledge, fix the knowledge processes, and go upstream. They're tasked with call deflection or handling calls or having a chatbot. But really, why are people calling the call center in the first place? Because something is broken. You have to go upstream. You have to look across not only the entire customer life cycle, but also the internal supporting processes.

So we really have to think about governance. What are your thoughts about how organizations need to govern and manage AI resource allocation and generative AI project decisions?

Dr Rachad Najjar: Yes, exactly. And this is also one of the misconceptions—we create an AI team, we form a team from machine learning skills or data scientists and some IT Ops skills, and we say this is the AI team responsible for our applications. But this is really unhealthy, because AI is diverse. We should form a community including diverse profiles—legal, supply chain, quality, project management—all of them having different perspectives on the same topic and the same issue.

If we want to increase our customer satisfaction by 30%, it's not the customer service function alone that is responsible—it's the whole value chain leading back to the design of the products and services. We should tackle that value chain from different perspectives. That's why we need multiple expertise. And every specialist in their domain—like a quality product manager working with a knowledge manager—plays a critical role. Knowledge managers are the facilitators who have the responsibility to create the space, a safe space, where we can include all those visions and opinions for better quality decisions.

In terms of resources, it's a community-based structure rather than a centralized team where we delegate responsibility to a single centralized unit.

Seth Earley: That's a great point. It's almost like when organizations were trying to incorporate more business intelligence and analytics—they started with centers of excellence or centralized teams, but then found that distributing that knowledge and expertise into the various teams, business units, and departments was really much more effective. But you really need to embed it at the business level and at the business unit and departmental level.

So, just to kind of go back to the idea of generative AI and some of the problems with hallucinations, misalignment with the brand, potential loss of IP—we look at retrieval augmented generation as a mechanism to mitigate some of that. Can you define that and talk a little bit about what you're trying to mitigate and how it works?

Dr Rachad Najjar: Yes, perfect. So first, let me lay down the foundations. There are two types of technologies: extractive AI and generative AI. Extractive AI works on our corpus—a set of documents we already have in the company. It searches for the relevant documents and extracts exactly the sentence that answers my question. If there's no answer, the outcome will be zero—no output.

With generative AI, with some context, the model can create or generate new content that is similar to the trained data. And how focused or reliable is the outcome? That's controlled by something called temperature. Temperature is how much you want your model to be creative. If temperature equals one, you get totally imaginative content. If you reduce or optimize your temperature—like 0.4 or 0.5—you get more relevant content.
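Mechanically, the temperature knob Rachad describes rescales the model's output logits before a token is sampled: low values sharpen the distribution toward the most likely continuation, high values flatten it toward more "imaginative" picks. A small numeric illustration (the logit values here are made up for demonstration):

```python
import math

def softmax_with_temperature(logits: list[float], temperature: float) -> list[float]:
    """Scale logits by 1/temperature, then apply softmax.
    Low temperature concentrates probability on the top logit;
    high temperature flattens the distribution."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for three candidate tokens.
logits = [2.0, 1.0, 0.5]
sharp = softmax_with_temperature(logits, 0.4)  # near-deterministic, "relevant"
flat = softmax_with_temperature(logits, 1.0)   # more room for creative sampling
```

At temperature 0.4 the top token's probability climbs toward 1, which is why lower settings produce more focused, repeatable answers.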

In RAG—retrieval augmented generation—there is a combination: getting exactly the sentence that answers my question, but also putting some additional information to generate new content that helps me better understand or expand on that specific answer. So RAG is very powerful, provided that we already find an answer in our corpus.
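The combination Rachad describes—retrieve the exact relevant passage, then generate a grounded answer, refusing when the corpus has nothing—can be sketched in a few lines of Python. This is an illustrative outline only: the toy corpus, the word-overlap scoring, and the prompt template are stand-ins for real embeddings and an LLM call.

```python
# Minimal retrieval-augmented generation (RAG) skeleton.
# Corpus, scoring, and prompt template are illustrative stand-ins.

CORPUS = [
    "A knowledge manager curates the enterprise knowledge base.",
    "Wind turbines convert kinetic energy into electricity.",
    "RAG grounds generated answers in retrieved documents.",
]

def score(question: str, doc: str) -> int:
    """Toy relevance score: count shared lowercase words."""
    q_words = set(question.lower().split())
    d_words = set(doc.lower().rstrip(".").split())
    return len(q_words & d_words)

def retrieve(question: str, k: int = 1) -> list[str]:
    """Extractive step: pull the k most relevant passages."""
    ranked = sorted(CORPUS, key=lambda d: score(question, d), reverse=True)
    return [d for d in ranked[:k] if score(question, d) > 0]

def build_prompt(question: str) -> str:
    """Generative step: ground the model in retrieved context and
    instruct it to refuse when the corpus has no answer."""
    context = retrieve(question)
    if not context:
        return "I don't know."
    return (
        "Answer ONLY from the context below. "
        "If the answer is not there, say 'I don't know.'\n"
        f"Context: {' '.join(context)}\n"
        f"Question: {question}"
    )
```

The key property is the guard: if retrieval comes back empty, no prompt reaches the generator at all, which is exactly the "provided we already find an answer in our corpus" condition.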

Seth Earley: One of the things we found was that by specifying to the model—turning the temperature down to zero and saying, "If you don't have the answer from this data source, say 'I don't know'"—you're really not giving it any room to be creative. But the answers would still vary depending on how you asked the question.

And what's really interesting is audit trails. When we showed where the data comprising the answer came from, it didn't look very friendly—it had snippets and pieces. But the model took those data sources and pieces from the retrieval, and made them more conversational. So there's processing at the front end—normalizing the utterance or question, using that to retrieve information from a knowledge base or vector similarity search, and then processing the result to make it conversational. And when we enriched the content with additional metadata and knowledge architecture, sometimes the model would pull from content and augment with metadata, sometimes the reverse—but it was always making it more conversational. Those were very interesting findings.
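The three-stage flow Seth describes—normalize the utterance, retrieve by similarity, then rephrase the raw snippets conversationally—might be structured like this. Every function body here is a hypothetical placeholder (substring matching stands in for vector similarity search, and string concatenation stands in for the LLM rephrasing step):

```python
# Sketch of the front-end/retrieval/back-end flow described above.
# All bodies are illustrative placeholders, not a production API.

def normalize(utterance: str) -> str:
    """Front-end processing: clean up the user's question."""
    return " ".join(utterance.strip().lower().split())

def vector_search(query: str, index: dict[str, str]) -> list[str]:
    """Stand-in for vector similarity search over a knowledge base:
    here, naive keyword matching against an indexed snippet."""
    return [snippet for key, snippet in index.items() if key in query]

def make_conversational(snippets: list[str]) -> str:
    """Back-end processing: turn raw retrieval snippets into a
    friendly answer (a real system would prompt an LLM here)."""
    if not snippets:
        return "Sorry, I couldn't find that in the knowledge base."
    return "Here's what I found: " + " ".join(snippets)

# Hypothetical one-entry knowledge base.
INDEX = {"warranty": "Standard warranty covers 2 years of service."}

def answer(utterance: str) -> str:
    return make_conversational(vector_search(normalize(utterance), INDEX))
```

Separating the stages this way also makes the audit trail Seth mentions straightforward: the raw `vector_search` output can be logged before the conversational rewrite.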

Dr Rachad Najjar: Yes, and one way—going back to the guiding principles—continuous supervision: when we find a gap in the outcome or the model is not giving satisfying results, we need to go back, retrain the model, fine-tune with additional information, and before going to deployment, do some testing. Get the real experts, get human input, ask new questions, and validate the answers. This is a continuous process. We will not get it right from the first time, and we need to reiterate on it—and that comes with a price, either in infrastructure costs or time invested from our experts.

Seth Earley: Yeah, that's a really great point. When people talk about just pointing the LLM to your content and expecting quality results—that's not realistic. It really depends on the use cases, and those use cases have to be defined. We tested 60 use cases for an application we recently built, and you can see if they're factually correct. But it's not always easy, because the model will phrase the answer very differently each time.

Have you been using large language models to test the similarity of results—model testing?

Dr Rachad Najjar: Yes, in fact. I'm doing some personal project-based learning—just for personal satisfaction—trying to build an application using the open source framework Haystack on top of a foundation model. I have many documents related to knowledge management, books that I've written, and various publications. I said: I want to get a sense of what it takes to build from A to Z an application where I can ask a question—"What are the roles and responsibilities for a knowledge manager?"—and get the answer from my own documents.

I took a general-purpose foundation model through Haystack and retrained it on those documents. The more documents I gave it, the more I retrained the model, the more accurate the information I got. There's also a technique I use called "lost in the middle." This technique is used when developing RAG applications. When I have a set of documents—some relevant, some not—it places the most relevant at the top and the end of the context, and the least relevant in the middle, so that the model's attention is captured at the most relevant points.
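Haystack ships a ranker component for exactly this reordering; a minimal re-implementation of the idea (not Haystack's actual code) shows the mechanic. Documents arrive sorted by relevance, and we alternate placing them on the edges of a deque so the weakest land in the middle of the context window, where LLMs attend least reliably:

```python
from collections import deque

def lost_in_the_middle(docs_by_relevance: list[str]) -> list[str]:
    """Reorder documents so the most relevant sit at the edges of the
    context and the least relevant in the middle.
    docs_by_relevance[0] is the most relevant."""
    d: deque[str] = deque()
    # Work from least to most relevant, alternating edges; earlier
    # (weaker) documents get pushed toward the middle.
    for i, doc in enumerate(reversed(docs_by_relevance)):
        if i % 2 == 0:
            d.appendleft(doc)
        else:
            d.append(doc)
    return list(d)
```

For five documents ranked a > b > c > d > e, this yields `a, c, e, d, b`: the top two ranks bracket the prompt while the weakest document sits dead center.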

Seth Earley: That's very interesting. So when you think about where this is going for organizations—many organizations are not terribly mature in their knowledge processes and they've looked to technology to solve the problems. First it was a better search engine, then semantic search, then neural search. What's your advice to executives going down this path who perhaps do not have the most robust or mature knowledge processes?

Dr Rachad Najjar: We must learn from the past. As you mentioned, we saw the trend in around 2013 to 2015 to deploy enterprise search engines everywhere—and the results were frustrating. They were not as good as what Google provided. And that's because internally, organizations didn't do the governance, didn't model their processes, or design their knowledge architecture. They didn't really understand how their knowledge flows and their networks are connected.

We don't want to repeat that history. If we really want to get the best out of AI, we first go back to our operations model, design it, understand how they are connected, and then go into AI. And I also don't recommend fully deploying AI everywhere—search for those quick wins, easy to implement, high impact on the customer, because there's an underlying cost which is very heavy. Today the big companies—Facebook, Amazon, OpenAI—are giving us the foundation models and charging a lot. For example, Google's PaLM model is charging about 2 cents for 15 seconds of interaction. So if we do the math for one hour, that's $12—and it will go exponential. So we don't want to say, "Let's deploy AI everywhere." Identify those quick wins, small applications, where they have the most impact.

Seth Earley: Yeah, no, that's a really great point. So we have a few minutes left—I understand you're a big fan of skiing and you've skied in the Grenoble Alps. Tell me a little bit about what else you do for fun outside of work.

Dr Rachad Najjar: Yeah, so I'm located in Grenoble, in the Alps. Skiing was part of my university curriculum—we had a course and activities where we went out and learned skiing. I came from Lebanon, on the north coast, raised on the beach, and I had never skied. So when I came to France and went with my friends, I was the only person falling. The trainer advised me to enroll in a training course, and I was put with pupils who were three years old learning to ski. It didn't seem quite adequate, so I took personalized and individualized training, and then I got my little start, and I was so proud of it.

Seth Earley: I learned to ski a little later in life as well—in my early thirties. I took a 10-day learn-to-ski course, had some private lessons in the afternoon, and my ski instructor took me down a black diamond at the end of those 10 days with a high five. I can't ski black diamonds very well now though—I think I kind of stumbled down it.

So, I wanna ask you a quick question. If you could go back in time and give yourself some advice when you were getting out of college—what might you tell yourself?

Dr Rachad Najjar: Maybe what I would tell myself is to be more aggressive in terms of jumping ahead—to have the forward leap phase—and try not really to analyze a lot of my decisions. Just do it, jump ahead, and see what happens. Take more risks.

Seth Earley: Yeah, that's great. Well, listen, I want to thank you, Rachad, very much. This has just been tremendous. And maybe we can have you on our webinar on governance—that would be a great place to do some collaboration. So I'm looking forward to continuing the discussion. Thank you so much for your time today.

Dr Rachad Najjar: Thank you for having me today.

Seth Earley: And I want to thank our audience. We'll see you next time. And also thank Carolyn and Liam for doing the production behind the scenes. So thanks again everyone, and we'll see you next time.

Meet the Author
Earley Information Science Team

We're passionate about managing data, content, and organizational knowledge. For 25 years, we've supported business outcomes by making information findable, usable, and valuable.