Earley AI Podcast – Episode 43: AI Governance, Knowledge Management, and Digital Transformation with Thomas Bloomer

Building the Foundation for AI Success: Why Information Architecture, Data Governance, and Business Value Must Come First

 

Guest: Thomas Bloomer, Knowledge Management and Digital Transformation Expert

Hosts: Seth Earley, CEO of Earley Information Science, and Liam Kunick

Published on: March 1, 2024

 

 

In this episode, Seth Earley and Liam Kunick speak with Thomas Bloomer, a knowledge management expert with decades of experience in AI implementation and digital transformation. They explore common misconceptions about generative AI, the critical role of information architecture and data governance in successful AI deployments, and why organizations must focus on business value over technological capabilities. Thomas shares practical insights from implementing AutoML translation systems that saved millions, discusses the intersection of knowledge engineering and prompt engineering, and explains why building trust through measurable results is essential for data-driven decision making.

Key Takeaways:

  • Organizations often adopt AI tools without clear business problems to solve, treating generative AI like buying an expensive hammer without knowing what to build.
  • Information architecture is the foundation for AI success—without clean, structured data and proper governance, AI systems simply replicate existing problems at scale.
  • Generic AI models excel at vague tasks but struggle with precision—specialized use cases require careful taxonomy development and context-specific training data.
  • Successful AI implementation requires balancing standardization for efficiency with differentiation for competitive advantage, focusing resources on true business differentiators rather than customizing everything.
  • Digital adoption platforms seamlessly integrate AI capabilities into existing workflows, eliminating the need for separate tools and making knowledge management transparent to end users.
  • Data-driven organizations often face resistance when analytics contradict executive intuition—building trust requires proving value consistently through small wins before tackling larger initiatives.
  • Starting with contained proof-of-value projects using production data enables organizations to learn what works, establish governance frameworks, and scale successfully over time.

 

Insightful Quotes:

"AI is a tool in our toolset, and yes the tools are getting better, but it's not going to solve all our problems. You need people, process, and technology aligned—otherwise the best tool will just make a mess, a bigger mess, much faster if you don't have clear governance underneath." - Thomas Bloomer

"If you don't have a concrete idea what you want to get out of AI, AI blows you away. As long as you're very vague, AI is super good. Now, the more precise you get, the harder it is." - Thomas Bloomer

"There's no AI without IA—if you don't organize your content and have a good foundation, you don't want to build a skyscraper on top of sand. Information architecture is the foundation, and if that's not good, don't try to build something on top and expect it to work very well." - Thomas Bloomer

Tune in to discover how organizations can build the critical foundations—information architecture, governance frameworks, and metrics-driven approaches—that transform AI from experimental tools into business value drivers that deliver measurable results at scale.


Links:

LinkedIn: https://www.linkedin.com/in/thomasblumer/
Website: https://www.zyris.com


Ways to Tune In:
Earley AI Podcast: https://www.earley.com/earley-ai-podcast-home 
Apple Podcast: https://podcasts.apple.com/podcast/id1586654770 
Spotify: https://open.spotify.com/show/5nkcZvVYjHHj6wtBABqLbE?si=73cd5d5fc89f4781 
iHeart Radio: https://www.iheart.com/podcast/269-earley-ai-podcast-87108370/ 
Stitcher: https://www.stitcher.com/show/earley-ai-podcast 
Amazon Music: https://music.amazon.com/podcasts/18524b67-09cf-433f-82db-07b6213ad3ba/earley-ai-podcast 
Buzzsprout: https://earleyai.buzzsprout.com/ 


Thanks to our sponsors:
CMSWire
Earley Information Science
AI Powered Enterprise Book

 

Podcast Transcript: Building AI Foundations Through Information Architecture and Governance

Transcript introduction

This transcript captures an in-depth conversation between Seth Earley, Liam Kunick, and Thomas Bloomer about the critical foundations organizations need for successful AI implementation, exploring the misconceptions about generative AI capabilities, the essential role of information architecture and data governance, and practical strategies for building trust in data-driven decision making while delivering measurable business value.

Transcript

Seth Earley: Welcome to today's podcast. My name is Seth Earley.

Liam Kunick: And I'm Liam Kunick.

Seth Earley: And our guest today has decades of experience in knowledge management and technology adoption. He's been a specialist in knowledge management, and he's done a lot of work in AI and machine learning, digital adoption, and data strategy. He has a proven track record of saving millions of dollars, for example through a Google AutoML translation implementation. He currently leads digital transformation efforts, and we recently had him on our webinar on Information Governance in the Age of AI. Welcome, Dr. Thomas Bloomer.

Thomas Bloomer: Thank you so much, Seth. I'm really glad to be here and to talk a little bit more about all these exciting elements we just addressed.

Seth Earley: Before we start, what I'd like to do is get a sense of what you're running across: the misconceptions and misunderstandings in this space. I know there are lots of them, but why don't you articulate some of those?

Thomas Bloomer: Yeah, you know, I've been around the block for a while, as you just mentioned, and whenever there's a new technology, we always feel it will solve all our troubles. I think we have to be realistic. It is a tool in our toolset, and yes, the tools are getting better, but it's not going to solve all our problems. It's the traditional thing: people, process, and technology. They have to be in balance and aligned; otherwise the best tool will not solve it. As you mentioned at the webinar, it just makes a mess, a bigger mess, much faster if you don't have clear governance underneath.

Seth Earley: Yeah, one of the things that we have seen is that when people build content operations for AI for one specific group, that ends up with more knowledge fragmentation than if we do this in a more thoughtful manner, using something like "create once, publish everywhere." We're not going to talk a huge amount about that today, but we'll talk a little bit about content operations and content processes.

So it kind of comes back to basics, right? When you're looking at these tools, it comes back to the business problem and process and data sources and so on, and data hygiene. I think the big misconception we come across is that you don't need that stuff: you don't need taxonomies and ontologies, you just need the tools and the tools will do it. And tell me, is there a little bit of truth to some of that? People are putting this forward, and it's not completely right. We know it's not right, but it's not completely wrong either. Do you want to talk about where the tools will help with things like data quality and data hygiene?

Thomas Bloomer: Yes. There are certain things AI or machine learning can do because of the algorithm that's applied. For example, if you have a whole stack of information, PDFs upon PDFs, with no structure at all, you could let the AI actually put them in context to each other and do its thing, whatever the LLM will do, and you will get some information back that in the past you would have spent a lot of time fine-tuning.

As we say: rubbish in, rubbish out. If the data you provide is not clean, don't expect it to suddenly be clean because of AI. If you have bad data and you haven't sorted it out, it's still there. It may save you some work, but you still need to do the legwork underneath to leverage it. But something that really impressed me: when I wrote my dissertation, it took me weeks to code all my interviews. I had 600 pages of interviews, and I had to tag them very closely, and then I pressed a button and it took literally 12 hours of computing time to come up with something like how these terms relate to each other. I did a very similar thing last month with a knowledge graph: I had 600 pages, I pressed a button, and in minutes I had a knowledge graph that I could turn around, dig deeper into, query. It even told me where I had holes in my data: hey, there's some information missing here. And it's like, wow. We've come a long way. But if I don't provide the right information, don't expect that what comes out is clean. It is really fascinating, though, how far we've come.
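The push-button term-relationship extraction Thomas describes can be sketched as a simple co-occurrence graph. This is a minimal illustration in plain Python; the sentence list, term list, and same-sentence linking rule are stand-ins for real NLP tooling, not the product he used:

```python
from collections import defaultdict
from itertools import combinations

def cooccurrence_graph(sentences, terms):
    """Link two terms whenever they appear in the same sentence."""
    graph = defaultdict(set)
    for sentence in sentences:
        found = [t for t in terms if t in sentence.lower()]
        for a, b in combinations(sorted(set(found)), 2):
            graph[a].add(b)
            graph[b].add(a)
    # "Holes in the data": terms that never relate to anything else
    isolated = [t for t in terms if t not in graph]
    return dict(graph), isolated

sentences = [
    "Taxonomy development supports search relevance.",
    "Search relevance depends on good metadata.",
    "Governance appears here without any other term.",
]
terms = ["taxonomy", "search", "metadata", "governance", "ontology"]
graph, holes = cooccurrence_graph(sentences, terms)
print(graph)
print(holes)
```

At real scale this is what graph databases and entity-extraction pipelines automate; the point is that the relationships come only from the source text, so gaps in the text show up as isolated nodes, just as Thomas's tool flagged missing information.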

Seth Earley: Absolutely. And I think one of the critical things that you mentioned, or implied, was that you have to have a reference architecture, even for the systems to do something with. You have to tell it what to expect: the types of content, the terminology you care about, and so on. When you did that originally, you tagged that material against a reference architecture. Now we put in the reference architecture and can get something machine-generated, a knowledge graph or reference architecture, but then we need some human intervention to fine-tune it. And that's been a challenge for 20 years when it comes to machine learning and artificial intelligence. Back in the days of Lotus Discovery Server in the late 90s, when I co-authored a book on that tool, it became OmniFind from IBM, and that DNA went into Watson; it was still the same core algorithm. And the big challenge was generating those taxonomies and generating that knowledge architecture, because you could tell what a machine-generated taxonomy looked like. It made sense to the machine and...

Thomas Bloomer: It had clusters, but it didn't make sense to humans. Yes, absolutely. The two have to be aligned to get something out of it.

Seth Earley: So here's one of the things that I want to understand. My guess is that the boom in generative AI is going to catalyze funding for knowledge processes. What is your perspective on that? Where do you think organizations are in their realization or understanding? How soon do you think that's actually going to lead to investment in that area, and why is that important?

Thomas Bloomer: Yes, that's a good question. I think 2023 was a big awakening for many companies: whoa, we are behind the curve. ChatGPT really was a wake-up call for many companies, so 2023 was the year where people just played around, a lot of experimentation. But as you mentioned earlier, where's the business value? I went to Silicon Valley, I had a lot of interesting conversations with really smart people,

and they could do everything, but they didn't know where to apply it. It's kind of like: I can solve every problem, but I don't know the problem to be solved. There was a real dissonance between business and the technology. I think the gap is closing, and I think in '24 we will see more deployments: people who have played enough and know how we can actually use it for good. And many software companies now offer AI baked in. Salesforce has Einstein, Adobe has their tools; we see all these AIs built in. But what I always ask is: where is the AI of the AIs? I don't want to just optimize a little piece here. I want the big changes, and the big changes don't come from "well, I now have automatic summaries." Good, I saved two minutes. The quantum leaps are on a different level, and I think companies still have to get there.

Seth Earley: What I've researched, what I've seen and heard, and what the analysts say: first of all, there's a whole market around retrieval-augmented generation. For those of you who are not familiar with it, that means not relying on the language model itself for the answers. The language model contains knowledge; it understands relationships, and when you're using ChatGPT and asking questions, it's basing the answers on what's in that model. But it doesn't know your secret sauce, your processes, your IP, the specifics about your differentiation, your knowledge. So we need to use that knowledge as a source to retrieve information from. That's retrieval-augmented generation, and that's what people are starting to look at. It does require some curation of the knowledge. And I'm a big believer in the intersection of knowledge engineering and prompt engineering.

I think there's going to be a lot of work in that area, because knowledge engineering is structuring the information for retrieval, and prompt engineering is structuring the queries. When you have good, very detailed prompts, that's actually metadata, and we're aligning that metadata with the metadata in the content through the knowledge architecture. Do you want to make some comments about that?
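The retrieval-augmented generation flow Seth outlines can be sketched end to end. The keyword-overlap scorer below stands in for a real vector index, and the function names, documents, and prompt wording are our own illustration, not any particular product:

```python
def retrieve(query, documents, k=2):
    """Rank documents by word overlap with the query (a stand-in for embedding search)."""
    q = set(query.lower().split())
    return sorted(documents,
                  key=lambda d: len(q & set(d.lower().split())),
                  reverse=True)[:k]

def build_prompt(query, documents):
    """Ground the model in retrieved company knowledge instead of its trained weights."""
    context = "\n".join(f"- {d}" for d in retrieve(query, documents))
    return ("Answer using ONLY the context below.\n"
            f"Context:\n{context}\n"
            f"Question: {query}\n")

docs = [
    "Error code 50 on Model X means the intake filter needs replacement.",
    "Our returns policy allows exchanges within 30 days.",
    "Model X ships with firmware 2.1 by default.",
]
prompt = build_prompt("What does error code 50 mean on Model X?", docs)
print(prompt)
```

The "knowledge engineering" half of Seth's point lives in how `docs` is curated and structured; the "prompt engineering" half lives in how the retrieved context is framed for the model.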

Thomas Bloomer: Yes, that's a really good point. You know, on one side we have the people who know they need to do something but haven't started. And then you have the companies who are already pretty deep. I had some discussions with consulting companies where

we don't even talk about prompt engineering anymore; they have entire prompt libraries already for the different verticals. So there are all these people who are already building that. And now the really important question is: how can you

embed that transparently in your work stream? You don't want another silo: "oh, and here's the AI silo." You want to bring it right into your process. I have a question, I work in this vertical, in this area, and it brings me an entire library where I can still adjust the prompt, that flexibility is there, but it's 80% done already. I think there's another fallacy

I forgot to mention, which I see: if you have no concrete idea of what you want to get out of AI,

AI blows you away. You go to Midjourney and say, hey, I want to see Apollo 14

on Mars, and it will build something, and it's like, wow, that's really good. As long as you're very vague, AI is super good. Now, the more precise you get, the harder it is. It's like, if you're familiar with the

Penrose stairs, from the Escher drawing, the illusion that just goes around in circles: I've tried with every release of the latest text-to-image AI to build one, and so far none of them could. If anyone in your audience figures it out, please let me know; I'm really interested. I posted a challenge on my LinkedIn page: does anyone know how to do that? So you see the limitations. And when it comes to LLMs, the same thing is true. Yes, if I want a summary, that's not the problem. But now think about a tech company with a support case: "this is my problem," and AI will give you an answer. But in tech support it's really: which operating system are you on? Which patch level do you have? Now you have far more dimensions to take care of, and a generic model can't do that without being trained to that level. So that's where, you know, taxonomy and

structure need to feed the AI. And it's a little bit more challenging than when you just play around and say, wow, it's so powerful.

Seth Earley: That's such a great observation, because it can be so impressive on the surface. But then, when you try to get into those very specific use cases and scenarios, you realize that the large language model doesn't have the context. It doesn't have the knowledge, it doesn't have the context. So we have to

Thomas Bloomer: Build that in, exactly.

Seth Earley: I use my example of error code 50: I need troubleshooting steps. Well, okay, what product do you have, what model, what's the configuration, what's the operating system? All of that has to be on that little piece of content called the troubleshooting guide for error code 50. Without that context, it's not going to be able to answer those questions. So you hit the nail on the head. And that context comes both from the signals about the user, a customer or employee identity graph, first-party data: who's your customer or employee, what's their technical background, their knowledge, their history here, their configuration, the as-built installation, all of that. Contextual clues that will help the LLM get to what you want. But then you also have to have the content tagged so that it can make use of those contextual clues.
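Seth's error-code-50 point, that the content itself must carry contextual metadata matching the signals about the user, can be shown with a tiny tagged knowledge base. The field names and tag values here are invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class Article:
    title: str
    body: str
    tags: dict = field(default_factory=dict)  # contextual metadata on the content

def matches(article, user_context):
    """An article qualifies only if every tag it declares agrees with the user's context."""
    return all(user_context.get(k) == v for k, v in article.tags.items())

kb = [
    Article("Troubleshooting error code 50", "Replace the intake filter.",
            {"product": "Model X", "os": "linux"}),
    Article("Troubleshooting error code 50", "Reseat the power cable.",
            {"product": "Model X", "os": "windows"}),
]

# Signals about the user: identity graph, first-party data, as-built configuration
user = {"product": "Model X", "os": "windows", "patch": "KB500123"}
answers = [a.body for a in kb if matches(a, user)]
print(answers)
```

Without the tags, both articles look identical to a retriever; with them, only the answer matching the user's actual configuration reaches the model.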

Thomas Bloomer: Oh, absolutely, absolutely. Generic is generic, which is good for generic questions; the more precise the answer you need, the more you need to engineer it. That's where I think many companies are starting to realize it's not as easy as just getting an LLM inside your company.

We started with Confluence many, many years ago, and people loved it: wow, it's so good, and the search is great. Well, the same people now, ten years later, call me and say the search is horrible.

It's not that the technology has changed; it's the amount of data in it, because you now have ten years of different versions, and you type something in and instead of one result you get twenty and have to figure out which one it is. And it's the same thing for an AI: I don't know which version you're talking about. And if you use a generic model, it's an even bigger challenge.

Seth Earley: Yeah. You can ask, what's the status of the project? If you're in a single project database, fine. But if you're across multiple projects, you've got to give me a little hint. It's kind of like walking up to the counter at Home Depot and saying, "tools." What do you want to know about tools? Power tools?

Thomas Bloomer: Yes. I mean, we mentioned during the webinar that the Boston Consulting Group did a really fantastic study: if you use AI for the right purpose, it's really powerful, but it's also really challenging to identify when not to use it, because the

LLM was not trained for it but still gives you an answer, and if you take that answer, you'll actually do worse than if you hadn't used AI at all. Making that distinction is still a challenge. And if that's a problem for the Boston Consulting Group, which I would say sits pretty high up on the knowledge ladder, then for the average person it is a shock.

Seth Earley: The mere mortal.

Thomas Bloomer: The mere mortal, exactly. Yeah.

Seth Earley: So when you start thinking about readiness for this stuff: we actually have a KM-for-AI readiness assessment, and we'll put that in the show notes. There should be an easy URL we can rattle off, but you can put it in the notes, Liam, when you edit this. The point is that you need to start looking at these different dimensions and thinking about what you're ready for. Thomas, what's your perspective on what makes an organization ready for AI and generative AI, especially with regard to knowledge processes, governance, data governance, those types of things?

Thomas Bloomer: Yeah, well, there are obviously different layers. I normally say keep it simple and start with a nicely contained area to get a proof of concept, a proof of value.

And you learn so much just by going through it. Because every company is unique, or at least they think they are, so try out what works for you. There were so many things we could tackle, and we saw a problem with our translation. Translating a product is very expensive, especially if you want to translate an entire help system. We had over 4,000 pages that we wanted to translate into 14 languages.

So we're talking about a lot of money. And rather than go the traditional way, reaching out to a translation vendor and getting it back, which takes weeks, we actually started leveraging 20 years of translation memory, loading it into the model from Google. We used Google AutoML, so we leveraged what we had, and now we have our own translation model based on our own translations, our language, our terms. And now it took days; actually, the translation itself took one day. The cost came down about 95%, and the quality of the output, depending on the language, was on par. It's always challenging to translate technical documents, but it was as good as someone who doesn't know our product translating it. So this was a pretty clear use case. It was isolated, it was tremendously valuable, and it set us up for the future and opened the doors: now that we have that,

should we start translating our training documents? What about other labels? It really opened the door. But you want to keep it contained: figure out what works and what doesn't, and fine-tune it before you start scaling.

Seth Earley: Right. And that's one of the things we do in that assessment: we identify a very narrow set of use cases. What problem do you want to solve? Then let's make sure the processes those use cases support are measurable. You have a baseline, like you had a baseline: it cost X dollars to do this translation. Now, with the new process, you have a very defined process, a very narrow scenario or set of use cases, with clearly defined knowledge and information sources. And then you had the measure, so you had the baseline. And when we talk about proof of value versus proof of concept, it's really about scalability.

Many proofs of concept never make it out of the pilot, because the data is different, the algorithms are different, the environment, the hyperparameters, all of those things are different in ways that don't readily translate into production. You don't have that luxury. So one of the things we want to do is use production data to make sure that we're satisfying the use case. Liam, you had a question related to this, so go ahead.

Liam Kunick: Yeah, you may have just answered it, but I was going to ask a little more about going from proof of concept to proof of value. What are the common challenges companies face when they get past that proof-of-value stage and want to expand to other parts of the company? Thomas, if you don't mind answering: what are the common challenges a lot of these companies run into?

Thomas Bloomer: Yeah, a really good question. There's obviously vertical and horizontal integration: you can go deeper into what you already do, or you can go from silo to silo, and for both there's already a lot of experience; it's nothing new. Where I get really excited is thinking about end-to-end processes, combining the different silos. Because what we are really good at is squeezing minutes out of a process: this is a process, it's well defined, how can I squeeze a minute out of it? And yes, for big companies that's still millions of dollars. But here's where I get excited: you squeeze out a minute here, and then you have a handover, and the process sits stale for a day, a week, or even a month, and nobody manages that. Now we have the technology to start tying these pieces together. I call it a lean DAP, for digital adoption platform: applying lean-manufacturing thinking to managing the wait time. The wait time is normally between 70 and 90%, depending on the company and the industry; that's where a lot of the time in a company goes, between the processes. That's the area I really want to start addressing, and the technology is slowly ready to do what the total quality management gurus talked about in the 70s: we need to manage it that way, and now we slowly have the tools to do it electronically as well. That's what I get most excited about.
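Thomas's claim that most elapsed process time is wait time between handovers is easy to make concrete with a toy event log; the steps and timestamps below are invented for illustration:

```python
from datetime import datetime

# (step, start, end) for one item moving through a three-step process
log = [
    ("draft",   "2024-03-01 09:00", "2024-03-01 10:00"),
    ("review",  "2024-03-04 09:00", "2024-03-04 09:30"),
    ("approve", "2024-03-08 15:00", "2024-03-08 15:10"),
]

def ts(s):
    return datetime.strptime(s, "%Y-%m-%d %H:%M")

# Touch time: minutes someone actually worked on the item
touch = sum((ts(end) - ts(start)).total_seconds() for _, start, end in log)
# Elapsed time: first start to last finish, including every handover
elapsed = (ts(log[-1][2]) - ts(log[0][1])).total_seconds()
wait_share = 1 - touch / elapsed
print(f"wait time share: {wait_share:.0%}")
```

Here 100 minutes of actual work stretches across a week, so roughly 99% of the elapsed time is waiting between steps, which is exactly the slack a "lean DAP" would try to manage.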

Seth Earley: Right, that's great. That would have a bigger impact on the business. So, one of the things that I like to say, and I think you agree with it: there's no AI without IA. What does that mean to you, and how would you explain it to a business person?

Thomas Bloomer: What is it called? I guess we have to define aia.

Yeah, I mean, you can go really deep to explain that, but if you don't organize your content, you know, like, in a way, then you can't.

I normally explain it as like, if you have a foundation built on sand, you don't really want to build as kind of skyscraper on top of it, you know, and this information architecture is, for me, the foundation. You know, like, if that thing is not a good foundation, don't try to build something on top and expect it to work very well. You know, it's just whatever you have, you replicate and, you know, you have a kind of a gut feel how good or how bad it is. And if it's good, okay, we can play around and see what comes out. But if you already have a bad feeling, it's like, well, clean the house first before you replicate. Otherwise you just have a bigger problem and more to finally clean up. So. And it'll break in unexpected ways when you

Seth Earley: have built on top of it. Taking the analogy of building, I think it's a great one. You know, if you were going to build a house or a building of some sort, you wouldn't just start digging holes and pouring concrete. Right? You would have an architecture, you'd have a plan. You'd have a, you know, and you'd have multiple plans because you'd have a foundation plan and you'd have a framing plan and you'd have an H vac plan and an electrical plan and a, and a frame. All of these different things. Because there are different ways of looking at that information structure. Right. The information house has to be built based on an architecture. And just like you'd hire an architect to build a house, or you'd take a plan that, you know, closely aligns with what you're looking for. You of course would want to modify it in some way, shape or form. But yeah, but especially for large complex organization, you really do have to build that information architecture. And it's not off the shelf stuff because, you know, think of standards.

Standardization will give you efficiency, but it's differentiation that gives you competitive advantage. So if you just take an off-the-shelf plan, great, that'll give you some efficiency, and maybe you need that efficiency between organizations or within your organization, but it's not going to give you the differentiation you need in the marketplace. You differentiate based on your knowledge, your expertise, your IP, your knowledge of customer needs, of the competitors, of the technical problem and solution, and so on. So that comes back to managing and monitoring and curating and structuring the knowledge of the organization.

Thomas Bloomer: Absolutely. And I think everyone agrees with what you just said, but it is also very hard to decide where you should focus on efficiency and where on differentiation. Because what I often see is companies saying: we are different, everything is different, so every process needs to be different. No. You don't differentiate here; you don't even use best practices. Leverage best practices, and figure out the areas where you really do differentiate and need to be different. It's just so much more expensive if you customize every single screen of your ERP system when that doesn't add any value. It's back to the value proposition. You really need to have that hard discussion, because everyone thinks they're so unique. No: here, this is not a differentiator, leverage what someone else already solved for you; and here is your secret sauce. In theory it all makes sense.

Seth Earley: That's a great, great observation. And it really is the frame of reference you need for looking at this stuff. You're right: you don't need to differentiate everything. Sometimes off-the-shelf standards will work for you; the key is understanding when they'll work versus where you need your secret sauce, as you say.

Let's talk a little bit about data governance, which was the topic of our webinar just the other day. Maybe you could talk about a metrics-driven governance approach and how that applies to AI implementations in a corporate environment. Do you want to expound on that a bit?

Thomas Bloomer: Yeah, I want to address it from two different angles. One: you need data and metrics to manage any project. You want to know, are we hitting the milestones, are we within the parameters, can we fine-tune? For that you obviously need data. On the other hand, what I have found through discussions with many, many people: everyone wants to be a data-driven organization. Yes, we are data driven, we have data analytics and all that. Great. But when it comes to hard decisions, data is good only as long as it aligns with the gut feel of the C-level people. We are not a data-driven organization if, when the data goes against the gut feel, the response is: well, I don't know what you did, I'm pretty sure I can pull different numbers, I know how analytics and statistics work. People always joke about statistics and liars; you know the old saying. How is it called again? I forget, but you most likely remember it. So it's still a challenge. Even if you have data, proven data, saying this is where it goes, you hear: well, I don't believe it, I still think we should do this. So there's still an opportunity for change management, for solid data that people really believe. And you know,

I always say trust needs to be earned. You showcase and then you deliver, and it's, oh, well, I didn't expect that to work. Okay. And every time you have a value-added incident and can show it, it's, oh, you know what, okay, I think these guys know what they're doing. So I don't fault anyone, but it's not that easy. Don't think that because you have the data, people will just follow it. No, you need to prove your value.

Seth Earley: I could not agree with this more. And we did go into this in the webinar, so for those of you who want to see more detail, definitely register; it's recorded. But the story I like to tell is this: we had a data-driven recommendation for an executive team that said, here's how you should think about your product hierarchy. But they said no. They tried to manipulate the questions, the survey, and the testing to get the answer they had in their gut, the answer they wanted. That didn't work. Even though they manipulated the questions and the testing, it still gave the data-driven answer that said, no, do it this way. And they said, yeah, we don't care, do it the way we want rather than the way all your data tells us. And guess what the results were. As we expected, it was the wrong decision. And when the results came in, it was, okay, this is what the data told us, and now here's what the results are telling us. So you're 100% right that they will believe it if it aligns with what the gut says. I know, and it's really funny. It's not a new

Thomas Bloomer: phenomenon, you know. I'm pretty sure you are familiar with the Monte Carlo simulation. They lost a bomb in the Mediterranean Sea, and it was, how can we find it? A mathematician was appointed to find it, and he just started asking: to find what? A bomb from the Second World War.

Seth Earley: Okay.

Thomas Bloomer: So they had to find it. And instead of going to look for it, he just started asking all the people: where do you think it is? Then he did his mathematical work and said, I think it should be over here. And people said, how did he come to this conclusion? No, no, that's not how we do it in the Navy; we just go and search the whole area. And months later they found it, pretty much exactly where he had said. But it comes down to believing the numbers, and the reaction was, no, that's not how we do it. So it's a change that has been needed for a very, very long time. Right. And I think you're right. The way you had

Seth Earley: described this: you have to build that trust over time. People have to be able to see the results. First of all, they have to understand it: what the inputs are, how you're getting to the outputs, what's expected. Even if you can't understand a large language model per se, you understand the inputs and the outputs, and you validate the outputs against a gold set of standard use cases. When it comes to any of these things, it's trust that's built over time, based on explainability to the degree you can provide it, on evidence, and on results: the evidence of what you're modeling and predicting, and then the ultimate results. And it's hard. If you don't get that trust at the beginning, or if you exploit it and build mistrust, it's very hard to regain. So you have to be very careful to pick and choose your use cases and really bring people along. I totally agree that that's a really critical thing. And being data driven is not just about data. No, it's not.
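The lost-bomb story Thomas tells above is a classic illustration of the Bayesian search idea: pool expert guesses into a prior probability map, look in the most probable cell first, and update beliefs after every unsuccessful search. A minimal sketch follows; the cells, probabilities, and detection rate are all illustrative assumptions, not figures from the episode.

```python
# Minimal sketch of Bayesian search: expert opinion forms a prior over
# search cells; an unsuccessful search of a cell lowers that cell's
# probability and renormalizes the rest. All numbers are assumptions.

P_DETECT = 0.8  # assumed chance a search of the correct cell finds the object

# Pooled expert opinion: probability the object lies in each cell.
prior = {"A": 0.1, "B": 0.2, "C": 0.6, "D": 0.1}

def update_after_miss(belief, searched):
    """Bayes' rule for a miss: shrink the searched cell's probability by
    the detection rate, then renormalize so the beliefs sum to one."""
    posterior = dict(belief)
    posterior[searched] = belief[searched] * (1 - P_DETECT)
    total = sum(posterior.values())
    return {cell: p / total for cell, p in posterior.items()}

# Search the experts' favorite cell; if nothing turns up, update and repeat.
first_pick = max(prior, key=prior.get)    # cell "C", the experts' favorite
belief = update_after_miss(prior, first_pick)
next_pick = max(belief, key=belief.get)   # after one miss, "B" leads
print(first_pick, next_pick, {c: round(p, 3) for c, p in belief.items()})
```

The point of the sketch is the same as the anecdote: each miss is itself data, so the map keeps improving even while the search fails.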

Thomas Bloomer: I think it's all about the business impact, the business value. Whatever you do, you need to frame it that way. With the translation project, we didn't say, well, we improved the translation process. We said, hey, we can save you $1.7 million and cut the process from weeks to a day. And the reaction was, oh, that sounds great, can we have more of it? The driver was the business value. And that's the problem I saw in 2023: a lot of activity without business drivers. It's like trying to solve a problem without any guidance, which is really dangerous, a lot of sunk costs. It reminds me a little bit of the Texas sharpshooter mentality. If you don't have a clear goal, you just start shooting at a barn, then go to the barn and draw the bullseyes around your shots: wow, I'm so good. A goal up front is a little bit of a different story.

Seth Earley: That's very funny. I'm just looking over some of the questions we had from the webinar. But Liam, did you want to ask some specific questions about digital adoption, or what did you want to cover?

Liam Kunick: Yeah. I saw that you had some experience, a lot of experience, when it comes to digital adoption and digital transformation. And when it comes down to it, digital adoption, artificial intelligence, and knowledge management are kind of a magic triangle. Especially when it comes to building technology solutions, how do you see those three things meshing together to create these solutions? I love

Thomas Bloomer: the question. I'm totally biased because I've worked so long in knowledge management, and I'm really engaged in DAP and AI. The way I see it, what we try to do in knowledge management is bring the right information to the right people at the right time, at the right place. And now we have different tools, as I mentioned up front, to do that better. What we learned 20 years ago, when we started with the first knowledge-base systems in support, is that nobody cared about them. People felt it was an additional step they had to do. That changed once we integrated the knowledge base right into the flow of the support engineer, implemented in the CRM system. It was just a field, and suddenly people went there and realized, oh, that actually cuts down my time, I'm actually faster. The same thing is true now. You don't want: here's your work, now go over to this AI tool you don't understand, do something, and bring it back. That's where a digital adoption platform is kind of the glue between these systems. You can put it on top of Salesforce or SAP or whatever system you need. You just add a launcher, and it brings the information there and back without you having to switch tools. No "can you build me an integration here?" with APIs, no "well, we don't have access to the source code." It's just an overlay on top of any application, and then you can do whatever you want. That's why I think these three pieces come together so nicely: they finally integrate what you need right into the process. And if we do a good job from a knowledge management perspective, knowledge management is totally transparent. People don't even know we were involved; it just works, and the system just got so much smarter. It shouldn't be, oh, now I have to do this. No, we're helping you, and you shouldn't even know we were there. That's how I see these three connecting together.

Liam Kunick: Wow. I think people now are really starting to realize the value of knowledge management that's designed well and implemented well, especially with all these new tools coming around.

Seth Earley: Oh yeah. You know, there's a trade-off between being current and the ability to absorb change, and digital adoption platforms enable the organization to absorb change more quickly, right?

Thomas Bloomer: Yes. And many times without even noticing. You just go through the flow and think, okay, that was different, but that was much faster and easier. You can add automation into the process: how many times do you do certain steps over and over again? Is that a new setting? It just fills in automatically, so I can focus on the parts where I really need to interact with the customer. And I think that's what AI will do. Many people are afraid it will take away everyone's job; I think it will shift away the things you can repeat. Why would you do something more than once? Define it, leverage it, and it frees you up to do things you never had time for: really value-added work, like going deep or building a good relationship with your customers, your vendors, whoever. That's not something you can automate. So I think work gets far more exciting if you apply AI correctly.

Seth Earley: The things you're describing around digital adoption also sound adjacent to robotic process automation. Do you want to talk about how those two areas fit together, or are the lines between them increasingly blurring?

Thomas Bloomer: I think over time a lot of things will converge and blur the lines. Robotic process automation, RPA, is just a little bit more strict, and most likely for high-volume things you simply want to automate; that's where you want to use it. A DAP is a little bit more flexible, and most likely also for smaller processes: hey, here I want to automate three or four steps on this screen, right in the front end. RPA often sits in the back end, where you have a huge amount of data you're just converting from one system to another.

So I think of it as front end versus back end. But overall, I see even more things converging. From a training perspective we also have artificial intelligence and augmented and virtual reality, AR and VR. Think about AR at the really basic level: you're in front of a panel, you have your goggles, and it tells you, press here, press here, with some instructions. That's the basic level of AR, and it's very similar to a DAP, which is just on a screen. I think those two could totally converge, and it won't really matter whether you're in front of a screen. Think about all the things Apple is pushing now, and also Meta, Facebook, with the goggles and all that. I think we will move into a more virtual world where you have your goggles and see instructions for how to use the panel, then come back to your screen, and it's one flow, all together, not segmented anymore. So you're absolutely right: these things start converging to the point that you can't even tell which tool you're using. It's just transparent and nicely put together.

Liam Kunick: You know, I'm glad you brought up the Apple Vision Pro and the new wave of AR that's coming around. Earlier in this discussion we were talking about context and prompts and the importance of prompt engineering, and I was just thinking: imagine you wear these Apple Vision Pros all day, and they're basically taking in the context of your life and what you're doing on a day-to-day basis. I can only imagine the future possibilities of how AI will interact with you day to day.

Thomas Bloomer: Knowing, oh yeah, you get out of bed at this time and you like to

Liam Kunick: make your coffee first and do this and check your calendar here, to where it'll just start automating. It'll basically be trained on your life, and who knows the possibilities of that. Do you have any comments? I'm glad you brought that up.

Thomas Bloomer: I attended a keynote in 2000, and I was later able to track down the computer science professor who gave it, because he talked about how, in the future, you will talk to your computer: I have a homework assignment about this topic, tell me a little bit about it. Oh, you make three points; can you compare and contrast points A, B, and C? Those were really interesting. The discussion goes on and on, and then: okay, can you summarize everything we talked about into a paper of about 600 words and send it to my email? That was the keynote speech in 2000. And when ChatGPT Voice launched last year, I had to find this professor and send him a message: do you remember what you talked about? Now we can do it; it's exactly what I can do today. And he said, oh, you still remember that? That's so great. It took 23 years, but it's here now, and I think it just goes on from here. It's fascinating. Yeah, I remember also, like 20 years ago

Seth Earley: when personal digital assistants were the thing people said was coming, right? And then the Apple Newton, and what an abysmal failure it was, but it was a stepping stone. We were going to get there. I used to talk in my presentations about one day being able to access all the knowledge you need, whether you're an attorney pulling case files or doing something else. And now that's just our reality.

Thomas Bloomer: Yes, it is, it is. And so it's really interesting to think about

Seth Earley: where this is going to go. You know, I was listening to a podcast the other day, Hard Fork. I don't know if you listen to Hard Fork; it's a good podcast. They were talking about artificial general intelligence, which some people think may not be that far away. I still have a hard time believing that, because I think the human brain is so much more complex than anything we can do in silicon. One nerve cell can be connected to 10,000 others. I don't remember how many billion nerve cells we have, but it's a lot. And then neurotransmitters are analog; they can act at different levels, and there are about 100 different neurotransmitters, right? So that's orders and orders of magnitude more complex. And when you look at what generative AI still is, it's an emulation of intelligence.
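Seth's "orders of magnitude" point can be put in rough numbers. The following back-of-envelope comparison uses commonly cited ballpark estimates, assumed here rather than taken from the episode:

```python
# Back-of-envelope: human-brain connectivity vs. a large language model.
# Every figure below is a rough, commonly cited estimate (an assumption).

NEURONS = 86e9              # ~86 billion neurons in a human brain
SYNAPSES_PER_NEURON = 1e4   # up to ~10,000 connections per neuron, as Seth notes
MODEL_PARAMS = 1e12         # ~1 trillion parameters for a very large model (assumed)

synapses = NEURONS * SYNAPSES_PER_NEURON   # total connections, ~8.6e14
ratio = synapses / MODEL_PARAMS            # connections per model parameter

print(f"~{synapses:.1e} synapses vs ~{MODEL_PARAMS:.0e} parameters: ~{ratio:.0f}x more")
```

And that ratio ignores Seth's other point: each synapse is analog and modulated by roughly a hundred neurotransmitters, while a model parameter is a single number.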

And it's hard to imagine where that's going to go, but it's also hard to imagine it really, truly replacing people. It will replace jobs, obviously, and it's the people who use AI most effectively, in the organizations that use it most effectively, who will win. So what do you think? What do you say to the companies that are banning it internally right now, afraid to do anything with generative AI?

Thomas Bloomer: Well, every company has a different risk tolerance. I always feel that banning is the worst thing you can do, because then you just open a different channel, one you don't control anymore. Having a very clear place where people can experiment, one you do have control over, would be much preferable, along with very clear policies about what you can and cannot do. Because if you ban it, people just go to their cell phones: that's my phone, I'll go to ChatGPT. And they enter data they should not put in, because nobody told them, and you still have the same problem. Whereas if you educate people about what to do and what not to do, and give them a safe sandbox, that's far better. Otherwise you'll really be behind the wave of all the other companies who are actually moving forward, and you don't want to be left behind. That's my approach, because you can't stop it. The more you learn, educate, and do it safely, the better; that beats just trying to ban it. It's like a company banning computers in the '80s: well, we don't use computers, we use typewriters. Good luck with that approach. It's coming.

Seth Earley: There was a time when that happened, right? And I remember... what's that?

Liam Kunick: I said that was a thing: banning computers and holding on to the typewriters.

Thomas Bloomer: Well, yeah. People don't like any change. If you don't know how to save a file or how to use Word rather than a typewriter, it's a real adjustment. It's change, and it's an investment; back then it was far more expensive, obviously. Fair enough, fair enough.

Seth Earley: You know, I do... go ahead, sorry, finish.

Thomas Bloomer: You brought up a very interesting point regarding the law. The law is all written down, and you could actually leverage AI for it. There are some interesting studies about what would happen if we let robots or AI enforce the law, given that today we still apply the law with human judgment involved. There are a lot of laws that are totally outdated but were never taken off the books. They are actually still in effect, but nobody enforces them because it doesn't make sense. Now, if you told a robot to just enforce everything, we would really need to clean up our laws to make them current. To your point, you need information architecture; you need to clean your house. The law has never been cleaned up; we've just kept adding laws for over 200 years. And there are some really funny things that are technically illegal. It's like, what, that's illegal? So, just to your point before. That's really great, a really great point. And

Seth Earley: I remember when I was working in an office and I brought the first computer in. They had not used computers, and I brought one in to automate mail marketing. It was just snail mail, but it was integrating lists and doing personalized addressing, all this stuff. We were grabbing these lists, and one day this guy says to me: this is an advantage for us now, but one day this is just going to be the cost of admission. Using computers in business is just going to be mandatory, right?

Thomas Bloomer: Right. And it was, one day.

Seth Earley: And I also remember when one of my clients said, our biggest problem is not having enough information online. That really changed. You made that great observation earlier about why people like Confluence: because search works so well when it's new. Anything new is going to work well until you start loading it up with stuff, right?

Thomas Bloomer: Yes. I always told my team, and I think it's true for AI as well: you can build anything. Building is not the hard part; maintaining it over time is really hard. A good friend of mine made a funny joke: God could create the world in seven days because he didn't have an install base he had to take care of. For us, it's not the new thing that's difficult. What do you do with everything you had before? How do you move that over? I told him, that's an interesting observation.

Seth Earley: That's very funny. God didn't have an installed base; that's why he could make the Earth in seven days. That's great. I love that.

So let's see, we're right at the top of the hour here, with just a few minutes left. What else do you want to cover? And by the way, tell us a little bit about what you do for fun. What do you do when you're not doing DAP and knowledge management?

Thomas Bloomer: Well, I have a family, a four-year-old and a six-year-old. They keep me on my toes. They come up with things where I think, wow, I didn't know about that, tell me more. It's fascinating how quickly they embrace technology. I never taught them how to use a cell phone or a tablet, and my son does things I have no idea how he learned. He takes a picture of himself or of a wall and starts drawing on top of it, changing the colors, automating it. Wow, that's pretty cool. Isn't it incredible? It is. And I'm definitely not afraid for him; he has drunk the Kool-Aid already. But yeah, I love reading, keeping myself informed. There are just so many interesting things, so much new stuff. What I also like is going back and seeing what we knew in the past that we've simply forgotten or never applied. Humans are still very similar to how we were many, many hundreds of years ago, and so much of that still applies to change management and all the rest. So I love reading quite a lot.

Seth Earley: I love historical fiction myself. It's historically accurate, but they fill in some of the gaps and build out personalities. I never liked history in school, but now I'm so curious about everything. And I love science; I love reading scientific journals and all that. I can't get enough of it.

Well, is there anything else you want to sum up with? This has been great. I've really enjoyed working with you and having you on the show.

Thomas Bloomer: Well, same here, same here.

Seth Earley: The conversation has been fantastic. Any final thoughts, Thomas?

Thomas Bloomer: Something that's always a little bit controversial, but it's my go-to: if we in knowledge management do a really fantastic job, we technically work ourselves out of a job. That's always my goal. We are not here to be here forever. If you do it well, things just start to happen naturally and, as I said, transparently, and people have processes that work. Now, since there is so much change, you fix one problem and the next problem appears. But the motto should be: make yourself obsolete, and then you're working in the right direction. That's something I always try to do. Sometimes I was too successful. But most of the time, you just free yourself up to do other meaningful work for the company.

Seth Earley: Well, I'm waiting for you to make yourself obsolete in your new job.

And it's great to meet people you just resonate with.

Thomas Bloomer: Absolutely.

Seth Earley: So that's great. Thank you again so much for your time and participation. This has been really great. Thank you, Thomas.

Thomas Bloomer: It was a great pleasure. Thank you so much, and I hope we can continue this in the future.

Seth Earley: Absolutely. And thank you to our audience. Liam, I appreciate your help in both the pre-production and the post-production, and now you're here on camera as well: behind the camera, in front of the camera. That's great. And then Carolyn, of course, who's been instrumental in all the logistics. Thank you so much, and thank you to all of our audience. We'll see you next time on the Earley AI Podcast. Bye now. Thank you so much. Have a wonderful day.

Thomas Bloomer: See you. Bye. Thank you.

 

Meet the Author
Earley Information Science Team

We're passionate about managing data, content, and organizational knowledge. For 25 years, we've supported business outcomes by making information findable, usable, and valuable.