Navigating Vendor Hype, Data Realities, and Strategic AI Implementation for Sustainable Success
Guest: Tobias Zwingmann, Analytics and AI Expert, Founder at Rapyd AI
Host: Seth Earley, CEO at Earley Information Science
Chris Featherstone, Sr. Director of AI/Data Product/Program Management at Salesforce
Published on: August 15, 2024
In this episode of the Earley AI Podcast, guest Tobias Zwingmann, an esteemed analytics and AI expert from Hanover, Germany, brings a wealth of experience from his work with SaaS platforms and consulting, and shares invaluable insights on the practical intricacies of AI in business.
Join our hosts Seth Earley and Chris Featherstone as they discuss with Tobias the importance of business leaders understanding AI, the pitfalls of misleading sales tactics, and the necessity of organizational alignment for successful AI implementation. With topics ranging from data quality to the challenges of adopting generative AI, this episode is a treasure trove of actionable advice for anyone looking to navigate the complex world of artificial intelligence.
Key Takeaways:
- AI is often treated as pure software when it requires a fundamentally different approach involving creative interaction and probabilistic outputs rather than transactional processing.
- The "80% fallacy" traps organizations when initial AI prototypes seem promising but each incremental improvement becomes exponentially harder and more expensive to achieve.
- Software vendors frequently sell AI solutions to problems they cannot fully solve without proper data quality, structure, and organizational alignment from the customer.
- Organizations must establish strategic clarity on whether AI is mission-critical, horizontally transformative, or pragmatically incremental before making technology investments and commitments.
- Successful AI roadmaps involve six-month validation cycles with modular building blocks that deliver measurable value while positioning organizations to benefit from future advances.
- Generative AI works best as augmentation for human workers rather than complete automation, starting with low-risk use cases before scaling to customer-facing applications.
- Business leaders need fundamental AI literacy to ask vendors critical questions about data requirements, non-deterministic behavior management, and production monitoring rather than accepting aspirational functionality.
Insightful Quotes:
"AI is very often still treated as a pure software and IT topic. But with generative AI, there's a certain limit to how you can scale by just looking at it from a software perspective because software is very often just very transactional." - Tobias Zwingmann
"People are really eager and it's very easy to ship an 80% kind of ready prototype in even a couple of hours. And people then start to extrapolate that and say, oh, if we did that in one day, what can we do in one month? And it turns out that in one month you can't do that much actually because every percentage point becomes almost exponentially harder." - Tobias Zwingmann
"Software vendors are selling solutions to problems that they can't solve fully. If clients don't have data in the right shape, in the right format, or if knowledge is contradictory, you can have the best software. In the end it won't work." - Tobias Zwingmann
Tune in to discover how organizations can navigate AI vendor hype, build realistic roadmaps, and establish the organizational alignment necessary to transform AI investments into measurable business value.
Links:
LinkedIn: https://www.linkedin.com/in/tobias-zwingmann/
Website: https://www.rapyd.ai
Twitter: https://x.com/ztobi
Newsletter: newsletter.tobiaszwingmann.com
Ways to Tune In:
Earley AI Podcast: https://www.earley.com/earley-ai-podcast-home
Apple Podcast: https://podcasts.apple.com/podcast/id1586654770
Spotify: https://open.spotify.com/show/5nkcZvVYjHHj6wtBABqLbE?si=73cd5d5fc89f4781
iHeart Radio: https://www.iheart.com/podcast/269-earley-ai-podcast-87108370/
Stitcher: https://www.stitcher.com/show/earley-ai-podcast
Amazon Music: https://music.amazon.com/podcasts/18524b67-09cf-433f-82db-07b6213ad3ba/earley-ai-podcast
Buzzsprout: https://earleyai.buzzsprout.com/
Podcast Transcript: Demystifying AI Implementation, Vendor Realities, and Organizational Alignment
Transcript introduction
This transcript captures a comprehensive conversation between Seth Earley, Chris Featherstone, and Tobias Zwingmann about the practical challenges of implementing AI in business environments, exploring vendor misconceptions, the critical importance of data quality, organizational strategy, and the realistic expectations needed for successful AI adoption.
Transcript
Seth Earley: Good morning, good afternoon, good evening, depending upon your time zone when you're listening to this. My name is Seth Earley. And I'm Chris Featherstone. And welcome to the Earley AI Podcast. We're really excited to introduce our guest today, and we're going to discuss a number of things that are very important to our space. You know, certainly why a lot of vendors fail to grasp the core fundamentals of AI and the significance of things like data labeling, taxonomies, ontologies and metadata. We'll talk a little bit about the limitations of generative AI and why it should be considered a supplemental tool rather than a fundamental enabler. And really how business leaders can better navigate AI technologies to make sure they're getting meaningful and impactful applications. Our guest today is an analytics and AI expert from Hanover, Germany. He has a background in both SaaS platforms and consulting, and he brings tremendous insights to the strategic implementation of AI. He emphasizes the non-technical aspects of AI implementation and really advocates for setting realistic expectations within the industry. Tobias Zwingmann, welcome to the show. Hi Seth, thanks for having me. It's great to be here.
So the first thing we like to begin with is, you know, common misconceptions. What are you seeing in the industry that people are not understanding? And I know that's changing, it's a moving target, right? People are starting to understand, okay, gen AI can do certain things, but then we need to leverage corporate data. But even when they get that piece and go, yeah, okay, RAG, we want retrieval-augmented generation, we want to use our knowledge as ground truth, what are they still missing? What are the things that people are not getting, both on the vendor side and on the customer side?
Tobias Zwingmann: Oh my God, there are so many. Yeah, I know, I'm sorry. Hey,
Seth Earley: let's start, let's start on the customer side. Yeah. So like,
Tobias Zwingmann: to be honest, the biggest misconception that I see from my consulting work is that AI is very often still treated as a pure software and IT topic. And I think this is both on the customer side but also on the vendor side. There are lots of companies, you know, trying to sell AI as a software, as a tool. That's what comes most natural, both to big consulting but also to, you know, SaaS companies. But I think especially with generative AI, there's a certain limit to how you can scale, or not scale, but adopt AI by just looking at it from a software perspective, because software is very often just very transactional. You give something in and you expect a certain output, and ideally you can do that at scale, and then, you know, that's your business. But with generative AI we have a much more creative approach to working with it, and also in terms of handling the outputs provided by generative AI, which are inherently non-deterministic, or more probabilistic, however you want to call that. But definitely different from what you expect from a normal software-as-a-service product. And I think having both vendors and customers go through that mind shift is a big challenge right now, because the whole market has been educated towards software. Like, you know, you just pay 39 bucks a month and you get this thing out of there. But with generative AI, it turns out it's very hard to define what you get as an outcome. And if you want to be really crystal clear and define that outcome, that often turns into taking the LLM or whatever gen AI service you have, wrapping a lot of stuff around it in order to tame it and constrain the output, and then suddenly it becomes extremely expensive. So I think this is one of the misconceptions that I currently see, and also a lot of missed potential on the customer side.
Seth Earley: Yeah. And I like the way you put it, you've got to put a lot of things around it to tame it, right? To put guardrails around it, tame it. And so what are some of the hidden costs? Because what we have found, and I know I've presented this scenario, is when you tell the large language model to only answer from this data source, and if you don't have the answer from this data source to say "I don't know," that will reduce hallucinations or mitigate them. I haven't found a situation where I've had significant hallucinations in that scenario, when I've turned the temperature down to zero, but it's still possible, and you do need to do a lot of testing. But tell me about some of the hidden costs. So you're trying to tame it, you're trying to put guardrails around it. And when we did the prep call, you talked a little bit about how in a couple of days people will use some out-of-the-box tools, just some open source tools, and then, oh wow, this is great, we just need to put another couple of days into this and we'll be set. Tell me about that thought process and that scenario, and then what does and doesn't work.
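A minimal sketch of the guardrail Seth describes here, grounding the model in a single source, instructing it to refuse otherwise, and turning the temperature down. It assumes the OpenAI Python client; the model name and prompt wording are illustrative placeholders, not a recommendation:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SYSTEM_PROMPT = """Answer ONLY using the context provided below.
If the answer is not contained in the context, reply exactly: "I don't know."

Context:
{context}"""

def grounded_answer(question: str, context: str) -> str:
    """Ask the model a question, constrained to the supplied context."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",   # hypothetical choice; any chat model fits
        temperature=0,         # reduce sampling variability
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT.format(context=context)},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content
```

Even with temperature at zero and an explicit refusal instruction, outputs are not guaranteed to stay grounded, which is why the caveat about testing still applies.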
Tobias Zwingmann: Yeah, I call it the 80% fallacy. So people are really eager, and it's very easy to ship an 80%-ready prototype in even a couple of hours. And people then start to extrapolate that and say, oh, if we did that in one day, what can we do in one month? And it turns out that in one month you can't do that much, actually, because every percentage point of accuracy that you try to gain becomes almost exponentially harder. So going from 80% to 90%, maybe you can do that in, you know, three months or so. But going from 95% to 100%, or even to 98%, becomes incredibly hard. And I think this is what a lot of people underestimate, especially in these open-ended conversations where people interact with large language models: you have an infinite number of possibilities of what people can throw in there. And we would expect that if you just have a slight variation in the way you type the question, or a slight variation in the punctuation or whatever, the LLM will be able to figure it out and still give the right output. But it turns out, and recent research papers also show this, that there are very strange ways that LLMs behave sometimes, and it's literally impossible to catch all these different exceptions. And that's not saying that you can't use these tools in production. It's just saying that there will always be a minimal risk that something might go wrong. And the really hard part is to quantify that risk. Are we talking, you know, 1% failure here, or 2%, or 1.5%? And I think this is where we have to approach that a little differently, and this is where the hidden costs come in, because you need to have both the expertise and the systems in place to monitor and assess the risk that you're actually operating at, both in terms of accuracy but also in terms of security and securing your actual applications and software, especially if you have customer-facing LLM applications.
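Tobias's question about quantifying the risk ("are we talking 1% failure here or 2%?") can be made concrete with standard statistics once you have a labeled test set. A small sketch with entirely hypothetical numbers, using a Wilson score interval over observed failures:

```python
import math

def failure_rate_interval(failures: int, trials: int, z: float = 1.96):
    """95% Wilson score interval for an observed failure rate."""
    p = failures / trials
    denom = 1 + z**2 / trials
    center = (p + z**2 / (2 * trials)) / denom
    margin = (z * math.sqrt(p * (1 - p) / trials + z**2 / (4 * trials**2))) / denom
    return center - margin, center + margin

# Hypothetical eval run: 12 failures out of 800 test prompts.
# Observed rate is 1.5%, but the 95% interval is roughly 0.9% to 2.6%,
# so you cannot yet distinguish a 1% system from a 2% one.
low, high = failure_rate_interval(12, 800)
print(f"observed {12/800:.1%}, 95% CI [{low:.1%}, {high:.1%}]")
```

The width of that interval is exactly the hidden cost he is pointing at: narrowing it means building and maintaining a much larger evaluation and monitoring setup.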
Chris Featherstone: So Tobias, when you start thinking about those pieces, right, because we look at this from, and by the way, I don't ever see organizations take cost into account up front, right? Which is not necessarily the wrong thing, because part of it to me is: let's understand the use case first and what we're trying to solve. Or, as I see them do this: well, we just want to do something that's really cool and innovative. That's the wrong way to approach it as well. So when we see these conversations going, where else do you see them missing? Because you brought it up a bit, but I want to tease that out a little. It's one thing to miss on the overall use case; cost is usually a miss up front, but then it comes back in, because now what does this thing look like in production? But where else do you see organizations missing, like, across it? What does that look like, and what have you seen? Yeah, I think what's often
Tobias Zwingmann: missing is the overall strategy. Strategy is such a broad word, but just the big idea of what you want to do. To give you an example, we have companies like Tesla. AI for them is super mission-critical because they want to have autonomous driving. If they fail at autonomous driving, the whole company will go under. Maybe not, but it's super important, right? But then we also have companies like, for example, Moderna, which is the big shining star use case for OpenAI, and, you know, 80% of their employees are using ChatGPT. So it's a very horizontal approach, with lots of individual use cases that people do on a small scale but that have a collective large impact. And then we have companies, for example, big retail giants like Walmart, or lots of big retailers here in Germany as well, that have a very pragmatic approach to AI. They say, okay, it saves us 2% in our supply chain; of course we're going to do this, it's a no-brainer. And just figuring out where you are in your overall organizational ambition towards approaching AI, I think this sets the stage for all the things that come after that. Because if you want to have this whole transformation, then you need to have someone in your organization driving that transformation, because ultimately it's also a cultural and people problem. If you want to have these pragmatic approaches to AI, then you need to empower the different business units to just go out and do whatever works for them. If you have this transformative use case, then you have to be sure, and you also have to manage your stakeholders, that there's a certain risk attached that it will eventually fail or not work out. And what I see is that companies are not really sure which of these games they are playing, and kind of sprinkle in different things from all of these, sometimes trying to copy whatever is currently in hype in Silicon Valley, sometimes just buying a tool, doing stuff here and there and throwing things against the wall, hoping that something sticks. And I mean, to a certain degree you need to experiment, and you also need to have lots of these low-threshold touch points with AI and try things out. But I think you should be aware, from a leadership perspective, where your organization is going here. Because it makes a big difference whether you are an insurance company; I currently work with one, and insurance is heavily affected by AI, the whole business model is kind of at risk. But then there are other business models, more brick-and-mortar types of businesses, carpenters, handymen and so on, who are probably not that much affected. You can still use it here and there, but they will still be there even after AI, probably. So I think it is important to navigate that, and you don't have to overthink it, right? But you have to figure it out and have that alignment across leadership somewhere. What do
Chris Featherstone: you feel like they should be going into it with? Because the use cases and stuff they're looking at, I always feel like there should be, and I try to help them understand, risk versus value in terms of the use case they're thinking about. But then that leads into, well, what is the risk assessment according to what this looks like? And that could vary per organization. I love that you brought up organizational alignment, because it's so important. The cost assessment, the feasibility, the technology feasibility, right? They're also thinking about this from the perspective of: well, these are technologies we've never even looked at before, so how do we retool our organization to actually understand these technologies? And some of them jump in because they know the Microsoft stack, so okay, that makes sense. Or we see a lot of these types of things. So there's a lot that goes into thinking about this, also from the perspective you brought up: what about pattern recognition technologies, what about propensity models, what about classification models, before you even get into generative models? And what can that look like? It feels like there's just a lot of re-education that needs to happen, and/or un-education and then re-education, and I find that absolutely critical. So it's interesting, but at the same time it's disheartening, and at the same time it's so exciting, right? All at the same time. Anyway, I'd love to get your take, too, in terms of when we start looking at some of these challenges: what does it look like, and where have you seen us disappoint, these inflated expectations? I know you talked about that before, right, in the pre-call. But what does that look like, and how do you help these leaders get over that hump? So I think one of the big
Tobias Zwingmann: missed expectations, or expectations that we could not live up to, and I mean the AI industry generally, is that there would be more and more progress happening on the LLM front at the same pace that we have seen before. I think we are not in the exponential phase where, you know, we will have superintelligence next year. Maybe we will, let's see; I don't believe in that. I think we are already hitting a limit with LLMs. And this is not to say that they are not useful, but now we have to concentrate on the use cases that actually work with the technology that we have right now, because there are tons of use cases we can do with that. And I think we need to educate customers to just focus on what's there, not to wait on what's next in two months. Because from my experience, there are business leaders sitting there and just waiting to see what happens, saying, okay, let's wait until Christmas, we will have a much better model then; maybe we don't even need RAG by then. I don't think this will happen. Maybe it will happen in the long term that we don't need these toolings around large language models. But thinking strategically about your use cases, you should position yourself in a way that you benefit from AI progress, but not in a way that you depend on it at all expenses. So to give you a concrete example: if you have a chatbot use case for customer support and you're currently using a RAG architecture to deliver that, that's great, go ahead and do that. However, keep your eyes and ears open and consider the fact that there might be a point in the future where a single LLM call costs maybe, you know, not 2 cents or 0.2 cents but maybe 20 cents, but it has all the RAG tooling included, because you have very high context limits or other AI services or whatnot. Would you be able to make the switch? Would you be able to adopt that technology? Or are you so locked in and committed to your current infrastructure that you can't actually move anymore? And in the end that comes down to a make-or-buy decision: are you going to invest in all that technology and build it up yourself, or are you just using what's there and consuming these RAG services as a service, as they are offered right now? And I think right now it's good to stay flexible, but it's not advisable to just wait and see what's happening. So there is this balance of taking action while, at the same time, not locking yourself too much into long-term obligations, which is a fine line to walk. But I think managing that is really critical right now.
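One way to keep that make-or-buy decision reversible is to hide today's RAG pipeline behind a narrow interface, so a future long-context (or otherwise all-inclusive) model could be swapped in later. A sketch, where `retriever` and `llm` are hypothetical stand-ins for whatever components an organization actually uses:

```python
from typing import Protocol

class AnswerBackend(Protocol):
    """Anything that can answer a question; implementations are swappable."""
    def answer(self, question: str) -> str: ...

class RagBackend:
    """Today: retrieve relevant chunks, then prompt the model with them."""
    def __init__(self, retriever, llm):
        self.retriever = retriever
        self.llm = llm

    def answer(self, question: str) -> str:
        chunks = self.retriever.search(question, top_k=5)
        context = "\n\n".join(chunk.text for chunk in chunks)
        return self.llm.complete(f"Context:\n{context}\n\nQuestion: {question}")

class LongContextBackend:
    """Tomorrow, maybe: one pricier call with the whole knowledge base inline."""
    def __init__(self, llm, knowledge_base: str):
        self.llm = llm
        self.knowledge_base = knowledge_base

    def answer(self, question: str) -> str:
        return self.llm.complete(
            f"Context:\n{self.knowledge_base}\n\nQuestion: {question}"
        )

def handle_ticket(backend: AnswerBackend, question: str) -> str:
    # The rest of the application depends only on AnswerBackend,
    # so switching architectures later is a one-line change.
    return backend.answer(question)
```

The point is not the specific classes but the seam: the application code never learns whether retrieval happens behind the scenes.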
Seth Earley: So one of the things that we talked a little bit about in the prep call was aspirational functionality, right? Things that vendors say they can do, but maybe they can't quite do today. And I think that's a real danger when you're making plans around use cases and hearing what vendors say they can do versus what they can actually do. Do you want to talk a little bit about that in terms of what you've seen, and then how do you mitigate the risk that they're out over their skis on some of this?
Tobias Zwingmann: Yeah. So I don't want to call a specific vendor out, but I think we all have at least one vendor in mind who's kind of under-delivering on their promises, especially with copilot-like interactions on their computers. And I think the really important point here for business leaders is to figure it out; that's why I always say, in my roadmap, everything starts with AI understanding. Don't fall for being sold snake oil, for having other people sell you something out of the blue just because you don't understand it. There are so many tool vendors out there who will just tell you something or sell you something because you have no idea what they are talking about. Yeah, and they won't make it clear. They won't
Seth Earley: make it clear. Like on purpose, you know, on purpose. And, and that's why I
Tobias Zwingmann: think if you cover some basics around AI and LLMs, and again, this is not about doing a PhD program, it's about committing a few hours, maybe taking a course or just diving deeper and trying some things out, you will be in a position where you can ask the right questions. Where you can ask these vendors: hey, how do you deal with this and that? How do you manage the non-deterministic behavior of LLMs? How do you make sure that we actually use the data sources? What kind of format does my data need to be in, or what are the requirements for my data in order for your solution to work? Because lots of vendors are not transparent about that. And you don't even have to be able to answer these questions yourself; just by being able to ask them, I think you will find it so much easier to cut through the vendors that are just selling something out of thin air, and find those who actually have a mission, know what they are talking about, and can help you implement it. Yeah, I
Seth Earley: totally agree. And I think one of the other interesting points that you had made is you said, you know, generative AI is kind of the cherry on top, right? Because you need to have so many other elements aligned, such as the data. And one of the things that we're doing for a large medical equipment manufacturer is going in and using some of the templated, agent-based approaches with LLMs to fix the data, right? To fix their product data, to enrich it. And that's kind of a foundation for everything else that we want to do. No matter what, you're still going to need good data. One of the biggest challenges around customer service operations and using gen AI is, again, you don't want to use the gen AI to answer the questions. You want to use the gen AI to interpret the questions and make the results more conversational. But you need to have your knowledge base, and that needs to be correctly structured and tagged and organized. What are you seeing in terms of vendors not getting it right, or not understanding that piece? And you kind of alluded to this already when you said they want to sell software and get licenses, and they don't necessarily want to spend a lot of time with customers trying to go upstream and fix a lot of stuff. But that is fundamentally what needs to happen. So do you want to talk a little bit about what you're seeing? What are they not quite understanding, or getting, or wanting to deal with? Maybe they understand it, but it's like: I want to sell my software, I don't want to fix your data problems.
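In outline, the division of labor Seth describes, where the curated knowledge base supplies the answer and the model only interprets the question and rephrases the result, might look like this. `search` and `llm` are hypothetical placeholders, not any specific product's API:

```python
def answer_from_knowledge_base(question: str, search, llm) -> str:
    """Interpret the question, retrieve from the curated KB, then rephrase."""
    # 1. Use the LLM to interpret and normalize the user's question.
    query = llm.complete(
        "Rewrite this support question as a short search query:\n" + question
    )
    # 2. The answer comes from curated, tagged knowledge, not from the model.
    article = search(query, top_k=1)[0]
    # 3. Use the LLM only to make the retrieved answer conversational.
    return llm.complete(
        f"Rephrase the following knowledge-base article as a direct, friendly "
        f"answer to the question '{question}'. Do not add any facts.\n\n"
        + article.text
    )
```

The knowledge base stays the source of truth; if it is badly structured or contradictory, no amount of model quality fixes step 2, which is exactly the point Tobias makes next.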
Tobias Zwingmann: Yeah, I think, like, I don't know if they don't want to understand it, but I think the big takeaway is that these software vendors are selling solutions to problems that they can't solve fully. They just can't solve them. If clients don't have data in the right shape, in the right format, or if knowledge is contradictory, you can have the best software; in the end it won't work. And the software company just can't solve that for you. It's like selling you the car keys, but the car is not there. It's very hard to come out and say that. But then you have the expectations from investors, external stakeholders, internal stakeholders to just drive that software adoption. And I think this is the critical thing that they don't quite get, or where they are maybe honestly thinking: I will fix it at some point in the future. Just hoping that OpenAI will release GPT-5 in the next three months, and then we don't have all these issues anymore because we have superhuman intelligence. Maybe that's the bet they are making, but honestly, I don't know.
Chris Featherstone: Well, I mean, you also get into, and the thing I love about Germany is that Germany pushes the envelope for GDPR and for a lot of the security standards for data out there. However, in a company that's global, you've got, you know, data that is all over the place, and you're also trying to think about that from the perspective of what can be shared, what's sensitive, what can't be shared. So how do you get a holistic view using these types of tools? How do you instruct customers with those types of perspectives and issues?
Tobias Zwingmann: Yeah, for me, I try to get back to the original alignment: what are we actually doing here? If we look more for a kind of divide-and-conquer approach to AI, then you have to empower the different business units, also globally, to ship their own solutions and just build their own stuff. If you see, for example, customer support or whatever as your one single killer use case, then I think the first step, and that's what I tell them, is to think about it organizationally. What does it mean? Who is the owner of this? And if you don't have that aligned ownership from the beginning, there's no way you can get any technology to solve it for you. Because in the end it's about enforcing rules, ownership, budgeting these things, and then shipping it, and also taking the risk that it might not work. And who is willing to take that risk? That's where a lot of companies, at least the customers that I work with, are rather going for what I call atomic use cases, where they give responsibility away to different business units. You have a kind of central alignment, where you know what the high-level strategy is and the general direction you want to go in, but at the same time you empower different business units to run their own experiments and adopt AI as they need. Of course, the trade-off is that you are really not doing this whole transformation of your organization. You will still run the same business model as before, just more efficiently and maybe in a more qualitative, more powerful way. But at the same time it's a pretty safe bet, because you don't have that huge upfront investment, and the chances for success are just much higher. But yeah, that's why I say it comes back to the original alignment, and having that high-level leadership commitment of saying, okay, what role does AI actually play for us, and then living that out. Yeah.
Seth Earley: I'll tell you, I think that in large enterprises that have a lot of fragmented processes and fragmented systems and fragmented data, yes, you can have some point solutions. I mean, we went into this project not even considering it an AI project, right? Considering it a knowledge and information architecture and product data project. We are going to be using AI approaches in solving those problems, but they knew they weren't ready for AI in the way most people interpret it. So you're going to have multiple small interventions to fix certain aspects. But the fact that this whole ecosystem is so fragmented is what's problematic, right? It's grown by acquisitions, and who owns the knowledge for this area? Who owns the knowledge for that area? Where do we go upstream for, you know, level three support? And there's 20-plus applications that people have to go to. There's field service engineers and remote service engineers and technical consultants, and there's dozens of applications for each of those. So on the one hand, that's where you're kind of saying, well, how can we have a transformation? We can have limited interventions in these areas, but we do still need to think of this in an overarching manner. And I think that's where you have to start thinking about what those incremental value points are, right? That we can show, and then show in a longer-term roadmap how all these things come together. So what are you seeing there when you work with your customers? Are you putting together, you know, 12-month or 18-month roadmaps? Do they go beyond that? It's hard to say. But how are you helping them bring those pieces together? Because again, it is lower risk to have them as atomic projects, but as you observed, it's not going to have that overarching transformation. So how do you resolve those two things? So, like, the one that
Tobias Zwingmann: you brought up with the roadmap is exactly it. I recently had a customer that is running call center operations. So, you know, they are offering call center services here in Germany, and the obvious use case is to use an AI voice bot to handle the conversations, right? And this is a perfect example, because if you have the matrix of integration and automation, this is a highly integrated and highly automated use case, which is the most difficult thing you can do with AI. And after the first trials they figured out, okay, maybe this is not going to work the way we want it to. And this is where I came in, and we started working on it, and the way we scoped it was to say: okay, this is the high-level goal, that's where you want to be, and obviously the technology right now is not there. It might be there in a few months, or maybe years, but what's the roadmap to get to that point? So we trimmed down the use case and split it up into different atomic elements, or individual assets. For example, the first thing that we shipped was a customer support chatbot which is not integrated and not automated, but which customer support agents, human agents, can use while they are on the phone to quickly retrieve information and give it to customers. So we just took a whole part of that journey out and realized that first. And then we have the text-to-voice feature as the next step on the roadmap, to see if we can use, for example, new multimodal AI models where you don't have the voice-to-text and then text-to-voice transcription, which is really a poor user experience. Maybe in a few months we will have true voice input to generative AI models and also voice output from them, which is just a much faster interaction and a much better experience. And if the technology has arrived at that point, then we already have the data sets, or the database, and the knowledge ready, and we know AI is able to respond to customer questions. Then we can actually validate the next feature, which is shipping that as a voice feature. And then the next option will be to have that in live conversations with customers. That's what I mean by roadmapping: having that high-level goal, but chunking it down and seeing what's possible today, and then validating step by step. And I think this is the big difference between just trying lots of use cases in your organization and seeing what works, versus having that ambitious, forward-looking goal but being able to bring it down to: okay, what can we do right now to take the first step? And if that first step works, what is step number two? And typically I look at six-month roadmaps on these projects, and just reassess after that where we're standing and what the next steps are.
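The staged roadmap Tobias describes is, in effect, a pipeline of replaceable building blocks: ship and validate the text answering first, add voice stages later, and leave room to swap both voice stages for a native speech-to-speech model if one arrives. A schematic sketch, where every callable is a hypothetical stand-in:

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class SupportPipeline:
    """Roadmap stages as swappable building blocks."""
    answer: Callable[[str], str]                          # step 1: validated on its own
    transcribe: Optional[Callable[[bytes], str]] = None   # step 2: voice in
    synthesize: Optional[Callable[[str], bytes]] = None   # step 2: voice out

    def handle_text(self, question: str) -> str:
        # The agent-assist chatbot uses only this path.
        return self.answer(question)

    def handle_voice(self, audio: bytes) -> bytes:
        # Later roadmap step; fails loudly until the voice stages ship.
        if not (self.transcribe and self.synthesize):
            raise RuntimeError("voice stages not shipped yet")
        return self.synthesize(self.answer(self.transcribe(audio)))
```

Because `answer` and its underlying knowledge base are validated in step one, a future multimodal model would replace only the transcribe/synthesize pair, not the work already proven.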
Seth Earley: That makes tremendous sense, because they build upon one another; they're building blocks, right? In order to get agent assist to work correctly for the agents, when they can just refer to something, you're basically improving search, you're improving retrieval, but you're also using some of the AI tools to make that more conversational and understandable. You still need to think about the structure of that information, you need to make sure it's correctly curated, right? You need to give the RAG engine something to pull from and then present to the customer. And then you can validate that, right? Then you can say, oh yeah, that's working really well. Oh look, our agent productivity has improved. We're spending less time on callbacks, we have higher first call resolution, we have shorter time per incident, we have higher CSAT, we have higher agent satisfaction, right? Then that whole component is the building block for the next piece that gets built. So I totally agree with that; I think it makes a lot of sense. And again, for organizations to be successful with these things, they do need to think first about the use case, and then what data and what functionality are needed to satisfy that use case. So when you go into organizations to help them with these processes, where do you begin? Of course, you start with some education. Then you start with: well, what's important to the organization? How do you then start breaking it down, say, at the board level? Because that's one way to start, at the most senior level in the organization. Yeah, I
Tobias Zwingmann: mean, I would love it if it were always that linear, where we start with educating on AI knowledge and so on. But what typically happens is that I come in when organizations have tried something and it's not working. So that's where we have to reassess the whole situation, and in many cases we circle back and, as I just explained, maybe reassess the use case. And then there are two ways. For larger organizations, that typically becomes a certain business unit or department we're looking at. But if the organization is smaller, something like 100 employees or so, up to 1,000 at max, then you can have a real boardroom conversation about the overall goal or the overall challenge that we are facing here. And then I typically run workshops in order to get commitment and alignment at that top level and make sure everyone has the right understanding. Because I remember one conversation where we had one group of executives believing in AI being the next thing and getting better and better, and the other group saying, you know, this will never work. Just having that commitment to what we all actually believe, and what it means for our business, is super critical, because it makes all the difference whether you will say, okay, this is just another tool or another technology, let IT deal with it, or whether you say, okay, we need to treat this seriously. And I think this is really where every organization needs to find its own way in terms of: what's their business model? What have they tried before? And also, as Chris already mentioned, not only looking at generative AI but AI in general, which also includes classical machine learning and so on. And this is where I do AI design sprints and so on, in order to figure out the best use case mix for that organization. Obviously there's a disconnect between
Chris Featherstone: above the budget line and below the budget line in most cases, right? One of the teams is super advanced and the executive team isn't, right? Or the opposite. Somebody wants to go. So I would love to get your take, too: we find these use cases, and generally the use case is broken, in the sense that maybe it's too far advanced. Because part of this, to your point, in that customer experience use case: we want to completely remove all agents and do all digital voice, right? Because it's a cost savings metric, which seems to make a ton of sense in all of these scenarios. But when we get to a perspective centered around, okay, instead of thinking about this as "I just want to remove all cost out of the equation and not have agents," I think Seth brought up a good point. It's not necessarily that we want to shock the system and remove all agents from the environment because of first call resolution or because of the overall numbers. It's a matter of, well, why don't we think about this from the perspective of: let's provide augmentation for the agents to make them more effective, so we don't churn as many agents out, right? So the cost savings can still be there, but let's look at it as crawl, walk, run, as opposed to shocking the system. And what I find generally, and I don't know if you do as well, Tobias, is that the use case is big, hairy and audacious, which is super fun and really neat and interesting; however, it's missing the crawl, walk, run. They want to go straight from zero to a thousand, just like that, right? Yeah. Which is,
Tobias Zwingmann: And I think the reason for that is that, you know, customers have been educated over a long time by SaaS businesses that that's how it works. You give me 99 bucks a month and you get the full solution; we do everything for you. For SaaS businesses there is no crawl, walk, run. It's just: here, you pay, and that's it. It's on or off. But especially if you build these AI capabilities in your organization, and that's exactly the way I approach it, you figure out how we can make our first baby steps into this project. And in 99% of all cases, that's augmentation. We want to have augmented use cases because they're easy to control, it doesn't really cost us anything to try them out, and if it doesn't work we can just abandon it; nothing happens. And scaling up from there is much easier, because there are so many learnings that you can use to take the next steps. But at the same time, I think the critical thing then is that it's not so easy to sell to executives or external stakeholders who have these super high expectations of AI. And then you come and say, hey, maybe we get a 10% increase in productivity here, and they're like, wait, that's it? And I think this is where the hype that we currently have is not benefiting these practical use cases. But at least in my impression, we're getting there slowly; people are realizing that even a 10% productivity improvement is a super huge deal if there's a good ROI on it, depending on what kind of investment you have to make for it. But we have to factor in that people are coming from the expectation, where, you know, we have social media posts that say we will all be out of jobs in five years, AI is doing everything for us, no one needs to be a programmer anymore, all these things. So getting that back to reality and saying, okay, the best I can do is a 10% productivity improvement: that is the conversation we're having.
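The "wait, that's it?" reaction often dissolves once the 10% is actually priced out. A back-of-the-envelope sketch with entirely hypothetical numbers, just to show the shape of the argument:

```python
# All figures are assumptions for illustration, not data from the episode.
agents = 50
cost_per_agent = 60_000          # fully loaded cost per agent, per year
productivity_gain = 0.10         # the "unexciting" 10%
build_and_run_cost = 80_000      # annual cost of the augmentation tooling

annual_benefit = agents * cost_per_agent * productivity_gain   # 300,000
roi = (annual_benefit - build_and_run_cost) / build_and_run_cost

print(f"annual benefit: {annual_benefit:,.0f}, ROI: {roi:.0%}")  # ROI: 275%
```

Under these assumed numbers, a 10% gain on a 50-person team pays for the tooling several times over, which is exactly Tobias's point about practical use cases.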
Seth Earley: Well, you know, I have a really interesting story about that and setting expectations. And that is when there are multiple vendors in an organization, in a project. Like when a customer brings in a vendor that specializes in this, and a vendor that specializes in that, et cetera. And we had somebody who was doing the user experience, right? And we do user experience, but for whatever reason they decided this other company was going to design the user experience. Now, it's a lot easier to design a user experience than it is to build the application, right? It's like designing in PowerPoint, designing in Figma, mocking things up. Where does the data come from? We don't know. What's the shape of that data? I don't know. What happens behind this? I don't know; it's like, you tell me, we have to trace it back. And what I was saying to people, and this user experience company is not a user experience company I'm working with today, this was the customer's user experience company; we have a great vendor partner that we work with. But they were saying things like, oh, it's going to be so great, everybody's going to find exactly what they need in the context they need it, when they need it, the right information at the right time for the right person, the promise of knowledge management for everybody. Researchers are going to be able to do this, and commercial people will do this, and executives. And they're setting these expectations sky high, right? Everything's going to be wonderful, according to their beautiful pictures. And I had to be the one to say, hey guys, I think we have to manage and temper expectations. This is not going to be information nirvana, because there's so much other work that needs to be done. Maybe in two or three years, right? But not in two or three months, because you cannot fix all this stuff. And I was hammered for that. I was beaten up. I was taken out back by a couple of executives and beaten with rubber hoses, figuratively, because I said the wrong thing and I brought everybody down. I'm like, dude, isn't it better to set expectations realistically today than to have people be disappointed tomorrow? And I think the challenge was that many of those executives were going to be on to their next project, right? So I think it really is important to set and manage those expectations and be realistic. But a lot of times that does not generate excitement. So I think sometimes when you're too realistic with organizations, they don't really like it.
Tobias Zwingmann: Same here. Yeah, I had exactly the same experience. I was on a project doing an architecture review for a RAG application and giving recommendations on what it would cost to implement it. And in the end, there was no business case anymore for the internal product. Yeah. It turns out they just got someone else in who had a different opinion. That's it, right? Because the problem is that the owner of that product had already sold the solution internally to top-level management. So they were actually not really interested in having a real assessment of that application; it was about getting external confirmation that it was actually working. But that's not how I work. And the problem is that there are a lot of people out there who will just sign these things off and say, oh yeah, you know, with these and these assumptions we'll be fine, maybe it takes six months more to work out. And this is exactly where lots of organizations are right now: deeply invested in very, very complex AI use cases that get more expensive by the day and are not delivering the results that were originally promised. And I think this is just not useful for AI adoption in general. But it is what it is, and I think we have seen this in other technology cycles before. Digital transformation, big data; I don't know where we would even start with that, there's so much. And I think it's happening all over again. And you know what's really
Seth Earley: funny is, I was on the ground with, and I'll name the company because I used all public information when I wrote about this, plus my own experience, Verizon, right? And when I was talking to people at Verizon, I was hearing stuff on the ground, but I as a customer had a terrible, terrible, miserable, like, impossible experience, right? And I wrote an article about it in CustomerThink. And it started off with me being so angry at how I was dealing with Verizon, how difficult they were, how I got sent in endless loops and then told go here, go there. And then I found out, even a year after the fact, they still weren't honoring the deal. It's a long, long story. But my point was, I put it in the context of six fails and six fixes around their digital transformation. And one of the things that they were saying publicly was, oh, our digital transformation is this huge success. All the executives say it's a huge success; in all their marketing it's a huge success. And I as a customer, and plus I knew people internally, but I as a customer was experiencing it as a miserable failure. A miserable failure. Yet everyone was saying it was an enormous success. And I think the executives are fooling themselves. I think they're lying to themselves. I think they get other executives that don't want to speak the truth, that don't want to say the emperor has no clothes. And I think that's what's happening with AI these days: people are so dug into these projects. And for those organizations that kind of feel like there's something not right here, you know, it's not too late to reassess and to come back to some real value, right? And that's where people like you and people like me, and I'm sure, Chris, in your dealings, you try to speak the truth. You try to give people the real answers, and even if they don't want to hear it, it's your duty to tell people what the reality is and what they need to do. But a lot of times it's a politically fraught environment; nobody wants to make that statement, nobody wants to say that stuff. And you
Chris Featherstone: cannot report on assumptions, right? Most of the data is assumptions. And so you can't say, hey, I'm going to report this to Wall Street or to the world or to whatever else, and, oh yeah, let me just make sure that points 2, 3 and 6 are just broad assumptions, right? They'd have asterisks at the bottom
Seth Earley: of the report, right? Saying, you know, this is aspirational functionality, or this is based on what our vendors are telling us. But at the same time, even though things can be so fraught, I don't want to end on a negative note. I want to say that there's a lot of possibility, there's a lot of potential, there's a lot of value. And going about these things in a certain way will ensure that value, right? Because you can do validation points. As you were mentioning, you build on things, you make things modular, you have building blocks, you validate your assumptions, you get proof points, right? You show in use cases that you can actually accomplish something. That's why use cases are so critically important. And baseline measures, right? If you can get the data and show the data, you can make tremendous, tremendous inroads. So, Tobias, where can people find you? I know you have your website; it's Rapyd, R-A-P-Y-D, dot AI, correct?
Tobias Zwingmann: Absolutely. I also have my personal website, tobiaszwingmann.com, where people can sign up for my newsletter. But rapyd.ai is also a perfect address to check out, and obviously LinkedIn; I'm posting on LinkedIn regularly, so if you want to connect with me or follow me there, anytime. Yeah. And tell
Seth Earley: me about your world. What else do you like to do outside of generative AI and AI in general? What do you like to do? Watching human AI unfold? I have
Tobias Zwingmann: three kids under 10, so it's really fascinating to spend a lot of time with them, and just play and hang out, and, you know, watch those large language models get built. Right,
Seth Earley: get built. Yeah. And produce some real hallucinations.
Chris Featherstone: You got to be careful because you're doing the fine tuning. Exactly. Yeah,
Tobias Zwingmann: exactly. Yeah, that's a really good point. A bunch of random data, so let's see how that turns out. Yeah. And guardrails, right?
Seth Earley: Guardrails. Yeah. You know, my daughter is turning 10
Tobias Zwingmann: soon, so I'm not sure about these guardrails, you know, how long they will last, but let's see. They get all torn down. Have you
Seth Earley: seen, what is it, what's the name of the Pixar movie? The one about the emotions. Oh yeah, with the emotions. Yeah, I've seen
Tobias Zwingmann: the first one. Is that what it's called? Yeah, yeah. I just know the German title, I don't know the English title, but yeah. Are there two of them now, or three? Two of them,
Seth Earley: yeah. Absolutely brilliant. Absolutely brilliant. And absolutely representative of what goes on when we build our world and our mental models and our representations, and then tear down that model and rebuild it when certain things happen. It's a brilliant, brilliant movie. But
Tobias Zwingmann: absolutely. I like it. One of my 16-year-
Chris Featherstone: olds actually was like, dad, that's exactly how I feel, for anxiety, because they got anxiety really right. And he was super serious. He's 16, and he's like, oh my gosh, that's exactly how I feel. Holy cow. And he's on the spectrum as well, so he couldn't articulate that until he saw it. Yeah. Oh, it's
Seth Earley: a, it was so well done. I mean, it's great when a movie can be entertaining and insightful, not pedantic, but really, really meaningful. I think it's such a great movie for humans, and for humans building their AI models, right? Except it's not artificial, right? It's real, it's reality. It's actual intelligence. So anyway, that's great. I guess you like to do a lot of outdoor stuff with the kids? Yeah, we go swimming and outdoors,
Tobias Zwingmann: playing football. You know, I have two boys, so. Yeah, hiking. We went to Switzerland for summer holidays and did some mountain touring there, and it's just a really beautiful spot to be the whole time.
Seth Earley: I love Switzerland. It's a beautiful place. Well, Tobias, this has been a lot of fun. I really want to thank you for your time. It's been great to have you, and we'll have all your contact information in the show notes. And again, thank you for your time, thank you for participating. Thanks for having me, it was a pleasure. Thank you. And thank you to our audience for tuning in, and Carolyn for doing all her work in the background. Thank you, Chris. And again, this has been another episode of the Earley AI Podcast, and we will see you next time. Thanks. Thanks, Tobias.
