Navigating AIOps Maturity, Data Architecture, and the Reality Behind AI Washing in IT Operations
Guest: Trent Fitz, Product Strategy and Technical Marketing Leader at Zenoss
Hosts: Seth Earley, CEO at Earley Information Science
Chris Featherstone, Sr. Director of AI/Data Product/Program Management at Salesforce
Published on: March 21, 2024
In this episode, Seth Earley speaks with Trent Fitz, a product strategy leader at Zenoss with over 20 years of experience across cybersecurity, cloud computing, and IT infrastructure. They explore the reality behind AI hype in IT operations, discussing how organizations can move beyond "AI washing" to build genuine AIOps maturity. Trent shares insights on the critical role of data architecture, topology awareness, and governance in making AI initiatives successful within IT environments.
Key Takeaways:
- Organizations are feeling pressure to adopt AI without fully understanding it, leading to widespread "AI washing" where vendors rebrand existing products as AI-powered.
- True AIOps maturity requires governance at three levels: technical implementation, business process alignment, and enterprise strategy integration.
- Data and topology awareness are foundational to IT operations, yet these critical elements often lack proper appreciation and investment within organizations.
- Application performance monitoring tools like Dynatrace, AppDynamics, and New Relic provide valuable insights but require integration standards like OpenTelemetry for effectiveness.
- Machine learning in AIOps depends heavily on quality structured data and proper information architecture, not just sophisticated algorithms.
- IT organizations must educate stakeholders about the distinction between AI, machine learning, natural language processing, and large language models.
- Success with AIOps requires starting with clear governance frameworks and data architecture before implementing advanced AI capabilities.
Insightful Quotes:
"At the core of AIOps lies a fundamental need to not just visualize but truly understand the staggering complexity of modern IT environments. It's not just about piles of data or sophisticated algorithms; it's about cultivating a genuine appreciation for the significance of that data and how we can harness it to drive smarter, more proactive operations." — Trent Fitz
"It's AI washing. Everything is AI these days just because it's something that people are interested in investigating. There are companies doing real things with it, but for the masses, people are still finding their way around." — Trent Fitz
"People are feeling their way around. They feel like they're going to miss the boat if they don't get projects going, if they don't start investigating how AI can help their businesses. At the same time, they're really just trying to figure out what's what." — Trent Fitz
Tune in to discover how IT leaders can move beyond AI hype and build the foundational data architecture and governance frameworks that enable genuine AIOps maturity and business value.
Links:
LinkedIn: https://www.linkedin.com/in/trent-fitz/
Website: https://www.zenoss.com
Ways to Tune In:
Earley AI Podcast
Apple Podcasts: https://podcasts.apple.com/podcast/id1586654770
Spotify
iHeart Radio
Amazon Music
Buzzsprout
Thanks to our sponsors:
CMSWire
Earley Information Science
AI Powered Enterprise Book
Podcast Transcript: AIOps Maturity, Data Architecture, and Navigating AI Reality in IT Operations
Transcript introduction
This transcript captures a conversation between Seth Earley, Liam Kunik, and Trent Fitz about the evolution of AI in IT operations, exploring the gap between AI marketing hype and practical implementation, the critical importance of data architecture and topology awareness, and how organizations can build genuine AIOps maturity through proper governance frameworks.
Transcript
Seth Earley: Welcome to today's podcast, the Earley AI Podcast. My name is Seth Earley, and I'm here with Liam Kunik, who is co-hosting today. Say hello, Liam. Liam is newer to the organization, a recent graduate in digital marketing, and he's been doing a lot of behind-the-scenes work, so he's helping out in front of the camera today. Great to have you. We are very excited about our guest today. We're going to be discussing the evolution of IT infrastructure monitoring in the context of AI and machine learning operations, the importance of AIOps maturity and the data collection behind AIOps, AI and machine learning in the IT organization, how we need to think about topology and awareness of the tools and the technologies, and some of the drawbacks of application monitoring in and of itself, looking at the bigger context of AIOps and machine learning maturity. Our guest today is a digital expert with 20 years of experience in the tech industry. He's an expert in global marketing, product strategy, business development, cloud computing, cybersecurity, and AI. He has led projects at companies such as IBM, SailPoint, Trustwave, and various startups, and he's currently the product strategy and technical marketing leader at Zenoss. Trent Fitz, welcome to our show.
Trent Fitz: Thank you so much, Seth. I'm happy to be here. I appreciate the time.
Seth Earley: We were just talking about the fact that you're in Austin, as is Liam. I'm in Boston. What's your weather like today?
Trent Fitz: It's about 20 degrees with the wind chill.
Seth Earley: It's 9 here. Wow. Okay, so we're similar, but we have a lot of snow, though that's expected in this neck of the woods. So one of the things I'd like to start the show with is your perspective on misconceptions around AI. What's fact? What's fiction? What's the future of applications? What do you come across most frequently in your travels, when you talk to executives and customers, where they're just not quite getting it right? I know when we were talking before, I almost framed it as what are they getting right versus what are they not getting right, but you can frame it either way.
Trent Fitz: Yeah. So we're still at a pretty nascent stage with AI. It's been around for 50 years, but because of ChatGPT and large language models and the kind of stir they've caused, it's generated a lot more interest recently. It has become a more tangible application for business use, but there's still so much confusion. Mostly what people know is ChatGPT, or the corollary Google or Microsoft options. Obviously OpenAI is the topic of a lot of conversation, but rather than trying to say who's getting it right or wrong, I think the general sentiment right now is that people are feeling their way around. They feel like they're going to miss the boat if they don't get projects going, if they don't start investigating how AI can help their businesses. At the same time, they're really just trying to figure out what's what, and there are so many terms around artificial intelligence, even drawing a distinction between artificial intelligence and machine learning, and where those fit in with natural language processing and large language models. People are still trying to figure out how all these parts and pieces work together. And right now, honestly, it's a huge opportunity for vendors to use smoke and mirrors to sell anything. It's AI washing. Everything is AI these days just because it's something that people are interested in investigating. So we're at such an early stage. There are companies doing real things with it, obviously, but for the masses, people are still finding their way around.
Seth Earley: I like that term, AI washing. You know, I used to say back in the early days of AI and machine learning that, by definition, AI didn't work, because as soon as something worked, they called it something else. I forget who coined that phrase. So word processing was an early iteration of AI; we don't call it AI now, we call it word processing. But now everything is AI, which is an interesting twist on that. Even among the people I respect and admire in the field, when I see some of the claims made, I want to pull them aside and say, how can you make that claim? That you have artificial general intelligence? I just don't buy it. What about when you talk to your clients and customers, or go to conferences? What are people talking about from that perspective? What are they trying to do? What are the problems they're trying to solve?
Trent Fitz: So my forte, I guess you could say, is AIOps. I've specialized in IT operations, really just by happenstance. And in AI for IT ops, or AIOps, there's machine learning involved, so it's still a big focus, and I can speak to the IT operations community specifically as it relates to AI. In some respects, people are starting to figure things out; I used the phrase earlier, feeling their way around. There's also a lot of consternation around the topic. To your question specifically, what I've seen is a bit of confusion, excitement, and disappointment all mixed together right now in terms of how to use these models. For AIOps, for AI-driven IT operations, what you should be focused on is whatever automation you're trying to do. So let's say you have a tool that's looking at your cloud infrastructure and measuring key performance indicators.
All of a sudden you get a million alerts a day, and most of those are noise. They're saying something is happening in the infrastructure, but not necessarily anything that affects the business. That's a massive point of friction. That was a major use case people used AIOps for: filtering the noise down to only what's really actionable, only what's important to your business. Not alerting just because something's wrong, but only if something's wrong that might have a business impact. A lot of people are using that type of machine learning effectively, and that type of natural language processing, to condense all of that text-based alert data into something that is consumable and can be fed to other systems or people. That's something where I think people have gotten it figured out.
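The noise-reduction use case Trent describes can be sketched in a few lines. This is an editorial illustration, not Zenoss code or any vendor's implementation; the alert fields and resource names are invented. It collapses duplicate alerts by a (resource, check) fingerprint and surfaces only the groups tagged as business-impacting.

```python
from collections import Counter

# Toy alert stream; in practice this would come from a monitoring pipeline.
alerts = [
    {"resource": "web-01", "check": "cpu", "business_impact": False},
    {"resource": "web-01", "check": "cpu", "business_impact": False},
    {"resource": "db-01", "check": "disk", "business_impact": True},
    {"resource": "db-01", "check": "disk", "business_impact": True},
    {"resource": "lb-01", "check": "latency", "business_impact": False},
]

def condense(alerts):
    """Collapse duplicate alerts and keep only business-impacting groups."""
    counts = Counter((a["resource"], a["check"]) for a in alerts)
    impacting = {(a["resource"], a["check"]) for a in alerts if a["business_impact"]}
    return [
        {"resource": r, "check": c, "occurrences": n}
        for (r, c), n in counts.items()
        if (r, c) in impacting
    ]

actionable = condense(alerts)
print(actionable)  # one condensed, actionable item instead of five raw alerts
```

Real AIOps tools use far more context (topology, time windows, learned correlations), but the shape of the problem, many raw events in, few actionable items out, is the same.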
Trent Fitz: But I also think what I've found is there's a much bigger confusion about how to properly manage IT infrastructure in general. As I dug deeper with people, the real issue is that you probably don't know much about your IT infrastructure. That's the real problem, and that's where people are really at a loss. A lot of people don't appreciate the value of data and don't really understand topology. We can get into the details of that, but the reason I bring it up is that you can imagine LLMs, large language models, as the really sophisticated part at the top of an engine. If the engine is weak, or has no data, or the wheels are bad, or it's sitting in a puddle of oil because the engine's leaking, it doesn't matter how impressive the engine itself is; it's not going to get you very far. Garbage in, garbage out, as the saying goes. If your processes or the data that you're feeding to those tools aren't solid, you're not going to get the results you need. So maybe that should be the focus of the conversation, more than just, hey, what are you going to do with this LLM? You probably should be thinking about whether you actually have a model of your IT topology and what you're generating from that, and then let's move forward on what you want to do with those large language models.
Seth Earley: Well, my favorite expression for that is precision and recall. Recall: do you get everything you need? Precision: is it relevant? If you get a lot of irrelevant stuff and you miss critical stuff, then you have no value, so both precision and recall have to be good. The same thing is true with the data, the data collection, the configuration, data lineage, provenance. It's a topic I'm very passionate about. You can't just dump a bunch of data in and expect the model to make sense of it. But I've been at AI conferences lately where I hear people claim that you just put data in and now your large language model knows all about your environment, or all about your whatever. It's just not as simple as that. I'll pass that one over to you.
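Seth's precision-and-recall framing is easy to make concrete. A hypothetical editorial example (the alert IDs are invented): a filter flags a set of alerts, and we compare it against the set that was truly actionable.

```python
def precision_recall(flagged, actionable):
    """Precision: of what we flagged, how much was relevant?
    Recall: of what was relevant, how much did we flag?"""
    flagged, actionable = set(flagged), set(actionable)
    tp = len(flagged & actionable)  # true positives
    precision = tp / len(flagged) if flagged else 0.0
    recall = tp / len(actionable) if actionable else 0.0
    return precision, recall

# A filter flags 4 alerts; only 2 of them matter, and it misses 2 real issues.
p, r = precision_recall(flagged={1, 2, 3, 4}, actionable={3, 4, 5, 6})
print(p, r)  # 0.5 0.5
```

A noisy filter drives precision down; an over-aggressive one drives recall down, which is exactly Seth's point that both must be good for the output to have value.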
Trent Fitz: Well, yeah. So there are a couple of things. One, with the machine learning aspect, what you're doing is probably looking for anomalies. You might have a baseline: hey, my CPU consistently runs at X percent for a sustained amount of time on Wednesdays at 10:00 AM. I observe that that's happening, so the algorithm can start to predict it. Okay, it's a pattern; it's going to happen next Wednesday at 10 a.m., so I'm not worried about that one. What you might get an alert for is when something else happens during that same time but is a little bit different. The algorithms can say: this, in context, doesn't look like a problem, but this other thing does. So there's some sophistication that happens through that machine learning. But what you have to feed it, to allow that to happen, is structured data, a model of your infrastructure. As you said, you're not just dumping data in; there's structure around it. And it's that structure you need to really understand what your IT infrastructure is. By that I mean: you have an application, say a financial services application or whatever it is. If you drill down further, that application actually runs on an underlying infrastructure. There's a web server (it could be numerous servers, with lots of redundancy and lots of complexity), there's an application server, there might be one database or lots of databases, there's storage supporting all of those things, and there's essentially a network that connects all of it together.
And that infrastructure could be on-prem, or in the cloud, or span both. So you drill down into the layer underneath your application, you start to see all the things supporting it, and what you need to do is model that. That's a model of your infrastructure: you need to understand how it all works together, so that if your database has a problem and it impacts your application, you know those are connected. Or if a switch or a router has a problem with the network and the entire application goes off the map, well, that's a risk you need to know about. So you've created a model of that. The large language models, you can get some benefit from right now. I assume they're going to be very effective at correlating a lot of alerts, summarizing, and outputting that to someone: hey, I looked through a million alerts and here are the one, two, three, or five things you should probably be worried about. There could be some value there. But really, what you're feeding them is the structured data about your IT infrastructure. Then the maturity goes up when you say: now I have governance around that. I have governance at the technology layer, making sure I have certain standards in place and that I'm doing things in a standardized, formal way. We can get into CMMI or ITIL and some of these other models that say here's how you should structure an IT organization, but you can have governance at the technology layer just to formalize things. And that's not the only layer: you also have governance at a business process and workflow layer.
And that includes: what is the IT operations team supposed to do? If you have an incident or a problem, who gets notified? What's the escalation chain? Who ultimately resolves it? And how does that connect to some of the ITIL processes: how do I resolve incidents, how do I manage problems, how do I manage changes? These are core parts of running IT operations. Then you have a level above that, at the enterprise strategy level: now we're looking at entire company-wide initiatives, and we should be funding those based on whatever business strategy we have and executing on that. So there are maturity models that need to be built, there are levels at which you need to put governance in place, and there is a model of your infrastructure that needs to be created. Then, from there, that's when you start doing machine learning, and maybe you've got some specific use cases where you can leverage LLMs to take next steps.
Seth Earley: Yeah, or to provide an interface where people can converse with the model, which is what I think is the massive opportunity. I can literally say, hey, I want to know how this application connects to this infrastructure. Can you show me? Can you visualize that? And the large language model should be able to look at the structured data and say, yes, I can visualize that. Assuming it's all structured, there's some fun stuff you can do there.
Trent Fitz: Yeah, absolutely.
Seth Earley: And I want to get into the details more, because I love this kind of stuff, but before I do, I want to go back to one topic we haven't spent much time on, which is just to get you to comment on Gartner's representation of AIOps maturity, what they think is important for AIOps maturity, and any commentary on where people seem to be getting it right or wrong on those seven things Gartner mentions. And then I want to come back to what you were just talking about.
Trent Fitz: So Gartner has produced different documentation on AIOps, and you may be referencing one particular set, their maturity framework. There are a few to choose from, so I'll leave it to you. You probably know more; I can call it up if you want to grab it.
Seth Earley: Well, why don't we do this? I was looking at the one that specifically talked about seven things in particular, and it had things like understanding your infrastructure and topology, doing things like automated responses, how you deal with incidents, and then it looked like it said understanding how you do machine learning. There were like seven categories. I don't remember exactly what they were, but have you seen that model before?
Trent Fitz: I have.
Seth Earley: Okay. And so would you like to comment on those?
Trent Fitz: Yeah. You're hitting on a lot of the key terms. The other one that obviously comes up is decision intelligence; you need decision intelligence as well. Going through those: first off, the maturity models for AIOps (I actually co-wrote one with Gartner recently) are about making sure that you have governance on multiple levels. My pitch in the previous answer was about governance at a technical layer, a business process layer, and an enterprise strategy layer. I think you need that to be mature. A lot of people are implementing machine learning and AI without any of that governance in place: they don't have those formal processes, and they don't have their data organized the way it should be. So I would say people get that wrong. On the data and information architecture piece specifically, there's a general lack of appreciation for it, and for the fact that there's so much complexity in IT operations, especially when you start connecting your IT environment to your business, or looking at your IT operations team, or looking at your entire enterprise from a business perspective. Those are complicated things, and I don't think people fully appreciate the data and information and the need to actually have it organized. I'd also say, around the machine learning piece, people don't fully understand how to go about implementing machine learning and structured data, and what the AI capabilities actually are. What can you do with it? What can you not do with it? Those would be my key ones.
And to drill into that a little: on the decisions around machine learning and structured data, people have seen at this point that, yes, you can get a little more intelligent and use some machine learning to solve some problems. But sometimes those technologies are presented as a black box. You can't see how they work, and you can't really build your own machine learning models inside these tools; you just get something intelligent done for you if you use the tool. That makes it really hard for people to see behind the curtain and understand what's actually going on. So when people are evaluating tools, they need to evaluate what machine learning capabilities they have, the algorithms they're using, how transparent those are, and whether they can actually create their own models in some respects. Because it's only getting more sophisticated, it's not just buy off the shelf and hope for the best; you should actually be customizing and creating your own algorithms or machine learning models for your specific use case. That's something a lot of people get wrong. They assume it's already been done for them, and then they're disappointed when they realize there's still work involved.
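Trent's point about seeing behind the curtain, rather than trusting a black box, can be made concrete with the simplest possible version of the CPU-baseline pattern he described earlier. This is an editorial sketch under stated assumptions (a z-score over a per-time-slot history), not how any particular AIOps product works.

```python
import statistics

def is_anomalous(history, observation, z_threshold=3.0):
    """Flag an observation that deviates from the learned baseline.

    `history` is a list of past readings for the same time slot
    (e.g. CPU % every Wednesday at 10:00). A simple z-score stands in
    for the proprietary algorithms a vendor tool might hide.
    """
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history) or 1e-9  # avoid division by zero
    return abs(observation - mean) / stdev > z_threshold

# Wednesdays at 10:00 the CPU reliably sits near 80%; that's the baseline.
wednesday_cpu = [78, 81, 80, 79, 82, 80]
print(is_anomalous(wednesday_cpu, 81))  # the expected spike: not an anomaly
print(is_anomalous(wednesday_cpu, 35))  # an unexpected drop: flagged
```

Owning even a toy model like this makes the evaluation questions Trent raises (what algorithm, how transparent, can I tune the threshold) answerable rather than opaque.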
Seth Earley: I like that a lot. And I want to also talk about application monitoring. I was at a major conference a few weeks ago, and there were vendors there displaying their tools for application monitoring, and I had some issues with how they described things. They would say: all you have to do is install this agent, it collects all the data, it instruments the code, and you're all done. And I thought, no, that's not quite right. Yes, it collects data from the application and the code, and you're instrumenting it. But if I don't connect the application to the infrastructure, or if I don't understand what my infrastructure looks like, there's so much more to it. It's as if the application were my car's engine: you can tell me all these things about my engine, but you might not be able to tell me that my steering wheel is broken, or that my tires are flat. There's so much more to the overall experience, the overall operations, than just the code. So the challenge is, if you're really trying to say, I need visibility into my operations, especially as it relates to AI (how much compute am I spinning up, how much is this all costing me, what could cause an outage), then yes, application monitoring is a component, but I would argue it's such a small part of what you really need. Can you comment on that?
Trent Fitz: Yeah. APM in general is critical; you do need those types of tools. APM is application performance monitoring, and there are a few big, well-known vendors: Dynatrace, AppDynamics, and New Relic are the big three. They focus on, as you said, application monitoring, and even across infrastructure as well; however, they're focused on the application layer. Dynatrace, for its part, has tried to expand that definition. If you go to the Dynatrace website (and I'm not here just to promote them), they'll talk about having Davis AI, and they'll say, hey, we were actually the first AIOps company, which is interesting. They're using AI and machine learning as part of how they manage application performance. So overall, yes, the role those tools play is really important. What's happened is there are now good standards around how you get information in and out of those tools, like OpenTelemetry, an open-source standard that everybody has now agreed to and that's being baked into products. That type of data coming from APM tools is essential and hard to do without, but it is only one aspect, only a piece of the puzzle. What you need is to be able to say: this APM tool is giving me metrics or events or whatever telemetry data about my application, but I want to correlate that to what's happening underneath. And that's where you need actual infrastructure visibility, meaning your servers, your network, your storage, and so on.
Trent Fitz: So that's where I would say, yeah, you're correct; not wrong, not incorrect, but there's more to the story than the APM alone. You need observability for your infrastructure, which is what Zenoss does. If I have that infrastructure data, I can correlate what's happening in that layer to what's happening in the application layer, and now I have a much more complete picture. If my application is slow, is it because the application itself has an error, or is it because my disk is filling up on a server somewhere? It may not be related to the application code at all; you just have a storage problem and need to go fix it. So there's correlation that needs to happen, and there are standards being developed for that, which is good for the IT operations industry. There are standards like OpenTelemetry, and things around machine-to-machine communications. The ability to integrate tools together and have observability across all layers of the stack is much easier now, and it's continuing to improve. Those are really important developments.
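The correlation Trent describes, tracing an application symptom down to the infrastructure underneath it, presupposes the topology model discussed earlier in the conversation. A minimal editorial sketch (all component names are invented, and this is not Zenoss's data model): a dependency graph in which a failing component's impact propagates upward to everything that depends on it.

```python
# Each component maps to the components it depends on (invented names).
depends_on = {
    "payments-app": ["app-server", "orders-db"],
    "app-server": ["web-tier", "core-switch"],
    "orders-db": ["san-storage", "core-switch"],
    "web-tier": ["core-switch"],
}

def impacted_by(failed, graph):
    """Return every component whose dependency chain reaches `failed`."""
    impacted = set()
    changed = True
    while changed:  # keep propagating until no new components are affected
        changed = False
        for comp, deps in graph.items():
            if comp not in impacted and (failed in deps or impacted & set(deps)):
                impacted.add(comp)
                changed = True
    return impacted

# A core switch failure should take the whole application off the map.
print(sorted(impacted_by("core-switch", depends_on)))
```

With a model like this, a disk alert on `san-storage` can be correlated to slowness in `payments-app`, which is exactly the database-to-application connection Trent says the topology model must encode.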
Seth Earley: Yeah. And I would argue also that if I've got the application layer and the infrastructure layer, and I have information about my services and my application, I want to connect my services and applications to the business functions they perform. If we're doing electronic payments, I have a payment application, and if the payment application goes down, I need to know that. I've heard some vendors talk about the term "service-now-ability," meaning how do I connect all these technologies to the actual business, so that people understand the business impact as well as the technology impact. At the end of the day, I'm a big fan of a lot of the APM tools like New Relic and AppDynamics and some of the others out there, and they're really focused on developer experience; developers have tools that they love. But the people running, say, network operations, or infrastructure folks, or even more senior people responsible for uptime, those are the people I think would benefit more from a tool like Zenoss. Would you agree with that characterization?
Trent Fitz: Yeah, I think that's true. And I would also say DevOps is a key persona for us. DevOps teams are the ones building infrastructure-as-code, building in the cloud, using automation; they want everything automated, they want CI/CD pipelines, all these modern DevOps-style approaches. So at Zenoss we've been focused on building APIs, extensibility, and integrations, so that you can integrate well with all these other tools in a DevOps pipeline. That's a key persona. And actually, tying that back to the APM tools: we're integrating with those tools and hopefully benefiting mutual customers, because we'd like people to have the best solutions. If there's an APM tool the customer is using, we want to make sure we integrate with it so they get a full view of their application and infrastructure together. But you're right: network operations, site reliability engineers, infrastructure and operations folks, those are all key personas. And they have pain, a lot of pain, especially if you look at their day-to-day activities. Even if you just look at a ticket queue or an incident queue and ask how many incidents are open at a given point in time, it could be 300, it could be thousands, depending on the size of the environment. So that's a big issue. That was a long answer to your question.
Seth Earley: Look, before we run out of time, I want to ask a couple of fun questions, and these are not funny-ha-ha questions, just a chance to get a little more personal. Tell us a little bit about your background. How did you end up doing what you're doing? What did you study in school? Just tell us a little more about yourself.
Trent Fitz: Yeah. So, I like to tell people sometimes that I'm so close to retirement. I was joking, just trying to make a joke. But...
Seth Earley:It's the bald head. Hey, I'm bald too. It's okay. Yeah. No, you know, that's a sign of, uh, that's a sign of being a, being a very wise, um, uh, you know, a wise and thoughtful human being, right? Yes. Yes. Well, okay. So, but, but I started, I started looking at where I am today. So when I was in college, and I won't give away when that was, but, but it was a while ago, um, I, I became a network architect for IBM. Let me back up. Back when I was in college, which I won't say how many years ago that was, but it was a few. My senior year, I participated with a partner in a Motorola-sponsored design contest for a new microcontroller that they were launching that had AI capabilities. And so Motorola went to the universities across the country and said, we want you to create a project to showcase the AI capabilities in this new controller. And so, so my partner and I participated in this, and we actually got top 10 honors in that contest. And so, so it hasn't been contiguous, but my AI experience started 25 years ago, right? And so, uh, then I got into less AI, you know, type stuff. Uh, I was a network architect for IBM. I became a product manager at some small and medium-sized companies, and I found that was my niche, is running product teams in these small and medium-sized companies. And that's what I really love. Uh, it— when you initially were saying, you know, describing my experience, it sounded like a lot of stuff, but really most of the first half of my career was in security, cybersecurity. And then this is close enough. The second half-ish was infrastructure and operations. And so it's not very common to say that you're a cybersecurity expert and an infrastructure and operations expert, but I'm expert-ish at both of those things. And so that's where I got where I am today.
Seth Earley: And what do you do for fun?

Trent Fitz: What I do for fun? I have kids, so I spend a lot of time with my kids. I find that the most fun. But I run a lot, and to get some kind of zen time, I play golf.

Seth Earley: Nice, nice. Oh, I love it.
Seth Earley: And if you were able to go back and give yourself some advice, right after you got out of college, with the perspective you have now, what would you tell yourself?
Trent Fitz: Oh man, that is a hard question to answer. Actually, I tell my kids this, so it's really not that hard. I kind of have an engineering brain. This is personal, and it may not even help your podcast audience, but I liked the engineering-type jobs. When I was younger, I was very risk averse. As I got older, I outgrew that, and I found that the most success I had was when, and I hate to be silly and say follow your dreams, but when I went for it, when I took risks. And the ironic thing is that when you're young is when you can really afford to take those risks. When I say take risks, I just mean take that job that sounds really exciting and like what you want to be doing, even if it's with a tiny company. If it doesn't work out, you'll still learn something from it. Take that job in another city, or even another country. I've just gotten to the point where, to me, it's all about life experiences. And there were plenty that I missed out on when I was younger because I was risk averse and stayed in my comfort zone.

Seth Earley: Yeah, totally get it. This has been great. Thank you so much, Trent. We'll have your LinkedIn information in the show notes. The company website is Zenoss.com. You're simply on LinkedIn, Trent Fitz. So thank you so much for being with us today.

Trent Fitz: Again, thank you for having me. And I just want to make one more kind of plug here, but it's really for educational purposes. For the past few months, we've been doing an AI Explainer blog series. If you go to zenoss.com, you can find it. The very first post was a glossary of AI terms, and then we dig into what each of those terms means, just trying to help. It's part of this education process.

Seth Earley: I love that. I'll be sure to push that out on LinkedIn. That sounds like a great—

Trent Fitz: Awesome. Would appreciate it.

Seth Earley: Thank you to our audience. We really appreciate it. If you learned something today or enjoyed this podcast, please tell someone about it. And again, thank you, Trent, for your time today. It's really been great.

Trent Fitz: Thank you, Seth, and thank you, Liam.

Seth Earley: And this has been another episode of the Earley AI Podcast, and we will see you all next time.
