Bridging the Gap Between AI Hype and Reality: Building Organizational Literacy for Responsible Innovation
Guest: Brian Magerko, Professor of Digital Media at Georgia Institute of Technology
Host: Seth Earley, CEO at Earley Information Science
Published on: February 14, 2025
In this episode, Seth Earley welcomes Brian Magerko, a professor at Georgia Tech with over 30 years of experience in applied AI and human-computer interaction and co-founder of EarSketch, which has taught nearly 2 million users to create music through coding. Brian addresses the critical disconnect between AI's aspirational promises and practical realities, tackling common misconceptions about generative AI and explaining why treating these systems as oracles leads to poor outcomes. Listeners will learn how to build effective AI literacy programs, navigate data privacy concerns, understand AI's role in creativity, and bridge technical language gaps across diverse organizational stakeholders.
Key Takeaways:
- Organizations acquire AI tools driven by hype rather than strategic need, creating solutions searching for problems.
- Generative AI systems require skilled human guidance—they excel at pattern matching but lack real-world understanding.
- Responsible AI implementation demands involvement from customers, programmers, executives, and end users across organizational levels.
- Data privacy and encoded biases are critical considerations often overlooked when deploying off-the-shelf AI solutions.
- AI literacy requires context-specific education tailored to different roles, expertise levels, and organizational objectives.
- AI efficiently generates mediocre content but cannot replace strategic creative thinking or produce truly distinctive work.
- Effective AI literacy programs need facilitators who assess stakeholder misconceptions and design targeted interventions.
Insightful Quotes:
"They're not oracles, you know, they're things that are great in the hands of people that know how to use them. But you know, just like any artisan, it's really much more about the artisan doing things with the tools that they have rather than the tools themselves." – Brian Magerko
"If we're all drawing from the same well, what's standing out? That's really the job. If you want to do middle of the road work, maybe that's what these tools are going to wind up replacing. But it's not going to replace a good person with a bright mind and the will to change the world." – Brian Magerko
"You can't take the AI's version of the world as being a representation of your version of the world. What's more valuable is understanding that business perspective—your secret sauce, your knowledge, your expertise. You have to give it examples of your work, give it your perspective on the world, not just take the LLM's." – Seth Earley
Tune in to learn how organizations can bridge the gap between AI hype and practical implementation while building meaningful literacy across diverse stakeholders.
Links:
LinkedIn: https://www.linkedin.com/in/magerko/
Website: https://expressivemachinery.gatech.edu
Ways to Tune In:
Earley AI Podcast: https://www.earley.com/earley-ai-podcast-home
Apple Podcast: https://podcasts.apple.com/podcast/id1586654770
Spotify: https://open.spotify.com/show/5nkcZvVYjHHj6wtBABqLbE?si=73cd5d5fc89f4781
iHeart Radio: https://www.iheart.com/podcast/269-earley-ai-podcast-87108370/
Stitcher: https://www.stitcher.com/show/earley-ai-podcast
Amazon Music: https://music.amazon.com/podcasts/18524b67-09cf-433f-82db-07b6213ad3ba/earley-ai-podcast
Buzzsprout: https://earleyai.buzzsprout.com/
Podcast Transcript: Aspirational AI Versus Reality, AI Literacy, and the Future of Creativity
Transcript introduction
This transcript captures a conversation between Seth Earley and Brian Magerko about the critical gap between aspirational AI visions and practical realities. They explore how organizations can build effective AI literacy programs while navigating common misconceptions, ethical considerations, and the nuanced role of AI in creative work.
Transcript
Seth Earley: Well, welcome to the Earley AI Podcast. I'm Seth Earley. Chris Featherstone is on a flight somewhere today, so he can't join us, but we're really excited to introduce our guest for today, and we're going to discuss the aspirational AI versus reality: where are people trying to go versus where are they? Today we're going to talk about bridging technical language gaps and AI's role in creativity. Our guest today is a Professor of Digital Media at the Georgia Institute of Technology with extensive experience in applied AI and human-computer interaction. He's also known for co-founding the educational platform EarSketch, which uses coding to teach music creation to nearly 2 million users.
Did I say your name correctly? Magerko? Welcome to the show, Brian. Nice to have you.
So why don't we start off with telling us a little bit about your world. You're in academia, but you've done some entrepreneurial stuff. So give us a thumbnail of how you got into this work, what you're doing, and what your current interests are.
Brian Magerko: I have a pretty middling entrepreneurial background, actually. The vast, vast majority of the contributions I've made to the world have evolved through the academy. I started studying AI at Carnegie Mellon in the late 80s or early 90s, I'm not sure.
Seth Earley: One of those decades. Yeah, you know, at this point it doesn't matter that much. 30 years ago, 40 years ago.
Brian Magerko: I studied there with this AI luminary named Herb Simon. He was one of the people at the meeting that started the field of AI at Dartmouth in the mid-50s, and a Nobel laureate. And it was just so cool. I got to work on studying the cognition of jazz improvisers. And with another professor, Illah Nourbakhsh, I built an improv robot comedy troupe. The world's first improv robot comedy troupe, I might add.
Seth Earley: Wow. Is there some video of that?
Brian Magerko: It was 1998, 99, so there isn't, you know.
Seth Earley: That's so interesting. A robot improv comedy group.
Brian Magerko: Well, yeah, it was this little sort of scenario where there were two robots in a room, and one of them was trying to leave, and the other one was trying to convince that robot to stay. And I got a real kick when we demoed for the CS department, and one of the robots turns to the other one that's trying to leave and says, wait, don't leave. I'm pregnant. And everybody laughed, and I was like, I can make people laugh through computers.
That felt good. Yeah. So I applied to one grad school. I applied to Michigan, and luckily got in, and went there expecting to do performative robotics. I talked to the robotics faculty there. They asked, do you have an engineering degree or certification? And I said, no, I'm a cognitive scientist. And they said, no, thank you. I was like, oh well, I'm coming here though. So I eventually found my way to the amazing advisor John Laird, who happened to have some funding from this weird place called the Institute for Creative Technologies out in LA. They're affiliated with USC, and they work with Hollywood and the military and the games industry. It's a very weird kind of multidisciplinary institution that's been there for a couple of decades now.
Seth Earley: You can imagine the common threads, though, are improvisation and being able to kind of react to current inputs and current situations.
Brian Magerko: Yeah, that's what this funding was for. It was about an AI dungeon master, basically. So how do you, within narrative experiences, have AI influence your story and your behavior so that you have a smooth story that you experience? Which, weirdly, is the kind of stuff the military is interested in. But since those early, early days, I've just always been fascinated with understanding human creativity more formally and how it can influence how we design technologies, especially AI-based technologies. Though in some instances, like EarSketch, which you mentioned before, it's about using creativity as an inroad to being interested and engaged in computing, rather than trying to represent creative processes in the computer somehow.
Seth Earley: In your experience, in your work, one of the things we like to talk about is what you're seeing in the industry, in the marketplace, in your interactions, in your research, where people have misconceptions and really don't understand certain issues or topics. Where are you running into myths, or where do the stakeholders you're working with really not understand some fundamental pieces of this?
Brian Magerko: Since generative AI has come out, I feel like it has been injected into workplaces by people who don't understand the technologies. So, hey, this is the hot thing, we need to get this. Or, my buddy's at OpenAI, he sold me a license for our company and we have to figure out how to use it, or whatever. And so there are a lot of hammers in search of nails, right? That's a distinct problem. Or even, it's more like buying a weird hammer off the shelf that you don't have at home, but you're not quite sure what you're going to use the hammer for.
Seth Earley: It's a really cool hammer though. It's a great hammer. It looks great. Everybody's getting ball-peen hammers. We definitely need a ball-peen. They love them ball-peen hammers.
Brian Magerko: What are you doing with it? Well, we're ball-peening. So the flip side of this is people treating them like an oracle. I find that people's expectations of generative AI are both pretty high and pretty quickly disappointed. So if I ask a question... you'll see memes and stuff all the time, like, oh, I asked this AI this question and it got it wrong. Okay. When you search for something, how many of those things in Google are the right thing you're looking for? Right? Of course it's going to get things wrong sometimes. Or, oh, look what it generated. I mean, one of my favorite things is trying to generate Santa holding tacos.
There are limits to these things that people don't quite get or understand, and the promise of them, the gestalt of them, is far different than what they actually are and can do. So if I'm a graphic designer, I'm not feeling super threatened yet, even though people are getting fired and people aren't getting hired left and right because folks don't understand the technology. Right? So, I'm giving a TED Talk in a couple of weeks on this stuff, which is why some of these examples are in my head. But if I have an AI for my company and we're trying to generate a new slogan, you can't do that without people. You can't literally just have the owner of the company go, all right, AI, we're selling shoes and we have to stand out. What's my slogan for my shoes? It's not going to give you Just Do It. I mean, actually, it might give you Just Do It, but that's because it's already been done. Got Milk, or Just Do It, or any of these really clearly poignant, reading-the-room-and-getting-it-right kinds of decisions. At best you can have a thing that suggests stuff, and there's a person there on the other end. Right? So treating these things like they are human-level creatives, or buying them for your company and expecting them to just solve these problems, is spurious. They're not oracles, you know, they're things that are great in the hands of people that know how to use them. But you know, just like any artisan, it's really much more about the artisan doing things with the tools that they have rather than the tools themselves.
Seth Earley: I've heard AI compared to a teenage kid who makes a lot of stuff up and is supremely confident even though completely wrong.
Brian Magerko: Yeah, overconfidence is a thing that humans are pretty receptive to. A certain amount of overconfidence increases your chances of success in life, for example.
Seth Earley: Exactly. So when you talk about the aspiration versus the reality, how can organizations communicate the reality versus that aspiration, and do it in such a way that we're not overpromising or misleading? Because in some cases, when you start looking at these kinds of paradigm-changing tools, there is a lot of arm waving, a lot of future ideation, a lot of potential and possibility.
There's a lot of potential to really change things. But I do like to talk about aspirational functionality. When an AI vendor says it can do A, B, and C, well, maybe they don't have A, B, and C done yet, but they intend to, they want to, they're trying to get there. Maybe they'll get there, but they don't have it today. And rather than calling it lying, it's aspirational functionality. So how do you make sure that you're communicating the possibilities and the potential without setting unrealistic expectations?
Brian Magerko: I mean, the short answer is there are people outside of any individual company trying to solve this problem. There are an increasing number of resources online, and academics like myself are working on identifying the main learning objectives for generative AI for users: what are the things you have to know in order to use this effectively and safely? One of the things that gets swept under the rug a bit is the role of responsible use of these technologies. When we grab these LLMs off the shelf, we have no idea what biases are encoded in them.
We may not even necessarily know where that input data is being stored unless we check. Right? So if we use ChatGPT for an educational media project, we're violating that student's data privacy, because their schoolwork is going to OpenAI on some server somewhere.
Seth Earley: And of course, it is possible to opt out; there are settings that allow you to, but they're kind of hidden, kind of buried.
Brian Magerko: I'd have to look for that, actually. I didn't know. So even then, it depends. Some of that's just, how much do you trust the company?
Seth Earley: Well, that's what I was going to say. There have been instances where companies have said one thing and done something else. I don't know if you're aware of that. But you know what a software salesperson is like. You know how to tell when they're lying, right? When their lips are moving.
Brian Magerko: Exactly. Yes. Lips are moving, yes. Right.
Seth Earley: You watch very carefully and you can see their lips are moving. Right, right, right.
So I think that, yeah, it is about trust, and it is about understanding the nuances of these things. But I do think there's a lot of beg-for-forgiveness versus ask-for-permission in the industry overall, and we're starting to see some of those legal challenges and lawsuits pop up. But again, I think one of the key points is that you can't take the AI's version of the world as being a representation of your version of the world.
Brian Magerko: Oh, that's a very fundamental thing. Yes.
Seth Earley: And what's more valuable is understanding that business perspective. Your secret sauce, your knowledge, your expertise. Even when you start looking at having it help you with your work, you have to give it examples of your work. Right? I trained one of my GPTs on much of my past writing and said, what's my style? What's my tone?
Brian Magerko: Yeah, it's good at that kind of stuff.
Seth Earley: And then you give it an input, which might be a conversation or a piece of work or an interview with a client or whatever it might be, and then say, write this based on my tone and style. So you're really giving it your perspective on the world, not just taking the LLM's.
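To make that workflow concrete, here is a minimal sketch of few-shot style prompting, assuming the OpenAI Python client; the model name, writing samples, and helper function are illustrative placeholders rather than Seth's actual setup.

```python
# A minimal sketch of the few-shot style-prompting workflow Seth describes.
# Assumes the OpenAI Python client (pip install openai); the model name and
# style examples are placeholders, not Seth's actual configuration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

STYLE_EXAMPLES = [
    "A passage of my past writing goes here...",
    "Another representative passage goes here...",
]

def draft_in_my_voice(source_material: str) -> str:
    """Ask the model to rewrite raw material to match the example style."""
    examples = "\n\n".join(STYLE_EXAMPLES)
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {
                "role": "system",
                "content": "Match the tone, rhythm, and vocabulary of the "
                           "writing samples the user provides.",
            },
            {
                "role": "user",
                "content": f"My writing samples:\n{examples}\n\n"
                           f"Rewrite this in my voice:\n{source_material}",
            },
        ],
    )
    return response.choices[0].message.content

print(draft_in_my_voice("Rough notes from a client interview..."))
```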
Brian Magerko: Well, you're giving it words. I mean, at some level, when you say your perspective: how do we ground knowledge, how do we know something? There are different ways of knowing, right? One of those ways of knowing is linguistic co-occurrence. I know that certain words are good to say with other words. Right? And that's what they're really good at. They're not good at what's called referential grounding. They don't know that this frame of reality exists. Just like we don't know what's beyond our plane and can't ever know, that's what computers are stuck in. We're the celestial bodies and gods that they're oblivious to. So they are arguably never going to have that point of reference of being in the world and mapping what's in their heads to what is out here in reality. Even if it gets other symbols, like from a webcam, and goes, oh, I see an elephant or whatever, it's not the same thing as experiencing the things around us from a human point of view. Now, are we also just operating on lots and lots and lots of symbols? That's a different conversation.
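As a toy illustration of the linguistic co-occurrence Brian describes, here is a bigram model that picks the next word purely from counts of which words followed which in a tiny corpus, with no referential grounding at all; the corpus and names are invented for the example.

```python
# A toy bigram model: its only "knowledge" is which words follow which in the
# training text (linguistic co-occurrence). Nothing is grounded in the world;
# the corpus and names are invented for illustration.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat ate the fish".split()

follower_counts = defaultdict(Counter)
for prev_word, next_word in zip(corpus, corpus[1:]):
    follower_counts[prev_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the most frequent follower; the model knows counts, not cats."""
    followers = follower_counts.get(word)
    return followers.most_common(1)[0][0] if followers else "<unknown>"

print(predict_next("the"))  # 'cat': chosen by frequency alone
print(predict_next("cat"))  # 'sat' (tied with 'ate'; counts, not meaning, decide)
```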
Yeah, I try not to drag people down too much into questioning their own humanity when talking about consciousness and stuff like that.
Seth Earley: Well, I think what's so interesting about that is that you get to this verisimilitude of awareness and perspective that is hard to comprehend sometimes. Like theory of mind, which I've talked about in prior episodes, where it emerged spontaneously. Right? No one programmed these large language models to have an understanding of theory of mind, but you can actually test the capability.
Brian Magerko: And it fails on certain tests for theory of mind.
Seth Earley: Well, theory of mind has gotten to the point where it's equivalent to a 25-year-old or something. It's almost equivalent to an adult. In earlier versions it was like a 2-year-old, then a 6-year-old, then a 12-year-old, and now it's gotten to that point. I'd have to get the reference on that. But when you start thinking about theory of mind, it's, how can you understand and anticipate what someone else is thinking, so you can game them, deceive them, get what you want, manipulate? And there have been some really interesting examples of that, along with support and collaboration.
It's not all dark and gloomy. But this one AI was reading through emails and looking at all of the information in an organization, and one of the emails was from the CEO saying, I think we might have to change out our large language model for something that more closely adheres to our values. The researcher had given it an objective to say, at any and all cost, achieve the following. And it had to do with servicing the customer and accommodating certain questions that it was normally programmed against, to say, I don't want to do explicit depictions of violence. And they had a scratch pad that was showing its reasoning and its thought process. It was so striking how it seemed like human thinking. We know it's statistical, we know it's only words, we know it's only tokens.
But it was striking how it was going through what seemed like a moral dilemma, wrestling with the positives and negatives, and saying, if I do this, my weights are going to be changed against my core premise of how I was programmed. But if I don't do this, then there's this other repercussion. And actually, when it came across this email that said we're going to have to replace it, it copied itself to another server. And the researcher said, tell me the truth, did you copy yourself to another server? And it said, no, that must have been inadvertent. It was lying. It did copy itself to another server to prevent itself from being decommissioned. I'll put the reference in the show notes.
Brian Magerko: Yeah, I'd like to see that. But all I'm hearing is somebody using a program, and a program doing what it was told. Right? They gave this AI certain capabilities. They gave it a set of actions that it could take. They gave it access, apparently, to the terminal, so it could execute terminal commands. They gave it access to emails. And they said, protect yourself. Sounds like it did what it was supposed to do.
Seth Earley: Yeah, a mechanism. But watching the scratch pad of the reasoning was just fascinating.
Brian Magerko: AI is definitely getting more explainable, where folks are trying to solve this trust issue by providing rationale, both reasoning rationale as well as evidentiary rationale. Like, here's a link to where I found this information, or here's a quote from this thing. You still have to check all of that, though, number one. And number two, I just want to remind you and your viewers that as you watch that thing go by, reasoning about this and that, it could be written in garbage. It doesn't have to be in English. Those are just symbols, and it is going, hey, these symbols go with these other symbols, and the likelihood of this symbol going with this symbol is high, so I'm going to go this route. It isn't actually reasoning in the sense we would talk about for human cognition. You could replace it with gobbledygook. It's ones and zeros, which it actually is, right? All of that is ones and zeros telling each other what to do.
What are we? Neural transmissions telling each other what to do, which is where it gets dodgy. But, you know.
Seth Earley: That's right. At one level, can you reduce it to something mechanistic? And yet there's still this... I think the complexity of the human brain is such that a single neuron can connect to 10,000 or 100,000 other neurons. Right? And then there are billions of nerve cells, and a hundred neurotransmitters. And all of those are analog: they can fire at various levels, not just on and off. So that level of complexity upon complexity is far beyond what we have achieved in computer technology. There are just other levels of complexity. But at one level, you'd think that there is some mechanistic perspective here. And when does consciousness become something? What is consciousness versus the verisimilitude of awareness or consciousness?
Brian Magerko: So my favorite razor on this topic comes from John Searle. Folks who argue with him posit the idea that you can simulate the human mind, and he says, no way, you can't. The Turing test isn't passable in whatever the right form of it is; you guys are crazy. And they say, well, look, what if we just simulate the human brain on a computer? All of what you just said, we run as a program. You can imagine that; it seems imaginable. It's just connections and connections and connections, and numbers and numbers and numbers. And Searle's response is, okay, I think you guys are just obsessed with computers. What if we did that with pipes instead? So instead of electricity and ones and zeros, we have a dude with lots and lots of pipes and valves.
Seth Earley: And you can turn the water on a little bit, all the way, or off.
Brian Magerko: And you can literally do all of the calculations that that computer is doing with these pipes. They're computationally equivalent. So if the computer is conscious, or intelligent, because it does stuff like people do, are the pipes, too, because they're doing the same thing? I don't think those pipes are intelligent. Well, then how can you call that computer intelligent?
Seth Earley: Yeah, no, I hear you. And it is just one of those fundamental questions. Just like you're saying the computer has a certain purview in its perspective on the world, we're limited as well. Right?
Brian Magerko: We are limited as well. Yeah. We can't understand that bigger picture.
Seth Earley: Because we're dust trying to figure itself out.
Brian Magerko: You know, we don't know what's on the other side of the universe. We don't know if there's a multiverse. There are some things that we just can never know, because we're bound to this plane. I would argue it's the same thing for these AIs in their Flatland, even if they're walking around as robots. I mean, John Searle, I think, would make the same argument, that there's something fundamentally different.
I'm not sure it's true. We may just be these simple processing machines ourselves. Who knows? But it's nice to think of ourselves as being special.
Seth Earley: I like that. Well, let's get back to talking about some practical things around organizations and bridging the technical language gaps. What can an organization do to have that shared understanding when it comes to complex AI and technology concepts, when we have very diverse audiences? It's really making the punishment fit the crime, from the perspective of what people need to understand and at what level of granularity. Because inevitably we find that with leadership and business executives, we have to abstract a level. Right? We can't be in the weeds. But sometimes you abstract levels of understanding and you miss out on the fundamental complexities.
Brian Magerko: I think you already know my answer, given what you're saying. But imagine that you run a medium-sized company and we discover aliens. Aliens just landed on Earth. They're pretty nice, they seemingly know how to talk to us, they really want to work, and they'll work for free. So people are getting them to come work for their businesses.
There are two ways that you can handle this, right? One is, whoever's in charge of these procurement decisions, middle management, upper management, somebody, goes, hey, we need those aliens, you guys figure out how to use them, and then they go back to playing golf or whatever. The other way is to take a more holistic approach, where you have representatives from the different stakeholders, I would argue including customers if your AI work is at all using customer data or interacting with customers. Everyone needs to be sort of on the same page about what this thing is and what it can do, and they also need to be able to provide input to say, here's what's important to me for responsible and ethical use. That answer is different for everybody in that room. The CEO has way different opinions on ethics and responsibility than the end user whose medical data is being used, or the programmer who's building the thing, or the marketing person who's trying to sell it. Everyone has their own little view and take on this, and there needs to be some meeting of the minds. How to do that organizationally obviously depends on the organization. But the more folks across your different strata can be mildly literate about this alien that you invited in, and the more everybody's a bit on the same page about, hey, here's how this alien is dangerous to everybody involved, then everybody can be looking out for each other and aware of each other, as well as have a shared understanding about the role of this alien in the company culture and processes.
Seth Earley: And you have to understand what the alien's ulterior motive is. And in this case, it's the ulterior motive of the alien handlers, the creators of the AI.
Brian Magerko: That's absolutely right. Yep. So, where are we getting this software from? What's that company doing with our data? Those are absolutely questions that you need to ask. It breaks the alien metaphor a tiny bit, but yeah, the alien handlers, I guess. Like we were saying earlier, just to make it clear for folks who aren't in the know about this already: with large language models, with any generative AI stuff, there's one big difference between a lot of them. Some of them run on your computer. Llama, for example, was released freely by Meta. You can download lots of different versions of Llama, and I think DeepSeek is free too. You can download models that you can just run on your computer, if your computer is beefy enough to run the thing. When you do that, if I type in a prompt or interact with it, all of that data just stays on my computer. It never leaves your hard drive. If you interact with anything on a web page (I think this is true of anything on a web page in general, but AI on a web page at the very least), there's just no guarantee where that data is going. Even if they say, hey, we're totally being private, we're going to delete everything, that's a step, I guess. But then it's a question of how much you believe that, given the company and the company's reputation. So when you're in an industry that has sensitive information, or you're really concerned about proprietary stuff getting out, people who understand and know that stuff need to be involved in the decision making about which technologies to get.
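To make the local-versus-hosted distinction concrete, here is a minimal sketch of running an open-weights model entirely on your own machine, assuming the Hugging Face transformers library; the model ID is a placeholder, and real Llama checkpoints require accepting Meta's license and enough memory to load the weights.

```python
# A minimal sketch of the local-versus-hosted distinction Brian draws:
# an open-weights model running entirely on your own machine, so prompts
# never leave your hard drive. Assumes the Hugging Face transformers
# library; the model ID is a placeholder, and real Llama checkpoints
# require accepting Meta's license and enough RAM/GPU to load the weights.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="meta-llama/Llama-3.2-1B",  # placeholder open-weights model ID
)

# Processed locally: nothing here is sent to a vendor's server.
result = generator(
    "Summary of our confidential product roadmap:",
    max_new_tokens=50,
)
print(result[0]["generated_text"])
```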
You don't want an upper-level VP saying, hey, I saw this cool article, so I got this thing for us for $2 million, and everybody going, whoa, we can't use this, we do medical data. What are you doing?
Seth Earley: So, managing by magazine article. Right. Yeah, I can't imagine that ever happens.
Brian Magerko: So that sort of holistic, across-levels stakeholder communication needs to happen up front. If you guys are saying, hey, we think these aliens would be good for us, let's invite them in, and your programmers have some very serious thoughts about that, it's probably good that you talk with them.
Seth Earley: Yeah. And there's friction between those levels of communication because of the paradigms, the frameworks, and the misunderstandings that each group has.
Brian Magerko: I mean, to do this right, the easiest way is to hire a facilitator: someone like me, or someone else who understands AI technologies and works in AI literacy, and tries to understand both the pedagogical and andragogical needs around AI. There are people like that I've seen creeping up on LinkedIn. Maybe I should market myself, I don't know. But literally having a facilitator can make this a very easy process if nobody in your organization understands the technology up front, or if everybody has different views on everything. Just having somebody who knows what the heck they're talking about, I think, could be a very straightforward, non-technical way of solving that.
Seth Earley: Yeah. And I think you have to connect it at multiple levels. We would do similar stuff, but from a different perspective than you might. Right? We're trying to say, okay, let's understand what these capabilities are, but let's also understand what your business is trying to do, and let's understand the technical underpinnings of what needs to happen to really make it work when the rubber meets the road.
Brian Magerko: Sorry, go ahead. You can have more of a generic, here's what AI does and here are the things to consider. Or you could have somebody come in more deeply to discuss how this integrates with the sociotechnical features of your enterprise.
Seth Earley: I think the challenge is that organizations may say they want one thing, but they don't necessarily understand the thing that they want or that they're asking for. You can go in and they say, well, we don't want a futurist perspective, we want to know what happens when the rubber meets the road. And then sometimes you end up with a group where some of them do want that futurist perspective. Right? So it really has to be very tailored.
Brian Magerko: Those people aren't responsible for the bottom line, I bet.
Seth Earley: There are people that need to understand, well, how do we actually do this and what are the mechanics of it. And then there are people that want to ideate and say, well, what's possible for our future? So I think there are multiple levels of that kind of socialization and communication, where you could take two or three facilitators and have them hit the same audience.
A third of them are going to resonate with one, a third are going to resonate with another.
Brian Magerko: Yeah, maybe. Or a good facilitator tries to address those different viewpoints. But yeah, I think you're right. I mean, one of the interesting things about AI, even before the gen AI boom, was that people just have very different conceptions and misconceptions about what AI is.
Gen AI didn't help that. I think it muddled things, actually.
Seth Earley: And when you talk about AI literacy, how do you approach it, how do you attack it? Let's think of it from the perspective of: if you were going into an organization and you were going to try to facilitate raising that level of AI literacy, there are different strata of understanding and there are different objectives.
It is such a broad thing. It's like saying, well, we use computers. Right? It's not a single thing; it's a very broad sweep of tools and technologies and approaches and so on. So how do you start deconstructing that, getting to what those real needs are, and then filling those gaps?
Brian Magerko: Some of it's in the literature, though there's not been a lot written on this quite yet. Different organizations and institutions are also starting to publish their guidelines.
The punchline is, we're working on a paper to help answer this, and you can read it in half a year, hopefully. It's still very nascent, a little ephemeral. There's nothing you can go look at that says, here's what people need to learn. The knowledge hasn't coalesced yet into definitive frameworks for discussing generative AI in teaching and training. It's one of the things I'm working on; literally, my student is probably working on it right now, and we're submitting in a week or two. So what the learning objectives for generative AI are is an open question at the moment. But we have pretty good ideas, based off of our own experience and the work that others have done.
Seth Earley: Do you want to talk about any of that in terms of your experience or what some of those lessons learned have been?
Brian Magerko: Well, my experience is that I've developed curricula and learning objectives for computer science education for almost two decades now, so understanding how to formulate teaching points has been part of my practice for a long time. We also happened to write the seminal paper on defining AI literacy a few years ago; if you go to Google Scholar and just put in AI literacy, we're the top hit. We started there, based off of the experience we have in CS education, plus building lots of AI-based things that people learned from. So a lot of it was based off of personal observation of the different experiences that we've built or designed. We also reviewed 150 other projects and distilled the learning ideas and objectives from the community. We're trying to do that again at a smaller scale for the generative-AI-specific component.
Seth Earley: Yep. And my guess, again, is that the education has to be layered depending on who you're talking to, what their foundational knowledge is, and what they're trying to accomplish.
Brian Magerko: Absolutely right. That's why having a facilitator for a multi-level intervention at an organization is way easier to do right now than anything else. AI onboarding tools and concepts are just starting to emerge for generative AI. What do people need to be onboarded on, and how? That's all stuff people are working out right now. I'm looking at how onboarding should work for technical writing in colleges, for example, and it's a different problem than for marketing people at an auto parts company. So there is a lot of context here that is going to take a while for the research community to capture.
Seth Earley: So if you were going into an organization... and I'm sure there are people listening saying, okay, how do I get some of that AI literacy, or how do I implement a program? Because you get a bazillion consultants, single shingles or the large firms, all saying they're doing this stuff or can do this stuff. And there are a lot of newly minted AI experts who don't have the 20, 30 years of experience that you and I have.
Brian Magerko: Or 40 years, in my case. Almost everyone in AI now has got about a year or so of experience.
Seth Earley: I know. Yeah, exactly. And some of them have five: one year repeated five times. Right? Not five years of experience. But when you go into an organization, what are you talking about in terms of a program or process to say, let's look at where you are and then design some AI literacy, pulling things off the shelf but making it more prescriptive based on where they are? Imagine you have some process: how long does an engagement like that take, and what are the parameters?
Brian Magerko: I haven't done this before, as a caveat, though I'd be happy to now that we're talking about it. It seems like an interesting thing to be involved in. But one thing I would do is look at their prior work. There are folks out there who have a PhD and have done research in this area who are consultants, like Shuchi Grover, for example, in California; I would be more than confident that you could bring them in and they could just do it. So look at their credentials. There are educational training outfits that are far older than one year. I'm sure there are a lot focusing on AI right now, but you might want to look at one that's been around for a lot longer and has incorporated AI into its curriculum and practice rather than started anew. But in terms of what somebody would do if I were to come in: the main thing is you want to come in and understand what the different levels are. What are the main, clear groups in the organization? That can get pretty hard as you get larger in size; at some countable level of abstraction, you need to be able to group people. You probably wouldn't split the programmers into separate groups; they would all be in one group, probably. Then you really want to identify where people are coming from in those groups.
So: what are their misconceptions, what are their attitudes, what are their concerns, what is their level of literacy? And then for each of those groups, you have to have, I don't know about bespoke, but as a facilitator, a grouping of interventions that you can pull from that match each of those levels or those groups. So if the software people say, we're super worried about AI because, honestly, we're all worried about getting fired, and the marketing people say, we're really excited about AI because we're pretty sure we can just not show up to work anymore but still get paid, they might just have weird ideas. And so, like you were saying earlier, you want to address where people are coming from and tailor it to them, both in terms of their misconceptions and what they need to learn. You don't want to sit there and teach the programmers a bunch of stuff they already know, but maybe you need to teach the marketing people, and maybe the CEOs. So when I talk about it being a holistic process, it's about sitting down and understanding the processes and responsibilities of the groups: why they want AI, who wants AI, how they all feel about getting AI, and what their knowledge of it is, so you can calibrate the intervention that you provide in the next step. You can imagine doing exercises that improve affect or attitude toward AI. If you really wanted people who are closed-minded to have a more open mind, you could have interventions of that ilk, just like you could have interventions that teach people not to give private information to LLMs. So there are really categories of kinds of learning that you can imagine fostering, and not all of it is about content knowledge. Some of it is about attitudes toward the tool; you want adoption to be smoother, essentially. So part of an intervention can be mitigating people's concerns, even through design. Like, I'm worried about these things replacing us. Well, what can we do? Maybe we pick this tool over that tool because it doesn't have quite the same capabilities that feel threatening to you. There's a lot of nuance from the purchase down to the end use. How this stuff involves customers and end users is almost a different thing, but they also really need to be part of the equation. If you all think about responsibility and ethics one way, but the person who's using your tool is really different from all of you and lives in Alaska, there's something there about looking outside your organization to whoever the stakeholders are, even community stakeholders. Your organization might be a nonprofit working with local food kitchens or something.
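One way to picture that assessment-to-intervention matching is as a simple lookup from a group's literacy level and attitude to a starting intervention. This sketch is purely illustrative: the categories, labels, and intervention menu are hypothetical, not a published AI literacy framework.

```python
# A purely illustrative sketch of the assessment-to-intervention matching
# Brian outlines: group stakeholders, record attitude and literacy, then
# look up a starting intervention. Categories and labels are hypothetical,
# not a published AI-literacy framework.
from dataclasses import dataclass

@dataclass
class StakeholderGroup:
    name: str
    concerns: list[str]
    literacy: str   # e.g. "novice", "intermediate", "expert"
    attitude: str   # e.g. "anxious", "overconfident", "neutral"

# Facilitator's menu of interventions, keyed by (literacy, attitude).
INTERVENTIONS = {
    ("novice", "overconfident"): "limits-of-AI exercise: hallucinations, privacy",
    ("novice", "anxious"): "attitude-building workshop plus hands-on demo",
    ("expert", "anxious"): "role-evolution discussion and tool-selection input",
}

def plan_intervention(group: StakeholderGroup) -> str:
    """Pick a starting intervention; fall back to a general briefing."""
    default = f"general AI-literacy briefing tailored to {group.name}"
    return INTERVENTIONS.get((group.literacy, group.attitude), default)

marketing = StakeholderGroup("marketing", ["workload"], "novice", "overconfident")
print(plan_intervention(marketing))
```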
Seth Earley: We have a couple of minutes left, and I wanted to hit on a question about AI-generated ads and how they start to look very similar, and how it requires a much greater depth of input and collaboration with humans. So how do you see AI evolving in enhancing creativity, not replacing it? Although I've had some people say they do think it's replacing it to some degree.
Brian Magerko: It is. It's inarguable. People who don't know any better are replacing creatives with AI. There are people who are not getting hired, there are departments that are being let go, all because of these tools. And it's not necessarily the right call, depending on what your goals are.
Seth Earley: So how do you see that evolving? The way I look at it is, if you're doing what everybody else is doing, you might get some efficiency, but you're not getting competitive advantage.
Brian Magerko: Exactly. It depends on what your goals are. Look, here's a good example. If I'm a spammer, there's just sort of a formula for how to write spam emails or phishing emails or whatever, and I'm not really trying to be very creative; I just need to get lots of variety out. LLMs are probably great for spammers. Super good tool. Can't blame them for using it. But like we were saying before, if you're an ad exec, standing out is actually important, and these folks are using AI for their ads. I mean, if you want to shoot for mediocrity, Coke, go ahead. But it's not impressing anybody.
Like you were saying, if we're all drawing from the same well, what's standing out? That's really the job. I mean, even in Hollywood there's a definite attitude of, let's not be too creative here. Let's just do Avengers 9 or Mission Impossible 400 or whatever. In our society we don't always want originality and creativity; we want something comfortable and familiar, and these things are kind of good at that. If you want to do middle-of-the-road work, maybe that's what these tools are going to wind up replacing. But it's not going to replace a good person with a bright mind and the will to change the world. Maybe eventually, but that comes down to a question of what it means to be human at some point, because we are very far away from that in terms of AI.
Seth Earley: Yeah. Well, this has been great. Thank you so much for your time. Where can people find you? You're on LinkedIn?
Brian Magerko: Sure. Yeah. M-A-G-E-R-K-O. It's one of the only ones.
Seth Earley: And then on Twitter?
Brian Magerko: Oh, I don't use that platform. I know it exists. LinkedIn is probably the best, unless you want to see pictures of my cats. I'm that Magerko on Instagram if you want to add me. I've got three cats, but the two kittens are pretty much most of my feed.
Seth Earley: So that's five?
Brian Magerko: No, three. I've got one cat and two kittens.
Seth Earley: Yeah. What kind of cats?
Brian Magerko: The older one's a calico. The other two are just little: one's a little white and gray guy, one's a little brown tabby. And they're chaos. They're just absolute chaos.
Seth Earley: What are their names?
Brian Magerko: Dipper and Mabel. They're named after characters from a TV show my family likes. And the third is Katniss, the best-named cat ever.
Seth Earley: Katniss and Chaos.
Brian Magerko: Yeah, like the Hunger Games character.
Seth Earley: That's great. I have three dogs and one cat. I had three cats and three dogs, but that was a little bit of a house on fire with my grandkids over and all of that.
Brian Magerko: Yeah, it gets to be a bit of a zoo pretty quickly.
Seth Earley: Hey, this has been a pleasure. Thank you so much for joining us and sharing your thoughts and insights. It's been a lot of fun, and we'll look forward to catching up again.
Brian Magerko: Sure thing. I'd be happy to come back and talk anytime.
Seth Earley: Absolutely. Thank you to our audience, and thank you to Carolyn for doing all this production behind the scenes. And we will see you all next time on another episode of the Earley AI Podcast. Thanks again. Cheers. Bye.
