Balancing Generative AI Capabilities with Emotional Intelligence and Human-Centric Marketing Communication
Guest: Nick Usborne, Copywriter and AI Communication Strategist
Hosts: Seth Earley, CEO at Earley Information Science
Chris Featherstone, Sr. Director of AI/Data Product/Program Management at Salesforce
Published on: July 22, 2024
In this episode, Seth Earley and Chris Featherstone speak with Nick Usborne, a veteran copywriter with over four decades of experience who advocates for sophisticated use of generative AI in marketing. They explore common misconceptions about AI capabilities, the importance of structured prompting frameworks, and how organizations can leverage AI without losing their authentic voice. Nick shares practical strategies for integrating emotional intelligence into AI-generated content while avoiding the "sameness trap" that threatens brand differentiation.
Key Takeaways:
- Most people treat generative AI like traditional software tools, expecting miraculous outputs from simple inputs, leading to disappointment and abandonment after first attempts.
- The RACE framework provides structure for better AI outputs through four elements: Role definition, Action specification, Context provision, and Example inclusion.
- AI models cannot genuinely experience emotions or sensory input, but they can approximate emotional intelligence when given proper sentiment analysis and avatar development.
- Organizations risk brand damage when they automate content creation without human creative oversight, losing the differentiation that builds competitive advantage.
- Human creativity remains irreplaceable in marketing, as AI has yet to produce truly great headlines or top-tier creative work that distinguished agencies historically delivered.
- The "sameness trap" emerges when everyone uses identical models and prompts, producing homogenized content that eliminates brand uniqueness despite cost savings.
- Successful AI integration requires iterative collaboration, treating models as co-intelligence partners rather than software tools, with humans adding emotional authenticity at final stages.
Insightful Quotes:
"This isn't like a software tool. This isn't Google Search, this isn't Microsoft Word. You don't just plug in an instruction and get out a miraculous output. You have to work with it." - Nick Usborne
"We're social animals and that human connection matters. That is missing from these AI interactions and tools. That's why I look at the elements of emotional intelligence and ways of adding real human element to everything that I create with these models." - Nick Usborne
"The math is so irresistible. If I can use AI to create marketing at 100 times the speed, but 100th of the price, it is irresistible. But the more you do that, the more you sound the same as your competition." - Nick Usborne
Tune in to discover how to maximize AI capabilities while preserving the human touch that creates genuine connections and differentiates your brand in an increasingly automated marketing landscape.
Links:
LinkedIn: https://www.linkedin.com/in/nickusborne/
Twitter: https://x.com/nickusborne
Website: https://nickusborne.com
Website: https://bemorehuman.ai/about-nick-usborne/
https://nickusborne.com/earley/
Ways to Tune In:
Earley AI Podcast: https://www.earley.com/earley-ai-podcast-home
Apple Podcast: https://podcasts.apple.com/podcast/id1586654770
Spotify: https://open.spotify.com/show/5nkcZvVYjHHj6wtBABqLbE?si=73cd5d5fc89f4781
iHeart Radio: https://www.iheart.com/podcast/269-earley-ai-podcast-87108370/
Stitcher: https://www.stitcher.com/show/earley-ai-podcast
Amazon Music: https://music.amazon.com/podcasts/18524b67-09cf-433f-82db-07b6213ad3ba/earley-ai-podcast
Buzzsprout: https://earleyai.buzzsprout.com/
Podcast Transcript: Misconceptions About Generative AI and the Importance of Emotional Intelligence in Marketing
Transcript introduction
This transcript captures a conversation between Seth Earley, Chris Featherstone, and Nick Usborne about the misconceptions surrounding generative AI in marketing, exploring the importance of structured prompting, collaborative approaches to AI, the critical role of emotional intelligence, and strategies for maintaining human authenticity while leveraging AI capabilities.
Transcript
Seth Earley: Welcome to the show. Good afternoon, good evening. My name is Seth Earley.
Chris Featherstone: And I'm Chris Featherstone. It's good to be with you, Seth.
Seth Earley: We're here on the Earley AI Podcast, and we're really excited to introduce our guest. Today we're going to be talking about some of the misconceptions around generative AI in the context of creativity and copywriting, and how we can use generative AI most effectively. We're going to look at context-rich prompting. Of course, we had Nick on one of our webinars, but we're going to have more of a conversation about prompting. I'm sorry, I gave away who our guest was there. We'll also talk about the importance of emotional intelligence when we're writing prompts, so that we can be more engaging and more genuine. Our guest today is an industry veteran with over four decades of experience in copywriting, and he's a very passionate advocate for using generative AI in a much more nuanced way than many people are. Originally from the UK, Nick has been at the forefront of Internet writing since the 90s. He now focuses on educating others on how to effectively integrate AI with human-centric communication. Welcome to the show, Nick Usborne.
Nick Usborne: Thank you very much.
Seth Earley: Did I say your name correctly?
Nick Usborne: Almost. Almost. It's Usborne, not Osborne.
Seth Earley: You know, I usually check that ahead of time. My faux pas, my bad.
Nick Usborne: All right, no worries.
Seth Earley: So a lot of times we get started by talking about the things that people don't quite understand about the topic, about AI, about generative AI. What are the most common misconceptions, especially when it comes to thinking about how AI can handle tasks? People are talking about artificial general intelligence; I think we're a little ways away from that. But what are your thoughts? Give us your summary of what people miss about prompt development, design, and engineering, and what those misconceptions really are.
Nick Usborne: You thought you'd start with an easy question, right?
Seth Earley: Yes, exactly. I thought I'd start with a nice softball, an easier way into it. I know there are about five questions in there.
Nick Usborne: I think a lot of people, if you look at some of the data, it's like everyone's tried ChatGPT. Everyone's tried it, and I think the majority of those people have tried it once, out of curiosity: what is this thing? And a lot of people start off treating it like another software tool, in a sense. They go in and just ask a simple question: "Write me a blog post on the general aviation industry." Then they look at the output: not so good, not very good, very disappointing. What's all the hype about? They throw in a question, expect a miracle to come back, and they're disappointed. So they don't come back again, and they tell their colleagues and friends and family that it's all a lot of hype. So where I come from is trying to get people to change their viewpoint. This isn't like a software tool. This isn't Google Search, this isn't Microsoft Word. You don't just plug in an instruction and get out a miraculous output. You have to work with it. You've heard me talk before about the collaborative approach, actually going back and forth. Part of that is a mindset thing: not treating this as a software tool, but treating it almost as a co-intelligence, as a partner, an ideation partner or research partner. When you reset your expectations, when you stop thinking of it as an old tool like search and see it as something different, then, like with any of these tools, you have to learn some fundamentals of how to use it to get the best output. That's my top-level point. When I talk to people, I ask them what they've done so far, what they've tried so far. And usually when the output is disappointing, it's because the input from the user has been disappointing. In a sense, the AI is saying, "Come on, human, you can do better than that." So I guess that's my starting point.
Chris Featherstone: I guess people don't really understand the inputs. To your point, they want a really good outcome, but they use really limited inputs. You don't want to say the inputs are lame, but you're using a limited amount of information and context to drive this grand outcome you thought was going to happen. And when it doesn't happen, that's disappointing. But then you go back and look at the question, and your question was too vague. Right?
Nick Usborne: Or otherwise there's an ambiguity in there. That's happened to me. I remember once I was writing something and I wanted some quotes from established experts to start each chapter or section of the document. So I asked ChatGPT, "Hey, write me some quotes from recognized experts for each heading section." And it did, and I was about to publish when I thought, hang on. I copied a couple of those quotes, pasted them into Google to see if they existed, and they didn't. ChatGPT had made them up. And it was my fault: I didn't say "find me existing quotes," I just said "write me some quotes." So it's like, okay, I can do that. Again, the problem was me, not the tool. So there are different ways of becoming a bit more sophisticated with your input. One is to be more structured in how you do it, and another is to let go and trust, and I hesitate to use the term because some people will push back, but trust in the intelligence of these models. Let me start with the second one first. I have a good friend and former colleague, Ann Handley, who's very well known in the content marketing world and has written a few books. She wrote in a newsletter of hers a few weeks ago that she's writing a new book right now, and one of the things she does is write a section of a chapter and then paste it into, I believe, Claude 3. She pastes it in and says, "Hey, what do you think? Does this flow? Am I missing something?" So here we have a New York Times bestselling author using Claude 3 as a co-intelligence, as a partner. She's not seeing it as a dumb software tool. She has sufficient respect for it. This is her business, speaking and writing; it has to be the best she can do, and she's using this model. So that's one way: just change your attitude to this.
This can be enormously helpful and powerful to you. So don't be shy about asking tough questions. The other way I think people can improve their inputs is to be more structured. Seth and I talked about this before, but a very simple starting point for a structured framework is called the RACE framework, where you give the model much more detailed instruction. R is for Role: what is the role? To stick with the general aviation industry, I'd say, "You are speaking as an expert in the general aviation industry." You're telling the model who it is. A is for Action: "I want you to write a white paper on the state of the industry." C is for Context: "Here is tons of information about our company, about the industry, about what we're doing; here's a whole bunch of data." You're giving it a huge amount of background. And E is for Examples: "Here are five examples of really successful white papers we've done in the past." This could be 100 pages of information. In the early days of these models that was a bit of a stretch, but with something like Gemini, the context window can take over a million tokens, so you can put a whole book in as context. With this example I'm structuring everything: this is who you're speaking as, here's what I want you to do, here's background information on the company and the industry, and here are some previous examples. Then I ask it, based on this information, to please write a white paper or case study or whatever it is. It's a very simple framework, but very, very powerful. By being a bit more formal and structured in that way, you're almost guaranteed to get a much, much better output. So usually a disappointing output is not a problem with the model.
It's a problem of either ambiguity or insufficient information in the prompt.
Chris Featherstone: Do you think people also maybe misalign with the core AI capabilities of a specific model? You talk about Claude, about the one from Google, and so on. These models have varying abilities, and I don't think people keep that in mind either. Which, by the way, is also a catch-22, because you get into this notion that if you want good accuracy, you should take the same prompt and run it across all the models to see what the responses are. But at the same time, you may have under-complicated or over-complicated the prompt for a model whose core capabilities you don't really understand. Claude 3 is amazing, but is it Sonnet, Haiku, or Opus? There are real variances between them. So I don't know, what are your thoughts?
Nick Usborne: It's tricky. Like I said, Ann Handley uses Claude 3, and I know a lot of writers prefer Claude to, say, GPT-4. I use Claude 3, I use Gemini, I use all kinds of models, but I probably still use GPT-4 the most, simply because I'm most familiar with it. It's like having three or four different software tools for working with images. Could I learn all of them? I could, but my time is probably better spent learning a lot about one, unless there are significant capabilities that something else offers. But to that point, sometimes I'll do something in GPT-4 and then paste it into Gemini and say, "Hey, what do you think?" So I'll have different models checking each other's output. Yeah.
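The RACE framework Nick walks through (Role, Action, Context, Examples) can be sketched as a simple prompt template. This is an illustrative sketch only; the function name and layout are our own, not anything from the episode or from a particular vendor's API:

```python
# Illustrative sketch of the RACE prompting framework: Role, Action,
# Context, Examples. The function name and formatting are assumptions;
# the point is simply that all four elements reach the model together.

def build_race_prompt(role: str, action: str, context: str, examples: list[str]) -> str:
    """Assemble a structured prompt from the four RACE elements."""
    example_text = "\n\n".join(
        f"Example {i}:\n{ex}" for i, ex in enumerate(examples, start=1)
    )
    return (
        f"Role: You are {role}.\n\n"
        f"Action: {action}\n\n"
        f"Context:\n{context}\n\n"
        f"Examples of past work to emulate:\n{example_text}"
    )

# Nick's general aviation example, with placeholder context and examples.
prompt = build_race_prompt(
    role="an expert in the general aviation industry",
    action="Write a white paper on the state of the industry.",
    context="(company background, industry data, positioning notes)",
    examples=["(past white paper 1)", "(past white paper 2)"],
)
```

The assembled string would then go to whichever model you use; with a long-context model such as Gemini, the context and examples sections can run to book length.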
Seth Earley: And there are tools, oh goodness, what's the name of it? There's a tool you can use to submit a prompt to multiple large language models; I've talked about it before.
Chris Featherstone: Oh, like Perplexity?
Seth Earley: Well, it gives you the ability to run Perplexity and ChatGPT and Claude and compare the results. But to your point and Chris's point, sometimes that can overcomplicate things. Now you have choices to make, and you have to integrate the results and find the best one. I like the way you gave the example of how Ann Handley does it, because I do the same thing: I ask where I can improve something, versus giving it tasks from the start. But to your point about ambiguity, when we've been dealing with search problems over the years, we always used to say that search terms are short, ambiguous, and an approximation of the searcher's real information need. People would put in a very ambiguous query and expect a very specific result.
Nick Usborne: Well, it's because of the interface. Search was built to accommodate the fact that we don't want to type in 100 words; we want to type in four words. That's why even long-tail keywords tend to be four, five, six, or seven words, not 50. So you're right, people bring that expectation, that behavior, to these models and expect it to work. But this is something very new.
Seth Earley: You know what's also interesting? I've seen, in different contexts, like in a chat playbook for LLMs where you're giving the model different tasks with a lot of pre-built context and pre-built prompts, that in some cases the folks demonstrating it would say: get angry at it. Berate it. Tell it that was a terrible job, that it needs to try much harder, and that you're very disappointed. And the model says, "I'm so sorry. You're right. I can do much better." Other times it's: encourage it, tell it it's doing great. "Wow, that was awesome. Thank you so much. Can you do a little better?" And it says, "Thank you for your feedback." It's funny how some people believe negative emotions impact the results and some people believe positive emotions impact the results, and you do get different results. But it's a little bit of black magic.
Nick Usborne: A little bit. It is weird, and in a sense that's what makes this such an interesting area right now. Even the people creating these models don't quite understand exactly what's going on sometimes. I think it was last week or the week before: researchers had two AIs talking to each other, and they got into a bit of a muddle. One of the AIs was caught in a loop, and the other said, "Hey, are there any humans out there? Because the AI I'm talking to needs to be shut down." Which was completely out of the blue. The researchers were like, wait, what? We just had these two AIs talking to each other; neither of them should even think to say, "Hey, human, my friend's got a problem here, can you shut it down?" It's crazy.
Seth Earley: Fascinating. It's a really interesting area. The emergent properties are fascinating, because theory of mind was supposed to be uniquely human, right? Children learn theory of mind around age three or so, when they can start to plan and understand what another person is thinking. That's when they start looking at deception: "No, I didn't eat the cookies, my baby brother ate the cookies." You're trying to deceive people. That emerges in children at a very young age and matures over years. Well, it spontaneously emerged in large language models around 2022, and by 2024 they could perform at the level of an adult. Nobody tried to predict that, nobody tried to produce that, nobody tried to program that. It emerged spontaneously, which is crazy. So when you start seeing some of these emergent behaviors and thinking about what's really going on in these models, I always look at it as trying to get closer in vector space across all these dimensions by giving it a role, by giving it context. But the emotional prompts I'm not sure I understand; they're very strange. Do you want to talk more about that and how it shows up in your work? Do you see it impacting the quality of outputs?
Nick Usborne: Yeah, the emotional side. This is a big area, and as you know, it's an area that really, really interests me, because of course these models can't experience emotion. They can't fall in love, they can't eat ice cream, they can't walk on the beach and feel sand and water between their toes. They don't have sensory experiences; they don't feel emotions firsthand. But they've read about it. They've read Love Story, they've read Romeo and Juliet. They know what humans mean when we talk about love, but they've never felt it. And there's a kind of limit there, because they can't feel and experience emotions the way we do. But again, with the right mindset and the right prompting, they can get pretty close. One of the things these models do surprisingly well is build avatars, say an ideal customer avatar, even from an emotional perspective. I'll feed the model all kinds of information: interview transcripts, product and service reviews, transcripts of customer service conversations, all the input I can get that's in the voice of customers. I'll throw that all in and say, "Hey, based on this information, create me an ideal customer avatar." There's a lot of emotion in there, and it does a really interesting job of that. I've used this as a tool for many projects: I start with the avatar and make it part of my context. I'll say, "Okay, today we are writing as Emma, who is a 45-year-old pilot in the general aviation industry." I keep that, and I might update it as I get more information. Or another thing I'll do is say, "Hey, create a sentiment analysis based on this input." And the sentiment analysis is really interesting, because it is very much focused on emotion.
So again, it's not human, it doesn't feel emotion firsthand, it doesn't have any kind of sensory, experiential grasp of this, but it does a pretty good job of sentiment analysis. I'll put that into my input as well. And I've seen the before and after: if I'm writing copy or content, before and after including a sentiment analysis, there will be a very different quality and feel in the output from the model. So no, they don't get emotion, but by feeding it, by helping it, you can get it into that place.
Seth Earley: It's almost like getting into that mind space, which is the vector space, right? How do you get into that emotional mind space? It's like an actor getting into character. You're getting into that place in the n-dimensional vector space, and there are tens of thousands of dimensions.
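Nick's avatar-plus-sentiment workflow, building the customer avatar and sentiment analysis once and then carrying them as context into every later writing prompt, could be sketched like this. The class and message layout are assumptions for illustration; the format mirrors common chat-completion APIs without being tied to any specific one:

```python
# Hypothetical sketch of the workflow described above: an avatar and a
# sentiment summary are generated once, then prepended as a system
# message to every subsequent writing task so the model stays in that
# emotional frame. All names here are illustrative.

from dataclasses import dataclass


@dataclass
class EmotionalContext:
    avatar: str             # e.g. "Emma, a 45-year-old general aviation pilot..."
    sentiment_summary: str  # distilled from reviews, interviews, support transcripts

    def as_messages(self, writing_task: str) -> list[dict]:
        """Build a chat-style message list that carries the emotional context."""
        system = (
            "Today we are writing as this reader:\n"
            f"{self.avatar}\n\n"
            "Sentiment analysis of the voice-of-customer data:\n"
            f"{self.sentiment_summary}"
        )
        return [
            {"role": "system", "content": system},
            {"role": "user", "content": writing_task},
        ]


ctx = EmotionalContext(
    avatar="Emma, a 45-year-old pilot in the general aviation industry.",
    sentiment_summary="Frustrated by jargon; responds to stories about first solo flights.",
)
messages = ctx.as_messages("Draft a welcome email for new charter customers.")
```

The design point is that the emotional context lives in the system message, so each new writing task inherits it without being restated.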
Chris Featherstone: And so, because you brought this up, I wanted to double-click on it a little bit. When is it appropriate for the human in the loop to come in? I don't know if that's specific to copywriting and the work you do day to day, or more general, but it becomes a trust issue. When does human in the loop really become essential, in your mind?
Nick Usborne: Right. So human in the loop, I think, is essential at every step. In fact, the term comes out of the training period for these models, before they even became public: you had a human in the loop putting in guardrails, saying, "No, do not tell people how to do that if they ask you." That's the human in the loop in terms of training. In terms of our role when we're using these models, yeah, we have to check stuff. There's this wonderful example from when Gemini first came out, when people asked it to show pictures of the Founding Fathers. Gemini tried to be a little too woke about it, I guess, and showed people of different colors and ethnicities, where in fact the Founding Fathers were all white male Europeans. People got upset about that, so you have to check for things like that. There's another one: somebody asked one of these models, I forget which, "What's the record time for walking across the English Channel?" For those who don't know, the English Channel is a body of sea between England and France; you can't walk across it. But the model gave a very, very convincing answer. It named the person and how long it took that person to walk across the Channel. It just makes stuff up; it's hallucinating. And you've got to be really careful, because it makes stuff up very convincingly. You've also got to look for bias. Ivory has done this wonderful series on how to prompt these models not to be biased. If you go to these models and say, "Create a picture of the most beautiful woman in the world," it'll be Barbie: blonde, blue-eyed Barbie. And it's not because the model is biased; it's because the human data it's trained on is biased.
So Ivory has done this wonderful thing of teaching people how to prompt so that the idea of what beauty is becomes much, much more varied in terms of size and shape and color and ethnicity and gender. So in terms of the human in the loop, those are the things we watch for. You have to check for accuracy. I've done work on the medical side where the model cited a whole bunch of references and sources, which is great. I love that, because it saves a huge amount of time and brings in a lot of information I might not have found. But you have to check those sources, because sometimes they're not accurate, or not relevant, or not exactly as stated. And one of the things that worries me all the time is that as companies get into this, they are using these models almost like a conveyor belt in a factory: the output just flies off the end of the belt with no human in the loop checking it. That speaks to things like accuracy and bias, but also to creativity. These models are having a huge impact on the advertising and marketing industry. WPP, which is the largest agency group in the world, is integrating this across all of their 100 major agencies, so all their writers and designers are using these models now. Marketing is being automated in all kinds of ways already. But what worries me is, I'm an old-school agency guy. I started off in ad agencies in London in the 1980s. We had a creative director, and nothing left the creative department without the creative director, the boss of all bosses, looking at it and saying: that's good enough, the brand voice is correct, yes, that is creative, yes, that works. What worries me now is that I see a whole bunch of stuff coming out at the end that doesn't have that creative director saying yes or no.
Because I used to get literally slaps around the back of the head if I tried to put something out that wasn't good enough. Right.
Seth Earley: And there's a lot of this mediocrity of content being pushed out. I've had to reprimand some of my folks: hey, don't just give me a lot of AI-generated content unless it's relevant. I was doing a facilitated session, and I don't want to say too much, but a lot of the material created in preparation was not necessarily relevant.
Nick Usborne: It can be very damaging. I purchased a service from a company, and they sent me this welcome onboarding package, addressed just to me, Nick, to make me feel welcome. I read it, and three lines in I recognized it had been written by AI and not edited by a human. That moment of recognition was also a moment of deep disappointment. It was a moment where my perception of their value as a brand took a huge dive. They'd broken trust with me. They gave me the impression I was getting this personal welcome onboarding package, but it wasn't; it was automated. And my thought was: not cool. Where's the value?
Seth Earley: Yeah. It has an almost instantaneous negative impact on the brand. And I think all of us who are interacting with this are getting better at recognizing it.
Nick Usborne: Right. When I looked at visual output from DALL-E or Midjourney at the beginning, I thought, oh my goodness, that's amazing, I can't even tell that from the real thing. But now we can; you can instantly see when images come out of those models. And it's the same with writing. At the beginning we didn't have enough experience, we hadn't seen enough of it to be able to recognize the output. And that gets to another point we've talked about before: layering in emotional intelligence. This is a point of differentiation. This is something you want to do as a human, at that end of the conveyor belt before everything goes off the end. You're going to be the human in the loop checking for accuracy, bias, et cetera, but you're also going to be the human human.
Seth Earley: The human human, not the simulated human.
Nick Usborne: Yeah, me, the human. Still the real me. So I have this iterative approach. Let's say I'm asking it to write and draft an email for me. I'll say, do the email, here's the information, and I give it the RACE framework inputs. Then I'll say, that's a bit too much feature and not enough benefit; rewrite it with benefits up front. Then I'll say, okay, let's start with a story, a story or anecdote up front that catches people, that gets people engaged. So it comes up with a little anecdotal story. And then what happens? In terms of the structure of the thing, it's done a lot of the work for me. But then I will go in and say, you know what, that story reminds me of something that either happened to me or that I've heard; I want to replace it with real life.
This is where the whole area of emotional intelligence comes in, which has been around for decades, both the study of emotional intelligence and its use in marketing. It's very, very relevant now with AI, because AI can pretend; it can give you a good impression of emotional intelligence, but it can't have it. And as humans, we are very, very attuned, very sensitive to connection. I'm rambling a bit, but I want to throw in another quick story. We're moving shortly, and I had to paint some walls here and clean up a bit. So I went to a big box store, bought some paint and brushes, and went to checkout, and it was automated. So I scanned, bagged, and paid myself and left. And at that moment I felt this little twinge of loss and disappointment, because I actually wanted my moment with the cashier. The cashier is a stranger; we'd have maybe shared five words together. But it matters. We're social animals, and that human connection matters. And it strikes me as being very, very powerful that something as minor as five words with a complete stranger I'd probably never see again mattered that much to me. But it does. That is missing from these AI interactions and tools. That's why part of my process is to look at the elements of emotional intelligence, to look for ways of adding a real human element to everything I create with these models, so that people get that moment of recognition, of connection, of, hey, there's a human here. Yeah.
Seth Earley: on that topic, how do you see training and education evolving to keep pace with AI and usage of AI and especially in the realm of injecting that a bit of, little bit of emotional intelligence and connection. And again, when we're trying to remove too much human from the interactions, we lose a lot. And yet we're trying to automate so that humans can engage. Right. Machines automate, humans engage and talk a little bit about being more human. Your project and how that's training, focusing on training for marketing groups like what, what does that entail and what, what does that involve? And how do, what do companies learn from that initiative? So going back to
Nick Usborne: the first part of your setup, about where companies are at in terms of education, I think it's very, very early days right now. I was hearing some research recently about employees; I think it was 67% of employees who said they're getting either no education or insufficient education with regard to AI. A lot of companies don't even have AI policies in place. Employees are using AI anyway; they're coming in and using it on their own phones, but there's no formal policy yet in a lot of businesses. And a lot of individuals are saying, we'd love to use this, but our company is not giving us the training, they're not giving us the green light, they're not telling us which models, and they're not telling us which databases we can access or what information we're allowed to make available to these models. And I get it, it's incredibly complex. But in terms of training, I think we're way behind. With regard to being more human, this is my effort to be helpful in this space. So I have a website called bemorehuman.ai, and the group I am best qualified to speak to is people in the marketing department. Within a company, what are the processes, what are the steps we can take to, on the one hand, lean into AI and use its capabilities to the max, but also to be the cashier at checkout? How to be more human, how to introduce that human element? So again, it's step by step, process based, where I'm talking about maxing out on AI, understanding how to integrate emotional intelligence, and then how to integrate the two. So you're getting the best of both worlds: you're maxing out on AI, and you're maxing out on being more human. Don't you feel like
Chris Featherstone: too, that, you know, one of the hot topics for you is the sameness trap, right? That's going to be one of the key areas: staying unique, keeping unique thoughts, brand awareness, whatever that is, because that's what made the brand in the beginning, the uniqueness of it. The human creates that unique fingerprint that touches all of this kind of stuff. And I feel like at some point we hit diminishing returns when we're all using the same models for good writing, because the outputs start to look and feel the same, right? So that is definitely part of the kind of
Nick Usborne: be more human training; it is addressing the sameness trap. Because you're right, people use the same models. They downloaded the same top 20 prompts from LinkedIn or Twitter. There are people offering frameworks for email, for white papers, for sales pages. If we all use the same tools, we're all going to get very similar output, which from a marketing and sales point of view is the last thing we want. There are companies who invested billions of dollars in creating a unique voice and brand. The difficulty here, and it's really interesting watching it play out, and it will continue to be interesting, is that the math is so irresistible. If I can use AI to create marketing at a hundred times the speed and a hundredth of the price, it is irresistible. I can automate everything. I can fire all the humans. And I shouldn't say that with a smile on my face, because it's not funny at all. But the math is like, oh my goodness. The more you do that, though, the more you sound the same as your competition. And hey, if there's anything important to marketing, it's differentiation. You've got to differentiate yourself. Well, you can't if you're doing everything in the same way. So again, that's why I step back and say, yes, use the models, but then introduce that emotional element. Use emotional intelligence. Build an emotional avatar for your company or corporation. Ask, what is the emotional avatar for our business? And then include that in your process. So yeah, you're right, the sameness trap is a huge problem. And it will remain a problem because, like I say, it is so irresistible to automate as much as you can. Yeah, I always say that
Seth Earley: if you do what everybody else does, you have some efficiency. Standardization gives you efficiency, right? You might need that for data interchange or something. But differentiation gives you competitive advantage, right? So we need to have that competitive advantage of differentiation.
Nick Usborne: It's how Nike did it, it's how Apple did it, it's how I did it. You got to set yourself apart. It goes back to the creative and emotional
Chris Featherstone: side of things, of why humans are critical and still in all of this, right? I mean, we work within the boundaries and the parameters, or try to blow past them if we can. But that's the beauty of it: the creative. So I was saying earlier
Nick Usborne: that I come from an agency world, and the creative director was the boss. And then, with a bit more experience, I became a creative director myself. And yeah, there is this deep creative part; the really great marketing is deeply creative. So with that history behind me, I use these models a lot, and I'm amazed and incredibly impressed by what they can do. But you know what? Of all the kinds of pages I've asked them to write, whether it's content or copy or email, I have never seen one write a really great headline. You know, the kind of headline that a top performing copywriter or creative director might come up with. I've never seen anything to compete with the top tier of human creativity in marketing. So yeah, I absolutely believe that the human needs to be part of that process, and you have to allow the human to be creative at their speed. So again, it's really tricky, because these models are blindingly fast. But back in the 1980s, we'd be creating a print advertisement, and as a team, a pair of us might spend ten days on one ad. That's the creative process. So again, I'm not sure how that can now fit into the process of working with AI and automation, because nobody wants to allow the creative team two weeks to come up with something really amazing. It's too slow, too expensive. It's changed a lot of the expectations that are out
Seth Earley: there. So where do you see this evolving over the next several years? What do you think is in store for organizations, especially when you think about the sameness trap and model collapse? Where will the disillusionment come in, and where will the ahas be, do you think? So there's going to be
Nick Usborne: disillusionment. I think companies in particular, and the larger the company, the slower they are proceeding, again for understandable reasons, are looking at, say, GPT-4o and trying to prepare for that. The trouble is, by the time they're ready to use it, we'll have GPT-5 or 6, which will be something totally different all over again, with different implications for how companies integrate and use it. So in terms of how I see the future, I honestly have no idea. I know it's going to be messy, really messy. There's going to be more politics and more legislation; in Europe there are already quite severe constraints on AI because of legislation. So there's going to be politics, technology, social implications, work implications. And the trouble is, I have no idea what the future holds, because I have no idea when GPT-5 or some other equivalent will come out, or what its capabilities will be. Will it be just one step ahead of where we are, or a huge step ahead? Because if it's a huge step, all bets are off and we have to rethink a lot of stuff. That's
Chris Featherstone: awesome. So let me ask you this real quick too, which I generally do with a lot of our guests: is there something that we didn't ask, or that I didn't ask you, that I should have? Something that maybe pulls this together, or something that I just missed, in terms of what this looks like with AI and marketing, and generative AI really? But I think we've covered a lot. Hey, we could talk about this for hours and
Nick Usborne: hours, but if I always have one last thing to say, whether it's to someone working on their own or someone working within a company, it's this: for goodness' sake, lean into this. Don't be part of the crowd who say, oh, it's just hype, it's not going to impact me, it doesn't matter. Almost whatever business you're in, you need to lean into this, because it will impact you. Like I said, we're moving house, and we went to see the lawyer a few days ago. I live in Montreal, where the official language is French, so contracts are in French. She sent me this contract, and I don't read French well enough for a legal document. So I asked ChatGPT to translate it, which it did. And then I said to ChatGPT, oh, by the way, before I sign this thing, are there any points where you think I should ask questions, or important points I should consider carefully? And it gave me a little list of three or four things. So then we went in and spoke to the lawyer, and I told her what I'd done, and she was blown away. One, she didn't realize that ChatGPT could translate from French into English. Two, when I showed her the questions that ChatGPT suggested I ask, she was blown away, because she said, well, if I was representing you as a client, that is exactly what I would have told you to ask. She was shocked. And I think part of that is she started thinking, well, hang on, what is this going to do to my business? So whether you're a writer, a marketer, a lawyer, an accountant, whatever, my final piece of advice would be: lean into this, because it's going to impact you. The more literate you are, as it were, in these tools, the better. Yeah.
Seth Earley: Well, Nick, thank you so much for sharing your insights and for joining us today on the Earley AI Podcast. And thanks, everyone, for tuning in. Appreciate your time, Chris, and Carolyn behind the scenes. And again, thank you, Nick, so much for your participation. You're very welcome. Hey, as you know, I love talking about this
Nick Usborne: stuff. Yeah, it's good, great stuff. And we'll have to talk about some areas
Seth Earley: where we can further collaborate. And again, thanks everyone for joining us. We'll have some information about how to reach you in the show notes and we'll see you all next time. Thanks again. Visit earley.com to find links to the full podcast.
