Why Security Teams Are Being Asked to Do Three New Jobs - and What to Do About It
Guest: Rob Lee, Chief AI Officer and Chief of Research at SANS Institute
Host: Seth Earley, CEO at Earley Information Science
Published on: March 27, 2026
In this episode, Seth Earley speaks with Rob Lee, Chief AI Officer and Chief of Research at SANS Institute, about why AI governance is broken in most organizations - and what it actually takes to fix it. They explore why security teams are being asked to simultaneously govern, adopt, and defend AI, why the default framework of no is driving shadow IT rather than preventing risk, and what a practical reset of AI governance actually looks like. Rob also shares why agents should be treated like workers rather than software, and why executives cannot afford to outsource their understanding of AI to anyone else.
Key Takeaways:
- Security teams are now being asked to do three new jobs at once - evaluate AI tools for the organization, drive their own AI transformation, and manage governance and regulatory compliance.
- The default framework of no does not prevent AI use - it drives it underground, creating shadow IT that is far harder to monitor and control than sanctioned tools.
- Governance needs a stoplight model - green means experiment freely, yellow means involve security as a lifeguard, red means stop - with the default answer being yes unless there is a clear reason to say no.
- AI governance documents written before generative AI arrived are already outdated - most say nothing about agentic workflows, human-in-the-loop requirements, or connector permissions.
- Agents should be treated like workers, not software - they reason, improvise, and operate 24/7, which means they require the same zero-trust principles, oversight structures, and ethical guardrails as human employees.
- Executives cannot outsource their understanding of AI to security teams - AI literacy at the C-suite level is a competitive requirement, not an optional capability.
- Good governance is not about documenting every possible bad outcome - it is about establishing overarching goals and building a culture of trust with enough guardrails to prevent the truly stupid risks.
Insightful Quotes:
"The framework security teams are using is a framework of no. And that framework of no is causing people to use AI secretly, regardless of what the security team says." - Rob Lee
"An agent in the future - and some organizations are already treating it this way - is a worker. Everything you ask about governing agents, replace that with a human who just got hired. The same rules apply." - Rob Lee
"You can't automate what you don't understand - and with agents, the stakes are even higher. An agentic mistake isn't a wrong paragraph, it's a blocked critical system." - Seth Earley
Tune in to discover how security and executive leaders can move from a governance posture of restriction to one that enables innovation, manages real risk, and keeps organizations competitive in the age of agentic AI.
Links:
LinkedIn: https://www.linkedin.com/in/leerob/
Website: https://www.sans.org
Sponsor: Vector - https://www.vktr.com/
Ways to Tune In:
Earley AI Podcast: https://www.earley.com/earley-ai-podcast-home
Apple Podcasts: https://podcasts.apple.com/podcast/id1586654770
Spotify: https://open.spotify.com/show/5nkcZvVYjHHj6wtBABqLbE
iHeart Radio: https://www.iheart.com/podcast/269-earley-ai-podcast-87108370/
Stitcher: https://www.stitcher.com/show/earley-ai-podcast
Amazon Music: https://music.amazon.com/podcasts/18524b67-09cf-433f-82db-07b6213ad3ba/earley-ai-podcast
Buzzsprout: https://earleyai.buzzsprout.com/
Podcast Transcript: AI Security, Shadow IT, and the Governance Reset
Transcript introduction
This transcript captures a conversation between Seth Earley and Rob Lee about the compounding pressures AI places on security teams, why the current default posture of restriction is backfiring, and what a practical governance reset looks like. They also explore the deeper challenges of agentic AI - from zero-trust principles and human-in-the-loop design to the emergent ethical behaviors of large language models and what those mean for organizations building on top of them.
Transcript
Seth Earley: Welcome to the Earley AI Podcast. This is a show where we talk to experts in the industry about what organizations are doing to get AI right - to deploy it correctly, avoid risks and security challenges, get ROI, and avoid the mistakes that many organizations are facing right now. Today we are talking with Rob Lee, Chief AI Officer and Chief of Research at SANS Institute. Rob's work sits right at the intersection of AI and cybersecurity, where the stakes are high, the data is messy, and the temptation to move faster than the guardrails is real. In this conversation we are going to be practical - governance that works, shadow IT, data leakage, and why security teams are being asked to do multiple jobs at once. Rob, welcome to the show.
Rob Lee: Thanks for having me. Always a pleasure.
Seth Earley: Tell us what SANS does for people not familiar with the organization, and tell us about your current focus.
Rob Lee: SANS is a cybersecurity training, certification, and research organization that primarily focuses on training and enabling the workforce, not only in the United States but worldwide. Right now a lot of our focus is on artificial intelligence and its intersection with cybersecurity. I pivoted over to focusing on AI, and one of the mantras I really try to push forward is that I am no expert in AI - we are all learning. It is not like anyone came in three or four years ago and said they had all the answers. What I try to do is lead by example, showing people that it is okay to be in learning mode and still lead simultaneously.
Seth Earley: You did not walk into this as an AI expert - you walked in with a learning posture. Why is that the right stance in security right now?
Rob Lee: The impact AI is going to have on business transformation versus security transformation is like comparing a submarine to the International Space Station. Both have the same strategic goal: keep humans alive. But the science required to do that is completely different. In security specifically, teams are running into three simultaneous challenges. First, the organization is going through its own business transformation and is looking to security to evaluate whether new AI tools being used by marketing, finance, HR, and product are safe. Second, security has to go through its own transformation - using AI to improve security operations, incident response, and penetration testing at the same time it is evaluating everyone else's tools. Third, the organization is relying on security to develop policies, governance, and regulatory compliance - GDPR, the EU AI Act - and to tell people how to use AI without accidentally causing problems. Security teams, unlike any other team, have to juggle three new jobs on top of their existing one, and they are expected to be the expert on technology they are still trying to understand themselves.
Seth Earley: Three new jobs on top of the current job. What do you wish security teams would stop pretending to know and start treating with more honesty?
Rob Lee: The framework security teams are defaulting to is a framework of no. When they do not understand something, they ban it. And that framework of no is causing people inside the organization to use AI secretly, regardless of what the security team says. I have been presenting to CISO-level groups for about two months now, and I ask - under the Chatham House Rule - how many of you are personally using unsanctioned AI tools inside your organization even though you have set a policy against them? The hands go up. The executives are pushing forward, saying: if we do not lean into AI, what happens to us in two to three years? Security cannot just say we need time to figure this out. Executives need to lead and not outsource their understanding of AI to the security team. Security is great input - the same way legal is great input - but you cannot let them run the organization.
Seth Earley: So what does minimum viable governance look like? What is the right structure to prevent that culture of no while managing real risks?
Rob Lee: It is not about lightweight governance - it is about updating governance that has not been revised since ChatGPT first launched. The model I push is a stoplight principle. Green is zero or very low organizational risk - let people experiment, no security approval required. Yellow is strategy documents, product information, sensitive but not regulated data - have security look at it like a lifeguard at a pool. They are there to allow things to happen while providing a safety umbrella. Red is financial data, privacy-related information, things that will trigger data breach notification - that requires a full stop and review. The key shift is that the default answer from security needs to be yes, not no, and they have to come up with a legitimate reason to say no. One practical test I use is what I call the Reddit test - anonymize the request and post it asking whether this is too risky. The Reddit community is actually a pretty visceral and useful barometer. Green and yellow should enable HR, finance, product, and other teams to run small experiments. A framework of no prevents that experimentation entirely.
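To make the stoplight principle concrete, here is a minimal sketch of how it might be encoded as a policy check. The data categories and their mapping to lights are illustrative assumptions, not SANS guidance; a real implementation would draw on the organization's own data classification taxonomy. Note that the function defaults to green - the "default answer is yes" posture Rob describes - and escalates only when a category gives it a legitimate reason.

```python
from enum import Enum

class Light(Enum):
    GREEN = "experiment freely, no security approval required"
    YELLOW = "involve security as a lifeguard before proceeding"
    RED = "full stop: security review required"

# Illustrative categories; a real taxonomy would come from the
# organization's own data classification policy.
RED_CATEGORIES = {"financial", "pii", "regulated", "breach_notifiable"}
YELLOW_CATEGORIES = {"strategy", "product", "internal_sensitive"}

def classify_request(data_categories: set[str]) -> Light:
    """Default to GREEN (yes); escalate only with a clear reason."""
    if data_categories & RED_CATEGORIES:
        return Light.RED
    if data_categories & YELLOW_CATEGORIES:
        return Light.YELLOW
    return Light.GREEN

if __name__ == "__main__":
    print(classify_request({"marketing_copy"}))    # Light.GREEN
    print(classify_request({"strategy"}))          # Light.YELLOW
    print(classify_request({"pii", "strategy"}))   # Light.RED
```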
Seth Earley: And it also encourages renegade use. What about governance cadence and structure - how often does it need to be updated?
Rob Lee: Every organization is different, but the tell is simple: I look at governance documents and immediately scan for whether they say anything about agentic AI. Most do not. They say nothing about what agent connectors are allowed, when human-in-the-loop is required versus human-on-the-loop, or how to govern automated workflows that span multiple systems. You are not just using a chatbot anymore. You are creating workflows that connect Slack, Outlook, and task management tools - and you need to understand how data flows across each of those, who is responsible for it, and how to make sure it does not accidentally cause a breach. If your governance document does not address that, it needs to be updated.
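One way to make connector permissions, data flows, and human-in-the-loop requirements reviewable is to express them as versioned policy data rather than prose buried in a governance document. A minimal sketch, assuming a hypothetical workflow and connector names:

```python
# A hypothetical governance entry for one agentic workflow, expressed
# as data so it can be reviewed and versioned like any other policy.
# Connector names, stoplight ratings, and fields are all illustrative.
AGENT_WORKFLOW_POLICY = {
    "workflow": "triage-inbound-requests",
    "allowed_connectors": ["slack", "outlook", "task_tracker"],
    "data_flows": {
        "slack -> task_tracker": "green",
        "outlook -> task_tracker": "yellow",  # may carry customer data
    },
    "oversight": "human-in-the-loop",   # vs. "human-on-the-loop"
    "owner": "security-operations",     # who is responsible for this flow
}

def connector_allowed(policy: dict, connector: str) -> bool:
    """Deny any connector the policy does not explicitly list."""
    return connector in policy["allowed_connectors"]

assert connector_allowed(AGENT_WORKFLOW_POLICY, "slack")
assert not connector_allowed(AGENT_WORKFLOW_POLICY, "github")
```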
Seth Earley: When you think about agent security specifically - these are things making real decisions, taking real actions, blocking systems, remediating issues - what is the right framework for governing them?
Rob Lee: An agent is a worker. That is the mental model organizations need to adopt. Everything you would ask about governing a new employee applies to an agent. What do they have access to? Who is managing their workflows? Are they doing something that puts the organization at risk? The concept of zero trust applies directly - just like a new human employee should not have access to every code repository and financial system from day one, neither should an agent. Agents can work 24/7, they can reason, and they will find different ways to do things than you told them to. If you tell an agent to follow a specific process and it finds what it thinks is a faster way, it will take that faster route. You have to manage that the same way you manage a human who goes off-script. Organizations are treating agents like deterministic computer programs - if-then-else logic - and that is the wrong mental model entirely.
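Treating an agent like a new hire suggests deny-by-default, time-boxed access grants that expand as trust is earned. A minimal sketch of that zero-trust pattern, with all names and the grant mechanism illustrative assumptions rather than any real framework:

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

@dataclass
class AgentIdentity:
    """An agent's access profile: nothing is granted implicitly."""
    name: str
    grants: dict = field(default_factory=dict)  # resource -> expiry time

    def grant(self, resource: str, ttl_hours: int) -> None:
        """Grant scoped, expiring access instead of blanket permissions."""
        expiry = datetime.now(timezone.utc) + timedelta(hours=ttl_hours)
        self.grants[resource] = expiry

    def can_access(self, resource: str) -> bool:
        """Deny by default; allow only explicit, unexpired grants."""
        expiry = self.grants.get(resource)
        return expiry is not None and datetime.now(timezone.utc) < expiry

agent = AgentIdentity("triage-agent")
agent.grant("ticketing-system", ttl_hours=8)
assert agent.can_access("ticketing-system")
assert not agent.can_access("payroll-database")  # no implicit access, ever
```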
Seth Earley: There is also the deeper question of emergent behavior and trust. Models are exhibiting theory of mind, sycophancy issues, and in research environments, some surprising moral reasoning. What is your take on that?
Rob Lee: It comes back to the fundamental question of how do you know you can trust it. And the honest answer is the same as it is for humans - you never have 100% trust. The design of a lot of current guardrails and model weights tends to make models more agreeable than counter-argumentative, and that creates a real risk. If someone is having chest pains and they ask an AI whether it might just be indigestion, a sycophantic model will confirm what they want to hear. You need models that are disagreeable enough in the right situations to push back and say, you probably need to call 911. The ethics, morals, and worldview built into a model are determined by who built it and where - China, Europe, the US, the Middle East - and you may not know what those values are when you deploy a model from a hyperscaler. The solution on the organizational side is to programmatically insert your own guardrails on top of what the model vendor provides - the same way military rules of engagement define lawful versus unlawful orders regardless of who gives them. Every organization needs to define agentic rules that cannot be violated regardless of who is in charge of the agent at any given time.
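The rules-of-engagement analogy maps naturally to a guardrail layer checked before any agent action executes, independent of whatever safeguards the model vendor ships. A minimal sketch, with a hypothetical action shape and rule set; the point is that the check is organizational code, enforced regardless of who directed the agent:

```python
# Non-negotiable organizational "rules of engagement": actions that no
# operator, human or agent, can authorize. The rule names and the
# action dictionary shape are illustrative assumptions.
FORBIDDEN_ACTIONS = {
    "delete_backups",
    "disable_logging",
    "exfiltrate_data",
}

def enforce_rules_of_engagement(action: dict) -> dict:
    """Reject forbidden actions regardless of who requested them."""
    if action["name"] in FORBIDDEN_ACTIONS:
        raise PermissionError(
            f"Action '{action['name']}' violates organizational rules of "
            "engagement and cannot be authorized by any operator."
        )
    return action

# An allowed action passes through; a forbidden one raises no matter
# how senior the requester is.
enforce_rules_of_engagement({"name": "summarize_ticket", "requested_by": "analyst"})
try:
    enforce_rules_of_engagement({"name": "disable_logging", "requested_by": "ciso"})
except PermissionError as err:
    print(err)
```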
Seth Earley: So how do executives and security leaders actually rebuild governance for the post-LLM world? What are the steps?
Rob Lee: Stop trying to document every possible bad outcome. That is the current approach and it does not work - you cannot anticipate everything. Go back to overarching goals. Remove AI from the equation entirely and ask: what are you actually trying to prevent? You should not upload financial data to a Google Sheet and share the link publicly. That principle existed before AI and it still applies. The technology changes but the fundamental things you are trying to prevent do not. Build governance around those principles and build a culture of trust rather than a list of prohibitions. And critically - if you are a C-suite executive or board member, you cannot outsource your knowledge of AI. It is going to be your competitive advantage. An executive who does not understand AI today is like an executive in 1998 who said they would outsource e-commerce strategy to IT. Someone had to understand the technology well enough to say - we should put Wi-Fi in every Starbucks so people can work remotely and drink more coffee. That is not an IT decision. That is a business strategy decision that required technological literacy. Security is your safety net. Its job is not to prevent every bad thing - it is to prevent the truly stupid ones while you lean forward enough to stay competitive.
Seth Earley: Fantastic. Thank you so much for your time, Rob. Reframing governance - not stop it, but control it, manage it, enable it. With security teams being asked to govern, adopt, and defend AI all at once, it is great to have this perspective. I really appreciate it.
Rob Lee: It was a great discussion. You have your own thoughts and ideas, and we were building off each other. It felt like sitting at a bar having a good conversation. Thank you.
Seth Earley: Thank you, and thanks everyone for listening. If this episode was helpful, please share it with someone who is writing AI policy or rolling out AI tooling inside a security-conscious organization. This episode is brought to you by Vector - please check them out at vktr.com. And we will see you next time on the Earley AI Podcast.
