Earley AI Podcast
Author: Seth Earley

In this podcast, host Seth Earley invites a broad array of thought leaders and practitioners to talk about what's possible in artificial intelligence, as well as what is practical in the space as we move toward a world where AI is embedded in all aspects of our personal and professional lives. They explore what's emerging in technology, data science, and enterprise applications for artificial intelligence and machine learning, and how to get from early-stage AI projects to fully mature applications. Seth is founder and CEO of Earley Information Science and the award-winning author of "The AI-Powered Enterprise."

Language: en-us
Genres: Business, Technology
Earley AI Podcast - Episode 85: AI Security, Shadow IT, and the Governance Reset with Rob Lee
Episode 85
Friday, March 27, 2026
Why Security Teams Are Being Asked to Do Three New Jobs - and What to Do About It

Guest: Rob Lee, Chief AI Officer and Chief of Research at SANS Institute
Host: Seth Earley, CEO at Earley Information Science
Published on: March 27, 2026

In this episode, Seth Earley speaks with Rob Lee, Chief AI Officer and Chief of Research at SANS Institute, about why AI governance is broken in most organizations - and what it actually takes to fix it. They explore why security teams are being asked to simultaneously govern, adopt, and defend AI, why the default "framework of no" is driving shadow IT rather than preventing risk, and what a practical reset of AI governance actually looks like. Rob also shares why agents should be treated like workers rather than software, and why executives cannot afford to outsource their understanding of AI to anyone else.

Key Takeaways:
- Security teams are now being asked to do three new jobs at once: evaluate AI tools for the organization, drive their own AI transformation, and manage governance and regulatory compliance.
- The default "framework of no" does not prevent AI use; it drives it underground, creating shadow IT that is far harder to monitor and control than sanctioned tools.
- Governance needs a stoplight model: green means experiment freely, yellow means involve security as a lifeguard, red means stop - with the default answer being yes unless there is a clear reason to say no.
- AI governance documents written before generative AI arrived are already outdated; most say nothing about agentic workflows, human-in-the-loop requirements, or connector permissions.
- Agents should be treated like workers, not software. They reason, improvise, and operate 24/7, which means they require the same zero-trust principles, oversight structures, and ethical guardrails as human employees.
- Executives cannot outsource their understanding of AI to security teams; AI literacy at the C-suite level is a competitive requirement, not an optional capability.
- Good governance is not about documenting every possible bad outcome; it is about establishing overarching goals and building a culture of trust, with enough guardrails to prevent the truly stupid risks.

Insightful Quotes:
"The framework security teams are using is a framework of no. And that framework of no is causing people to use AI secretly, regardless of what the security team says." - Rob Lee

"An agent in the future - and some organizations are already treating it this way - is a worker. Everything you ask about governing agents, replace that with a human who just got hired. The same rules apply." - Rob Lee

"You can't automate what you don't understand - and with agents, the stakes are even higher. An agentic mistake isn't a wrong paragraph, it's a blocked critical system." - Seth Earley

Tune in to discover how security and executive leaders can move from a governance posture of restriction to one that enables innovation, manages real risk, and keeps organizations competitive in the age of agentic AI.

Links:
LinkedIn: https://www.linkedin.com/in/leerob/
Website: https://www.sans.org
Sponsor: Vector - https://www.vktr.com/

Thanks to our sponsors:
VKTR
Earley Information Science
AI Powered Enterprise Book