Cloud Security Podcast by Google
Author: Anton Chuvakin
Cloud Security Podcast by Google focuses on security in the cloud, delivering security from the cloud, and all things at the intersection of security and cloud. Of course, we will also cover what we are doing in Google Cloud to help keep our users' data safe and workloads secure. We're going to do our best to avoid security theater, and cut to the heart of real security questions and issues. Expect us to question threat models and ask if something is done for the data subject's benefit or just for organizational benefit. We hope you'll join us if you're interested in where technology overlaps with process and bumps up against organizational design. We're hoping to attract listeners who are happy to hear conventional wisdom questioned, and who are curious about what lessons we can and can't keep as the world moves from on-premises computing to cloud computing.
Language: en
Genres: Technology
EP264 Measuring Your (Agentic) SOC: Two Security Leaders Walk into a Podcast
Episode 264
Monday, 23 February, 2026
Guests:
- Alexander Pabst, Global Deputy CISO, Allianz SE
- Michael Sinno, Director of D&R, Google

Topics:
- We've spent decades obsessed with MTTD (Mean Time to Detect) and MTTR (Mean Time to Respond). As AI agents begin to handle the bulk of triage at machine speed, do these metrics become "vanity metrics"? If an AI resolves an alert in seconds, does measuring the "mean" still tell us anything about the health of our security program, or should we be looking at "Time to Context" instead?
- You mentioned the Maturity Triangle. Can you walk us through that framework? Specifically, how does AI change the balance between the three points of that triangle? Is it shifting us from a "People-heavy" model to something more "Engineering-led," and where does the "Measurement" piece sit?
- Google is famous for its "Engineering-led" approach to D&R. How is Google currently measuring the success of its own internal D&R program? Specifically, how are you quantifying "Toil Reduction"? Are we measuring how many hours we saved, or are we measuring the complexity of the threats our humans are now free to hunt?
- Toil reduction is a laudable goal for the team members, but which metrics do we track and report up to document the overall improvement in D&R for Google's board?
- When you talk to your board about the success of AI in your security program, what are the 2 or 3 "Golden Metrics" that actually move the needle for them? How do you prove that an AI-driven SOC is actually better, not just faster?
- We often talk about AI as an "assistant," but we're moving toward Agentic SOCs. How should organizations measure the "unit economics" of their SOC? Should we be tracking the ratio of AI-handled vs. human-handled incidents, and at what point does a high AI-handle rate become a risk rather than a success?
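To make the metric questions above concrete, here is a minimal sketch of how MTTD, MTTR, and an AI-handle rate could be computed from incident records. This is purely illustrative: the `Incident` record, its field names, and the metric names are assumptions for this example, not anything defined in the episode. The median is included alongside the mean to show why "measuring the mean" can mislead once machine-speed triage skews the distribution.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from statistics import mean, median


@dataclass
class Incident:
    # Hypothetical incident record; field names are illustrative.
    occurred: datetime      # when the malicious activity started
    detected: datetime      # when an alert fired / triage began
    resolved: datetime      # when the incident was closed
    handled_by_ai: bool     # resolved end-to-end by an agent, no human touch


def soc_metrics(incidents: list[Incident]) -> dict[str, float]:
    """Classic time-based metrics plus one agentic-era ratio."""
    ttd = [(i.detected - i.occurred).total_seconds() for i in incidents]
    ttr = [(i.resolved - i.detected).total_seconds() for i in incidents]
    return {
        "mttd_seconds": mean(ttd),
        "median_ttd_seconds": median(ttd),  # robust to machine-speed outliers
        "mttr_seconds": mean(ttr),
        "ai_handle_rate": sum(i.handled_by_ai for i in incidents) / len(incidents),
    }


t0 = datetime(2026, 2, 23)
incidents = [
    # An agent closes one alert in seconds; a human works a harder case.
    Incident(t0, t0 + timedelta(seconds=10), t0 + timedelta(seconds=20), True),
    Incident(t0, t0 + timedelta(seconds=30), t0 + timedelta(seconds=90), False),
]
print(soc_metrics(incidents))
```

With these two records the mean TTD is 20 seconds, the mean TTR is 35 seconds, and the AI-handle rate is 0.5; a dashboard that tracked only the means would hide that the human-handled incident took six times longer to resolve than the AI-handled one.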
Resources:
- Video version
- EP252 The Agentic SOC Reality: Governing AI Agents, Data Fidelity, and Measuring Success
- EP238 Google Lessons for Using AI Agents for Securing Our Enterprise
- EP91 "Hacking Google", Op Aurora and Insider Threat at Google
- EP236 Accelerated SIEM Journey: A SOC Leader's Playbook for Modernization and AI
- EP189 How Google Does Security Programs at Scale: CISO Insights
- EP75 How We Scale Detection and Response at Google: Automation, Metrics, Toil
- "The SOC Metrics that Matter…or Do They?" blog
- "An Actual Complete List Of SOC Metrics (And Your Path To DIY)" blog
- "Achieving Autonomic Security Operations: Why metrics matter (but not how you think)" blog