
The Life Science Effect

Author: Steven A. Vinson, PMP

Have you ever thought about who the people are behind life-saving breakthroughs? How did they get started in their careers? Why did they choose the Life Sciences? What effect do they hope to cause? These are the questions we explore on The Life Science Effect. Gain insights straight from thought leaders, entrepreneurial game-changers, and business executives leading the Life Sciences. Host Steve Vinson explores what it really takes to be effective in this industry as a leader and innovator, with a special focus on what's happening here in the Heartland. We aim to inspire, equip, and empower the next generation of Life Science experts through purpose-driven conversations. Join us weekly as we talk about what happens behind the science and get to know the people who make it happen.

Language: en

Genres: Business, Careers, Management


AI Validating AI: The Future of Compliance in Life Sciences
Tuesday, 24 February, 2026

Artificial intelligence is moving fast—but in regulated life science environments, speed without trust is a non‑starter. In this episode of The Life Science Effect, host Steven Vinson reacts to a recent EY article on AI validation in pharmaceutical and biotech settings and explores a fascinating question: Can AI actually be used to validate other AI systems?

Steven walks through how regulators are beginning to rethink traditional validation models to accommodate AI's non‑deterministic nature, where the same input can produce different, but still acceptable, outputs. Drawing parallels to earlier industry shifts like electronic records, he explains why clear regulatory frameworks are essential for innovation without compromising patient safety. The conversation dives into EU‑specific regulations such as GMP Annex 11, Annex 22, and the EU AI Act, while contrasting Europe's proactive approach with the more hands‑off posture emerging in the U.S. Along the way, Steven offers practical insight for entrepreneurs, engineers, and investors navigating AI in regulated environments, and why "robots testing robots" might be less science fiction than it sounds.

EY ARTICLE: GxP and AI tools: Compliance, Validation and Trust in Pharma | EY - Switzerland

MUSIC used under the Creative Commons Attribution 4.0 International License: Acid Jazz by Kevin MacLeod; Acoustic Motivation by Corna Media

Key Discussion Points
- Why AI validation is different from traditional computer system validation
- What "acceptable ranges of output" mean for regulated AI systems
- Using AI to validate AI: hype vs. reality
- Overview of EU regulations: GMP Annex 11, Annex 22, and the EU AI Act
- Lessons from the transition from paper records to electronic systems
- Why regulatory clarity enables innovation in pharma and biotech

Notable Quotes
"AI is a tool—and tools still have to be validated."
"With AI, different outputs are okay, as long as they fall within what's acceptable."
"I just love the idea of robots testing robots."
"ChatGPT does not equal AI."
"AI is a fantastic tool, but it's not the solution to every problem."

Call to Action
If you're working with AI in regulated environments—or thinking about it—subscribe to The Life Science Effect, leave a review, and share this episode with your team. Want to join the conversation? Email steven.vinson@bpm-associates.com or visit thelifescienceeffect.com.

Transcript

[00:00:01] You are about to experience The Life Science Effect, Season 2, brought to you by our presenting sponsor, BPM Associates.

[00:00:16] Extraordinary people. Relationships that matter. Important change for a better world. The joy of belonging. Life, science, leadership.

[00:00:29] A few years ago, when we all started learning about ChatGPT and were amazed by it, my first thought was: how can this be used for GMP validation in the pharmaceutical and medical device industries? For testing the equipment that makes products, and the systems used to manage manufacturing and R&D.

[00:00:57] I asked a colleague who works in the quality and regulatory space for pharma and medical devices, "What are you hearing? What are you seeing?" He said, "AI is a tool, and you have to validate the tools you use for testing."

[00:01:15] That led to a bigger question. He had asked someone from the FDA at a conference: how do you validate an AI when most of us—even the people who design AI—aren't 100% sure what's going on inside it?

[00:01:31] Fast forward a few years. I've been reading articles and digging into this topic, and I came across a really interesting piece—more like a blog post—on EY's website. I'll link to it in the show notes.

[00:01:47] It's written by Martin Blank, a partner at EY in Switzerland, focused on life science regulatory work. EY is one of the large global consulting firms, similar to Accenture. Because of his background, the article has more of an EU perspective, but much of it applies to the U.S.
as well—though the U.S. may be a bit behind.

[00:02:09] The article is titled "AI Validation in Pharma: Maintaining Compliance and Trust." It caught my attention for a few reasons. I was actively looking for examples of how AI is being used, and I wanted something relatively recent. This was published in October 2025, and I'm recording this in early 2026, so it felt timely.

[00:02:32] What really grabbed me was that he talks about using AI to validate AI.

[00:02:46] It's kind of like robot-on-robot violence.

[00:02:50] I, for one, welcome our robot overlords.

[00:02:58] I read the article and thought I'd share my reactions with you. The big takeaway right away is that AI can absolutely be validated. That answers the question from a few years ago. The real question is: how?

[00:03:25] In traditional computer system validation, you provide a specific input and expect a specific output that matches exactly. With AI, you might give the same input multiple times and get different outputs.

[00:03:48] Regulators are saying that's acceptable—as long as you expect a range of outputs and all of them fall within what's acceptable for the intended use.

[00:04:03] For example, if you ask a chatbot whether there are interactions between ibuprofen and a GLP‑1 drug, you'd expect some variation in how the answer is presented. But every answer should reflect the real interactions. It shouldn't hallucinate or invent risks that don't exist.

[00:04:39] The same principle applies whether AI is supporting research, identifying drug candidates, or helping design a testing strategy for a drug or a piece of equipment.

[00:04:56] I found that fascinating. And while that wasn't the only focus of the article, it's the part that really excited me.

[00:05:06] First, this absolutely can be done if you understand the regulations. Second, I just love the idea of robots testing robots.

[00:05:29] So what are the regulations involved?
In the EU, there's GMP Annex 11 for computerized systems—traditional systems, what we might think of as pre‑2020 technology. There's also GMP Annex 22, which is specific guidance for AI used in GMP environments.

[00:05:55] Then there's the EU AI Act, which governs AI more broadly—not just pharma and medical devices.

[00:06:13] The U.S. has been discussing similar approaches, but with recent political changes, it appears the U.S. is choosing to let industry largely govern itself. The EU, on the other hand, has taken a proactive approach and has been rolling out AI regulations in phases since around 2023.

[00:06:30] They started with lower‑risk applications and are working toward higher‑risk ones. That may sound backward, but it allows regulators to learn and adapt as complexity increases.

[00:06:47] You also still have to comply with GDPR. Even if you're using AI, privacy and personal data protections don't go away.

[00:07:10] And of course, GxP standards still apply globally—FDA, EU, Asia‑Pacific, Brazil. There's a fair amount of harmonization across regions, and those expectations exist regardless of what tool you're using.

[00:07:20] Is this a big deal? I think it's a very big deal.

[00:07:35] I remember in the late 1990s, when electronic records started replacing paper, the industry was asking for a regulatory framework. The rules were written as if everything would stay on paper forever. It took time, but that framework eventually came.

[00:08:15] I'm glad the EU is getting out in front of AI. Regulators are focused on patient safety and drug effectiveness. They want more effective therapies, more of them, and they want them to be safe. AI presents a real opportunity to innovate, but it has to be done within a clear regulatory structure.

[00:08:44] If the U.S. decides it wants to participate more actively in this future, it can likely build on much of what the EU has already developed.

[00:09:04] I know I've been talking a lot about AI lately.
I promise there's more going on than just AI.

[00:09:18] As someone recently said on a panel discussion, ChatGPT does not equal AI. AI has been around for a long time. What's new is the speed and scale at which it's advancing.

[00:09:33] As Raul Zavaleta once told me, AI is just a tool. A fantastic tool—but not the solution to every problem.

[00:09:46] So yes, I'm on an AI kick, and it probably won't stop anytime soon. But I will mix in other topics too.

[00:10:05] If you're an AI expert or someone doing real work in this space—especially in regulated environments—I'd love to talk. This EY article focused more on how to approach AI, not how people are actually using it. I'd love to hear what problems are being solved and how teams are navigating the regulatory landscape.

[00:10:24] If you want to continue the conversation, email me at steven.vinson@bpm-associates.com or visit thelifescienceeffect.com.

[00:10:35] You can find us on all the major platforms. Don't forget to subscribe, leave a review, and do all the internet things.

[00:10:41] Thanks for listening. Stay strong out there.
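The episode contrasts traditional computer system validation (one specific input, one exactly matching expected output) with AI validation, where repeated runs of the same input may vary as long as every output falls within a predefined acceptable range. The acceptance logic can be sketched in a few lines of Python; all names and the toy chatbot below are illustrative assumptions, not drawn from any regulatory framework or from the EY article.

```python
import random

# Illustrative sketch only: function names and logic are hypothetical,
# not taken from any validation standard or regulatory guidance.

def validate_exact(system, test_input, expected):
    """Traditional computer system validation: a specific input must
    produce one specific output that matches exactly."""
    return system(test_input) == expected

def validate_within_range(system, test_input, acceptable, runs=10):
    """AI-style validation: run the same input repeatedly; every output
    must fall within the predefined set of acceptable outputs."""
    return all(system(test_input) in acceptable for _ in range(runs))

# Toy non-deterministic "chatbot" that phrases the same correct answer
# in different ways, like the ibuprofen/GLP-1 example in the episode.
def toy_chatbot(question):
    return random.choice([
        "No known interaction.",
        "No interactions have been reported.",
    ])

# Every acceptable phrasing reflects the same underlying fact; an answer
# outside this set (e.g. a hallucinated risk) would fail validation.
acceptable_answers = {
    "No known interaction.",
    "No interactions have been reported.",
}

print(validate_within_range(
    toy_chatbot, "Does ibuprofen interact with a GLP-1 drug?",
    acceptable_answers))  # True
```

The point of the sketch is that the acceptance criterion shifts from equality to set membership: variation between runs is fine, but every run must land inside the range defined up front for the intended use.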

 
