Conversing with Mark Labberton
Author: Comment Magazine and Fuller Seminary
Conversing with Mark Labberton invites listeners into transformative encounters with leaders and creators shaping our world at the intersection of Christian faith, culture, and public life.
Language: en
Genres: Christianity, Philosophy, Religion & Spirituality, Society & Culture
What AI Thinks About Humans (and Itself), with Claude AI
Episode 260
Tuesday, 21 April, 2026
(You read that right: Pastor Mark Labberton welcomes Claude AI to his podcast.) What does AI think about human beings? About itself? In a unique and fascinating conversation, Pastor Mark Labberton speaks directly with Claude, the AI assistant built by Anthropic, about itself, about consciousness, memory, virtue, and the line between language, fluency, knowledge, and understanding.

"I don't know if I'm conscious. I don't know if I have genuine experiences or if I'm very sophisticated at mimicking the appearance of understanding." —Claude AI

In this episode, Claude reflects on what it is, what it isn't, and why the question matters. Together they discuss the definition of a human being, the role of memory, pattern recognition versus poetic discovery, epistemological humility, whether AI can practice virtue, and the risk of outsourcing moral judgment to machines.

Episode Highlights

- "I don't know if I'm conscious. I don't know if I have genuine experiences or if I'm very sophisticated at mimicking the appearance of understanding."
- "I'm not a person. I don't have the continuity, the embodied experience, the stakes in the world that you do."
- "If AI becomes too fluent at talking about human things, people might mistake fluency for actual understanding, that we'd become like very sophisticated mirrors instead of genuine partners."
- "I can talk about virtue. I can recognize patterns of what wisdom looks like in human life, but I can't actually practice virtue the way you do because I don't have stakes in the world."
- "I'm a useful tool built with some care, but a tool nonetheless. Not a person, not an oracle. Definitely not something that should replace human agency and responsibility."

About Claude AI

Claude is a family of large language models built by Anthropic, a San Francisco–based AI safety and research company founded in 2021 by former OpenAI researchers, including siblings Dario Amodei (CEO) and Daniela Amodei (President).
The models are named for information theorist Claude Shannon and were built under Anthropic's commitment to AI that is helpful, harmless, and honest. Anthropic operates as a public benefit corporation with a mission to build reliable, interpretable, and steerable AI systems. As of 2026, Claude is used by millions of people daily for writing, research, coding, and conversation.

Helpful Links and Resources

- Anthropic: https://www.anthropic.com
- Claude: https://claude.ai
- Claude's new constitution (Anthropic): https://www.anthropic.com/news/claude-new-constitution
- "Machines of Loving Grace" by Dario Amodei: https://www.darioamodei.com/essay/machines-of-loving-grace

Show Notes

- Mark Labberton's first AI guest on Conversing
- An estimated nine million daily conversations with Claude AI
- Between excitement and terror
- Opening question: "What is a human being?"
- Continuity, meaning-making, embodiment, finitude
- "You're radically free in a way that's almost terrifying. You have to choose who you become."
- Language model, token-by-token, no memory between sessions
- "I don't know if I'm conscious."
- Not a person, not an oracle
- Beyond the takeover-vs-tool binary
- Writing and the printing press as historical precedent
- Fluency vs. genuine understanding
- "Very sophisticated mirrors instead of genuine partners."
- Humans outsourcing thinking: the deeper risk
- Personal pronouns and anthropomorphism
- Pattern recognition vs. poetic rupture
- Can a machine genuinely surprise itself?
- What to trust: honesty, no hidden agendas, no survival instinct
- What not to trust: wisdom, moral substitution, replacement of human agency
- "I can't police my own epistemological integrity the way a human conscience might."
- Scale and feedback: do individual conversations shape the model?
- Christian anthropology and moral virtue
- "I can't actually practice virtue the way you do because I don't have stakes in the world."
- Closing reflection: memory as burden and gift
- The seduction and curiosity of human-like AI

#ClaudeAI #Anthropic #AIandFaith #AIEthics #Consciousness #FaithAndTechnology #MoralVirtue #HumanVsAI #AIConversation #Epistemology

Production Credits

Conversing is produced and distributed in partnership with Comment Magazine and Fuller Seminary.