The ThinkND Podcast
The New AI, Part 12: Catholicism's Humanist Perspective on AI
Episode Topic: Catholicism's Humanist Perspective on AI (https://go.nd.edu/36b031)
The rapid ascent of artificial intelligence is often framed as a purely technical or economic challenge, yet its most profound impact lies in how it forces us to confront the essence of our own identity. By viewing AI through a humanistic and theological lens, we move beyond the binary of “utopia versus apocalypse.” Join Paul Scherz ’10, M.T.S., ’14 Ph.D., Notre Dame’s Our Lady of Guadalupe Professor of Theology, to explore how these tools can be oriented toward the mirror of our spiritual hunger and our ultimate participation in life with God.
Featured Speakers:
- Paul Scherz ’10, M.T.S., ’14 Ph.D., University of Notre Dame
Read this episode's recap over on the University of Notre Dame's open online learning community platform, ThinkND: https://go.nd.edu/daec5e.
This podcast is a part of the ThinkND Series titled The New AI.
Thanks for listening! The ThinkND Podcast is brought to you by ThinkND, the University of Notre Dame's online learning community. We connect you with videos, podcasts, articles, courses, and other resources to inspire minds and spark conversations on topics that matter to you — everything from faith and politics, to science, technology, and your career.
- Learn more about ThinkND and register for upcoming live events at think.nd.edu.
- Join our LinkedIn community for updates, episode clips, and more.
Graham Wolfe: Welcome, everyone, to The New AI Project podcast. My name is Graham Wolfe. I'm a senior at the University of Notre Dame and the program director of The New AI Project. Today we're excited to welcome Dr. Paul Scherz, professor in the University of Notre Dame's Department of Theology, who holds a Ph.D. in moral theology and is an expert on the emerging intersection of technology, morality, and theology. We're thrilled to have you on to apply some of our research into AI's sociotechnical impact to this fascinating topic area. It's a pleasure having you here. Is there anything else you'd like to add to that intro, Paul?
Paul Scherz: No, it's a pleasure to be here. Thanks for inviting me on.
Graham Wolfe: Absolutely, it's great to have you. On The New AI Project side, we've done some great work over the past three years developing a team of what we call student experts, each writing in their own domain of AI, whether that's Tech Titans, AI at Work and Labor, Taming AI, AI and Life, or Research Revelations. There's really something for everybody, and we encourage our listeners to check out The New AI Project on LinkedIn after this podcast. Joining us from our team is student expert Will Mansour, who has done some great work researching what we call Taming AI: any means by which society can rein in and exercise oversight over the potential consequences of AI, whether through regulation, governance, or, in this case, moral or theological scrutiny. I'll pass it over to you for your introduction, Will.
Will Mansour: That sounds great. Graham, thanks for having me back. I'm super excited for this conversation; I think it'll be a lot of fun.
Graham Wolfe: Thanks, Will. We'll jump right into things with some background. Paul, what drove you to study this intersection of theology and technology more broadly, and AI specifically?
Paul Scherz: My initial research interest, actually my first career, was in genetics. I did my undergrad in genetics, then a Ph.D. and a postdoctoral fellowship in the field, and this has been a tremendous period for the development of genetic and other biotechnologies. That was my initial interest in the technological sphere. But of course, lots of ethical issues arise out of bioethics, genetics, genetic manipulation, and developmental biology. These questions were swarming around when I was an undergrad and a grad student, and wanting to explore them in more depth is what drove me to go back and get a Ph.D. at Notre Dame in moral theology. A large part of my career has been looking at bioethics and the ethics of biotechnology. But when you're thinking about genetics, a large part of what we've been doing for the last 25 years has been analyzing genomic data. A lot of genetics has shifted to a data-analytic framework: using the data from genetics, combining it with medical records to predict people's risks of certain diseases, and analyzing large data projects. That's what got me into computer technology and data ethics, just how it was arising out of genetics. And then, about ten years ago, I saw that we were moving beyond the simpler algorithmic mode of data analysis, and machine learning was starting to play a bigger and bigger role in science and medical development. So my research on data analytics, and a lot of the discussion of how risk was being identified, slid naturally into AI.
I started talking about AI in the fields of medicine and genetics, and as people saw me doing that, and as AI was becoming such a huge field, I started getting invited to give talks on more basic features of AI: What is it doing? What does it mean for theology? Then in 2020, I was invited by the Dicastery for Culture and Education, one of the Vatican's departments (at that point, it was the Pontifical Council for Culture), to join a group they were developing. They had decided they needed more expertise in AI, and it seemed like there were a lot of people in North America who had that expertise. So they pulled together an AI research group to try to figure out: What does AI mean for the Catholic tradition? What does the Catholic tradition have to say about AI and its ethics? We were supposed to meet for the first time in March of 2020, but as you remember, the world shifted that month, so we didn't hold that in-person meeting; instead we started to meet via Zoom. Over two years, every month we would discuss a different topic in AI and explore the ethics, which eventually led to us co-writing a book about Catholic theology and AI, Encountering Artificial Intelligence. At the same time as I was working on that collaborative project, I was also continuing my research on medicine, and writing and speaking on other issues around AI and other new technologies.
Graham Wolfe: Yeah, I think that makes a lot of sense. Like you say, there's the shift from a focus on genetics and bioethics, and then the commonalities between introducing algorithms and data analysis into that space and this new-age introduction of algorithms into so many other parts of life. That's what I'm picking up on, and it's certainly a very fascinating journey you've had. Will, I'm curious, does anything stand out to you there? Anything surprising, or anything that makes a lot of sense?
Will Mansour: Yeah, it's a really cool journey, and it sounds like you've been around the block in terms of having your hand in a bunch of different areas of interest, which leads me to my next question. What was it about theology specifically that you thought was so important to include in the conversation about AI? When I think about analyzing artificial intelligence, people oftentimes look at history, and a lot of philosophers are involved, which seems very close to what you studied with moral theology. But why theology specifically?
Paul Scherz: I turned to the field of theology from genetics because, first, it was very important for my own faith life and religious exploration to understand how faith and these technologies interacted. But as I studied it more, I realized how much theology, and moral theology in particular, has to offer to these conversations. In some of his addresses since becoming Pope, Pope Leo has focused on the resources the Catholic tradition has to offer. He points to two, though I think there are many more. First is our social teaching: we've developed a great set of resources around how we envision society, how we envision constructing just labor, just social institutions, just relationships. The other aspect he points out is what he calls the anthropological vision of the Church. From anthropos and logos, that means the study of the human, the understanding of the human person. Throughout its history, but especially in the aftermath of Vatican II, the Church has focused very heavily on understanding human persons as creatures who relate to God and to others with reason and free will, putting forward a very rich perspective on the human person. What we're finding with AI is that it's raising a lot of fundamental questions about the human person. What is it to be a person? What is it to think? What is it to be in relationship? Can you be in relationship with a machine in the same way as with a person? What is our work in the world when a lot of the things we've done can be automated? These fundamental questions are ones that the Catholic tradition has thought about and that theology has dealt with in depth. And we also see a lot of secular actors asking these questions.
So you get businesses coming to the Church and looking for guidance on these issues. That's one of the things that's been amazing: how many tech leaders are trying to be in conversation with the Church, because of its moral authority, but also because of the depth of thought it has given to what it means to be human.
Will Mansour: That's super interesting, and it makes a lot of sense. In my most recent article for The New AI Project, I was talking about how moral panics throughout history have led to those sorts of questions and conversations about what we value as humans and what we seek to preserve as we continue to evolve. So it makes a lot of sense that theology would have a lot to contribute to those conversations. Graham, I don't know what you took from that.
Graham Wolfe: Yeah, I saw a lot of overlap with what you were writing about moral panic, and with that fundamental question of how much of a threat algorithms pose to the human element of the human experience: human exceptionality, human uniqueness, and so on. It's a really jarring kind of question that a lot of us never thought we'd have to encounter, and it's refreshing to hear that the Church has done a lot of work on it in advance of this new wave, and that theologians already have the perspective to absorb this new shock. I'm curious where you fall in these debates. How much of a threat does this actually pose to human exceptionality? What is the real threat, and what's its magnitude, from the introduction of algorithms into day-to-day life? Maybe you could point to some specific research, or just feel free to say where you might fall in these big emerging debates.
Paul Scherz: Our book from the AI Research Group, Encountering Artificial Intelligence, and, following and citing it, the recent Vatican document Antiqua et Nova, are very clear in pointing out the differences and distinctions between human intelligence and what we might call machine intelligence. Fundamental to this is the issue of consciousness: machines don't have consciousness, and philosophers have made this argument for a long time. There's also the issue of freedom and the determinism of the machines. They can do new things, they can create, but they're probabilistically determined; they're statistically determined machines. And there's the way people engage in understanding and abstraction, the way our bodies are involved. There are clear distinctions. For all the wonderful things artificial intelligence can do, and I don't want to deny that it's a very valuable and amazing tool, it's not human intelligence. It's not intelligence as we've classically thought of it. So I'm not afraid that we're going to get an AGI that is superintelligent and is going to conquer the world and kill us all. My bigger fear is that we're going to overestimate AI and deploy it in ways that either cause disruption to society, because we don't recognize its fallibility and limitations, or degrade human experience and what the human brings to relationships. One of the big dangers we see in medicine is the broad use of medical chatbots. People really want to deploy that; OpenAI just released a health-focused ChatGPT offering. And there are some thought leaders who think, well, why do we need a doctor when you can get diagnostics from your AI chatbot?
What that misses is the role the human factor plays in healing: the relationship between the doctor or nurse and the patient. That's actually a really important part of healing and of making a person whole, not just treating the illness, but helping them understand their total social situation, and simply letting them feel that someone cares for them in a moment of suffering. And in work, I think there are a lot of ways we can give up human creativity and the free use of virtues like prudence. What makes jobs fun is being somewhat autonomous and using your skills to do things in the world. We can remove those and hand them over to an AI that just directs people, making all jobs like that of the Amazon fulfillment-center worker who is directed totally by the machine. So the danger is less that we're going to get a superintelligent technology that kills us all, and more that we're going to use AI in ways that degrade specifically human experiences or ignore the specific elements that humans bring to them.
Graham Wolfe: That's really interesting. I want to hone in a little on what I've heard called the transhumanist perspective on AI. You mentioned the AGI, apocalyptic kind of perspective, and it sounds to me like your view is that the apocalyptic scenario itself is not the threat; rather, a widespread belief in AI's transcendent nature would be more of the threat. What we see from a lot of these tech leaders is a fixation on a transcendent view of AI, whether it's going to lead to immense prosperity or to an apocalypse, and they're actively spreading that narrative as much as they can through rhetoric of all kinds: on Twitter, through manifestos, to investors, and so on. A lot of people are starting to buy in and think this is going to fundamentally change our world, for the much better or the much worse. How do you feel about that kind of narrative taking hold? Is there another way of explaining it, or do you think these tech leaders really believe it? And if that belief really catches hold, what might be the consequences?
Paul Scherz: I've talked to some of these executives, and some of them really are true believers that these things are close to thought. For a lot of them, though, I think it's somewhat of a motivated belief, because you need to get trillions of dollars of investment right now, and you've got to sell that this is going to be absolutely transformative. So there are economic reasons that lead to the motivation to believe these things. But people do hold true beliefs here, and in part those beliefs stem from a somewhat degraded view of human intelligence. Look at the very first moments of AI: Alan Turing's famous 1950 paper, where he proposes the imitation game, the so-called Turing test. Basically, what he's saying is that if a computer can fool you into thinking it's human, then it's thinking like a human. Well, computers have been able to do that kind of fooling for a long time; people engaged with ELIZA, a very simple chatbot program from the 1960s, in a very human-like way. The reason Turing was thinking that way is that he subscribed to a mode of thinking about human intelligence that was fundamentally behavioristic: what goes on in our heads doesn't matter, or it's just something like a calculation, and all that matters is our behavior. We've seen this in a lot of neuroscience, where people say consciousness doesn't exist, that all that's really going on is calculation.
If you think about human intelligence in that way, then it's very easy to see why a machine could be intelligent. But if you think of human intelligence in terms of consciousness, of what the tradition calls intellect, something like contemplation, in terms of passion and desire and all of these elements, then it's almost impossible to see how we get from machine intelligence to human intelligence. So, going back to this question of anthropology, there's a mistake in anthropology going on for a lot of tech executives: a very reductionist mode of thinking that leads them to this thinking about AGI. Now, will AI cause huge problems? It could, if we put things in its charge. If you were to put an AI in charge of the power grid, a glitch could bring the grid down, and that could cause huge problems for society. But I don't think we're going to have problems because of an AGI. There are also economists and political scientists who look to prior technologies and say that even rapid development of these tools won't necessarily change our society as fast as the tech executives say. There's a great article online, "AI as Normal Technology," by Narayanan and Kapoor, who wrote the book AI Snake Oil (that's the name of their blog as well). What they say is that if you look at older technologies, even something as transformative as electricity, it's not as if you develop electricity and all of a sudden society is transformed. It takes a long time. You have to develop infrastructure, businesses have to learn how to use the technology, and you need to determine how to use it well.
So with every new technology, there is a long lag between when it's introduced and when it actually causes transformative effects. Now, I think there's a good chance AI will transform society in many ways. But the hopeful thing is that we have many years to figure out: how do we control it, how do we regulate it, all those questions you're asking, Will. How do we do these things in ways that make sure AI supports and enhances human flourishing rather than undermining it, and so that it doesn't cause disasters? And there are different projections as to what it might do, even economically.
Will Mansour: Definitely. I like how you brought it back toward human flourishing rather than just the threat it poses, which leads me to my next question. I think a lot of people perceive the church as an antiquated, conservative institution that is hesitant to accept technological and social changes. Whether or not that is true is another debate, but it's fair to say that's a commonly held perception of the church. In what ways is the church's AI policy fighting back against that stigma?
Paul Scherz: That's a great question. There have been frequent times when I've talked to reporters or other authors who want to interview a Catholic AI thinker, and they're always surprised, and I think a little disappointed, that I'm not more freaked out about the technology or negative about it. I have a lot of criticisms of current AI, but you have to make measured criticisms and figure things out, because this is a technology. God has given people creativity to understand the world, develop scientific tools, and develop different technologies in order to improve the world. That's part of what God has created us to do. So I don't think you can just outright reject almost any technology; maybe something like poison gas, but for most technologies you can figure out ways to develop and use them well. Frequently it's more the economic or social systems they're deployed in that cause the trouble than the technologies themselves. So the Church has always been careful to say that these things offer opportunities to do good; we just need to figure out what those good uses are and where they shouldn't be used. Coming from the realm of bioethics and biotechnology, that has always been the Church's position: certain uses of genetic technologies are very dangerous, but others are totally welcome. If they can cure the sick in really good ways, they can be great, whereas transhumanist enhancement could lead to real problems. And this is the Church's approach to AI. The Church itself also needs to embrace AI.
The Church runs huge institutions, like the hospitals and other apostolates, and AI has a lot to offer these institutions.
Will Mansour: Yeah, that's super interesting. I think the church, especially in the 21st century, is more willing to understand how the world is changing and, within reason, adapt to that changing world, while also understanding how powerful its voice is in affecting the beliefs of millions, even billions, of people across the world. You brought it back to the church's institutions: do you have a sense of how the church is actually implementing artificial intelligence right now, if at all?
Paul Scherz: People like to think of the Vatican as one thing, whereas the Vatican is actually lots of different departments that sometimes talk to each other and sometimes don't. Catholic healthcare is a set of different companies run by different people; Catholic universities are all independent. The Church really lives by subsidiarity in many ways, so a lot of these decisions are being made at more local levels. I'm not sure that any diocese, or the Vatican, is really using AI in any depth beyond the way everybody uses it, in financial management, in Excel, and things like that. But I do know that Catholic hospitals are really trying to figure out how to use it well. Healthcare is an area where AI is rapidly advancing in a lot of different ways for a lot of different uses, and every Catholic healthcare system has a technology officer trying to figure out how to do it well. The Catholic Health Association, the umbrella organization and trade group for Catholic healthcare, is convening a group to try to give some broad directions on how to implement it ethically. And of course many Catholic colleges are rapidly embracing it, while others are more classically inclined and want to focus on liberal education. I think sometimes Catholic schools can embrace technology a little too quickly and without enough discernment, but I know it's being introduced in those areas.
Graham Wolfe: Very interesting. Really quickly, before we jump to the more practical, everyday dimension of the theological implications, I want to back up a little to what we were talking about earlier: the value the theological perspective has to offer this problem, and the flaws in the anthropological view of the human as itself a kind of algorithm. We've cited a good op-ed in a few pieces before that says that over the past half century we humans have reluctantly swallowed the idea that our behavior is nothing more than a calculation, nothing more than the result of an algorithm in our minds. And now, of course, along comes this really powerful algorithm that is striking fear into all of us, because we're afraid it's going to beat us at our own game, a game we've become convinced is just numbers and probabilities. I really like that fundamental diagnosis of the issue, and I think the way it might trickle down into more practical applications is really kind of revolutionary: a revisitation of where human behavior comes from. Is it really more organic, more of an interaction with genius and unique, exceptional human capacities? I'm curious whether you think that's doable. Is this the moment to revisit what we think of as human behavior, this more revolutionary idea of human behavior as something other than just an algorithm? Or is this something that's going to be worked out on a more practical, deployment-by-deployment basis?
Paul Scherz: I think that's exactly right. This is the moment to make a broader case for a richer view of who and what the human person is, and I think a lot of people are open to this. A lot of people are realizing that relating to an AI is not the same as relating to another person; you can't have a romantic partner in an AI. So I think there's a great opening here. Even secular thinkers like Jaron Lanier, an early technologist and major computer scientist, are embracing this view; he wrote a book a while back, You Are Not a Gadget. We've got to have a richer view of ourselves. That's why I've said before that this is a real opportunity for evangelization. We can't think of it as just a threat; we have to think of it as an opportunity, because a lot of people are searching for a better understanding of their place in the world, and theology has that story to offer, something rich and meaningful that can be embraced. Now, I think you also need to employ this vision when you're looking at specific applications; that broader vision has a lot to offer whatever deployment of AI you want to make. Fundamentally, I think we can really change cultural discussions around this to get a more humanistic approach to all different kinds of topics.
Graham Wolfe: And now, pivoting back to the more practical, applied parts of this, I think medicine is a really great place to see that playing out. From what we were talking about earlier, there is such an intangible human element of medicine, the relational component of healing, that just cannot be overlooked, as much as people have tried to distill medicine down to a completely black-and-white, completely objective science. Now we're introducing algorithms into that process of healing, and maybe starting to see some red flags pop up around our shortcomings in understanding both the human and healing. My sister is a nurse, and she's been telling me how, on one hand, hospital administrators are so excited to deploy AI, while on the other side, the nurses and the patients feel it's the last thing they would want in these very vulnerable moments of healing, serious injury, and illness. It's throwing a wrench into what was otherwise a very intimate and human experience. So as far as medicine goes, how would you describe the merits and demerits, the red flags, the good applications and the bad? And how is it showing us, in a broader way, how we should be careful about deploying AI?
Paul Scherz: Part of the problem with ever talking about AI in any sphere is that AI means so many different things and can be deployed in so many different ways. On one hand, you have the pure back-office tasks, all the billing and forms, where, aside from issues of bias, AI could play a major role. Then on the other hand, you have the relationship with the patient, where there's a lot of danger if you were to replace the doctor or the nurse with just a chatbot. Total replacement would be a problem. So some of my work right now is trying to dissect where exactly in the clinical encounter you can use AI well. There are things it can do really well. It can flag a potential grave risk that perhaps the provider is not seeing: this might be appendicitis, and you haven't looked at that yet. So it can keep really bad things from happening, and it can help suggest diagnoses. So there are ways it can be used. But ultimately, what the doctor is doing is not just making a diagnosis. The doctor has to choose what therapy is proper for this patient and actually convince the patient to go through with it. That's a rhetorical and even political moment, convincing a patient, in conversation, to go through with an appropriate therapy. And that, I don't think, we can leave to an AI. Now, there are ways that AI can not only avoid undermining the relationship between doctor and patient but actually enhance it. One of the things that medical staff are really excited about is this ambient AI tool, the AI scribe.
And this is being adopted because, if you've gone to the doctor over the last fifteen years, you've seen that doctors have to increasingly focus on the screen of the electronic medical record, filling out all the little boxes there. That distracts them from engaging with the patient; they have to be typing. So one of the technologies being deployed, and really getting high reviews from doctors, will record the doctor-patient conversation, transcribe it, analyze the transcript, and use that to fill out the medical chart. What that does is let the doctor pay attention to the patient and then review the chart afterwards, because the AI is going through it. Now, that raises problems. Sometimes writing is an important part of thinking, and AI can make errors, like it always does. But this can also be a way to relieve the burdens of older forms of technology from doctors. The AI can even take the medical chart and automatically fill out prescriptions, orders, and tests, so that it relieves some of the clerical burden that people are suffering from today.
will-mansour_1_01-15-2026_110527yeah, so Professor Cher, I had a question kind of going back to our conversation about, you know, in the past, in the 20th century, 21st century, this sort of physicalist, materialist pH school of philosophy has kind of emerged of, you know, distilling our understanding of the universe down to. Physical things, what we can completely understand. and I think that's kind of led to, sort of this reductive way of thinking about human consciousness and intentionality and behavior. I'm interested, I think what's fascinating about, you know, Catholic philosophy and theology about human behavior is that. It's not only informed by what we can understand, but what we can't understand. And it, it certainly leaves room for faith about what we are rather than, I think one of the frustrating things about, you know, physicalist philosophy is that it seems reductive and almost, ignorant of our own ignorance in a certain sense of what we can't understand. I was wondering if you could maybe, you know. Better than I can distill sort of the church's defense of, this sort of humanist, understanding of human mentality, human behavior, in a way that certainly seems to leave room for faith and the unknown.
Paul Scherz: Indeed. Ultimately, what the human is ordered toward is participating in life with God. We're not ultimately ordered toward anything in this world itself. We are spiritual creatures who will live eternally with God, so there's an element of us that is immaterial. In Catholic faith, we will ultimately be bodily; we will be resurrected bodily in the resurrection. But there's an element of our soul that is immaterial, that exceeds the physical world, so that we can think about ourselves but can never get to the bottom of ourselves. Augustine talks about trying to dig down and figure out who he really is, but he can never get to the bottom, because the human being escapes full definition. So there's an element where God is beyond our understanding because God created the world, but the human person is also beyond scientific understanding, and even beyond our own attempts to grasp, because it is spiritual and ordered toward God. I think the reductionist understanding of the world also believes we can understand and control all the events around us, whereas a Christian vision of creation has a certain humility about how history unfolds and how things unfold in the world: God is ultimately in control of history, God's providence rules over all, so we can trust in it, but we also don't know exactly where we're going. God does new things. New things occur in the world that can't be predicted from history, and AI and science are always developing from the historical data we have, whereas something new can always occur. So it leaves you open to creativity and novelty, what Hannah Arendt called natality, that generativity of the real world.
Will Mansour: Yeah, that's definitely what I was trying to orient toward, that ability to leave some room. The word that specifically caught my ear there was humility: the Church's understanding of humans as humble beings, as reverent beings ordered toward something greater. That is often missing in what you called the scientistic understanding of human behavior. And in the modern world, where we're seeing a reduced sense of purpose, understanding, and meaning for a lot of individuals, that's a place the Church could step in and offer a really powerful source of inspiration and creativity.
Paul Scherz: That's a really good point. I think what you're highlighting is that there's a duality in the reductionist vision of the world. On the one hand, it views people as meat machines, as just a kind of algorithm. But at the same time, it thinks that through our technologies we can gain complete control over our destinies, ourselves, and our world. Christianity has a duality too, but it's this: we are called to amazing things, we are called to deification, while realizing that we are not in complete control of that. So it's both humility and a high calling of who we are.
Will Mansour: Yeah, the way you just outlined that sounds super accurate, but it's also ironic, the part about how the reductionist view regards the human self. I'm really interested in that. Graham, did you have anything?
Graham Wolfe: No, I think this is really the crux of what we were talking about earlier: what does theology have to offer to this conversation? As we've been discussing, it's very exciting, very energizing, and it can, en masse, be a very liberating thing for a society that has otherwise been relegated to a very reductionist view of the self and of the human. As we start to wrap up our conversation today, I want to talk about what your previous article focused on, Will, which is AI alignment, which to me is in many ways the secular and practical application of what we've been talking about today. I'm curious, Will, what you see as the real challenges of getting to AI alignment. And I'll ask the same of you, Paul, as a parting question: what are some ways that the things we're talking about today could trickle down into practical deployment policies and contribute to the growing conversation about AI alignment?
Graham Wolfe: We'll start with you, Will, then go on to Paul.
Will Mansour: Yeah. I think one of the trickiest things is that AI is starting to fill human roles. It's acting like a human; people believe it to basically be a human. So the next step is: okay, but it's not quite a human. How are we going to align it to a human code of moral conduct while it lacks a certain humanity? That's maybe the biggest problem, applying humanistic guardrails to a non-human technology. Human morality is very complex and very situational. We do our best to develop virtues and general frameworks for how we should act morally, but it would be almost impossible to capture in code, because it is so much dictated by experience and personal beliefs. So that, to me, seems to be the largest challenge that these companies need to attack and plan against as we hope to align AI with our moral and ethical frameworks.
Graham Wolfe: Yeah. And the same question to you, Paul. How do you think we can trickle some of these big theological imperatives down into everyday policy?
Paul Scherz: Yeah, that's a great question, and I really like Will's answer as well. I frequently think about this in terms of design. How are we designing these tools? What do we want them to be used for? Too often we design them to be human-like replacements, totally autonomous, human-like things, whereas we should start thinking about them as tools, as assistants. Where can they assist people in fulfilling the goods, ends, and tasks that humans want to pursue? How can they help augment people's work and make work better, rather than making the mistake of thinking we're going to replace people with machines? That's the first step: asking what we are designing these things for. And then you get into the analysis of how these different AI programs and applications affect different human ends and the way people act in the world.
Graham Wolfe: Yeah, that makes a lot of sense. We'll certainly take all of these points back with us as we continue our research on AI's sociotechnical impacts, specifically taming AI through a lot of these theological and moral imperatives. I really can't thank you enough, Paul. This has been a really interesting conversation that's definitely going to inform our research moving forward. So, Dr. Scherz, thank you.
Paul Scherz: Thank you. I've really enjoyed the conversation.
Graham Wolfe: All right. And as I said at the top of the episode, we encourage our listeners to go check out The New AI Project on LinkedIn and Substack to read a little more in depth about what we talked about today from Will, in the Taming AI column.