The ThinkND Podcast
The New AI, Part 11: Ideas, Startups, and Healthcare Tech
Episode Topic: Ideas, Startups, and Healthcare Tech
How will the next wave of AI transform healthcare? Notre Dame graduate and healthcare investor Kevin O’Brien ’88 offers a fascinating look at agentic AI, the autonomous systems designed to tackle complex workflows. Discover how this technology could solve the biggest challenges facing our healthcare system today.
Featured Speakers:
- Kevin O'Brien '88, Lirio
Read this episode's recap over on the University of Notre Dame's open online learning community platform, ThinkND: https://go.nd.edu/0829cc.
This podcast is a part of the ThinkND Series titled The New AI.
Thanks for listening! The ThinkND Podcast is brought to you by ThinkND, the University of Notre Dame's online learning community. We connect you with videos, podcasts, articles, courses, and other resources to inspire minds and spark conversations on topics that matter to you — everything from faith and politics, to science, technology, and your career.
- Learn more about ThinkND and register for upcoming live events at think.nd.edu.
- Join our LinkedIn community for updates, episode clips, and more.
Introduction and Guest Welcome
Graham Wolfe: Welcome, everybody, to The New AI Project podcast. My name is Graham Wolfe. I'm a senior at the University of Notre Dame and the program director of The New AI Project. Today we're excited to welcome Kevin O'Brien. He's a board member at Lirio, MOD, and Memorial MRI, and for years he led a healthcare investment practice in the private equity world. And of course, he's also a Notre Dame graduate. We're excited to have him on today for a wide-reaching conversation about how this world of ideas, startups, and specifically healthcare technology is adapting to this new variable of AI. Kevin, is there anything you want to add to that intro?
Kevin O'Brien: No, Graham, that's great.
Graham Wolfe: Of course. Yeah. So on The New AI Project side, we've done some really great work over the past three years developing a team of what we call student experts, each writing in their own domain of expertise in this new world of generative AI. We have Tech Titans, AI at Work, Taming AI, AI in Life, and Research Revelations, so we really span the whole generative revolution, and there's something for everybody. We encourage our listeners to go check us out on LinkedIn and Substack after this podcast. Joining us from that team today is Aiden Gilroy. I'll pass it over to you, Aiden, for your introduction.
Aiden Gilroy: Yeah, thanks, Graham. My name's Aiden Gilroy. I'm a junior here at Notre Dame studying political science and philosophy, and for the last two years I've been the lead writer of The New AI Project's Tech Titans column, in which I cover the main strategies and new releases from the five major AI players, OpenAI and the rest.
Graham Wolfe: Thanks, Aiden. So to jump right into the conversation today: we've heard, read, and ourselves written quite a lot about the promise of agentic AI, and you actually gave a great talk here at Notre Dame a few weeks ago emphasizing its merits. For those who are maybe unfamiliar, or perhaps not convinced, about agentic AI, Kevin, could you give us a recap of why you think it might be the next big transformative thing, specifically in your world of ideas generally, but also healthcare technology?
Kevin O'Brien: Sure, happy to do that, Graham. If you step back, I think we're all fairly familiar these days, and Aiden, you touched on some of the names out there, with generative AI, particularly in the form of large language models. We've all played with the prompts and used them. They're fabulous tools. But just in terms of how they are built, their architecture, they are specifically designed to generate text, imaging, et cetera. You put in a prompt, and it's a fabulous tool for really scraping the entire internet and giving you answers. Contrast that with agentic AI, which may utilize large language models in some cases, but it's a really different architecture. These are AI systems built to be autonomous and goal-driven. They can act independently to perform complex, multi-step tasks, which LLMs, if anyone's ever used one, clearly do not do. So they're just different tools for different things, is the way I describe them. One is not better than the other; they're just distinctive architectures.

Take it a level down, and think about what the core purpose of each is. Generative AI, like we talked about, is really about generating content based on specific prompts. We're all familiar with that in our everyday lives. Agentic AI really executes multi-step tasks autonomously to achieve a goal. That's one of the core differences of agentic AI: it is able to pursue a goal. You can give it a task, and it is built to go after that goal. In terms of task complexity, agentic AI can handle complex, chained tasks like research, analysis, and reporting. Generative AI, large language models, is really best for discrete, single tasks; it's not really built for these multi-step tasks. Hopefully that provides a bit of a foundation.
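[Editor's note] Kevin's distinction can be sketched in a few lines of code. This is an illustration added for the written recap, not anything from the episode: every function below is a hypothetical stub (a real system would call a model and real tools), but it shows the structural difference between a one-shot generative call and a goal-driven loop that chains steps.

```python
# Editor's sketch, not from the episode: all functions are hypothetical stubs.

def generate(prompt: str) -> str:
    """Generative AI: one prompt in, one piece of content out. No goal,
    no memory of steps; a real system would call an LLM here."""
    return f"draft answer for: {prompt!r}"

def plan(goal: str) -> list:
    """Toy planner. A real agent would ask a model to decompose the goal."""
    return ["research", "analyze", "report"]

def agent_run(goal: str, tools: dict) -> list:
    """Agentic AI: decompose a goal into steps, act on each step with a
    tool, and record what happened -- a multi-step, goal-driven workflow."""
    history = []
    for step in plan(goal):
        result = tools[step]()          # act
        history.append((step, result))  # observe / remember the outcome
    return history

tools = {
    "research": lambda: "gathered sources",
    "analyze": lambda: "found the key trend",
    "report": lambda: "drafted the summary",
}

print(generate("What is agentic AI?"))            # single discrete task
print(agent_run("write a market report", tools))  # chained tasks toward a goal
```

The point of the sketch is the loop: the generative call returns once and stops, while the agentic loop carries its own history forward until the plan is exhausted.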
Graham Wolfe: Yeah, I think that's really helpful. Moving from the simple prompt-and-response back-and-forth into something that, like you said, is chained along, and that begins, of course, with a prompt or some kind of human input but is then handed over to a much more equipped artificial intelligence system, I think that does represent a fundamental change in what we're seeing in this ongoing revolution. I'm curious, Aiden, from your research on the Tech Titans side: first of all, did anything Kevin said surprise you? But beyond that, how do you think these big tech companies are pitching agentic AI to us, or selling this next generation of agentic AI to consumers?
Aiden Gilroy: Yeah, I think Kevin hit it spot on. When you look at generative AI, it's really attuned to those specific prompts. You ask it a prompt, say, generate an image or do research on this one thing, and it'll do it for you. The big difference with agentic AI is that it can really think on the fly, I mean, to the best that AI can think; it's not really thinking, but it can adapt to difficult questions. The big example when agentic AI started coming out was ordering a pizza. Before, you'd have to ask it each individual step: okay, I want this on my pizza, and it would respond with another prompt. Now you can just say, order me a pizza, and, in theory, it'll go through and actually make the call. It will place the order, it will pull your credit card information, it will give the credit card information over the phone, it will put in your address, and the pizza gets brought to your house. That's the multi-step idea behind agentic AI. So yeah, I think it has the potential to do a lot, because, like humans, it has multi-step capabilities and can proceed along solving a problem that may take more steps than the user originally inputted. I think that's the bigger difference.
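[Editor's note] Aiden's pizza example can be written out as a chain of steps where each step consumes what the previous one produced. Everything below is invented for illustration: there is no real phone call, payment, or delivery API, just stand-ins showing that the agent, not the user, carries the state between steps.

```python
# Editor's illustration of the pizza example; every step is a made-up stub.

def order_pizza(request: str, profile: dict) -> dict:
    """One user utterance in; the 'agent' then performs every step itself."""
    # Step 1: infer the toppings from the request instead of asking again.
    toppings = ["pepperoni"] if "pepperoni" in request else ["cheese"]
    # Step 2: choose a restaurant (stubbed to a fixed hypothetical name).
    restaurant = "Sample Pizza Co."
    # Steps 3-4: pull the saved address and payment details from the profile.
    order = {
        "restaurant": restaurant,
        "toppings": toppings,
        "address": profile["address"],
        "paid_with": f"card ending {profile['card_last4']}",
    }
    # Step 5: place the order (stubbed; a real agent would call out here).
    order["status"] = "placed"
    return order

profile = {"address": "123 Example St", "card_last4": "4242"}
print(order_pizza("order me a pepperoni pizza", profile))
```

With a prompt-only model, each of those numbered steps would be a separate round trip with the user in the middle; here the single request drives all five.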
Graham Wolfe: I like the pizza analogy. I remember you brought that up to me; it's very intuitive. But let's talk about healthcare. Given your background working with companies at the intersection of AI and healthcare, what do you see as the big applications of agentic AI in healthcare? And more broadly, how is the healthcare technology world adapting to this variable of AI?
Challenges and Future of AI in Healthcare
Kevin O'Brien: Yeah. So agentic AI in particular is starting to penetrate healthcare. In the talk I mentioned a study, I believe it was Gartner, saying that less than 1% of enterprise software applications included agentic AI in 2024; that's predicted to be 33% by 2028. So you can see it's coming. The kinds of things you would see agentic AI taking on are administrative tasks. Think about the world of providers, doctors, hospitals, anything that is a brick-and-mortar kind of healthcare place where you go get healthcare services, and then the world of payers, the insurance companies people are familiar with. There is this constant tug-of-war administrative burden resident within our healthcare system. You show up to the doctor: what's your insurance? Someone has to query and say, okay, are you covered, based on the provider you're using and the nature of the services? That is a complex task that humans undertake today, and a simple one for people to picture. Think about agentic AI as tackling workflows, as opposed to specific prompts that generate text; again, that's the distinction we're trying to draw. You could have, and these are being built, an agentic AI system for this. Say, okay, Aiden, to pick on you for a minute: you show up at the doctor's office and provide your insurance card. Instead of a person in the office either going online and querying the insurance to look up where it all falls, or getting on the phone and actually calling the insurance company, which is the way it used to work and in some cases works today when things get a little complex, you'd have an agentic AI system that could sit there and run that workflow.

Again, picking on you, Aiden: you've gone to Doctor X, who is either in or out of network, and you're needing a certain kind of service, which is captured, like a lot of things in healthcare, in what's called a CPT code. You could have an agentic AI system then query the insurance world, all the insurance providers, including state and federal government, and say, okay, based on Aiden's card and the actual plan, because no one insurance company has a single plan, there are multiple plans, and it's incredibly complex, as you can picture. The agentic system could then match and say, okay, based on the doctor and the nature of the services, I'm going to go out, figure out the insurance company, drill into the plan, and determine covered or not covered. And on the spot, the person you're interacting with would say, okay, Aiden, you're covered; here's your deductible, here's your copay, whatever, and you've got it right there, instantaneously. There are different estimates out there, but something like 20 to 30% of healthcare costs in the US are attributable to administrative workflows, and we spend something like $5 trillion as a country. If you start to take a whack at that, one, you're saving money for the system, and two, you could take some of that money saved and repurpose it, for instance for underserved populations, you name it. There are lots of interesting things you could do there. I'll stop there, but I think that's one decent example.
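[Editor's note] The multi-step check Kevin walks through reduces to a lookup-and-match workflow. The sketch below is a toy illustration for the recap: the payer names, plan data, and provider names are invented, and real eligibility checks go through payer systems (commonly via the X12 270/271 eligibility transactions) rather than an in-memory table. CPT 99213 is a real office-visit code, used here only as sample data.

```python
# Editor's sketch of the eligibility workflow; all plan data is invented.

PLANS = {
    # (payer, plan) -> the rules an agent would have to look up and match
    ("Acme Health", "Gold"): {
        "in_network": {"Dr. X"},
        "covered_cpt": {"99213"},   # CPT 99213: an office-visit code (sample)
        "copay": 20.00,
    },
}

def check_eligibility(payer: str, plan: str, provider: str, cpt: str) -> dict:
    """Match the patient's specific plan, the provider's network status,
    and the service's CPT code, then answer 'covered or not' on the spot."""
    rules = PLANS.get((payer, plan))
    if rules is None:
        return {"covered": False, "reason": "unknown payer or plan"}
    if provider not in rules["in_network"]:
        return {"covered": False, "reason": "provider out of network"}
    if cpt not in rules["covered_cpt"]:
        return {"covered": False, "reason": "service not covered"}
    return {"covered": True, "copay": rules["copay"]}

print(check_eligibility("Acme Health", "Gold", "Dr. X", "99213"))
```

The complexity Kevin describes lives in the data, not the logic: every payer has many plans, each with its own network and covered-code rules, which is exactly why chaining the lookups autonomously is attractive.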
Aiden Gilroy: You mentioned that agentic AI isn't that integrated into healthcare systems right now, though, as you said, it very well might be in the next three years. It seems like the process right now is very complicated, and that's partly because you have humans in the loop, humans who either make mistakes or don't have all the answers at their fingertips the way agentic AI would. Why do you think that is?
Kevin O'Brien: Well, it's highly regulated, right? We have this thing called PHI, protected health information; a person's healthcare information must be protected. So it is hard to move data and manage data within healthcare, just because it's a heavily regulated industry, as it should be.
Aiden Gilroy: Yeah, you have HIPAA regulations and all these different personal privacy regulations. Do you foresee those regulations being, in some sense, the first to go? Because I feel like in some sense they would have to go if you want to have this agentic AI interface where I can go in with my insurance card, and it knows me, it knows which doctors I've been to in the last three years, and it knows whether or not they're in system or out of system. It would have to at least know and recognize my personal data. So is that what has to happen in order for AI integration to enter healthcare? Do you foresee another way, or is that really the only way it happens?
Kevin O'Brien: Yeah, I actually don't see that happening. I don't see regulations going away; I think we may actually go the other way. Like all things in technology, regulations tend to lag innovation, so I could see standards actually being raised over time. I work with a healthcare agentic AI company, and there are certifications you can go out and get, SOC 2, HITRUST, to say there is real integrity and security to your fundamental IT systems. That's part of what agentic AI companies will need to do; it's kind of table stakes. And then, when you're using data, there are these agreements, pretty common not just in healthcare but across the entire technology industry, called business associate agreements. A BAA allows Company X to utilize Company Y's data. So my company onboards data, including HIPAA-protected information, from its customers, and in that BAA my company takes responsibility for that data and is liable should there ever be a breach of that information, et cetera. So no, I think this is something that's going to evolve. It's going to lag, because that's just the way regulations, or policies, or best-in-class standards tend to go, but it's going to come, and it's important for it to come. It's important for agentic AI companies to paint within the lines, if you will. If AI is going to penetrate healthcare, it's going to need to do it compliantly and safely, and that's something these companies, as they emerge, are going to need to prove to their customers, and in turn to the users of the healthcare system, like all of us: that they are safe, and that when you interact with one of those systems, your information is just as safe, ideally safer, than it would be in today's world, where these systems aren't sitting in the middle of some of these workflows.
Graham Wolfe: Yeah, I think that makes a lot of sense. One of the major barriers to entry, I think, across the sectors we've been exploring in our different research areas, is trust in this new variable of artificial intelligence. With the transition into the information age 20 years ago, the inherent trust in a lot of institutions was already being eroded just by the introduction of technology. And just as soon as people had maybe started to get used to that new standard, and as soon as compliance and regulatory standards had caught up to that transition, we have now started the transition into the intelligence age, where even more invasive, or more targeted and specialized, technologies are being introduced into sensitive spaces like healthcare. So I think that trust is a really important barrier to entry, which we've begun to touch on here. But to zoom out a little and look at healthcare historically: it's been one of the more lagging sectors, on the back end of the adoption curve for lots of technologies, with slower uptake, you could say. I'm curious, generally, why that might be. And do you see that also being the case for artificial intelligence? Is the healthcare world going to follow that historic pattern and end up on the back end of the adoption curve, or might we see a reversal of that trend as we move into the intelligence age?
Kevin O'Brien: Yeah, that's an interesting question. There are a couple of things in there. Will healthcare lag? Yeah, I think it will. At my old private equity firm, we would be talking about a healthcare company, and I had other partners focused on other industry sectors, and I would oftentimes say: time out, remember healthcare is roughly 10 years behind your industry, consumer, industrial, whatever, in terms of the adoption of technology, maybe certain management practices, et cetera. You asked, Graham, why that is the case. One we hit on already: it's a heavily regulated industry. Two, one of the things I always used to look at with my companies, when we were thinking about innovation or driving efficiency, is changing practitioner behavior, physicians, nurses, et cetera. They have a way they do things. They have been trained to do things appropriately and safely, in a very specific kind of way. Changing that behavior is always difficult. You have to overcome that trust barrier with the provider community before you even think about the patient community, in terms of adoption of new practices, technology, you name it. So that's part of it as well. And then finally, you asked, could we see that trend flip? I think we could. Part of it is because we as a country are spending over $5 trillion. You've got tremendous strain on the federal government, which administers the Medicare program, and on the states, which administer the Medicaid programs that really tend to go to people with greater needs. Because of the nature of the industry, highly regulated, with a lot of administrative burden, it's pretty ripe for the picking. So I do think we could start to see that trend reversing.

Now, many of the solutions will need to be very purpose-built for healthcare, given the nature of the industry. It's not like the consumer industry or anything we're used to; the rules are much more prevalent, and you have to work through them. With a person's health, the stakes are inherently higher. It's not, oh gee, I picked the wrong thing to stream today; I'm going to be fine if that happens, or my streaming service recommends something silly for me, and life goes on. It's different if AI, in any form, is involved in patient diagnosis or things like clinical trials. There are a lot of what are called clinical decision support tools that aid providers in the room with the patient on what's next. When someone comes in and presents with certain symptoms, there are systems that will help physicians or nurses, not override them, but help them determine: okay, based on this, you might think about a test for X or Y or Z, or imaging to look at this issue over here. That's out there today, but you have to be careful with it. Again, I think there's a lot of opportunity. We have a shortage of physicians and nurses in this country. How do you help those people work at, as we like to call it in the industry, the top of their license? That is, not be burdened with tasks that other people could be doing, or administrative tasks. How do we help them execute at the highest possible level? Some of these systems can really help do that and allow those people to take care of more people, quite simply, which is good for everybody.
Ensuring AI Efficiency and Trust
Graham Wolfe: That makes a lot of sense. There are many different ways we can apply AI to the medical field, many of which will save people time: practitioners' time, clinicians' time, administrators' time, perhaps patients' time. Whether it's clinical decision support tools, administrative efficiency, diagnostics, transcription, et cetera, there's just a lot of time and money to be saved. You talked about all these pain points on the front end: strain on government agencies, federal and state, and over $5 trillion in spending. There's no doubt there's a big scale of inefficiencies to be addressed within the healthcare space, and certainly lots of opportunity within AI and agentic AI. But I'm curious: how do we, as a society, or as a concerned subgroup of society, ensure that these efficiency tools actually create efficiencies that are then passed on to the right people? Whether that's lower spending and lower costs for patients, better patient outcomes, or better work lives for practitioners, how can we direct that and align the efficiencies of AI with the challenges we currently face?
Kevin O'Brien: Yeah. As I think about that, it's going to be a process. We really are fundamentally talking about for-profit organizations trying to build tools that make the differences you just articulated. So number one: this is a capitalistic society. If you build a solution that erodes trust, is leaky with data, and isn't really driving outcomes, which is the essential element everyone needs to focus on, those companies will be left behind. In my mind, what will occur, and there are many, many companies tackling these different opportunities or problems in healthcare, is that best-in-class solutions and best-in-class companies will emerge, and they will emerge because they're doing the right things. They're producing outcomes, they're keeping data safe, and whatever their value proposition is to the customer, be it an insurance company, a physician office, a health system, you name it, there is delivery behind the promise; the value proposition is coming through. Those are the companies that will not just survive but thrive. The ones who aren't doing it right will get left behind. That's one dimension of it. And again, I do think standards and regulation will inevitably lag, but at some point they will catch up, probably informed, in my mind, by the best-in-class answers as they evolve in the market. It's likely that standards or regulations will say: okay, if they can do it, everyone else should too, and this is the bar we're going to set if you want to participate in clinical decision support for imaging, for instance, or in that insurance space we talked about, which is more administrative. But again, you get that wrong, and, going back to Aiden, if we tell Aiden he owes $20 for today's visit and he gets a thousand-dollar bill, he's going to be pretty unhappy. So things will emerge, I think, over time.
Graham Wolfe: Gotcha. Yeah, it sounds like there's kind of a trust-the-process mindset that goes along with trying things out: trusting the right companies to come up with the right answers, and then regulators to kind of swoop in and fill in the gaps, maybe.
Kevin O'Brien: Yeah. When you think about it, the small company I work with works with insurance companies, health systems, and physician groups, and to get to yes with one of those organizations, you are put through a lot, because it's their reputation. Our reputation as a company is on the line, but boy, it's really their reputation; they have thousands, or in some cases millions, of people they serve. When they adopt a technology, it's their name on the line. So believe me, we get put through a lot of tests, whether it's the IT people, whether it's data integrity, you name it, because the stakes are higher in healthcare. And I think there's maybe another dimension I didn't touch on that I should have, which is the customers. The three of us aren't buying agentic AI systems; it's the healthcare organizations out there that are buying them, and they have a pretty high bar, because if they do something wrong, they themselves could get into regulatory issues and could lose their customers. So that's a filter, let's call it, that these innovative companies need to be appropriately put through before they themselves are interacting with humans regarding their healthcare.
AI's Role in Patient Diagnosis and Efficiency
Graham Wolfe: Yeah, that sounds like a pretty good accountability mechanism, or filter, like you say. Aiden, I'll pass it back to you. Anything surprise or interest you about what Kevin said?
Aiden Gilroy: Yeah. Just going back to a couple of questions we were talking about: you touched a little bit on patient diagnosis, and I think this is a really big area in which AI has huge potential to be very powerful and very rewarding for the population. We've seen this already with image screening, where AI can diagnose a problem with a high degree of accuracy that radiologists may have missed. And again, with the administrative woes we currently see in long wait times, with AI you can cut down those wait times and save a huge amount that can be used for altruistic ends, which could really benefit society. But with these huge increases in efficiency, you also see this big fear from the public, a fear that has been around since 2023, when AI first hit the market: the loss of jobs. People are very worried about losing their jobs. What I found most interesting is what you brought up earlier: the goal of AI, at least as we see it now, and it seems with your companies, is to help providers work at the top of their license. So I guess, do you foresee any fields of healthcare that are just going to be completely cut out by AI, or do you think in general it's really just going to come alongside and help people be better at their jobs?
kevin_1_11-11-2025_100659Yeah, so a couple of things in there. As to the fundamental question of whether AI will eliminate certain jobs, I think the answer's inevitably yes. But they'll probably be more menial tasks, lower-skilled labor, or skilled but in a very narrow domain, like insurance. Insurance is this arcane thing we all have to navigate, right? And the example I gave about checking your insurance, are you covered, not covered, all that sort of thing, that's an obvious area where you don't need a call center with people looking things up. Over time that could go away. But at the same time, when I think about agentic AI, we're talking about company solutions that really affect workflows, could be diagnosis, as you said, Aiden. That in and of itself is going to create another set of jobs, because these companies that heretofore didn't exist will exist, and there are going to be jobs there. If I can digress into this: whenever we think about AI, we think about somebody coding, right? You're sitting there writing the code for the system, and certainly you need those people. But when you think about agentic AI, we're talking about workflow, about complex multi-step tasks. As we touched on earlier, that's going to take another dimension of expertise. How does insurance really work? Coders probably don't know that, so you're going to need people with insurance expertise. And beyond our insurance example, if we're thinking about imaging and diagnosis, that's the domain of a group of physicians called radiologists, who read X-rays, CTs, MRIs, et cetera. You're going to need people with expertise in that domain.
Within healthcare, you're also going to need people who just understand workflows. We're trying to get this agentic AI system to examine dataset A, make a determination, then do the next step, then the next step, so you need people who understand workflows. And then going back to your earlier question, Graham: does it work? Is the promise of whatever this company is suggesting real? You're going to need people with expertise in pre- and post-implementation analytics. When you look at population health calculations, or dynamics, are you making a difference? Are you having a population health impact? That's a whole other skill set. So while I think some jobs inevitably will be subsumed by the capabilities of agentic AI systems, there's going to be a whole other category of jobs created, not just at the companies that build these systems, but at the customers. Again, thinking about the little company I work with, we're starting to see job creation at health systems and insurance companies: people who have reasonably deep knowledge, not just of AI (most of the heavy AI people are going to be at AI companies), but of things like this concept of the digital front door. How does a healthcare organization have a digital front door? We're used to a digital-first experience in so many other aspects of our lives. You need people with that expertise. So I don't know how it's going to balance out, but I think it's going to
lean in the direction of higher-skilled workers: people who have the ability to think critically, to think in an agile kind of way about how we solve problems where there is a new solution set, and how we do that safely and effectively. That's going to create a whole other set of jobs as well.
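Kevin's description of an agentic system that examines a dataset, makes a determination, and then takes the next step can be sketched as a simple chained workflow. Everything below is invented for illustration: the coverage rules, the prior-authorization list, and the function names are assumptions, not anything from the episode; real eligibility logic would come from payer systems.

```python
# Minimal sketch of a multi-step "agentic" workflow: each step makes a
# determination that decides the next action. All rules are hypothetical.

def check_coverage(patient):
    """Step 1: examine the record and determine insurance coverage."""
    return patient.get("plan") in {"PPO", "HMO"}

def needs_prior_auth(procedure):
    """Step 2: determine whether the procedure needs prior authorization."""
    return procedure in {"MRI", "CT"}

def run_workflow(patient, procedure):
    """Chain the steps, escalating to a human when a determination fails."""
    if not check_coverage(patient):
        return "route to human agent: coverage unclear"
    if needs_prior_auth(procedure):
        return "submit prior-authorization request"
    return "schedule procedure"

print(run_workflow({"plan": "PPO"}, "MRI"))     # submit prior-authorization request
print(run_workflow({"plan": "none"}, "X-ray"))  # route to human agent: coverage unclear
```

The point of the sketch is the shape, not the rules: the domain knowledge Kevin describes (how insurance actually works, what radiologists look for) lives in the determination functions, which is why agentic systems need domain experts alongside coders.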
Balancing AI Communication in Healthcare
aiden-gilroy_1_11-11-2025_100656Yeah, I think that makes a lot of sense, and I think that's the way we're going in more fields than just healthcare. You're going to see a lot of these menial jobs that, first and foremost, people don't like to do, especially administrative tasks, be kind of set aside into the AI bucket, which will then allow people to think more critically and to do things with a higher degree of accuracy and a higher degree of efficiency. As it relates to healthcare, that will increase patient engagement, patient enjoyment, and obviously patient outcomes, which is what we want to see.
kevin_1_11-11-2025_100659But we also have to be careful, right? When we talked a few weeks ago, or last month, another dimension of this came up: you put all these tools in the hands of the people who work at, pick one, an insurance company or a health system, and suddenly it's so much easier to get messages out to people. I can use generative AI to create and blow out a higher volume of message traffic to the people we serve. You've got to be careful with that too. All of us have gotten texts on our phone and typed "stop," right? It's too much, I'm getting all this information. So I think a coming problem, not a today problem, as this solution set evolves, is: how do you not saturate the healthcare consumer with too much information? Because there's a risk that people just check out. They're like, I can't take all this. I'm getting messages, in some cases conflicting messages: I should do X, I should do Y. I'm confused. I'm just going to do nothing and call my doctor. You could see that happening, and so we've got to be careful with that as well.
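The saturation risk Kevin describes, outreach volume growing until the patient types "stop," is essentially a frequency-capping problem. Here is a minimal sketch, assuming an invented two-messages-per-week policy; the cap value and data shapes are illustrative assumptions, not anything from the episode.

```python
# Hypothetical frequency-cap check: before sending another outreach
# message, verify the patient is under a weekly limit.
from datetime import datetime, timedelta

MAX_MESSAGES_PER_WEEK = 2  # assumed policy for illustration

def should_send(send_log, now):
    """Return True only if the patient has received fewer than the
    weekly cap of messages in the last seven days."""
    week_ago = now - timedelta(days=7)
    recent = [t for t in send_log if t >= week_ago]
    return len(recent) < MAX_MESSAGES_PER_WEEK

now = datetime(2025, 11, 11)
log = [datetime(2025, 11, 9), datetime(2025, 11, 10)]
print(should_send(log, now))  # False: already at the weekly cap
print(should_send([], now))   # True: no recent outreach
```

A real system would also deduplicate conflicting messages across departments, which is the "I should do X, I should do Y" confusion Kevin warns about.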
graham-wolfe_1_11-11-2025_100657Yeah, that's a really important subject, and I'm glad we were able to touch on it. I think it's important across all domains, not just healthcare. But I think what's emerging out of this conversation, Kevin and Aiden, is the idea of trust, and how it's sort of this facilitator of execution, deployment, and uptake of these new applications of AI. Aiden, I think you've done some really great work on understanding how AI companies are targeting trust and trying to cultivate it in their users. Could you maybe reflect back on some of that work, particularly with Anthropic? I think you did a really great job deep-diving on that a few weeks ago, again, up on our page on LinkedIn. Just reflect on that through line of trust as it pertains to the rest of the AI world.
aiden-gilroy_1_11-11-2025_100656Yeah, definitely. Thanks, Graham. If you don't know what Anthropic is, it's an AI company, founded by people who came out of OpenAI, and the whole point behind Anthropic is safety. They've seen AI and its ability to interact with individuals, and they've seen AI's potential to do many great things in the world, the potential healthcare implications we've been talking about in this conversation, as well as many more. But what they don't want is misaligned AI. They don't want AI that's going to take people's data and produce really bad ends that individuals might not want, or AI that's going to incentivize human beings to go down roads that are really bad. And so what they've done is increase their safety measures. They have about five or six different measures they apply before they release an AI model to the public, such as open-sourcing parts of their work so that many different people can audit it, look at it, and double-check that it's aligned with their ideas and their incentives. They also have individual chat monitoring, in which, if they can see that an individual's chat with Anthropic's technology is going down a wrong path, say, toward the creation of a technology or a specific invention that is not good, they can cut it off and stop the chat there. So when you look at this as a general idea, you see these companies, such as Anthropic, trying to create AI that works with human beings to best benefit them and their own individual desires. And this is maybe opposed to an AI that is really just trying to take as much as it can from human beings, whether that be their data, their time, or their attention.
And I think this is really going to be the dichotomy. You're going to have AIs that are really just about money, that are about taking as much as they can and using the product they're creating in ways that really hurt human beings, and you're going to have AI companies that are trying to best benefit human beings, to come alongside them in their own endeavors and in their creative potential. I think that's really going to be the question of the future: which one is going to win out? But you do see this really big dichotomy in the world of AI right now, and it's been interesting to follow.
graham-wolfe_1_11-11-2025_100657Yeah. I think people would ideally like healthcare applications of AI to be firmly in that second camp, where things are just perfectly aligned with humans' desires in the long term, not just the short-term attention, time, or money outcomes. Kevin, how do you see the behind-the-scenes work of developing an AI tool, or deploying it to a health system, working such that we're keeping those AI alignment goals in mind in the long term?
kevin_1_11-11-2025_100659Yeah, so I guess the good news about healthcare is that, set AI aside, there is an entire field within healthcare of assessing population health, and within population health we've got what's called the social determinants of health: age, ethnicity, level of affluence, level of education, the kind of insurance you have. People have studied this; there's a lot of research out there. Setting AI aside, other innovations in healthcare, say the advent of more advanced imaging, get asked the same questions: is it good or bad? How does it affect the population? So the great news is there is an existing framework out there, and as AI emerges, it will be put through that same rigorous framework of: is it actually making a difference? Go back to the social determinants of health. We talk about hallucinations or bias within AI, and there are existing frameworks within healthcare, incredibly skilled, incredibly smart people who can assess this. Now, as you allow an AI tool out into the world, there's a bit of a challenge. Unlike a large language model, which you can train in the lab on data and say, okay, if I put prompt X in, does the output Y make sense? Is it rational? Is it silly? Is it hitting the target, if you will? One of the challenges with healthcare AI in general is that it's a little hard to test in the lab; you have to let it out into the world before it starts to do what it's supposed to do. That could be an administrative task, a diagnostic task, or interacting with human beings and encouraging them to take the next best step in their healthcare journey.
So there is some risk that you don't know what's going to happen until you let it out into the world. On the one hand, that's not great; on the other hand, there is a framework in place to say pretty rapidly: is it doing what we think it was going to do, or is it having unintended consequences? Think about that social determinants of health framework. You tend to see that more educated, affluent people with better insurance tend to have better healthcare outcomes, not shocking. So if we're launching a tool out there, you could quickly ask: is it bringing the whole population along in a good, homogeneous way, or are there segments of the population being left behind? That would suggest there's a gap in the architecture, a bias inherent in what you're unleashing on the world, and then you'd quickly want to reel that back, retool it, and say, okay, how do we continue to work on this solution so that it's more all-encompassing, so that we chase out any biases that may be, not intentionally but accidentally, resident within our solution? So there's a way to test it. And again, the stakes are very high when you think about these AI solutions in healthcare. They're pretty much white-label solutions, so in a lot of cases the patient, the member of the insurance plan, doesn't know they're interacting with an AI tool. That means the stakes for the customer, the insurance company, the health system, you name it, are, like we said earlier, very, very high. And so, again, at the company I work with, one of the things we work on very hard is that within the first 90 days of launching what we call an intervention, a use case, we are assessing it and sharing that data.
We're collaborating with the health system, the insurance company, you name it, and saying, okay, are the results we're seeing the results we want to see? And if there are gaps, what do we want to do to massage the outcomes, maybe tweak the architecture of the tool, to drive the outcomes we all intended? So there's a feedback loop, in essence, is what I'm saying, and it's a pretty tight feedback loop in healthcare. So is there risk? Yes. But I do think there's a relatively rapid assessment framework out there where you can say, okay, are we doing what we think we're doing? Or are there some unintended consequences here that suggest the promise of this tool, this company, this solution isn't quite what we thought it was going to be?
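The post-launch check Kevin describes, asking whether any population segment is being left behind, could be sketched as a simple per-segment outcome comparison. The data, the segment names, and the ten-percentage-point threshold below are all invented for illustration; a real population health analysis would use proper statistical testing, not a raw gap.

```python
# Hypothetical segment-gap check: flag any population segment whose
# outcome rate trails the overall rate by more than a set threshold,
# which may signal bias in the deployed tool.

def segment_gaps(outcomes, threshold=0.10):
    """outcomes: {segment: (improved, total)}. Returns the overall
    improvement rate and the segments lagging it by > threshold."""
    total_improved = sum(i for i, t in outcomes.values())
    total = sum(t for _, t in outcomes.values())
    overall = total_improved / total
    flagged = {}
    for seg, (improved, t) in outcomes.items():
        rate = improved / t
        if overall - rate > threshold:
            flagged[seg] = round(rate, 2)
    return overall, flagged

# Invented example data: one segment lags the population.
overall, flagged = segment_gaps({
    "commercial insurance": (80, 100),
    "medicaid": (40, 100),
})
print(round(overall, 2), flagged)  # 0.6 {'medicaid': 0.4}
```

A flagged segment is the signal Kevin describes for reeling the tool back and retooling it within that first 90-day assessment window.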
graham-wolfe_1_11-11-2025_100657Yeah, I think that's a really pragmatic answer. When we talk with experts in other fields about AI alignment, there's a lot of talk of ideological alignment, these very nebulous or amorphous ideas of alignment. But I feel like healthcare might be uniquely positioned to ground that kind of alignment question within the specialties that already exist, pre- and post-deployment analytics, the social determinants of health, that kind of thing, in a much more black-and-white way than a lot of other industries, which is interesting and reassuring in a lot of ways. I think that stems from the two drivers we've been talking about this entire time: number one, the high stakes involved with healthcare, and number two, this ultimate focus on health outcomes as the final push of this entire system. So, yeah, like I said, pragmatic and certainly reassuring. But just to keep us honest on time, I think we're going to start to wrap things up here with a few parting thoughts.
kevin_1_11-11-2025_100659Yeah.
Preparing Students for AI in Healthcare
graham-wolfe_1_11-11-2025_100657The New AI Project, as an organization, is fundamentally a pretty optimistic group, so I think we often like to end by looking to the future and talking about what some of the merits of this ongoing revolution might be. But also, fundamentally, we're a student organization, so we like to think from that mindset and consider what we can do ourselves, as students, to prepare for the future. So I'll pass it over to Aiden for some parting thoughts, on brand with our mission.
aiden-gilroy_1_11-11-2025_100656Yeah, as it relates to the student question, we've got a lot of students here at Notre Dame who want to study healthcare, who want to enter the field of surgery, or nursing, or some type of research. And as we've talked about today, it seems like AI is going to be with them in that field, in some sense either partnering with them or, in some fields, taking over parts of their work. So as it relates to students, what advice would you give students who are desiring to enter the healthcare field? How should they use their time to best prepare themselves for the future? And how can they start learning about AI now to help benefit them?
kevin_1_11-11-2025_100659Yeah, there's a fair amount in there, so let's unpack it, starting at the end: what can you do to prepare? As we talked about earlier, these solutions are emerging at a ridiculous rate, so you can't just say, if you do A, B, and C, you're in great shape. In my mind, you've got to think more broadly. Of course you're going to need the coding folks, people with deep fundamental technology skills, hard skills. But as we talked about earlier: understanding workflows, having domain expertise, understanding analytics. So in my mind, if I had a child in school right now who wanted to do what you just talked about, Aiden, my advice would be: learn to be a critical thinker. These companies are obviously going to need a wide range of skills, but fundamentally, serious intellectual curiosity and an agile mind. Being able to think about new problems where there aren't rote answers, and to think critically about not just what we're doing but, as we've talked about here today, the downstream: what are the implications? What are the unintended consequences? It's an emerging field, and so I think creative thinkers, really curious people, people who can think beyond the problem we're solving today toward the next iterations of it: as we do X, what will happen? We could have a variety of outcomes coming out of that. How do we really assess things holistically? And I think Notre Dame, in its approach to education, thinking about things holistically, not just the problem at hand but its broader implications, teaches pretty fundamental skills that these companies are going to need to be successful.
So I don't have a "take this certain course" kind of answer. I would really encourage people to study broadly and develop a multidisciplinary brain, one that can think about problems that don't have answers today, where there's an emerging toolkit, if you will. You've really got to be careful, as we talked about today, about how those tools get deployed and how you assess whether they're doing what they're supposed to be doing. These are the things that go through my mind. So it's not a prescriptive answer, and I don't think I'm bailing out, but broadly: be curious, and be open to tackling new problems. Does what you study matter? Does how you prepare yourself matter? I think so, but just having a big brain that can go a lot of different places, not being narrowly focused in one lane, matters more. Agentic AI solutions are inherently interdisciplinary, and I think being able to bring that mentality is a critical facet of being successful.
graham-wolfe_1_11-11-2025_100657Thank you for that. I think that's right, and it's definitely something we try to emulate as an organization here, researching everything across the entire spectrum of AI applications. So thank you for your thoughts, opinions, takes, and expertise. This entire conversation has been really fascinating, and we'll take a lot of it back with us as we continue to research the sociotechnical applications of artificial intelligence. Kevin, thanks so much for being here.