The ThinkND Podcast

Health AI Forum, Part 2: Innovation AI with the Mayo Clinic

Think ND



Episode Topic: Innovation AI with the Mayo Clinic (https://think.nd.edu/bq/healthai-2/)

Discover how Mayo Clinic is pioneering the future of healthcare. Go beyond the technology to see how AI is amplifying their historic, compassion-driven mission, freeing caregivers to focus on what truly matters: the patient. Emily Godsey, Administrator of Innovation & Digital Transformation for the Mayo Clinic, and Scott Helgeson, Doctor of Medicine and Assistant Professor at the Mayo Clinic, reveal a powerful vision for a more human-centered and proactive model of medicine.

Featured Speakers:

-Emily Godsey, MSHA, FACHE, Mayo Clinic

-Scott Helgeson, M.D., Mayo Clinic

Read this episode's recap over on the University of Notre Dame's open online learning community platform, ThinkND: https://go.nd.edu/30ec36.

This podcast is a part of the ThinkND Series titled Health AI Forum. (https://go.nd.edu/090c52)

Thanks for listening! The ThinkND Podcast is brought to you by ThinkND, the University of Notre Dame's online learning community. We connect you with videos, podcasts, articles, courses, and other resources to inspire minds and spark conversations on topics that matter to you — everything from faith and politics, to science, technology, and your career.

  • Learn more about ThinkND and register for upcoming live events at think.nd.edu.
  • Join our LinkedIn community for updates, episode clips, and more.
Speaker 1

Hi everyone. Thanks for joining us here. I like the small group we've got going in this room. We're from Mayo Clinic, specifically Mayo Clinic in Florida, and we're going to be talking about some of the innovation process flow that's happening at Mayo Clinic. I'm Scott Helgeson. I'm a medical doctor, a pulmonologist and an intensivist at Mayo Clinic. I'm in charge of the clinical side of digital transformation for the Department of Medicine, and I'm also a physician lead for the Ambient Clinical Intelligence Program at Mayo Clinic.

Speaker 2

Okay. I'm Emily Godsey. I'm an administrator at Mayo Clinic in Florida. My current assignment is with Digital Transformation, Florida Innovation, and our Berg Innovation Exchange that spans all of our sites. Before my current assignment, I was in traditional practice administration, so I spent quite a number of years there. I'll kick off our presentation. I didn't have this slide in at the beginning, but walking around Notre Dame's campus really inspired me and actually reminded me of where we came from at Mayo Clinic. In the mid-1800s, Dr. Mayo was stationed as a doctor in Rochester, Minnesota, where he provided your typical small-town care, but he was pretty innovative in what he did. He had a focus on innovative surgical techniques, and he had a couple of sons that he was training up in the family practice. Then a really bad tornado hit in 1883. You can see some of the devastation here; it really wiped out the town. As the town was rebuilding, there were some Franciscan sisters who were placed there. One of the sisters, Sister Alfred Moes, got to know Dr. Mayo, especially providing care after this tornado. During that time she had a vision, and I've outlined it there at the bottom: that there would be a great hospital within the cornfields here, and that the name Mayo would be known around the world for innovation and surgical techniques. Dr. Mayo's reaction was basically: you're a little crazy. No way. I don't think we should do that. I just want to be a small-town doctor. Nobody's going to come to these cornfields. He was resistant at the beginning, and there was a lot of anti-Catholicism in the area at the time as well. She said, no, listen, we're going to do it, and we're just going to drag you there if we have to.
And so it was really through her perseverance and fundraising that the first hospital was built, which led to Mayo Clinic. We also have really deep values that drive everything we do, and those come from the Franciscan sisters as well. They spell out RICH TIES: respect, integrity, compassion, healing, teamwork, innovation, excellence, and stewardship. Those were birthed out of this relationship with the sisters, and they still drive what we do today, with the patient always coming first. Sister Alfred Moes's vision was that any patient could come to Mayo Clinic at any time and get care. It didn't matter who they were or where they came from; they were there to serve everybody, for the good of the people. That has translated to our number one value now, which is that the needs of the patient come first. Any patient. All of that has resulted in the Mayo Clinic you know today. We have three academic destination medical centers: the mothership in Rochester, Minnesota; one in Phoenix and Scottsdale; and one in Jacksonville, Florida, which is where Dr. Helgeson and I are from. The design on the front of our hospitals is actually made to recall a kind of modern version of the nuns' habit, so they try to have at least one portico-type structure like that on each campus. Again, we have the three centers today, and more than one and a half million patients come through one of those doors every year. A little bragging, but we're the number one hospital in all of our states and in the world, and we're proud of what we do. Okay, so where does that bring us today? We see the importance of really transforming medicine using digital and technological capabilities. Putting on my digital transformation hat, which is where a lot of this AI-type work falls, this is how we approach our work.
Our vision is the same as Mayo Clinic's vision: to transform medicine, to connect and cure, to be a global authority for serious and complex diseases. That guides what we do. Go to any project, workstream, or office, and they're going to say: this is our vision, the same vision. Our mission within digital transformation specifically, where a lot of the AI infrastructure is being built out on the Mayo Clinic Florida campus, is all about the patients and the staff: how do we create personalized and frictionless patient care, and how do we create intelligent systems that simplify our care teams' work? We've been working on this for a long time, but officially, formally, for a few years now. What we've learned over the last year and a half or so is that data should lead to knowledge, and the knowledge we have should be translated into data that we can use. Our data is an asset, and it's an asset we should use to really scale and empower the people giving care to our patients. Then, digital to value: we shouldn't just do digital for digital's sake, or because it's shiny and cool. We should be doing digital in a way that aligns with our mission, our vision, and our values, but is also scalable. Anything we do should reduce redundancy. It should be scalable across our sites, across the physical locations and different sites of care. It should enable anybody to step into that infrastructure when they're ready and leverage it for innovative or transformative work. I always say my goal is for my job to be extinct within the next five years, because I'm not needed anymore: the infrastructure is there, where anybody can come, play, innovate, transform, and leverage the building blocks, the technological capabilities, that we're creating today. So why is Mayo Clinic leaning in on digital, on technology, on AI?
It's because we're making a few big bets, and those big bets are around growth. We believe our patient encounters, the number of patients coming through our doors, will continue to double; we've seen more than double growth over the past eight years, so there are going to be more people coming to us needing care, and we need to be able to serve them. We're going to continue to have space and access constraints. There's only so much land, and there's a high cost to building buildings, so that will continue. And access: with all of these people coming, the access to get the care, to get the lab tests, to see the providers will continue to be constrained. Then staffing will continue to be lean. That's something seen across the nation, not unique to Mayo Clinic: there aren't enough care providers for the people who need the care, and we believe that will continue. Knowing we've seen these three things, and betting that they continue, makes us realize we can't practice medicine or provide care the same way we've been doing it. We have to transform and do it differently, and we see digital, technological solutions like AI as one of the keys to unlock the door to continue providing care in the future. Recognizing that, we went back and asked: what is special about Mayo Clinic? One of my physicians looked at me and said, there are great doctors at all sorts of hospitals, and this integrated, collaborative model of care isn't new anymore; you can find it at many academic medical centers, all across the nation. What he sees at Mayo that's different from the other places he's worked, or where his colleagues work, is that even in the darkest times, when things are unknown, our patients still feel joy. He still sees smiles on people's faces. They're interacting.
There's just this sense of human connection. And so we thought: what makes Mayo magic? It's the people. It's the people who work there, and the compassion and the human connection. So we wanted to build a moat around that. That's our superpower: our people, and the joy they can inspire. We want to build a moat around that and amplify the power of the people. And how do we do that? Our solution, what we're charging toward for now, is to make our campus a member of our care team. I should say we have a whole other group that's focusing on care outside the campus: hospital-at-home, virtual care in the home, things like that. We're asking, what can we do on our actual campus? And that is really to make the campus a member of the care team. So you have one more person, one more thing, on the care team: the room itself. And you have this room looking at everybody else on the care team and saying, how can I help you? How can I make you more human? The steps we're moving toward to do that: first, we need the room, the hospital, the building to be able to sense, to know what's going on. It's collecting data, it's connecting data, it knows what's needed when. We're doing that by building an agnostic arsenal of hardware that can play well together: ambient sensors, thermal-sensing cameras, humidity sensors. There are all types of sensors that we're starting to put throughout our care environment. Additionally, we have your more traditional data pipelines and forms of data. On top of that, the campus, or something, needs to be able to think. We have all of that information; what do you do with it? That is the models, the processes, some of these digital systems that can be put in place to take that data and think with it.
So, for instance, we're working on an algorithm that uses computer vision and audio from cameras to look for early warning signs of delirium in hospitalized patients. You have these sensors able to see what's going on, eyes and ears in the room at all moments in time, when a staff member can't be; we can't sustain having a staff member in the room at all times. These sensors feed into algorithms, which may or may not be paired with data from the EHR, that ask: what does this look like for this specific patient, compared to what we would expect leading up to delirium? And when it starts to see that things look like early warning signs of delirium, it moves to that next stage: act. It sends a notification to a nurse and says, hey, you should come look at this patient; we're seeing early warning signs of delirium, and we think you can step in and do something about it. Right now, if we think a patient's at risk for delirium, a nurse might come in every three hours or so and do a CAM assessment to look for the early warning signs. That's at Mayo Clinic, where we're fortunate enough to be well staffed, so nurses carry smaller patient loads; in other hospitals, it's even harder to get in there and do those assessments. This is a way for an assessment to be happening every second. It runs on an edge device, so it's continuously looking for these warning signs and getting the information to the right person sooner. So that's walking through an example there. I put the hands on the slide because that's my personal drive for why I do this. I accidentally wandered into a nursing event on our campus a couple of years ago, and it was the blessing of the hands. I was like, what is going on? It's something that has happened since
Sister Alfred Moes first helped with Mayo Clinic. When the new nurses come on, they do a big event and bless their hands, to send them out to care for patients. Walking out of that, I thought: I can help with this. The teams can help with this. Because right now, so many hands are behind keyboards and so many eyes are looking at screens. I don't want to bless a hand to type on a keyboard, though that's still important; let's bless the hands so they can touch people, and so their eyes can be looking at people. So again: how do we make the campus a member of the care team, to sense, think, and act in a way that puts all of those hands back where they need to be, creating those human-to-human moments that can't really be created any other way yet? Now I'm going to dive into a brass-tacks example of how we're doing AI at Mayo Clinic in Florida. Again, I'm an administrator, so it's all about resources and funding and putting teams together, not down in the technical weeds, where I don't want to be. We knew there was a lot of clinical and administrative AI happening in Florida, and it was getting stuck. It wasn't going anywhere. So in 2024 we asked: how do we find a way to give momentum to everything that's happening around our campus? We built a team called the AI Accelerator. It's a bit of an experiment, and it's the first time we've tried it. We have this embedded team of experts on our campus that is there to assist with giving some velocity to the AI innovation happening in the space. Right now, as somebody mentioned earlier, the Mayo system and structure is incredibly complex. A lot of people don't know what other people are doing. We're highly matrixed. There are tons of committees.
And so these people are meant to help the innovators navigate that, take some of the burden off of them, and give them dedicated engineering, development, and architectural experts to help move their innovation forward, so they don't have to worry about figuring all of that out or navigating the super-complex Mayo system. Right now we have six people on the team: a product owner, software engineers, AI/ML experts, people like that. And we work with leadership on our campus to make sure we're prioritizing the right work, approaching it the right way, and enabling the right things. Something that was really lacking was prototyping: taking these ideas and prototyping them so we can start testing them out, then running proofs of concept on them in the practice so we can see how they're working, and then working toward scaling them. To go into a little more detail, we created these steps that any AI project can funnel into. In intake, someone submits their idea with their SAR, and we help them work through it: what's the strategic alignment? How does this help drive the Mayo mission forward? How does this get solutions, better things, in front of patients? Next we move toward analysis, and there are a couple of stages it goes through. They work with the technical experts, who do analysis, endorsement, and feasibility and fit scoring based on Mayo's cloud infrastructure, the data pipelines we have, how the model might need to be built, and how it could be built in a way that integrates right with all of our systems, starting to deeply understand all of those pieces. And then we also have this stage two: a cross-collaborative committee we put together. I lead this team, but then I wear my innovation hat and get to sit on that actual committee.
As part of my innovation role, we have someone from the Kern Center, someone from a digital innovation lab that we have, someone from business development, and someone from legal. It's really about getting people in the room to say: yes, we're behind this, and not only that, but what can we do to help? What roadblocks and hurdles can we go ahead and lift for this innovator, so they don't have to navigate the traditional committee structure and go through all of these bodies in an unguided way? We're going to help alleviate some of that from the start and connect them with the right people. We make a go/no-go decision. It then moves on, for the time being at least, to the next stage, which is the AI Accelerator team looking at it and saying, based on what's going on across our campus and our team right now (because, like I said, we only have six people), can we do this right now, or when could we do it, as we still test out the team and hopefully expand it? We then move on to prototype development, which is something that was really lacking before at Mayo. We bring in the engineers and the software people needed to build the prototype and start testing it out. Then part of my role is to have conversations across the organization to say, hey, here's what we're working on, to socialize it, to champion it, and to help find people who will provide continued funding and an operational home for it as it moves into the practice. There's some data down there: our goal was 10 to 12 projects reviewed, and at the time this was done, we'd reviewed 12. We wanted six to eight to move to the next stage.
We have a few more than that, all the way to the end, where we are currently prototyping two of the ideas. One uses agentic AI and generative AI to help with scheduling, and another has to do with the research side, helping researchers find the right patients for trials and communicating with those patients. Those are being prototyped right now. The team is also helping with the scheduling one, which started in pain medicine, to bring in other practices and start making the relationships to test out the concept and what they've built in cardiology, in neurology, and in other places, so we can scale it faster. And all of this is occurring in around a hundred days. In the past, it could take, quite frankly, years for something to move forward. We still haven't perfected it; there are only so many ideas that can come in here that we can support. Dr. Helgeson is going to talk to you about some more specific ideas in a moment. He'd probably say, hey, this is a great concept, but it's still blooming; we're still trying to have it grow so we can support more and more. And I don't want to say everything is perfect. Pulling up this slide, which I think has an animation: something my team recently worked on is scanning the ecosystem of AI across Mayo Clinic, asking how our AI Accelerator team, there at the top, fits within these five large brackets: discovery, development, validation, deployment, and impact. We're providing service in there right now. What are some other groups, like the Kern Center or business development that we mentioned? Where do they fit into the puzzle, and what are they providing right now? What are some upcoming initiatives that will help provide service to AI innovation? Then we noticed a gap, an opportunity, in the validation and deployment stages.
There's not as much support there, and quite frankly, you might be able to tell there are a lot of lines starting and stopping and blurring there; it's still a bit chaotic when you get to that point. So moving into 2026, we're currently re-envisioning what the team could look like, so we can add some additional validation and deployment support. That's the background on Mayo, how we're approaching the business side of it and helping with resources and teams. Now I'm going to hand it over to Dr. Helgeson, who's going to go into some specific use cases.

Speaker 1

Alright. Again, I'm a physician, so on the technical side, don't ask me any of those questions; I can't answer any of that. I'm going to be honest: from the medical side, we have a lot of difficulties in coming up with anything, not only buy-in from people and buy-in from the government (there's a huge, long process that has to happen if we come up with anything), but it even starts with buy-in from our own organization. So, some of the difficulties we have in trying to build everything Emily has talked about. The biggest thing, the first one up there, is data access. We get tons and tons of data from every single patient who comes into our hospital, constantly. Where does it go? Some of it goes to the vendor, some of it leaves, and most of the time no one has any idea. We don't save it, things like that. So we are starting to, and have been, saving every little bit of data. We're renegotiating all of our vendor contracts so we get all of that back to us. Over the years, the vendors have accumulated 20, 30, 40 years of data, and we're starting to get it all back. We are trying to manage every little bit of information from every single patient, the, what is it, 12 million patients, I think, that have been to Mayo Clinic over the years. We're starting to get all that information back, so we're able to keep on top of that first difficulty. Next: Mayo Clinic is a hospital. We're not a college, we're not a university. We don't have the staff to be doing these projects, so we're trying to highlight the best use cases, what can be done best. This is where we're trying to partner with other organizations and institutions that can help out, because we truly don't have the capacity to do everything that is necessary.
Emily kind of talked about deployment optimization, but I'm going to go through a couple of solutions that I've seen. From my side, I see it as bridging what I'm capable of doing, taking care of a patient, to the technical aspects that are going to be needed, which I don't know. Having people with me in my day-to-day practice, anyone who can do the technical aspects, to see it firsthand, helps that bridging, that partnership. There's a concept in research called ethnography, where you embed a person and they walk through your day-to-day practice. The way I think about my practice versus the way someone looking in from the outside thinks about it: they're going to ask, why do you do that? That doesn't have to be done that way. Why is it done like that? Well, I've done it that way; that's how I do it; it's hard for me to change. But someone else will say, this can save you 30 minutes of your time, and I'll say, I didn't think about that. So having people actually in the practice with me, and by other people I mean the technical people, most of you all, that is key. What I see is more of a digital lab in the hospital, where I've had people come and follow me around during my rounds while I'm taking care of patients, where they actually see what happens. There's quite a nice little rotation that comes through. The first project comes from what we are seeing with the new type of clinical imaging: not X-rays, CT scans, or MRIs, but cameras that we're implementing all over our campus, everywhere. Voice recording,
sound, different types of imaging. One of these projects comes from the patient room: what happens in the patient's room when no one is in the room? One of the research fellows was with me, and he was basically saying: okay, this patient is in the ICU. They just had surgery. They're in a lot of pain. But what happens? The nurse or someone comes in every couple of hours, or the patient has to push a call button and say, I'm hurting, this is what's happening. But during the time when no one's there, we can't actually know what's happening. We talked about delirium and things like that, but this one was about pain: using a camera focused on a face to visualize the brows, mouth movement, things like that, to take the subjectiveness out of pain assessment, and to move from reactive to a little bit proactive from a pain standpoint. The normal scale is, someone comes and asks you, what's your pain on a scale of one to ten, ten being the worst, one being the least? People are all over the place. It's supposed to be based on what you have experienced in the past, and that's how you judge, but people have ideas like: oh, if I say a two, no one's going to take me seriously, but if I say a ten, people are going to think I'm lying. Everyone's confused about what they're trying to say. Like I said, I don't know the technical details, so let's leave it at this: it's based on taking external databases, training on what the face does, and then using that for the subjective pain assessment, to have a model program that says, this person is in pain and this is what we think the pain level is, or this person is not in pain.
You can see the external databases up there, and we have the areas of interest, which would be the whole face, or only the brow, eye, mouth, nose, wrinkling of the forehead, different things like that. He named his model YOLO, for You Only Look Once; it's supposed to only look at the camera once, so he thought that was pretty clever. We have these action units: brow lowering, cheek raising, upper-lip raising, nose wrinkling, eye closure. They each have a different weight for how much pain someone is in. This is the PSPI, the Prkachin and Solomon Pain Intensity score; sorry if I butchered those names, but that's what this is based on. There are some of the technical aspects: this is through Python, which is pretty standard. And this is the workflow for the model creation: we took the external databases, did the model creation, and put it through a pain / no pain model first, whether they are in pain or not. Then we take it further and say how much pain they are in. From my side, pain versus no pain is very important, but how much pain they're in is also important. If they just had surgery, say three days after the surgery, are they still in as much pain as they were on day one? That's a problem; they should be improving, and pain should be a little less as they go. Or is it more pain than a couple of days ago? How much pain determines what medication I use for that level of pain: whether I'm giving an IV medicine, giving Tylenol, something like that. So it's able to tell me roughly how much pain a person is in, and it's pretty good, measured against that standard practice, the one-to-ten score. This is what it's actually looking at. This is from one of the external databases; that's why her face is partially blacked out.
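For readers who want to see how those weighted action units combine, here is a minimal sketch of the published Prkachin–Solomon Pain Intensity formula. The function names and thresholds are illustrative assumptions; the talk does not describe Mayo's actual implementation, only that it is a two-stage (pain / no pain, then intensity) Python workflow built on facial action units.

```python
# Sketch of the Prkachin-Solomon Pain Intensity (PSPI) score.
# PSPI combines FACS action-unit (AU) intensities (each rated 0-5,
# except eye closure AU43 which is 0/1):
#   PSPI = AU4 + max(AU6, AU7) + max(AU9, AU10) + AU43
# Estimating the AU intensities from video (e.g. with a YOLO-style face
# detector plus an AU-regression head) is assumed and not shown here.

def pspi(au: dict) -> float:
    """Compute PSPI (range 0-16) from action-unit intensities.

    au4: brow lowering      au6: cheek raising    au7: lid tightening
    au9: nose wrinkling     au10: upper-lip raising
    au43: eye closure (0 or 1)
    """
    return (au["au4"]
            + max(au["au6"], au["au7"])
            + max(au["au9"], au["au10"])
            + au["au43"])

def assess(au: dict, threshold: float = 1.0) -> tuple[bool, float]:
    """Two-stage output like the talk describes: pain yes/no, then level."""
    score = pspi(au)
    return score >= threshold, score

in_pain, score = assess({"au4": 2, "au6": 1, "au7": 3,
                         "au9": 0, "au10": 2, "au43": 1})
print(in_pain, score)  # True 8
```

The hypothetical `threshold` is just a gate for the pain / no-pain stage; in practice that decision would come from a trained classifier rather than a fixed cutoff.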
But this is what the model's picking up on: all those areas of interest we were talking about. This is real-time measurement of pain, versus our standard "hey, are you in pain? What do you got?" Now we're taking this into settings of delirium. We're taking this into: is a patient getting sicker in the hospital? Are they not moving as much because they're sicker? Are they going to require the ICU? Are they going to have a cardiac arrest, just by looking at them and visualizing their movements, what their face structure is doing, everything like that? And the sound: are they getting a new cough? Is the cough changing? Are they developing pneumonia? Different things like that that we're able to pick up just from having these sensors in the room. Next: how can we have a better unified structure for what these AI tools can run on, a platform, an API that feeds into the platform? This is where I was saying we're trying to get a lot of this data back. The next project I'm going to talk about is a hospital-based project we're working on now, and it takes all the vital sign monitoring: telemetry, which is EKG patches getting your heart rate and heart rhythm; pulse oximetry; respiratory rate; temperature; things like that. It's recorded and kind of goes out into the ether; we're looking at it all the time, but for most people it's never saved, and nothing ever happens with it. So this comes down to: I'm in the ICU. I deal with the patients whose blood pressure, let's say, is extremely low, and they need me to get their blood pressure up. What happens, again, when someone's not there? We're already acquiring all this data, so why not utilize it to see what can actually be better?
What this is looking at is continuous blood pressure monitoring from a very non-invasive means: pulse oximetry, in this top picture. I don't know if you've ever had that, but it's a little thing that just goes on your finger and measures your oxygen saturation, and that's it. A pretty innocuous, pretty simple device. What I'm doing is using the actual waveforms from that, together with what we call an arterial line, where I put an invasive catheter into someone's artery (neck, shoulder, groin, wrist, things like that) to get a direct measurement of someone's blood pressure. That is the most accurate means we have of measuring it. You also have the standard blood pressure cuff as well. But we're using the pulse oximetry to then measure continuous blood pressure just as if we had an arterial line in place, when we actually don't: something that doesn't need a specialist to insert it, and that avoids the dangers and risks of putting an arterial line in. The other thing comes down to how pulse oximetry works. We're using light, infrared and red. What does skin tone do to it? It's a very well-known issue: everyone's skin tone will affect it; it can lower or raise the reading, it will alter it. There have been lots of studies, especially recently during COVID, showing that people with darker skin tones weren't treated as aggressively, because the reading showed their oxygen level was fine when really it was much lower. So we are taking into account the Monk skin tone scale. It's basically a color scale; you can walk up to someone and match their skin tone to it. And we're using this in the models we're trying to develop.
Part of how this works is, from the patient, you get your vital signs, and you have this cloud-based platform that's taking all these measurements into the cloud. We're then able to clean the data, de-noise it, filter it, and run our models to get what the actual measurements are. That's the picture now; the future is just gonna be a device that directly tells you what your blood pressure is. When we look at what the pulse oximetry waveform actually looks like, this is it at the top. There's some wander to it, there are peaks, there are troughs, it goes and goes, and that's how we're getting the blood oxygen. So you've gotta do some pre-processing to take out that baseline wander, up and down, up and down. Bad quality, you're moving your hand around: it's gonna be a bad signal. Perfectly flat: best signal. So we're trying to get the best signal quality to get our best measurement results. We're doing peak detection, feature extraction, multiple different types of domain analysis, and we're using upwards of about 150 different features from the waveforms we're getting. With this pretty non-invasive means, we're within about two to three millimeters of mercury of someone's actual blood pressure, and that's mean arterial pressure, systolic, and diastolic. So if you consider someone's normal systolic blood pressure is 120, that's roughly 2 to 3% error, just from something pretty non-invasive. And part of this is that it's near real time; you can see how quickly this can run the model to actually tell you what the blood pressure is. So pretty, pretty quick. Our 12 million patients, they keep coming back, keep coming back. How do we get access to that longitudinal data?
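A rough sketch of that kind of PPG preprocessing and feature extraction, assuming a 125 Hz signal. Everything here is illustrative: the function names, window sizes, and synthetic waveform are assumptions, and a real pipeline (with ~150 features plus inputs like Monk skin tone) would be far richer:

```python
# Toy PPG (pulse oximetry waveform) pipeline: baseline-wander removal,
# peak detection, and a few features. Illustrative only, not Mayo's model.
import numpy as np

def remove_baseline_wander(ppg, window=125):
    """Subtract a moving-average baseline to flatten slow wander."""
    kernel = np.ones(window) / window
    baseline = np.convolve(ppg, kernel, mode="same")
    return ppg - baseline

def detect_peaks(ppg, min_distance=40):
    """Naive peak detection: local maxima separated by a refractory distance."""
    peaks, last = [], -min_distance
    for i in range(1, len(ppg) - 1):
        if ppg[i] > ppg[i - 1] and ppg[i] >= ppg[i + 1] and i - last >= min_distance:
            peaks.append(i)
            last = i
    return peaks

def extract_features(ppg, fs=125):
    """A handful of the waveform features a model might use."""
    clean = remove_baseline_wander(ppg)
    peaks = detect_peaks(clean)
    intervals = np.diff(peaks) / fs if len(peaks) > 1 else np.array([np.nan])
    return {
        "pulse_rate_hz": 1.0 / np.nanmean(intervals),
        "amplitude_mean": float(np.mean(clean[peaks])) if peaks else float("nan"),
        "interval_std_s": float(np.nanstd(intervals)),
    }

# Synthetic 10-second signal at 125 Hz: ~1.2 Hz pulse plus slow 0.1 Hz wander
fs = 125
t = np.arange(0, 10, 1 / fs)
ppg = np.sin(2 * np.pi * 1.2 * t) + 0.5 * np.sin(2 * np.pi * 0.1 * t)
print(extract_features(ppg, fs))
```

In practice a library routine such as a dedicated peak finder and frequency-domain features would replace these naive loops; the point is only the shape of the pipeline the talk describes.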
How do we get access to it? I'm not gonna talk about radiology, pathology, genomics; we just heard from someone over here about genomics, but I'm not gonna be talking about that. It's more that longitudinal piece. This is related to patient care, and we do quite a bit of this at all the sites, but Mayo Clinic, Florida specifically is a huge transplant center. So this is geared towards lung transplants. One of the ways a lung transplant fails is what's called chronic lung allograft dysfunction. Typically the monitoring is another reactive measure: someone has to get a pulmonary function test every couple of weeks, then every couple of months, and we're looking at changes in the ability to exhale air, which is that FEV1 decline there. We go by, oh, if it declines by 20% from the prior two, then you potentially have this and we need closer monitoring, maybe a biopsy, then you may need more immunosuppression, different things like that. So this has a huge clinical impact. But what if we can take those measurements, go back as far as we can, and predict: let's say you are this likely to get this rejection in the next 30, 90, 180 days, instead of just saying you might have it now. That trajectory is what is going to save these people's lives, because biopsies and more immunosuppression all come with risks. So this is part of that longitudinal access. On the far left, these are some of the vendors that we use. They had all the pulmonary function test data. We worked with them and started to develop an API program that now pulls that data into our cloud-based platform, where we're able to use all of our patient data to actually get these types of projects up and going.
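The reactive FEV1 rule described above can be sketched as a simple threshold check. The baseline definition (mean of the two best prior values) and the numbers are simplified assumptions for illustration, not a clinical tool:

```python
# Illustrative FEV1-decline flag for possible chronic lung allograft
# dysfunction (CLAD): flag when FEV1 drops 20% or more below a baseline
# taken here as the mean of the two best prior values. Simplified sketch.

def flag_fev1_decline(fev1_history, current_fev1, threshold=0.20):
    """Return True when current FEV1 has fallen >= threshold below baseline.

    fev1_history: prior FEV1 measurements in liters
    current_fev1: today's FEV1 in liters
    """
    if len(fev1_history) < 2:
        return False  # not enough data to establish a baseline
    best_two = sorted(fev1_history, reverse=True)[:2]
    baseline = sum(best_two) / 2.0
    decline = (baseline - current_fev1) / baseline
    return decline >= threshold

# A patient whose FEV1 drifts from ~3.1 L down to 2.4 L
history = [3.0, 3.1, 3.05, 2.9]
print(flag_fev1_decline(history, 2.4))  # baseline 3.075, ~22% decline -> True
print(flag_fev1_decline(history, 2.8))  # ~9% decline -> False
```

The speaker's point is that a predictive model would replace this purely retrospective check with a forward-looking risk over the next 30, 90, or 180 days.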

Speaker 4

yeah.

1

Okay. But what I can say from the clinical side is, if you say, I'm pretty accurate that you might have rejection in the next 30 days, I'm gonna say, okay, we're gonna get this test now, in a couple of days, instead of continuing on with your standard two weeks. And in two days it might drop more, and then it tells me you have this and now we need to do something else to treat it, rather than waiting the couple of weeks, 'cause those couple of weeks can completely fail the transplant. So picking up something, or alerting us from the clinician side that something may be wrong, even if the model isn't that good, it still tells us. If it gives us that warning sign to say something may be coming, we do a little bit of closer monitoring and take care of it a little bit quicker. That's what's important about it: even if the model's numbers don't look good, it can still have a huge impact on the patient. Again, the whole point was that, yes, the numbers don't look good, but for the 30-day model, the density map shows a nice split. Once you start getting out past 50 days, the densities of the diagnoses kind of skew back towards each other. So from our side, I was trying to come up with something for the rise theme, but I kind of don't even want to talk about that. I just wanted to talk about the clinical side. The most important thing for me is to work with data. It could be anything from standard statistics to advanced modeling, anything that can help me do my job more efficiently, be proactive and not reactive. That is very key when minutes can be someone's life in the hospital, or to give me a better inclination to say, okay, there might be something going on. Instead of our standard protocol that has been established for years and years and years, let's break that and maybe do a little bit of closer monitoring.
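The risk-driven monitoring decision described here amounts to simple triage logic. This is a hypothetical illustration; the probability cutoffs and intervals are invented, not Mayo Clinic's actual protocol:

```python
# Illustrative mapping from a model's predicted 30-day rejection risk to a
# follow-up interval, replacing a fixed two-week cadence. All numbers are
# invented for illustration.

def next_test_in_days(p_rejection_30d: float) -> int:
    """Map a predicted 30-day rejection probability to a follow-up interval."""
    if p_rejection_30d >= 0.5:
        return 2    # high risk: retest within days, not weeks
    if p_rejection_30d >= 0.2:
        return 7    # moderate risk: closer monitoring
    return 14       # low risk: keep the standard two-week cadence

print(next_test_in_days(0.65))  # 2
print(next_test_in_days(0.30))  # 7
print(next_test_in_days(0.05))  # 14
```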
I think that, to me, is more of the rise topic, just by talking now, more so than what I put in earlier: the ability to save people's lives is, to me, the most important thing. So that is all we have. I don't know if we have time for questions, anything. Small enough room. Let's go. Yeah.

Speaker 4

All right. I'll talk loud, so I don't need a mic. I've been sitting here smiling the whole time because I actually trained at Mayo.

Speaker 5

Mm-hmm.

Speaker 4

And I was in Rochester,

Speaker 3

so I did my internal medicine training in Rochester, and I was there in the dark ages when we still had the packets of the papers folded in half that went back to literally the early 1900s and would go through the chutes and come in on the trays and the trolleys. Yeah,

1

that's one patient's. Just like,

Speaker 3

mm-hmm. Yeah. So I know it feels like you guys are slogging through and maybe not working as fast as you could be and everything else, but it's just so exciting to see how far Mayo's come and all the exciting things to go. My question really is around the data and access to data. I was a practicing physician for many years; I'm now a chief data and analytics officer at Surescripts, and access to data and permission to use data matter for all of the amazing insights and knowledge we know we're capable of producing, if we would actually start doing analytics on this data. But there's the sensitivity of it. We have somewhere in the neighborhood of about 28 billion transactions a year.

Speaker 2

Mm-hmm.

Speaker 3

That for the most part we can't touch. And so we're going through the same journey of trying to get data rights, trying to get the proper permissions to use that data, just 'cause we know there's so much incredibly valuable information in there, if we could just unlock it. Any advice, anything that's working or not working for you when you're going back to your data sources? Or in your case, it's a lot of your own data that you're trying to get back.

Speaker 5

Yeah.

Speaker 3

We could probably talk offline, but just in general, thoughts on how to unlock this highly sensitive data safely, because we know what incredible potential it has.

1

I will say that every time you deal with a vendor that has the data and you're doing the testing, it's taken me a couple of years to get it. So it's not a fast process. From my side, it's more that we push the weight of how much money we spend with the vendor to get them to put it in their contract, and it's the renegotiation of the contracts. Everything that was set in stone before mentions nothing about the data. That wasn't a problem then and is a problem now, but I wouldn't say it's a failure from before; it just wasn't known. And now to renegotiate the contracts, that's gonna take a long time. The most recent one was with that last study: it took two years for them to then create an API. The other problem is they had the raw data and they didn't understand their own definition of what the data was, because the person who made that definition no longer worked for the company. So then we had to go back and get all their documents to sift through all the data. But now, when we're going through new contracts and things, we're making it clear: this is ours, we're taking it back, you're giving us the definitions and the raw data, even though it's lots of storage. We're taking that and we're trying to completely understand what's happening. So that's from the clinical side. I don't know if

Speaker 2

you have anything else to add. Yeah. You know what's going on at all the sites, I think. But yeah, we're renegotiating a lot of contracts. And something that was tough for us is the contracts are created out of many different areas; nursing might drive contracting, for instance. Nursing drove our contracting with Philips, and we didn't have any of that data, so we had to renegotiate that contract. Then we have different practices with the contracts, so part of it is we've centralized some of the legal resourcing for creating and reviewing the contracts. We created a specialized review body that looks at all of the contracts and asks these questions about data upfront, to try to prevent us from getting into future situations. And then of course going back through the contracts, some because we know there's data we're interested in, some because a specific use case comes up and we're like, oh, now's the time to renegotiate it, we need it now. And then even within our own data, there's trying to grapple with the patient consenting process, to make sure that we can collect the data and then use the data that we own in the ways that we want later. That's something we're still working through, quite frankly. And Mayo Clinic wants to be very transparent with patients about that. So of course that results in a lot of skepticism, concerns, a lot of opting out. And then having to build the internal infrastructure to know which patients you're going to collect data on or not, based on that consenting, becomes a bit of a nightmare. So we're still trying to solve all of that now, and that's a big piece of it too. And then within IT leadership, we've gone out and we've been trying to really socialize with the teams that everything they do in the digital health space, the product teams, the teams dealing with the vendor products, is to always be asking themselves: okay.
I don't wanna just solve for this specific use case. I don't wanna just put this camera in for a virtual visit. How do I put this camera in in a way that gives us access to any data it might be able to collect that we might want at any point in the future? So taking this broader-brush approach to everything that we're putting in place and asking those questions, rather than just walking through the steps of getting product A in this building.

1

And then the storage piece too. You gotta have the storage capability.

Speaker 2

Oh yeah. And there's ongoing fights, people rolling up their sleeves, about who's gonna pay for what and all of that

Speaker 4

security right now. Mm-hmm. The other issue we have is the more you store and keep, 'cause as the data people, we want it all. And then the security people are like, every piece of data you have is a risk, 'cause if you have a breach, then it's just, yeah.

1

And then even if someone else stores the data, you have the risk of them losing it, or what do you give them? So we're dealing with that too.

Speaker 4

Yeah. I, I have a big voice. That's alright.

Speaker 5

I appreciate the talk. I wanna go back to the laying on of the hands. On the clinical side, how has your work changed from a physician perspective? Are you taking the video consults in your office now? And then for nursing, because you may not have a nursing shortage, but others do: how are you taking advantage of that technology, and how is it driving your workday?

1

So for inpatient consults, patients in the hospital I do not see virtually; I'll always go see them in person, because of how we're set up at the hospitals, it's all one place, so it's not big. But am I still seeing outpatient visits virtually? Yes. And we can even send vital sign equipment, at-home spirometry, and other equipment to a patient's house to get that testing done at their home and then get it back to me. For patients that live hours away from us, that has helped out quite a bit. So that's more of a patient satisfaction and an ease of care I can give to them. The other thing is, I didn't talk about this, but we do have these self-made decompensation scores in the hospital, so I can pull that up, and when I'm in the ICU, I may not be called on these patients, but I will go walk around our hospital and look at these patients myself to see: should I take 'em to the ICU, or should they stay on the hospital floor? So it adds another layer for me to be a little more aware of what's happening in the hospital that I may be involved in, where prior I would have been completely clueless, and I can help people out a little earlier. That's what I'm seeing a little bit more now. In the future, I see it helping with documentation. From the nursing perspective, yes, there are fewer nurses, but most of the nursing job is dealing with documentation: I documented this, I asked the pain score, I turned the patient, these are the vital signs, this is what was checked, things like that. What can we do to improve that so they're not burdened by all that documentation and can actually be at the bedside more frequently? That's what we're trying to get at. I guess that kind of answers that shortage type of question.

Speaker 2

I know that with the cameras we have some capability for the providers to soon start doing consults virtually, but it goes back to that magic-of-Mayo piece too. We always grapple with it. There might be hospitals where they wanna do everything virtually if they can, but it's really important for us that the person actually goes and is there with the patient. So we're asking ourselves: what are services that the patient might not get because we have smaller staffing, or maybe it's an additional service we can offer because we can now offer it virtually, like pharmacy or nutrition consults, things like that. And then we've been having to grapple with the billing component of having the physician do the consults virtually. I don't know all of the details, but there are nuances as to when we can and can't bill. We are making some decisions to say, well, we'll just not bill or not collect on certain things in order to leverage the virtual capabilities, but not everywhere is able to do that. So it's a lot of balance there. I think with nursing there is a lot of opportunity. We have the virtual nursing program, and that's highly utilized, and the nurses we have in the hospital enjoy that virtual partner. So it's not always that the virtual nurse is there alone; it's often both of them at the same time, or one of them helping with, hey, the prep for discharge, and then the physical nurse is coming in. We have some ongoing projects right now using the cameras and some of the ambient sensors to help with more accurate nursing acuity scores that help us staff our different hospitals. Again, looking at things like: hey, they're charting that this is what's going on, but we're actually seeing they're spending more time in the room, or they're having to go into this room more than they said they did. Their acuity score should be higher.
We should staff more nurses on this floor because of that.

Speaker 6

So I just wanna reiterate kudos for what you're working on. I have two questions. One: what's your patient, family, and caregiver acceptance rate of actual cameras in the room? Now, I realize being an inpatient is a very different setting than cameras in the home, but I was just curious: how do you broach it? How do you get that acceptance level? And the second is a little more technical, 'cause you're talking about data and using facial expressions for pain. But we know there are certain diseases, Parkinson's, where you have no expression. How do you bring that data source in? Or they may be Botox users and you get no expression. So I was just curious how you're integrating data that we normally wouldn't have paid attention to.

Speaker 2

Yeah.

1

The second one, I'll say, we can't use them. Quick answer, right there. No, it comes with all of this. This is the difficulty of the medical side of anything that's been talked about here: there's a nuance to every person that is seen that doesn't fit the model. Let's say you examine someone's gait. If they have a neurologic disease, it'll be one certain way, but let's say they had a prior injury to the leg: it's gonna look like that, but they really don't have the other disease. That's very patient-specific. But this is where the belief is that these will never replace the physical hands on a patient, which goes back to the laying on of the hands. This will just be there to augment our intelligence and help with that. So for that second question, it may still pick that up, but then we're always gonna have to go verify everything to make sure. Yeah. And do you know the,

Speaker 2

On the first one, I can talk about that. We have cameras in probably over 300 rooms now, and it'll be over 400 rooms by the end of the year. We have a Florida-specific secondary consent that a patient signs either in the ED or through registration; we're working on a more global consent. These are very specific to these cameras. We have a declination rate right now of anywhere from five to 10% of patients who refuse to have that camera turned on for ambient monitoring. We don't allow them to opt out of virtual nursing and things like that, for instance, but they can opt out of ambient monitoring. We have AI models that run for delirium, fall detection, pressure injury prevention, and hygiene; there are some hygiene models, things like that, that they can opt out of at this time. Five to 10%. When we first rolled the cameras out, it was closer to 15 to 20%, and we went back and said, what's going on? We actually set up mini patient focus groups, ran some of our documentation by them, and talked to them about what we were trying to do. The patients actually gave us a lot of guidance. They said, man, this is really scary-sounding. So what we did is we went back and rewrote all the documentation to start with the value statement and say: we're doing this because we want to care for you better. We're not trying to replace anything else. We're not doing this to monitor you for no reason. To really explain what the value was, with them there, and then to train all the different staff to talk about it in the same way, and to train the physicians to be more upfront in having conversations with the patients about why the cameras are there. They trust the doctors. And then even going to the outpatient side and saying, hey, your patient is a bone marrow transplant patient, they're gonna be on the third floor in our hospital, we have a camera in that room.
Will you go ahead and bring up that it's gonna be there, and the value of it being there? So by the time they had that consent form in front of them, they were like, oh yes, Dr. Cher told me this is gonna be really important, it's gonna help my care. And that's when we saw the rate drop. If somebody declines, of course, there are all of those complicated pieces in the background to make sure that the technology is turned off. It's all automated, without a human having to do it, so that we're not collecting the data to use it for various things. Right now we can use it for different things depending on how they consent. Then we go into the rooms and put a physical cap over the camera. That does nothing, really; it's just that we found a lot of patients would put towels and things like that over the camera, even though it had been deactivated. They wanted to physically see that it's off. Of course, that can be burdensome. It could be harder in other health systems, I would think, because it requires someone to go in and take it off if we need, say, a virtual nurse to come into the room.
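The consent-gated monitoring described in this opt-out flow could be sketched roughly like this. The model names mirror the ones mentioned (delirium, fall detection, pressure injury prevention, hygiene), but the gating rules, function names, and deletion step are illustrative assumptions, not the actual system:

```python
# Illustrative consent gating: some camera uses (e.g. virtual nursing)
# cannot be opted out of, while ambient-monitoring AI models only run
# when the patient has consented. Hypothetical sketch, not Mayo's code.

ALWAYS_ALLOWED = {"virtual_nursing"}
REQUIRES_AMBIENT_CONSENT = {"delirium", "fall_detection",
                            "pressure_injury_prevention", "hygiene"}

def allowed_models(ambient_consent: bool) -> set:
    """Models permitted to run given the patient's ambient-monitoring consent."""
    models = set(ALWAYS_ALLOWED)
    if ambient_consent:
        models |= REQUIRES_AMBIENT_CONSENT
    return models

def on_declination(recorded_segments: list) -> list:
    """When a patient declines, previously recorded data is deleted,
    as the speakers describe, and only consent-free uses remain."""
    recorded_segments.clear()
    return recorded_segments

print(sorted(allowed_models(True)))
print(sorted(allowed_models(False)))  # ['virtual_nursing']
```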

1

And then if they say no, we make sure to delete everything that has been recorded before so they're well aware that everything is gone.

Speaker 2

Yeah. And the de-identification part is tough. We have a strategic partnership with one of our vendors, so we've been able to work quite a bit with them on coming up with new de-identification methods. It gets harder when you need the face, for instance. So we talk through, with each use case, how do we blur and only keep the parts we need? We're working on computer-vision vitals, where you can use the camera to measure things like blood pressure, pulse, respiration, and temperature, thermally. There are five of them that we're working on, and we're looking and saying, oh, you don't really need their eyes, I don't really need their mouth, I actually only need this part of their cheek and this part of their forehead, so let's only retain that. It's much harder to tell who I am if you can only see this part of my head versus my entire face. So we're trying to talk through how to get the minimum. And then we're also building those consents in a way where we could keep everything if we need to, but we're mainly only doing that under research consents right now. Research studies.
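The keep-only-what-you-need de-identification idea can be sketched with a simple mask. Coordinates, region choices, and the zero-fill approach are assumptions for illustration; a production system might blur rather than blank, and the vendor's actual method isn't described here:

```python
# Illustrative region-based de-identification: retain only the small facial
# regions a computer-vision vitals model needs (e.g. part of the cheek and
# forehead) and blank out everything else in the frame.
import numpy as np

def retain_regions(frame: np.ndarray, regions: list) -> np.ndarray:
    """Zero out every pixel outside the given (row0, row1, col0, col1) boxes."""
    mask = np.zeros(frame.shape[:2], dtype=bool)
    for r0, r1, c0, c1 in regions:
        mask[r0:r1, c0:c1] = True
    out = np.zeros_like(frame)
    out[mask] = frame[mask]
    return out

# A fake 120x100 grayscale frame with hypothetical forehead and cheek boxes
frame = np.random.default_rng(0).integers(0, 256, size=(120, 100)).astype(np.uint8)
rois = [(10, 30, 25, 75),   # "forehead" patch
        (60, 80, 15, 40)]   # "cheek" patch
deidentified = retain_regions(frame, rois)
print(int((deidentified > 0).sum()), "retained pixels of", frame.size)
```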

1

And we would love it if anyone wants to partner; we're very friendly.

Speaker 2

Yeah.

1

Usually, you don't have to say that, but it's.