The ThinkND Podcast
The New AI, Part 8: Trust, Talent, and the AI Colleague
Episode Topic: Trust, Talent, and the AI Colleague (https://go.nd.edu/7d5af7)
In this episode of The New AI, Accenture Managing Director and Notre Dame alum Jen Hall '98 joins student hosts for a conversation about how generative AI is reshaping the workplace. Hall discusses the shift from experimentation to scaled implementation, highlighting key themes like trust, transparency, and workforce readiness. Explore how AI is changing task structures, decision-making, and the expectations placed on new employees. With insights from both industry and academia, uncover how organizations and individuals are adapting to a rapidly evolving work landscape.
Read this episode's recap over on the University of Notre Dame's open online learning community platform, ThinkND: https://go.nd.edu/abc332
This podcast is a part of the ThinkND Series titled The New AI. (https://think.nd.edu/bq/ai/)
Thanks for listening! The ThinkND Podcast is brought to you by ThinkND, the University of Notre Dame's online learning community. We connect you with videos, podcasts, articles, courses, and other resources to inspire minds and spark conversations on topics that matter to you — everything from faith and politics, to science, technology, and your career.
- Learn more about ThinkND and register for upcoming live events at think.nd.edu.
- Join our LinkedIn community for updates, episode clips, and more.
Graham: Welcome, everybody. My name is Graham Wolfe. I'm a junior at the University of Notre Dame, majoring in business analytics with a minor in the Hesburgh Program in Public Service, and welcome to The New AI Project podcast. Today we'll be interviewing Jen Hall, managing director at Accenture and a Notre Dame graduate who double majored in history and finance. Her focus is on innovation and culture as well as the workforce of the future, and she's currently working with CalSTRS, the largest teacher pension fund in the world. We're really excited to have her on for a wide-ranging conversation about how the corporate world is adapting to this new variable of AI. Jen, anything I missed in that intro?
Jen Hall: No, thanks a lot, Graham. The only thing I'd mention is that all the views I'll be sharing here today are mine, and I'm not representing the views of Accenture or my client.
Graham: Understood. Thank you again for being on. For those listeners who are new to the program: we are The New AI Project, a research collective housed within the Technology and Digital Studies program at the University of Notre Dame. Our mission is to share the journey of learning about how AI is going to impact many different facets of our lives. We've segmented this project into six key domains, with different student experts assigned to each one, from Tech Titans, which is focused on the major tech players, to Taming AI, which is focused on regulation, to AI at Work, which is the column we have here today. There's really something for everybody, and we encourage our listeners to go check out The New AI Project on LinkedIn after this podcast. Also with us today are two of those aforementioned student experts, Annie Z and Ella Turani, and we're really excited to have them on. I'll pass it over to you two to do a quick introduction. Annie, why don't you take it away?
Annie: Yeah, I'm Annie. I'm a senior here at Notre Dame. I'm majoring in computer science and economics, and I also have a minor in Chinese.
Ella: I'm Ella, and I'm a sophomore. Same as Annie, I'm majoring in computer science and economics. I don't have a minor right now, but that's it.
Graham: Yeah, we encourage you to pick up computing and digital technologies. That's the plug for that minor.
Annie: I'm considering it.
graham-wolfe_1_04-18-2025_124815good. but they've done some really great work recently. looking at, some of the trade-offs between creativity and efficiency as that kind of emerges, in the corporate world, as more and more implementations, of AI products take place. so yeah, I've done some great work on that and, really excited to have them here as well. The first thing we'll, kind of kick this conversation off with is, an open-ended question to you, Jen. I'm curious, where do you see AI coming up in your work, and really broadly, how do you talk to clients about it? What do you tell them? What do they care about? when does it come up?
Jen Hall: Good question. First I want to say that I love that all three of you have majors that balance a technology point of view and a liberal arts point of view. Obviously I'm biased because I had that as well, but I think learning to think in those two ways is really critical, and it will come into the conversation as we think about the skills you're going to need in the future, and where and when generative AI and its associated technologies will affect how people do work, where they do work, and when they do work. Generative AI really came up a lot last year. People will say AI, but in this conversation we're talking about generative AI. Automation and artificial intelligence are already de facto; we're doing this regularly. You saw robotic process automation happening a lot with ongoing billing cycles, things like that, so AI has been in use in a lot of corporations for several years. What's new about generative AI is the capabilities it shows. When we first talked to clients about this last year, we framed it as an evolution, not a magic wand. Clients were curious but cautious, and some of our clients in the public sector were very cautious, because there are obviously ramifications when you're serving your constituents; you don't want to be overly aggressive with something when you really aren't sure what will happen. So we were emphasizing understanding before implementation, and thinking about real business needs, not just trying to have a headline out there. Now, a year in, the conversation has shifted: we're moving from "what is it?" to "how do we scale this, and how do we scale it responsibly?"
That responsible AI is definitely a trend we're seeing a lot of, and we talk about generative AI less as a tool and more as a partner. You've probably seen this in the way you interact with ChatGPT or Claude or some of the other common platforms: you use it almost like you're having a conversation with it. It can augment existing work, but also challenge your mindset, especially if you prompt it to do so, prompt it to ask you questions, say "I want to think differently about this; what am I missing here? What am I not considering?" So where we're really focusing now with clients is trust. Trust is one of the most critical things, and I think it's going to become even more of a trend over the next six to eighteen months: the trust frameworks and the governance that surround generative AI, and workforce readiness, the readiness to consume it. Even as much as people focus on models, small language models, large language models, data, you have to have your workforce ready to use it, and use it in the right way, before you can unlock the power of generative AI.
Graham: Yeah, that's all really interesting. I'm hearing a lot about, like you say, the trust, the kind of permission to go forward with this new variable. It's very much a rhetorical conversation, a rhetorical balance.
Ella: Yeah.
Graham: Getting the wording right, getting people on the same page. It's interesting to see that playing out in the news cycle we're keeping up with as writers, but also behind the black box, from a position like yours at Accenture, where you're actually talking to the folks who are taking up these new products and implementing them, it sounds like, cautiously and responsibly. Very interesting. Certainly some people, including our writers, have raised concerns about AI contributing to what could be called intellectual outsourcing, or deskilling as opposed to upskilling, where convenience might start to erode critical thinking and creativity might fall by the wayside in favor of efficiency. I'll start off by passing this over to Ella, who just published a great piece last week about this exact trade-off. Feel free to let us know your thoughts on that, Ella, and then we can pass it back to Jen and see how that might be playing out in the real world. Ella, take it away: let us know what you found in your research about that trade-off.
Ella: Yeah, perfect. So last week we wrote an article basically about the creativity trade-off, and it's interesting you bring up that teammate-versus-tool framing, because a big part of the article is how companies can teach their employees to think of AI as a teammate, as opposed to something you send your work to that just feeds it back to you. You need that interaction to maintain critical thinking. But in the article we also talked about how, even when you view AI as a teammate, you shouldn't defer to it as a smarter coworker. You shouldn't think of it as something that's better than you, because you need to have that interaction and have confidence in yourself. That's where the article went: there were certain studies, I think from Microsoft, saying that people who are confident in their own abilities are able to use AI in a more efficient and positive way, because they aren't treating it as something smarter. They're taking those outputs, rereading them, and interacting with them, rather than just accepting them at face value. I think that's an important distinction, because that's where the creativity and the critical thinking may falter: when people begin to just copy and paste the output, as opposed to saying, you know, AI can get things wrong, or maybe my way is better for this situation than the output AI is producing. So I think it's about that interaction, and not an overreliance, that really helps people and innovation flourish.
Jen Hall: I agree with everything you said, and it was a well-written, well-researched article too; I learned from it, which I appreciated. I definitely think AI can diminish some of the strategic or creative capabilities people have if it's used as a crutch instead of a catalyst, so what you were saying about the ways of working and the confidence of the worker is very important. We've seen teams outsource drafts to gen AI and then not even think about it. I've been on the receiving end of emails that were so clearly written by ChatGPT or something that you think, you didn't even think about this; at points it's not even making sense. Strategic thinking doesn't really begin with the output; it starts with questioning, at the beginning of the process. What generative AI can do really well is spark curiosity and help you break through a mental block. Brené Brown, and you may have to go look this up for yourself, used to call it the SFD, that rough first draft, where you get stuck by a blank piece of paper and it's hard to get started. Generative AI can give you that initial catalyst, something to start from, and then it gives you time to think more deeply. But only if you hold yourself accountable to using that time more deeply. If you just use generative AI to knock out a bunch of tasks and then take on more tasks, and you're not setting aside time to think strategically, to say, okay, now I'm going to really consider the research I've done and the questions I've asked, and now I'm going to go do something, that's where the trust and the critical thinking really come in. AI can only go so far.
It's our human judgment about when to use it, the questions we ask, and the things we discard that builds value. The other thing I'd say is that you really have to be thoughtful in understanding where your language model is sourcing its information and what it was trained on. If it's trained on the entire corpus of the internet, imagine everything that covers. What we're seeing a lot of clients move to, especially for corporate use, is a small language model that learns on their own ecosystem within their organization, so you're learning from that. But again, you have to be thoughtful: those small language models are missing some pieces of information. You always have to know where you're starting from and take that into account, and that's where the creativity and the curiosity, what we call the human at the helm, really take place.
Annie: Yeah, that's very interesting. I have a question from the consulting firm's point of view, since you're in a role that's probably more client-facing. In my experience having interned in consulting, it's the new graduates and entry-level employees who are probably the ones using the new AI tools, whether as a teammate or trusting them too much, however you want to put it. But they're not necessarily the ones presenting information. Do you think there's almost a new filter that has to exist between their work and you, or whoever is going to be facing the client, to discern whether an answer is too general, too AI, or whether it's gone through the critical thinking? Do you feel like you yourself are now somewhat of a filter?
Jen Hall: That's a great question. I hadn't thought about it, and when you started talking I immediately thought, I need to go check in with my team on this. I do have a woman on my team who went to Cal; she interned with us for two years, she's six months into her role, and she does a lot of the frontline work, initial drafts, things like that. What I've seen in how we work, and I don't know that I was intentional about it the way you were just suggesting, which made me think I need to be more intentional, is that we have conversations first. In all of our discussions we're not looking at a PowerPoint; it's almost more like a seminar-type discussion about how we should be talking to a client, what information we want to bring them, and what we think is going to resonate, and then she goes and creates a draft from that. So I guess de facto we've been having those conversations ahead of time, and she may use generative AI to put the drafts together; I think for the most part she's usually using the design tools to shape them. But the content and our point of view we develop in conversation. Right now, for example, we're working with another client on developing their new operating model, one we want to take into account a formal strategy and innovation role, and we're talking about who else is doing this and how we're learning that. We'll have information, come back, talk about it, ask questions, and she'll go and research. So that's how we're approaching it, which might get to the hypothesis you're putting out there about making sure you have guardrails shaping the conversation. But the intentionality of that, I really liked a lot.
Annie: A follow-up question I have: a lot of consulting firms are now using generative AI for client projects too. Have you seen some sort of trend where clients are more skeptical of your recommendations, or need you to prove your critical thinking and your solution a bit more, because they're concerned it was produced with AI? I don't really have any data to support that; it's a first guess.
Jen Hall: We always disclose if we've used generative AI at any point in the process. We might say this was informed by generative AI, or this was partially written by generative AI, or we used generative AI to do research, so you have full disclosure. I think that gets back to the trust: we're not trying to pretend this was done without any generative AI assistance. Now, here's where I'll go with that: the trust part is so critical because most of the clients for whom we're doing this work are ones where we have a really strong, trusted relationship. Their perspective on our value and our ability to bring in thought leadership was formed pre-generative-AI, so the trust factor is there. For new clients, this is where I think trust will really come into play: you have to be fully transparent about where you're using generative AI in your process and in your research. You should disclose what models you're using, and how and where you used them within the work products, so your clients know. Because if you don't disclose it, you're going to erode the trust.
Ella: Yeah, thank you. I sort of have one too. Do you think that when you use generative AI, a client expects you to do things faster, or have more content, or produce better solutions on a quicker timeline?
Jen Hall: Let's hope there's not more content. The proliferation of material is, I think, actually a deterrent with gen AI, because there's so much out there, so many choices and so much information, and it's about discerning that. If anything, I think they're looking for us to have a point of view, and to come in having done the research to know what we're talking about. Where we're seeing productivity gains tends to be more on the technology side, on building code and being able to do that faster. Is it GitHub Copilot, on the computer science side? Fun fact: I learned to code in COBOL. I actually might become relevant again when all the COBOL engineers retire. But now, a lot of the basic coding and the testing, the things that were really time-consuming, we're seeing clients get a pretty significant reduction in how long that used to take with the standard ways of working. So yes on that, for sure. On the more strategic side, again, I think it's about being someone they can trust on the point of view, having taken the time to discern and evaluate it and be thoughtful about the perspective you're bringing to them. They're not necessarily looking for that to be faster, but they are looking for it to be really good and well thought out, because those tools are available.
Ella: Awesome. Thanks.
Graham: Yeah, that all makes sense. I'm hearing a lot about transparency, and establishing and maintaining trust. Here's where I'll bring in a little of the student perspective. I think what we're seeing a lot of is professors, even other students, the people we're submitting work to, seeing any use of generative AI as undermining the process. And of course education is an entirely different landscape, but it's hard to shake that background; it's something we've all internalized as students. So the thought of putting "this was partially done by generative AI" on something I'm submitting just feels a little off, right? It's hard to imagine maintaining credibility through that.
student 2So
graham-wolfe_1_04-18-2025_124815it sounds like transparency really is one of the keys there. and, to, to making sure it's not something that undermines the purpose of the work.
Jen Hall: I do think, and again, this is my point of view, that you all should be using generative AI in your coursework. Do I think it should be used in everything, like in entry-level classes? No. I can see a case where they say, we want to make sure Graham understands the fundamentals of how to do this. After that, though, this is the real world; you're going to be using it. If you're there to help prepare students for where they're going next, why on earth would you ignore one of the most transformative technologies of all time? What you actually should see is professors evolving to incorporate generative AI in their coursework. I've seen this at some other universities, where they're going back almost to the Socratic method: you share what you've learned in oral discussion, you prove you can talk about it, and then you can write to it and discuss it and show you've internalized it. That's more meaningful than writing a 30-page paper showing what you knew, which could possibly be informed by generative AI. If you're going to have a 45-minute conversation with your seminar class and you need to show what you know and be able to answer questions, that's an entirely different kind of preparation and learning. I think there should be an evolution toward incorporating more of that. And once you've proved you have a grasp of the subject matter, it's teaching you to fish from there on out: teaching you the discernment of what tools to use and how to use them, the questions to ask, and having a good grasp of responsible uses of AI and how to self-regulate that.
Graham: Yeah, that totally makes sense. There are ways of adapting to this new variable; some of them just take a little more risk, a little more trust and bravery, to be the first mover and jump out there and do these things. Some professors, some companies, some universities are on the back end of that, and that's natural.
speakerI think if you go to like kind of a classic design thinking method is you go to, what's the outcome you want? What's the outcome you want from your student who's graduating? What's the outcome you want at the end of a class? And you think through that now, the ways that you achieved that two years ago, maybe now it's time to rethink how you do some of those,
Jen Hall: Or at least consider it, rather than shutting off completely with a blanket "we're not going to use generative AI at all." Again, go back to the real-world outcome: if you want to prepare students to be capable entering the workforce, or to advance society by being able to contribute, why would you not have them learn how to use this? This technology is here to stay, so it's almost about rethinking the whole approach. Look at what we've seen in the Industrial Revolution, in transportation, in making clothes, things like that; people have had to evolve, and I think we need to be considering this too.
Graham: Yeah, totally. You can really put it in broad historical terms like that: is this another inflection point, the same way the internet was, the same way the Industrial Revolution or the car was? But it's still in its infancy, and in that regard it's hard to be conclusive that this is just another thing we have to treat like the internet or any other innovation of the past. It's certainly an important perspective, one a lot of people aren't exposed to: that this might as well be the next big thing we all have to adapt to. I do want to recenter things real quickly and bounce off what we were talking about from the student perspective and preparing for the next stage. Another thing that comes to mind as a student: not only do you have anxiety about turning in things that are even touched by ChatGPT, but also about the job search, the hiring process, and the discernment process. Amid all of these changes to student and corporate workflows, there's been a historically competitive and weak hiring market for the past couple of years, and a lot of people have that background anxiety: whether they're currently employed and worried about being let go for efficiency reasons, or searching for a job and unable to find one because it's being automated away, or wherever that anxiety is rooted. How have you seen this play out, whether it's the anxiety or the actual outcomes, as somebody who might be involved in those kinds of decisions?
Jen Hall: One thing is that I think the most critical skill of the future is going to be maintaining relevancy: being relevant, and understanding and applying emerging technologies. I think your generation is actually going to be well positioned as you enter the workforce, having grown up as digital natives, being really familiar with technology and understanding how to translate those benefits to the workplace. Those things are not going to be difficult for you to learn, and your understanding of automation is going to put you at an advantage. I don't think jobs are going to disappear overnight, but I do think tasks will. If you think about what a job is, it's a series of interconnected tasks; I forget the course, but one of the org design courses out there breaks each job down into tasks. As tasks shift or change, roles will evolve too, and we have to invest in that evolution and be really thoughtful about how people should be spending their time and bringing value to the organization. So the real risk isn't actually loss of jobs; it's loss of relevance. If we don't help people shift to being relevant, that's when fear becomes reality, and it's self-perpetuating. That risk is probably greatest for those in the workforce now, particularly those who are mid-career, people from their mid-thirties to around retirement age. Things are going to happen quickly; new technologies are going to be embraced, and new ways of working are coming quickly. That's where the change-management aspect comes in: really thinking through, here's how we're going to have people within our organization use it.
That's where there's a real risk: if people can't use, or don't understand how to use, generative AI, they're going to be rapidly overtaken by the quality and amount of work others can do compared to them, and they're going to fall behind and become lower performers.
Graham: Yeah, interesting. I want to hand this over to Annie; you've been with us writing about AI at work for a while. I think the most consistent through line of the questions we've been asking and writing about in this column is that response to the fear of, am I going to lose my job? I think what Jen contributed here is really grounding, because for a lot of people experiencing this, it's very nebulous and unfounded in many cases. But I'm curious: in the past year or more of conversations we've had, did anything surprise you in what Jen just said? Or does anything come to mind about that general conversation we've been having about whether AI is going to take my job?
Annie: Yeah. When you look at the news and the headlines, half the articles are saying AI is only going to take away manual, very automatable types of tasks and jobs, right? And then you have the people saying it's the middlemen who are going to be taken away, the people whose decisions can be made super easily. From an efficiency standpoint, no one needs to manage people anymore if you can just automate it. So I think it's interesting that you're looking at it from a relevance standpoint rather than a middleman standpoint, whatever terminology you'd like to use. And I think it's very optimistic that you think we have a leg up in the workplace. From a student perspective, I am seeing it especially with the, what do they call them, lowerclassmen? Like freshmen...
graham-wolfe_1_04-18-2025_124815and sophomore. Yeah.
student 3Yeah. Yeah. It feels weird calling them lowerclassmen, but they're lowerclassmen to me at least. the lowerclassmen who have, you know, now had AI for two years in high school and kind of really went through their advanced courses in high school using AI and probably, you know, getting into the senioritis of high school, you know, just really slacking off on critical thinking, and creativity. whereas I think, maybe I'm biased as a senior, but I think my grade, you know, we had no generative AI to use for the first, at least two to three years of a college career. And so now, although people are using it more, you know, they have also written 30 page essays without the help of generative ai and, perhaps, you know, that gives them some sort of leg up. I think a lot of freshman and sophomores seem to be struggling when it comes to, I would say these harder tasks, like their first instinct is to go, you know, to generative ai. and so perhaps that's the pessimistic view of it, I think from a student perspective. whereas I think like potentially for the new grads, the, you know, 20 something year olds who are in the workforce already, they have, you know, what generative I doesn't have yet, which is a perfectly working brain and that knows how to critically think and decipher and discern, decisions. but yeah, and I think the conversation's always shifting between who's gonna lose their jobs and, it's really hard for me to, I guess I. con, confidently say you know, what age group or what type of job is getting lost? And I guess maybe you can talk about from either the consultant industry side of it or maybe even with the, types of industries that you've worked with as a consultant. what type of jobs you've seen or tasks, are the ones that are being, you know, the first to become automated and switched out. whether that's from, you know, tasks point of view or like a strategic point of view. yeah, I'd love to hear your talk about that.
speakerYeah. First I'll give you an example of how I use generative AI every day. We have Microsoft Copilot, and we use Teams in my organization. What I always do is turn on Copilot, or we generally have Copilot on calls, and I have it summarize and take notes. I like this a lot because it saves me about half an hour on every call that I'd otherwise spend summarizing those notes. So I conservatively save about 90 minutes to two hours of time that I would spend taking notes, or maybe I wouldn't even get to taking notes and wouldn't send them out, and then I'd have lost productivity. Am I doing a good enough job of pulling back that two hours and shifting it all to strategic work? No, I'm probably doing a little bit more of what Ella was talking about, where I just add more tasks. So I'm actually actively trying to manage myself better, to be more strategic. That's a real-world example of where I've used it. Now, where we're seeing it, and I'll also give you an example of where I think there's an expansion of responsibility rather than a reduction of a role: we're seeing automation in research and summarization, basic reporting, and routine customer service. Something like 80% of all customer service inquiries should be able to resolve without having to interact with a human. For that next level, you need someone who's really competent, and you invest more in that person; they're a higher level of service. That's where we're really seeing things evolve: you're spending the same amount of money, or you have the same number of jobs, but they're focused more on the higher-level critical thinking tasks. What we think is gonna come next is more of the mid-level analytical tasks. So not necessarily the insight, but the preparation and structuring of the data, and maybe making sure that you're looking at the right levels of data.
The process-heavy knowledge work is really ripe for it. That's compliance monitoring, preparing for audits, basic proposals, things like that. It's almost busy work. Not quite busy work, but just overwhelming amounts of work, and that really gets taken over. I'll give a client example: I was on a panel this last summer with a lot of investment firms, and what they're saying is that their investment fund managers are now able to manage 10 funds instead of three, because they have more capability. Where they're really worried, and this goes to one of the things you said you're worried about, is that their incoming people aren't getting the experience they need. Now they can take these 300-page proposals, have them ingested into their generative AI tool, and that tool can give them the basic summary information you would've had a first-year analyst produce. They really don't need those first-year analyst skills. They still need the skills of the investment manager, but now you don't have that analyst getting that experience. What are we gonna do when that pool dries up? Who's gonna replace the current person? That's a valid thing that we're seeing. In the consulting world, and I hesitate to predict too far into the future these days because so much is changing, what we think is that the first pass of an analysis, the basic writing, pulling together the knowledge source and making it accessible, and basic things like status reports, that is all likely to be automated. Where the humans are really gonna need to be in play is building trust and relationships with clients. Because clients are always gonna ask: do I trust this person sitting across from me?
Do I know who they are? You as a consultant, are you asking the right questions? Do you know how to ask the right questions? Do you have the experience to ask the right questions? And then being able to tell stories to communicate what you want to those clients. Navigating the human experience, emotion, ambiguity, resistance to change: all of those are still going to be in effect. And then, and this is something I'm working on right now, designing the organization of the future. Not just the processes, but the people systems that are out there. So I think if you look at what's gonna be most important, and this gets back to what I said at the beginning about how I really liked the three of you having a real breadth of majors and thinking, you need people who are gonna bring judgment, empathy, and the ability to be provocative in a really appropriate way. Not just showing up with an answer, but being a partner along the journey. And, you know, people who aren't using AI just as a tool, but as a kind of partner, where you're still in charge, but you have this additional capability that's gonna allow you to bring even more to your client.
student 3You know, something that stuck out to me in what you just said reminds me of what we talked about with implementing or letting students use generative AI in the classroom. Right? There are some classes where it's very appropriate to use, and some, especially entry-level, fundamental courses, where you probably do want your engineer to understand how Calculus 1 works, right? And you just referenced how you're worried about entry-level employees not getting those busy-work, manual kinds of tasks, to really understand how status reports, or whatever it is, are run. Do you see some sort of compromise there? Because at the same time, when you're starting out, you want entry-level employees to be able to work efficiently and produce what they need to produce. And at the same time, perhaps it's extremely important, for their own future, that they understand why these forms are formatted the way they are, how you fill them out, and how you submit them correctly. So yeah, I'd love to hear your take on that.
speakerI think you've hit on the fundamental thing that organizations are gonna need to figure out. Just like I mentioned that universities, colleges, and educational systems are gonna have to ask what outcome they want for the individual, we're gonna have to think about this as well. We have development programs at my organization: a technology development program, a consulting development program, a sales development program, and we're building up skills really intentionally for those people during their progression so they can get those skills. And I think we're going to have to be even more intentional about making sure they're getting that chance to learn. The point is the process; the point is the journey, not the answer. I want someone to learn by doing. That's where I really think experiential learning is so important: it sticks in your brain when you do it that way. And I think we're going to have to revisit how we're training people, or else you're gonna end up with a workforce gap that's really problematic. You're gonna have a supply-and-demand talent availability issue that's gonna be very difficult to resolve at that point.
graham-wolfe_1_04-18-2025_124815Okay, real quick, I'm gonna jump in. This is a quick pause. This has been great; thank you guys so much. I'm just gonna quickly roadmap the rest of the conversation as we start to wrap up here. I want to pivot and talk about regulation. I think there's a transition there: okay, we've talked all about some people's fears and roughly what we might do about them, but how can we synthesize this into a big regulatory framework? I'll transition there. Then I think I wanna end off, and I'll pass this to you, Ella, so feel free to reference your research or just your own thoughts: I'm gonna ask you directly, what are you optimistic about? And we can even decenter from work or whatever you wanna talk about, because we'll be hearing from you a lot as a newer writer, and I think it's just an optimistic note. And then, Jen, we'll ask you the same question, what are you optimistic about? And then we'll wrap up. So
speakerOne thing, and I don't know how best to bring this in, Graham, is the importance of trust. An example is where people will use ChatGPT during a technical interview. And it happens so fast: basically they're asked the questions, and we've found that they're typing the question into ChatGPT while they're being interviewed. And then you hire someone and you find out they actually can't do the job. This is something we're just now seeing, like in the last three months, but we've seen it happen multiple times. So there's that trust factor. And then the other thing you could maybe think about is: how do we know something is real now? How do we trust art? How do we trust writing? How do we trust videos? I don't have that answer, and I don't know if we even have time to take the conversation there, but those would be two things to weave in, because they all play into how this works: how do you trust who's showing up, and how do you trust that the output is real? And what do we mean by real? That could be a whole seminar, part two. That could be a whole semester; I think that could be a great philosophy class. I would come back to ND and audit that in a heartbeat. But how do we know what's real now?
graham-wolfe_1_04-18-2025_124815Yeah, okay, let's make the last question about that. Let's sort of blend it all together. But I do wanna touch on regulation, so
speakerregulation ties into trust. I mean, getting into whether or not something's real might be too much for the rest of this, but that could be a subject you all explore on a future podcast. I would definitely tune in and listen to that.
graham-wolfe_1_04-18-2025_124815Yeah. Actually, the last one we did was with our Taming AI columnist. Taming is about governance and any attempts to, you know, tame it. We had a guy on who wrote a book called A History of Fake Things on the Internet, and we talked about evolving fake news and fake things. Perfect, yeah. I mean, rewatch that; he was great.
speakerYeah, I'll do that. Thanks for that tip.
graham-wolfe_1_04-18-2025_124815For sure. Yeah, no, he was interesting. he was an interesting guy. He was kind of in his own world, but, he had some good time.
speakerYeah, it's, it is, you come along, you come across all types in this like gen AI space here, that's for sure.
graham-wolfe_1_04-18-2025_124815Okay, I'll jump back in, because I don't want to keep us all too long. Let's talk about trust. I think what we're hearing a lot about in this conversation is trust and transparency. It's foundational and fundamental to the relationships that sustain the corporate world, and that sustain a working life in general. This new variable of AI, yes, it's impacting efficiency and shifting tasks, but it sounds like the thing with the greatest magnitude, perhaps the farthest to fall, would be those relationships of trust and transparency. Jen, what do you make of that, and how do you see trust evolving? And in a more urgent sense, how do you make sure you're not undermining that trust day to day?
speakerDefining trust, who you trust and what you trust, is going to be, I think, the theme for sure; it centers the whole generative AI conversation. Can you trust the answers you're getting? Can you trust that people know how to perform, that they know how to do their jobs? Can you trust that governments and corporations are handling your data correctly and using generative AI tools correctly in their own entities? That's where the transparency comes in a lot. I'll give you an example, and then if you wanna touch a little on formal regulation, on how you can manage that trust, we can go there too. We're seeing in the workforce now that people will apply for jobs, and in a lot of virtual interviews, they will actually take some of these technical interviews while talking to ChatGPT at the same time, asking it the questions and getting the answers on the code. You know, if you're gonna test a computer programmer: how would you go about this activity? Tell me back, this is what I would do, this is what this function means, this is how this piece of code would work. They're getting those answers from ChatGPT. Then we bring them in, we hire them, and we find out they don't actually have the skills, that they basically used ChatGPT for the whole interview. This person doesn't have the capabilities, and I've lost trust in them as an individual. That's not a good place to be. So as an organization, we're now having to really rethink how we're evaluating people. Again, it goes to: what's the outcome you want? We have to rethink how we're monitoring, checking, and testing that.
Because there's gonna be an element of trust now where, okay, we can't necessarily trust that someone's not going to try to bad-actor their way through this process. I think that's one thing we're seeing clients see. You wanna make sure somebody has the basic level of knowledge and skills to do a job, and now we're gonna have to test it in a different way before they enter the workforce.
graham-wolfe_1_04-18-2025_124815Yeah, that's an interesting outcome, kind of a bizarre externality of this whole experience of learning how AI is going to impact our lives. It's making its way into everything, even up to job interviews and the screening process. Interesting. That's something not a lot of people are exposed to, so thank you for sharing that.
speakerI hope it doesn't become a thing, but people are gonna be incented to try to get around the process or to take shortcuts, and in the long run that stuff will come back to bite you. And, I mean, especially
graham-wolfe_1_04-18-2025_124815with the hiring market right now, it's,
student 3Yeah. I would say from a student perspective in computer science, mean, everyone always says the job market is cooked and terminology. And, I think there is a huge pressure just because you want, we wanna get the job, you will take any shortcut, you know, possible. And you know, you speak of the, the. Computer science or software engineering programming type of jobs that are, require these technical interviews. And, I would say I've heard like multiple stories of students, you know, getting jobs at Capital One at all these, you know, major firms through some form of cheating, whether, you know, I think, I've heard that the, I. The interview ones are a bit more difficult or less likely for people to fully cheat on, but right now, because of the competitiveness, you have all these screenings before you even get to the in-person interview. And that's where I think people are really, you know, going through generative AI to kind of get that. and so I wonder if there will be a shift of some sort of, you know, an academic setting. You see lockdown browsers, you see, you know, keep your camera on while you're doing something. They track your movement. whether there'll be a shift in that regard in terms of monitoring or a shift in how you evaluate candidates as a whole. But honestly, I think the latter is, it feels a little less likely to be just because there is such like an increase in number of applicants for practically every single job now. and so for recruiters to screen through that would be really difficult. do you have any opinions on, I guess, that hiring process.
speakerYou brought up some really good points there. It actually made me think about how you have testing centers: if you go in to get a certification, you go into that closed environment. I could see organizations starting to require people to go into a testing center to take that next level. But I don't know that that addresses what you flagged: how do you even get to that stage? Because to be able to get there, you have to make your way through all these earlier rounds. I don't have a good answer for that. It's a definite, valid concern. And in trying to find those best candidates, again, we're gonna have to rethink how we go out and find candidates, how we identify them, and how we recruit them in. I almost think that because it's so easy to apply for jobs in bulk now, and you can use bots and generative AI tools to do it, companies are struggling under the weight of the number of applicants. When I applied to schools, totally dating myself, I hand-wrote my applications. I hand-wrote my Monte Cristo essay for Notre Dame. It's appalling; I look at it now and can't believe I got in. But back then you applied to five or six schools, you got the results, and you went to college. Now people are applying to 30, because it's easier to do with computing technologies and being able to use the internet to send your stuff in. Same thing with jobs: you can apply to 500 jobs in a day, where before you'd have to scan the ads and write everything out. So you have a bulk problem too. How do you cull through that without automation? That's a real challenge.
But how do you flag the people who have that kind of trueness, that valid moral compass? That's gonna be tough to screen for. Good thing I'm a consultant; I can go figure out this workforce-of-the-future problem.
graham-wolfe_1_04-18-2025_124815Yeah, you let us know. We're happy to work with you on it too.
speakerOh, I'll read Ella's next column and then have the answer.
graham-wolfe_1_04-18-2025_124815For sure. For sure.
speakerYou should actually throw that into chat GPT and ask it what it would do. It'd be interesting to see. Recommend,
student 3yeah. Prepare the answers.
graham-wolfe_1_04-18-2025_124815Okay, great. I do wanna touch on regulatory, so I'll ask that question. yes. just'cause I came up at the meeting on Monday. lemme think about how to word it though. Okay. yeah. Last major thing I think we want to touch on as we wrap up here, is corporate governance, of artificial intelligence. there's been a lot of buzz around it. it's kind of inherited the same status that environmental social governance had, a few years ago when that hit the scene. Of course, lots of strides are made there now with the urgency around this new technology. This has sort of taken the spotlight. another sort of consideration to wrap in here is that, the regulatory landscape, what underwent a major shift and we've written about that in the taming AI column, a few weeks ago about the pivot from, the Biden administration to the Trump administration. going a lot more hands off direction. so that's had, you know, real measurable I impact, on companies and on everybody trying to innovate and use technology in this space. what's sort of emerged is that these corporate governance strategies of regulating AI are sort of the defacto regulation given that, there's not much coming top down from the government. a lot of it is sort of self-governance within organizations. that is a big black box for people. you run into it every once in a while. there's like a guardrail that pops up on, on some AI tool. You don't quite know why, or you know, you hear, you know, stories about people in their day-to-day jobs. There's maybe a legal, barrier here. There's a ceiling that you bump into when you're trying to use these tools, you know, dealing with privacy and security, that kind of thing. But curious, any insight you could give us into that kind of a black box? When has it popped up? How do these frameworks come to be? and yeah, sort of break down that the, Black box for us.
speakerAgain, I'll just say this is my point of view, but I actually think the right regulations can drive innovation. The reason is that you have a lot of organizations who are hesitant to use AI, and generative AI, to bring it into their organizations and use it at scale, because they're worried about what could happen, and nobody really seems to know. What if they go down this path and do too much? So they freeze, and they don't take advantage of even basic things. Whereas if there were real, common guardrails in place, and people knew what the rules were, knew what was safe or okay to use as table stakes, they'd be more likely to take advantage of some of these benefits and start using it, and we'd see the commensurate innovation. So, conversely, sometimes a lack of regulation can create an environment where people really hold back because of fear and lack of understanding. Now, there's also too much regulation, where we're seeing policies like "don't use any generative AI at all," or the worst, "no artificial intelligence," when you've been using artificial intelligence at scale for 15 years. If you want us to pull all AI out, we'd actually have to roll back to COBOL. Sometimes you see policymakers just not understanding what they're talking about. So they can overregulate, and what we're seeing in some states is governments saying no generative AI can be used without an exception, and you increase the bureaucracy and make it just impossible to get anything done. And that doesn't hold the bad actors back from progressing. What if some of these other entities out there are able to use generative AI better and faster and more efficiently than our corporations or governments? That's a problem too.
So I think there's a middle ground where you can give some guidance and regulation: here's how we're gonna use AI responsibly, here are the guardrails organizations should have in place, with a common understanding of that. I think that would pull some of the fear of this technology out of the market and create an environment where people are gonna continue to feel comfortable using it, and comfortable watching that innovation continue to unfold.
graham-wolfe_1_04-18-2025_124815Yeah, that's a great answer. it, it makes total sense. And again, it's not something a ton of people have exposure to, so it's important to, to,
speakerI'll give you one more example, Graham. We do a lot of presentations to state leaders, state agencies, and legislatures. And we'll ask them, on a continuum: who uses generative AI every day? Who's familiar with it? Who wants to embrace it? And who thinks it's the worst thing ever, that it's gonna destroy humanity? More often than not, you see the people making the regulations saying, I don't have a working knowledge of generative AI, I don't really understand it, and they're over on the continuum closer to "don't use it." So I think it's really important that our legislators, those making the laws and creating the policy, understand what this technology is, what it means, and how to use it, and then make the regulations with that in mind. Because I think there is a real risk of shutting things down too much, just as much as there's a risk of doing nothing. And that will hold people back, because they don't know what's safe and what's not.
graham-wolfe_1_04-18-2025_124815Yeah. Yeah. Thank you for that. and just to kind of start to put a bow on this conversation, I wanna pass things back over to our writers, and just talk a little bit about, the optimistic side of things. what are you excited about, and that you can feel free to decenter this from, you know, work or school or the corporate world. what gets you excited from a research or just in, in general about, the direction that, you know, AI is taking your life and, society.
student 3I think something that's really exciting about it is that, like you've mentioned before, it's an inflection point, and inflection points don't necessarily, or honestly ever, turn into a direction where now the entire world's gonna crash and burn and the robots are gonna take over. I remember listening to another podcast where they asked Hans Zimmer, the soundtrack composer behind all these amazing movies, how he feels about AI and his music, how it's used for art, and whether that's going to decrease people's creativity, take people's jobs, et cetera. And he was extremely optimistic. So I'm kind of taking his point, in the sense that it forces everybody to shift into a new way of thinking, a new way of innovation, in a way that hasn't challenged us for a while. And for this progress to continue happening, in some way it's really good that something so stark and so dramatic hit us super quick. Because I think, with the right regulations like we just talked about, and with the right people and the right minds, in 30 years we're gonna look back and say this was a good thing. So I have a very optimistic point of view, at least. But I'd love to hear your thoughts, Ella.
student 2Yeah, so I was just gonna say, a couple weeks ago I was sitting in one of my CS professors' offices, and we were having this whole conversation about the use of AI and how it's affecting computer science students in particular, but students in general. I think what's scary is what we've been talking about: how it can allow students to get by without actually understanding. But I think the more important, more optimistic approach to take is that it gives students who have that drive to learn, and who want to be better computer scientists, a means to do that, continuously and constantly. You have this tool that allows you to always interact, ask questions, understand what's going on. You can ask it questions all night and it never gets tired. So I think it really comes down to the type of person you're dealing with. If that person has that drive to learn and wants to succeed and excel in whatever field they go into, they'll be able to use AI to aid their growth and development, as opposed to using it as a crutch or as something that lets them get by without actually knowing anything. Because what's the fun in that? You go to school to be educated and to learn how to do these things, and I think it would be silly for someone to use AI in a way that doesn't build that up but instead tears it down. Yeah.
speakerI think those points of view are both spot on. One is that we can use this for good, and this can be something that ultimately benefits society. Think about the applications just in the medical field, and I know Notre Dame has a center for rare diseases. Imagine if people all around the world had immediate access to that kind of capability, that corpus of knowledge, so they could identify some of these things sooner and get treatments faster. The ability to effect a better, more positive quality of life is, I think, a really strong possibility. And Ella, on the point you made about students: you're only hurting yourself in the long run if you don't choose to learn. I kind of think, man, if you just put the effort you're putting into trying to cheat into learning, you're gonna be in the same place, and in a much more moral location as well. There are always going to be dangers, and there are always going to be the challenges of technology. Part of me wonders if we're gonna end up in the WALL-E world, if you've watched all those Pixar movies, sitting there in those big chairs with our Coca-Colas. Or if you haven't seen the movie Idiocracy, that's a possibility too. But also, one thing humans have been good at for eons is change and adaptation. We've seen it in the changing of jobs, and in how people live and interact.
And if you come back to the core of what makes us human, and we collectively as a society agree to keep that at the core, I think that's where these technologies will continue to enhance us, to allow us to ultimately have a better quality of life and better interactions with people, and to generally raise the standard of living for all.
graham-wolfe_1_04-18-2025_124815Yeah. Thank you all for those, you know, parting thoughts. and Jen, thank you so much for being here.
speakerIt's so great to see you all. thank you so much for the time. And, and go Irish.
student 2Go Irish. Go Irish,
graham-wolfe_1_04-18-2025_124815go Irish. Last thing I'll say is, a quick plug of the AI at Work column, written by Annie and Ella here. check it out. it's a great living document, of, you know, the exploration of how AI is gonna be impacting, our working world, labor, commerce, the whole gambit. again, thank you all for being here. and that's it from us.