The ThinkND Podcast
What Do We Owe Each Other? Part 1: The Future of Responsible Tech
Arvind Krishna, Chairman and CEO of IBM, speaks on The Future of Responsible Tech, in conversation with John Veihmeyer, former Global Chairman of KPMG International and Chair of the University of Notre Dame’s Board of Trustees.
Thanks for listening! The ThinkND Podcast is brought to you by ThinkND, the University of Notre Dame's online learning community. We connect you with videos, podcasts, articles, courses, and other resources to inspire minds and spark conversations on topics that matter to you — everything from faith and politics, to science, technology, and your career.
- Learn more about ThinkND and register for upcoming live events at think.nd.edu.
- Join our LinkedIn community for updates, episode clips, and more.
Good morning, and welcome to this very special Notre Dame Forum Series as part of the celebration of the inauguration of Reverend Robert A. Dowd, CSC, the University of Notre Dame's Eighteenth President. My name is Megan Sullivan. I am the Wilsey Family College Professor of Philosophy and the founding director of Notre Dame's Institute for Ethics and the Common Good, and I am delighted to be your host and emcee today. Each year since 2005, the Notre Dame Forum has invited campus-wide dialogue about issues of importance to the university, the nation, and the larger world. Father Dowd has chosen a particularly timely theme for this year: What do we owe each other? This theme invites reflection on our responsibilities to one another. In a world where ideological and cultural divisions seem to have deepened, the forum aims to bring people together, across differences, to face the most pressing challenges of our time. There is no doubt that we are living through an era of profound opportunity and profound disruption. Pope Francis often emphasizes that seasons of change like this can push us in one of two directions. They can push us inward, making us more self-centered, more territorial; or we can choose instead to look for the deep good that is always present in the human story. We can look outward, beyond ourselves, cultivating understanding, faith, hope, and love. We at Notre Dame believe that universities like ours can be places where students, scholars, and leaders turn to discover ways to expand this ethical framework for a world that is profoundly in need. Today we will do that work, featuring four fireside chats with global leaders in technology, philanthropy, corporate sustainability, and foreign affairs. These leaders will help us explore this theme and think critically about what we can do, individually and collectively, to bridge social divides and promote healing within our communities. Our first session today features Arvind Krishna,
Chairman and CEO of IBM, who will be speaking on the future of responsible technology. Before we begin, I'd like to introduce our moderator, John Veihmeyer. John Veihmeyer is the former Global Chairman of KPMG International and chair of the Notre Dame Board of Trustees. John spent 40 years with KPMG, holding numerous leadership roles, including U.S. Chairman and CEO, U.S. Deputy Chairman, and Managing Partner of the Washington, D.C. office. A 1977 graduate of Our Lady's University, John joined the Board of Trustees in 2017 and was elected chair in 2024. He currently serves on the board of the Ford Motor Company and chairs the boards of the LPGA and Catholic Charities of Washington, D.C. John has been consistently named one of the top 100 most influential people in accounting by Accounting Today magazine, and one of the top 100 most influential people in corporate governance by Directorship magazine. Throughout his career, John has championed the importance of business leaders building strong cultures within their organizations, emphasizing inclusion, diversity, and corporate citizenship. I will hand it over to John to introduce our esteemed guest.
Thank you very much.
I will just say, I know being one of the top hundred accountants in the world is something everyone aspires to. Listen, we are really fortunate to be joined today by an incredible leader. Arvind Krishna is the chairman of IBM and, since April of 2020, the chief executive officer of IBM. Not much was happening in the world in April 2020, right? We'll get back to that. Importantly, I think Arvind is IBM's first CEO who comes from a technical background, having been trained as an electrical engineer. And I think that's enabled you over the course of your career, Arvind, to have a number of really important business leadership roles within IBM, leading divisions like Cloud and Cognitive Software, the IBM Research Center, which drove the innovation within IBM, and the Systems and Technology Group. He has clearly been the driving force behind IBM's transformation to become the leading hybrid cloud and artificial intelligence company in the world. I think the acquisition you made of Red Hat, which I know you were absolutely the driving force behind, has enabled IBM to define the hybrid cloud market. We'll talk more about that as we get into our conversation today. Arvind got his undergrad at the Indian Institute of Technology and his PhD from the University of Illinois, up the road in Urbana. In 2016, long before you became CEO, Wired magazine named Arvind, and I quote, one of the 25 geniuses who are creating the future of business. Now I have to admit, when we invited you to join us today for the forum, I knew you were really smart. I knew you were a phenomenal CEO who's really leading the conversation on ethical AI development, but I did not know that you had been anointed a genius. That's an added benefit today, Arvind. Thank you so much for joining us today. And, as you all should know, this is a really busy day for Arvind. He started the day in Chicago and has to be back in New York for a New York Fed board meeting this afternoon.
We can't thank you enough for joining us today.
The Wired magazine piece is a good reason that you shouldn't believe everything you read in the media.
But we're gonna find out today. As you all know, IBM has been a leading technology company for over 100 years: 300,000 employees across 170 countries. You've had a 30-plus-year career with the company. And importantly, from our standpoint, IBM has been an incredible supporter of Notre Dame in a lot of impactful ways, not the least of which is the Notre Dame-IBM Technology Ethics Lab, which has really been transformational in a lot of respects, along with your support for our students through scholarships and other areas. Thank you. I think before we dive into a lot of the detail we want to get into today, it'd be helpful to just share your career path at IBM. How did you end up where you are today after 30 years, Arvind?
I think we get back a lot more than we give to Notre Dame, because so many of your students join us, and the work that they do at IBM I think more than repays us for the work that we do here. I finished graduate school with a PhD in engineering, and I'm not sure I had any idea of the career path that I've actually had. When I came out, what I wanted to do was build things that would get used by lots of people. I was in the area of networking, and you think forward, and this is 1990, and you begin to think about what could be the case. Everyone was talking about it in those days. It sounds quaint now; for the students, it should be. It's like a historic artifact. People were talking about: could we have home networking? Could we have what is called broadband now? That was not the term then. There was a lot of talk about that, and I began to think, but people were talking about handheld devices. Laptops didn't exist, really, in 1990. Okay, they did. It was a brick. It was a suitcase-sized thing, 20 pounds, and you needed to plug it into the wall because there was no such thing as an effective battery. But you can imagine, you can see the pace of electronics. You could imagine that in two, three years, this is going to shrink down and you're going to be able to walk around. And if you do that, then how about doing something that would wirelessly connect you? Over a year, we went ahead and built that, what today you would call Wi-Fi. The actual roots of the standard are in what we built. Then I had a really negative experience, because when we were trying to convince the business that this could be a market in the millions or tens of millions of units, they couldn't quite conceive of it. The idea was: yeah, I could see that for a warehouse, but I don't understand why anybody wants anything other than a plug in the wall. Why do you want to walk around with this? So what it taught me was you've got to learn both sides of that equation.
You've got to get the technology, but you've also got to be able to explain to people: how do you distribute it? How do you sell it? What could be the size of the market? And that was an eye-opener for me, but a great way to learn. And then you go ahead and learn what the other things are. I'd had enough of school by then. I'd had 10 years in school, so I didn't want to go back. But you've got to learn how to learn, something that this university is great at teaching people. That's the point of a rigorous curriculum, right? You learn how to learn, and I've never been shy about asking people, look, I don't understand this. I'll make a slight amount of fun of my accounting friend here. I remember, 10 years in, we were doing an acquisition, and somebody told me, you can recognize this revenue and you can't do this. And I'm like, what? This doesn't make any sense. But you go and ask your accounting friends, why is that? And there is usually a good, logical business rationale. I'm not going to learn enough accounting to know it, but you go and ask somebody, and then do you have the curiosity to go learn that? I think that building on those things, being able to learn, has allowed me to develop my career and to take what I'll call a meandering path. It's not a straight line. You can only go so far as an individual researcher. You can end up, maybe, heading an innovative research function. But you've got to balance that with the business, and then at some point it turns into: how do you motivate people? I'm sure the leadership of Notre Dame thinks about that very deeply. How do you motivate people to do things? And then how do you survive in the communities? Because otherwise, why would people want to join you? And why would clients want to work with you? So you learn all those things along the path.
That's great. Thanks for sharing that. I think a key takeaway for any students here is the comment you made that it's not a straight line; being open to some of those twists and curves serves most people well in their careers. So, one more IBM-related question. You've been the leading tech company for decades, headquartered in New York, not Silicon Valley, where everybody assumes everything good that ever happens in tech comes out of Silicon Valley. How do you think it's changed the way IBM thinks about technology, and maybe the culture of IBM, that you didn't grow up on the West Coast in Silicon Valley?
So IBM's got a straightforward mission, and whenever we get recentered, you go back to your mission. Our mission is deploying technology to make our clients' business better. Now, along the way, you find gaps, so you have to invent technology, but really the North Star has to be that it benefits your client. So if we remain on that North Star, we have a lot of clients who are around financial services, government, insurance. Those tend to be much more on the East Coast. I think the East Coast companies tend to ask the question, how do we really benefit ourselves and society, and I think through a much harder lens than some of our West Coast companies. I have a lot of admiration for their attitude of: let's invent something, let's not care so much about the business model, we'll figure it out. There is a lot to be said for it, but there is also something to be said for not getting too caught up in that distortion, and instead focusing on what the real value is. And so it benefits us because, one, we tend to have a lot more client gravitas on the East Coast. So that has benefited us. The fact that I'm 30 to 45 minutes away from a lot of our large clients in New York City benefits us. It's only an hour's flight to Washington. So those are big benefits. But I think there is this little bit of: let's focus on what we can do to benefit now, as opposed to what could be. And I think that lens is useful.
And that culture we'll come back to a couple of times today, I think, as we think about ethical development. But before we get into that, let's talk about the leading innovation that we're all focused on these days, which is artificial intelligence. We've got a lot of non-technologists, non-electrical engineers in the audience, including one sitting next to you. Explain to folks: what are we actually talking about? There's so much information out there, and different people characterize things in different ways. Artificial intelligence, generative AI. How would you describe them for the non-technologists in the room?
Yeah. So let's begin with the scale and scope of the impact. People ask, and analogies are great because they give you a sense: is it like the internet? Is it like smartphones? Is it like cloud? I actually go to: it is probably like the internet. Why do I say it's like the internet? The internet came into the public consciousness circa 1995, the same way as I think the collection of AI that John just referenced came into the public consciousness in late 2022. When I was in grad school in 1985, for the faculty and the grad students, the internet was already five to ten years old. I think AI is very similar. AI by itself is not new. So why did the internet come in 1995? Because a guy called Marc Andreessen invented a browser, which made it accessible to lots and lots of people. You didn't have to actually get deep into the guts of the underlying protocols and the underlying, I'll call it, arcana, almost the whispers and rituals that you had to do to be able to make the internet work for you in the late '80s. He just said: hey, type this little thing into this screen; it's a nice user interface, works on all devices, and you have access. I think that's exactly what a company called OpenAI did. It is building on 20 years' worth of research and progress. The next thing is to make it easy. AI, everything that we can conceive of, could have been done 5 or 10 years ago. So you have to ask, why was it not? Because till then, you required lots of humans to clean the data. You created what's called a model. That model could be used for one purpose. And you required a lot of trained computer scientists to be able to do all that. So now you turn around and say: I use lots of data, but I don't have to clean it up. Okay, that reduces the cost dramatically. By the way, the model you produce can be used for lots of things, not just one thing. That opens up the aperture. And if new data comes in, you have to do a small amount of work, not start from scratch again.
That accessibility, that ease of use, is what this generation of large language models is about. When you make things a hundred times cheaper or more accessible, that's a big difference. So I think the world is still grappling with: if this is a hundred times cheaper, how many things can we do using these technologies? That's where we are right now. And the other part: humans are always going to react to things in language, because the human brain is wired uniquely around language. These are producing material that doesn't sound robotic but sounds appealing, as if maybe a middle schooler had written it, is where I go to. Then it's interesting. It's not the old "See Spot Run" language that used to be generated. I think all those things come together. Can I train it, for example, on business language? Can I now begin to look at insurance claims, not just one kind, but all of them, when a new storm happens? Can I look at that? Once those costs come down like that, technology becomes incredibly powerful. And that is, I think, what the excitement is about.
And as you think about investing IBM's capital to deploy against this opportunity in the marketplace, how do you size the potential market? How do you view this in terms of the opportunity for IBM specifically, and then, more generally, in the marketplace and the economy, broadly speaking?
So actually, I'll begin with the tail end of the question, John, because I always come at it from: what is the value for society and for all enterprises, including government? I like the number that I think McKinsey came up with, and I tend to agree with it. They said there's about four trillion dollars' worth of productivity in the world by the end of the decade. So you react with: hey, four to five percent of total GDP, that's a pretty good number. I then back into it from my perspective. The tech industry can normally get between 10 and maybe 20 percent of the value, because 80 percent of the value normally accrues to those who are buying it and deploying it, to their own skills, and to the end users. So if I look to the end of the decade, that's the scale and scope of what I'm looking at. And to me, that sounds pretty reasonable, because that's about the same size and scope as the internet and as the cloud. So I look at it like that. We are not a B2C company. Our clients may be, but we are not. We're a B2B company, so we don't have to stand up a massive infrastructure on which we're going to get a billion end users, because that's not our business. We do have to invest in infrastructure on which we can train models, so that we are sure, and we'll get to that, I'm sure, John, that we're sure about their ethics, we're sure about how they're used. So we do have to invest in that, and sometimes rent infrastructure from others, but ours is kept there. And we invest in the people that we have to bring in to help train all of these. Now, at some level, we're also running out of people who know this coming out of school, so you've also got to retrain and upskill your own people to be able to do some of this work. So those are the two uses of our capital that are going on here. I think we're going to see so much innovation in this infrastructure in the next five years. I think we're going to see things that are a hundred times better than today, but they're going to take a bit of time. It's not going to happen in the next year or two, but probably in that three-to-five-year timeframe you're going to see remarkable innovation.
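The sizing logic above reduces to two multiplications. Here is a quick sanity-check sketch: the four-trillion-dollar productivity figure and the 10 to 20 percent vendor share are the numbers quoted in the conversation, while the round world-GDP number is an assumption added here only to check the "four to five percent of GDP" remark.

```python
# Rough sizing of the AI opportunity using the figures quoted above.
# Illustrative arithmetic only; the world-GDP figure is an assumption.

total_productivity = 4e12             # ~$4T of productivity by decade's end (figure cited in conversation)
world_gdp = 100e12                    # rough world GDP, for the "four to five percent" check

vendor_low, vendor_high = 0.10, 0.20  # tech vendors typically capture 10-20% of the value

share_of_gdp = total_productivity / world_gdp
vendor_slice = (total_productivity * vendor_low, total_productivity * vendor_high)

print(f"Share of GDP: {share_of_gdp:.0%}")
print(f"Tech-vendor slice: ${vendor_slice[0] / 1e12:.1f}T to ${vendor_slice[1] / 1e12:.1f}T")
```

On these assumptions the productivity pool is about 4 percent of GDP, and the slice the tech industry itself might capture is on the order of half a trillion dollars.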
So, on that point, what are you seeing in your clients' use? Everybody's reading about AI, but I think there's a lot of confusion about what's actually happening out in the marketplace. How are companies deploying it today? Within IBM, how are you deploying it? And what do you see as the impediments to really scaling it in a way where it pays off for the companies that are making these investments?
So I think there are three use cases that seem to be pretty well accepted, where the risk in them is low, or low enough that we know how to manage around it. The first is around customer service: when you think about whether you're calling a call center, whether you're writing in for help, et cetera. Those are areas where we see, for our clients who embrace it, and this is across financial services, healthcare, government, that about three-fourths of what comes in can be handled by this generation of AI. What's a quick example? If you think about a telecom company, what are people calling about? It's not really what people think. They're often calling about: there is this line on my bill, what does it mean? They're calling to turn a service on, or they're asking, am I eligible to do X? If you just think about it, those are all places where you can actually improve satisfaction. Usually the person on the other end is not very experienced. What they have to do in turn is type in your question in their own language, hopefully get an answer, and then read that answer back to you. That is exactly what generative AI can do almost instantaneously. The guardrails you've got to put on are: hey, if I know what the answer is and I have very high confidence, give it. If I think it may be one of those more complicated ones, punt it to a human. The Veterans Administration in the United States used to take months to enroll people who are eligible for benefits, because they were matching a lot of documentation. Are you eligible? When were you here? Can I look it up inside? As you think about those, and that tends to go in peaks and valleys of workflow, you can begin to really automate that and get a veteran their benefits within a day or two, as opposed to months. These are the kinds of use cases. The second: I don't think it's going to replace a programmer, but can it make every programmer 10 or 20 or 30 percent more productive? Absolutely. That's what they were doing intuitively.
Programmers, and I used to be one, we're lazy. What is the easiest and best way to program? Take some code that works. Today, you go to GitHub, you crawl around, you search, you find something. Why can't the AI do that for you? But the other side of programming, understanding requirements, working with others, understanding what it has to do, that's still very much a human task. And the third one is around what I call enterprise operations. There's a lot of stuff that most of you in this room don't think about, except the two or three who run operations for the university: all the stuff around procurement, accounts payable, accounts receivable, onboarding people onto the systems in HR. In all those things, I think we're going to get a 50 percent boost. These, I'll call the low-risk category. The higher-risk category: do I really want it to respond on our behalf? Can it really become the front end in a retail bank branch? I don't know, because there's so much around human interaction in those things; that side of it, not just the transaction, is different. With the way demographics are going, we're going to need all this help. The number of skilled people in the entire West is decreasing; that's what the demographics point to. If immigration is not going to be at the 3 million level, if it's going to be more like the 1 million level, we have fewer and fewer people to do all the tasks we need. So we are going to need help from all these technologies, and then we've got to work on the guardrails as we keep extending the scope of what they do.
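The customer-service guardrail described above, answer when confidence is high and punt to a human otherwise, amounts to a simple threshold router. Here is a minimal sketch; the confidence score, threshold value, and function names are hypothetical illustrations, not any real IBM product API.

```python
# Minimal sketch of the customer-service guardrail described above:
# answer automatically only when the model is confident, otherwise
# route ("punt") the question to a human agent. The confidence value
# is a stand-in; a real system would get it from a trained model.

from dataclasses import dataclass

@dataclass
class Reply:
    text: str
    handled_by: str  # "ai" or "human"

CONFIDENCE_THRESHOLD = 0.85  # hypothetical cutoff

def answer_with_guardrail(question: str, model_answer: str, confidence: float) -> Reply:
    """Return the AI answer only when confidence clears the threshold."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return Reply(text=model_answer, handled_by="ai")
    return Reply(text="Routing you to an agent.", handled_by="human")

# A billing question the model is sure about is answered instantly;
# a complicated claim falls through to a person.
print(answer_with_guardrail("What is this line on my bill?",
                            "That is the monthly service fee.", 0.97))
print(answer_with_guardrail("Why was my claim denied?", "(uncertain)", 0.40))
```

The design point is that the threshold, not the model, encodes the organization's risk tolerance: lowering it extends the AI's scope, exactly the knob Arvind says has to be managed.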
Thank you. Hopefully everybody has a better idea than when they walked in this morning as to what we're actually talking about. So now, Arvind, let's get into the real focus of the forum as Father Bob has defined it: what do we owe each other? Which aligns very closely, frankly, with IBM's value of making the world work better. And I know you personally, and the company, have a big commitment to the ethical deployment of new technologies. How are you thinking about this powerful and innovative technology from the standpoint of making it work for people in a positive and ethical way?
Look, there are so many factors that are going to play into this. One is the ethics, for which, for us, the lens becomes: what is it used for? What should it be used for? But equally important, what is it trained on? That has ethics in it, but it is also eventually going to have, I think, legislation in it. We have principles around trust and transparency. Those are really important to us. What are two examples of how that will play out in what we're talking about? One is around what we are calling AI fact sheets: explain to the world, this is the data I used to train it; this is the method I used to train it. It is actually quite surprising to me that most people are not willing to explain what data they used to train their AI. Next to that is the ethical question: is it a fair use? I don't think companies like us should come up with an answer. That is going to be done with a mixture of, I think, ideas that should come out of academia and then go to policymakers, whether that's legislative or judicial, meaning I think there'll be a Supreme Court case on whether this was a fair use of the material. We all consider it fair use to read a book; then the knowledge is in our head and we reuse it. Is it fair if an AI algorithm reads a book and then can repeat it 10 million times, and there is no more need for the book and the people who made the book? I think that's a question. I don't know the answer. There are those who will debate: yes, that's fair use. In music and in literature, about a hundred years ago, it eventually became legislated that no, it's not fair use. In music, you can buy and replay the music in your own home and for a private party. You cannot actually play it here, for this audience. If you wanted to do that, you would need to pay a royalty to the publisher or the owner of that title. So there is this question, which is not black and white, that has to get determined by society. I also think: what is the data? Was the data original?
Was the data itself created by another model? Do you trust the authenticity of the data? These are topics that are much more than what a technology company can do, which is the reason, John, going back to the beginning, that we are so pleased to partner with a university like Notre Dame to ask: how should we think through these questions? If the data is polluted with other generated data, is it okay to use it? Or are we now falling into a trap of some type? I don't think we'll know. This is going to take a few years to figure these things out. In the end, we advocate three things that we think we can stand on, to try to take a complex topic and bring it back down. We say, one, regulate the use, or the risk in the use case, so that we can stand on that. The use cases I described are, I think, pretty low risk, and so those are okay; others may or may not be. Two, we advocate for openness. So describe what you do and let others build upon it. Why openness? It actually is our way of saying it can help democratize a technology. It can make it much more accessible to others. Don't hold it in such a way that you're charging a huge toll just for the fact that you're there. And three, we say that you should hold the inventor or builder of these models accountable. I think if you hold people accountable, then people tend to behave better.
On that point, I'm going to put a plug in for IBM and let you expand on it a little bit, which I think absolutely ties to your culture and your focus on the ethical deployment of this. Explain the kind of indemnities you provide to your clients that, frankly, many of your competitors are not providing today.
An indemnity: we give a piece of software to one of our clients, and then we stand behind it, meaning if anybody wants to come after them to say, our IP is inside of it, or this is not an appropriate use, we stand up to say: if the technology is from us, we think it's completely fair; you've got to get through us and our balance sheet before you can get to the user of that technology. That's a powerful statement, but it also puts accountability back on us to say: now, are we pretty sure that we are not really going to get sued by 10,000 people, because we are sure of how we produced it, and, in the AI case, what data went into it, et cetera? We do the same thing on all AI models that are created by IBM. In our case, we have the Granite family of models, and we give the same indemnity. Meaning, if one of our clients uses it, and 15 years from now somebody comes and says, you're misusing our IP, which is somehow coming out, we say no, that's not accurate, and we will defend you. Until you get through us, meaning drain our balance sheet, you cannot get to our client. We think that is incredibly powerful, and it works both ways. One, it makes us very careful, because we are living by our principle of accountability; but it also gives our clients a lot of freedom in terms of what they can depend upon and what they can use.
The quality of these models is solely dependent on the data, and the quality of the data, that goes into them. When we were chatting earlier, you shared something with me that I found fascinating. Talk about, a couple of years from now, what your view is of the data that's out there, available for people to go grab. How much of it will be, however you want to characterize it, generatively produced?
Three years from today, 50 percent of all the data that we can go get, whether or not it's ethical to get it is a different question, will have been machine generated. Already, the majority of new data is being generated by artificial intelligence, not by people. Typically, in about three years, you double the total amount that is out there. It's even faster, but let me put it at three years. So if the majority is generated, not human, you have to ask yourself the question: if the generation is not perfect, how much of what is there is actually new, and should we use it for training versus not use it for training? What we have done is we made a copy about a couple of years ago; at that point, we think it's at least 90 percent clean, meaning human generated as opposed to machine generated. This is a topic that people are not thinking about as much as they should be. Wait a moment: I know it's a small perturbation, but if you build upon just those perturbations, over time they can build up into pretty catastrophic events. And I think that is a huge topic that is relevant. There was somebody at Notre Dame, I remember, in the early days of the internet, who did some work around how little perturbations can become very major through network effects. And that's going to play out here in a big form.
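The buildup Arvind describes can be made concrete with a toy model: if the corpus doubles every three years and a majority of each increment is machine generated, the human-generated fraction erodes quickly. The three-year doubling comes from the conversation; the 60 percent synthetic share per increment is an assumption chosen only to illustrate the compounding.

```python
# Toy model of synthetic-data buildup: the corpus doubles every three
# years, and a majority (assumed 60% here) of each increment is machine
# generated. Watch the human-generated fraction fall over time.

human, synthetic = 1.0, 0.0     # start from a normalized, fully human corpus
SYNTHETIC_SHARE_OF_NEW = 0.60   # assumption: 60% of new data is AI generated

for years in (3, 6, 9, 12):
    new_data = human + synthetic                    # doubling: new data equals the existing corpus
    synthetic += new_data * SYNTHETIC_SHARE_OF_NEW
    human += new_data * (1 - SYNTHETIC_SHARE_OF_NEW)
    print(f"After {years:2d} years: human fraction = {human / (human + synthetic):.3f}")
```

Even with these mild assumptions, the human share falls below half within about a decade, which is why a clean snapshot of the pre-AI web, like the copy Arvind mentions, becomes a valuable asset.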
Clearly, that bears on a big question everybody's talking about these days: fake news, false information. Think about a world where that kind of percentage of what's out there was generated not by humans.
When you say that, I think about the ethics of it this way: people have been concerned about fake news and misinformation for 200 years. That's why the British, for democracy, created the soapbox system. If you give one party a soapbox to speak, you've got to give the other party one also. That's really where it comes from. In the U.S., the rules were: if you give one party television time or radio time, you've got to give the other party equal access. Can AI amplify the misinformation? Absolutely. But to me, it's not new; it's just the rate and pace. Here is something on AI that people have not really, or the public consciousness has not, gotten to. AI can understand who you are. We all know this from advertising and digital media. It can tune the misinformation in a way that appeals to you, the individual. That is a new form of misinformation, because before now, it had to go out to large cohorts. It was too expensive to tailor it to the individual. That, I think, is a new form of disinformation, which is indeed unique to this era and has not been done before now.
Let's talk about another development that's going to have potentially huge implications in addition to AI, and that's quantum computing. IBM's taking a significant leadership role in this. First of all, I want to start where we started with AI. For the non-engineers, non-technologists in the room: what the heck is it, and how do you see it being deployed in the economy?
I think quantum is the first time we have a new kind of computing since the mid-1940s. In the mid-1940s, a lot of giants created the basis of all modern computing. Whether you attribute it to the ENIAC system at the University of Pennsylvania, John von Neumann, who was a math professor at Princeton, or Claude Shannon, who was a researcher at Bell Labs, what this collection of people came out with is, I'll call it, the computing we all use today. Wonderful computing. Very deterministic. Think of it intuitively: what it can do is middle-school arithmetic at incredible speeds and scale, just like you can take middle-school arithmetic and build it into high-school algebra. That's really what today's computers do. Now, we have a new kind of computing in quantum. It's a probabilistic machine, not deterministic. Probabilistic means it's looking in the space of all the possible answers and trying to pick out the most appealing one for a problem. Economists would call it utility functions, physicists call it Hamiltonians, engineers would call it an optimization function. Problems that are really hard to do on normal computers, some of them become really easy to do on these. The most famous one that people talk about is that it'll break encryption. Actually, the problem is not the encryption. What it can do is tell you which two numbers get multiplied together to make up a very big number. That's a really hard problem on a normal computer. You actually have to, what I call, look at all possibilities before you get to the answer, which means you're exploring the entire state space. These computers can somehow, and I won't get into it, but the underlying principles are those of superposition and entanglement, which they use to look at the whole thing and say: that's the answer. Now, how far away is this? Are we talking science fiction? Not really. I think Richard Feynman is the best one to quote.
He's been dead for over 30 years now, but he looked at it, and he was one of the most brilliant physicists who ever lived, and said there is nothing in here which contradicts the principles of quantum mechanics. He then actually went to the second part: if you want to simulate nature, you need a quantum computer, because he couldn't imagine how you could possibly simulate nature on classical machines. So then you get to: why do I care about these? Do I want to design a better battery for an electric vehicle? Do we want to design better fertilizers? Do we want to design more lightweight materials that are still very strong? Do I want to be able to get more energy out of an existing oil well? These are all problems that I think a quantum computer is going to address. By the way, I think we'll be able to approach them in the five-to-seven-year time frame, which means by the end of this decade. Can you design a better drug for cancer? I think those are going to be tougher problems, because those are more complicated molecules, and as for simulating things inside the human body, I'm not sure we fully understand that yet. So those are much harder. Is it possible? Yes, it's possible. The other area is risk: financial risk of different types. It's almost like these machines were built for that. What we have to do is probably improve these machines by about ten times from where we are today. You look at me and say, ten times? That's a lot. But we've been improving them at ten times every few years for the last decade. Compared to where we were in 2016, by 2018 we were ten times better, then by 2021 we were ten times better, and now we're ten times better again. This is now, to me, in the scope of an engineering problem, not a physics problem. So it's exciting to be able to look at those problems. Can you imagine?
More certainty in food, safer transportation, much better energy storage, which is going to be an important element of the whole energy transition that the world is fascinated by.
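The factoring example Krishna mentions can be made concrete. Multiplying two numbers is instant, but recovering the factors classically means searching candidate divisors, which is the "exploring the entire state space" he describes; Shor's algorithm on a quantum computer would find the factors without that exhaustive search. A minimal sketch in Python (the numbers here are small, illustrative choices; real cryptographic moduli are hundreds of digits long):

```python
def trial_division(n: int) -> tuple[int, int]:
    """Classically factor n by checking candidate divisors up to sqrt(n).

    The loop embodies the 'look at all possibilities' search: for a
    d-digit number the worst case takes on the order of 10**(d/2) steps,
    which is why factoring very large numbers is infeasible on
    conventional computers.
    """
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return n, 1  # n is prime


# The easy direction: 43 * 47 is computed instantly.
# The hard direction: recovering (43, 47) from 2021 requires the search above.
print(trial_division(2021))  # -> (43, 47)
```

At cryptographic sizes the same search would take longer than the age of the universe, which is why the post-quantum encryption work discussed later in the conversation matters.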
John Veihmeyer: I want to get to some audience questions, but one last question on this topic. When you think about those potential use cases, which are incredibly powerful, I know the kind of investment it takes to build the kind of machine you're talking about. How do you think about a world where that kind of power, because of the amount of investment it takes, may get concentrated in a very small number of companies and a small number of nation-states that have the ability, both financial and otherwise, to actually build these machines? What kind of philosophical and ethical challenges do you think that creates for society?
Arvind Krishna: Look, the question comes down to who's allowed to use them and how you give access. Giving access is the easier part, because you can turn around and say, we can run the machines and open up access to a lot of people. It's not just those who can afford to build them but those who want to use them, which can be done in a much more cost-effective way. There's an equal concern that you don't want them in the hands of really bad people, so you've got to make sure that you're much more careful initially about who's allowed to get access, both in terms of nations and who the actors are. We do that right now. Is it scalable when you have millions of these machines? I don't know, but for right now, that's the path we're going down. But John, equally important, we know that at some point it could break encryption. So ten years ago, we started working on techniques that would create new encryption that is not breakable by these machines and no harder to implement than today's encryption. And we take great pride in the fact that we completely open-sourced it. Maybe we'll make some money helping our clients, but we open-sourced the technology, and in NIST's latest competition, three of the four algorithms they approved were invented by our researchers. So that is what you have to do: say, here's this thing, here is one bad thing about it, let's make sure we mitigate that thing.
John Veihmeyer: Yeah.
Arvind Krishna: In terms of making forward progress.
John Veihmeyer: I'd like to supersede the next hour of discussion that's supposed to take place, because we could keep going a long time on this. All I will say is, I hope you all come away from this conversation with an appreciation of how fortunate we are to have these kinds of innovative technological advances in the hands of leaders and companies like these. So I want to get to some questions from the audience that have come in. Arvind, in your opinion, how much of succeeding in a corporate environment is about technical skills versus interpersonal skills, and how do you develop those interpersonal skills?
Arvind Krishna: It's actually one of my favorite questions. When I talk in private to engineering school deans or department chairs, and I've been asking this for 20 years now, I ask: can you also give the students more communication skills? Amongst all the interpersonal skills: how do you tell somebody else what you're working on? How do you share ideas and collaborate with other people? I think it's incredibly important to have those skills, even for the most deeply technical people, so I actually request that. Most of them just look at me. Then I look at my own experience and how I got those skills. Nobody insisted in class that you learn all of that, but my graduate school advisor insisted that every week the 20 students in the group sit down, take a paper he would give you, and explain it to your peers in a brown-bag lunch session. You could maybe put up a few pieces of supporting material, but you had to convey the ideas in an intuitive way. Not everybody is in exactly the same narrow area that you are, so you had to do it in a way that they could follow. I look at that and say, that was a blessing to me when I came into the corporate environment. The other part, for those who move up, comes probably five or ten years in. If you're managing groups of people, you've got to understand what motivates them. Not everyone is motivated by the same exact thing, and it's really important to pick up those skills of listening for what is important to them. And by the way, you'd be shocked. I think that infamous saying that 5 percent of communication is verbal and 95 percent is nonverbal is very appropriate. I walk into a room now and often say: I know you all said all this, but I have no belief that any of you are going to do it, because watching the body language and watching how you all spoke over each other, there is no belief. You're treating me like a dog, right? Okay, pat, pat; yeah, we'll do it, but there's no intention of it actually happening. I think those skills are all important to pick up.
John Veihmeyer: I want to get you back to campus sometime to just talk about leadership, as opposed to anything technology-related, but we'll leave that aside. Another question from the audience: how should companies think about their investment in research and development, balancing near-term ROI with a lens that lets them see further into the future? And how does the company share that vision?
Arvind Krishna: Technology goes into a lot of areas. Oil and gas companies do technology; pharma companies do a lot of technology. If I look at pharmaceuticals and at us, so think software and computer hardware, you have to be spending well into the double digits of your total revenue on R&D. If I look at ourselves, we are probably sitting at around 15 percent right now. If you get to massive scale, maybe that's the right number; I would not be surprised if we get closer to 20 percent over the next few years. Let's call it two-thirds that is going to be focused on products that are here and now, meaning you're pretty confident they're going to make money that year or the next year. But you've got to spend at least 20 percent on what I'd call speculative bets. They're not all going to work, and that ties to risk-taking. This is where I think the mechanisms of a lot of mature companies have trouble. Wait, you're telling me two-thirds of those things are going to fail? Yes, two-thirds are going to fail. But the third that does work is going to give you incredible advantages and the ability to make progress. So that's how you have to think about it.
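The budget split Krishna describes is simple arithmetic, and a worked example makes the proportions concrete. A hypothetical illustration in Python (the $60B revenue figure is invented for the example; the percentages are the ones he gives):

```python
# Hypothetical illustration of the R&D allocation Krishna describes.
# The revenue figure is an invented example; the ratios come from his remarks.
revenue = 60_000_000_000          # assumed annual revenue: $60B
rd_budget = 0.15 * revenue        # ~15% of revenue on R&D

near_term = (2 / 3) * rd_budget   # products expected to pay off in 1-2 years
speculative = 0.20 * rd_budget    # long-shot bets, most of which will fail

print(f"R&D: ${rd_budget / 1e9:.1f}B, "
      f"near-term: ${near_term / 1e9:.1f}B, "
      f"speculative: ${speculative / 1e9:.1f}B")
```

On these assumed numbers, a $9B R&D budget carries a $6B near-term portfolio and a $1.8B speculative one, which shows how modest the long-shot slice is relative to total revenue even at a 15 percent R&D intensity.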
John Veihmeyer: That ties into another question, for our students and the parents of students soon to be going into the workforce: is AI going to eliminate any opportunity to get a job, or are there still going to be positions out there? I'm being facetious and paraphrasing a little bit, but the question gets to what all this artificial intelligence and generative AI is going to do to the opportunity for our students and future students to find and start a career.
Arvind Krishna: This is probably far less of an issue for Notre Dame than it is for some other places, just to be straightforward. For students coming out of places like this, with extreme rigor, with extreme critical thinking skills, with depth across many different areas, I don't think there's any danger of job reduction. Now, looking more broadly, I've gone on the record to say I do think we'll see somewhere in the range of 5 to 10 percent total job displacement. But I'm convinced that there will actually be more jobs, not fewer. And I'll use the internet as a basic analogy. Compared to 1995, many jobs have gone away, but look at all the jobs that got created. If in 1995 we had stood up and said, we're going to have 5 to 10 million web designers, people would have looked at you and said, what the hell is a web designer? How many social media influencers are there out there? Who would have believed that advertising was going to shift to those channels? If there is a productivity advantage, you are going to create a whole set of new jobs that you can't really fully imagine today, and there is some set of people driving that. Those are all new. I do think that purely repetitive, rote jobs will be displaced in some fraction; I estimated at our company that about six percent is going to get displaced. But I said total employment will go way up. Look at more productive companies: do they get more market share or less? History has shown, 100 percent, that the more productive company gets more market share, because you can pass some of that productivity back to your clients in lower cost and higher quality. If you get more market share, you're going to need a lot more people to do those tasks which are of great value to your customers. So I paraphrase the whole thing as: people who use AI are going to displace people who don't use AI. It's the best way to put it all into perspective.
John Veihmeyer: Thank you for sharing that, because I think it'll put a lot of people at ease who are worried about that. I know we're out of time. Last question, quickly: can the U.S. regain an advantage in chip manufacturing, do you think?
Arvind Krishna: Legislation is one thing, but government also has to actually spend the money. I look at our friends in Japan. They did their equivalent of the CHIPS Act, after the U.S. did the CHIPS Act. Their first factory's shell is already up completely. By December of this year, it'll be fully populated with all the machines, and they expect to be running pilot lines in '25. In our case, we are just digging holes in the ground right now. There's no shell anywhere to show for what we have done. So to me, this is about speed, and it is about government becoming a bit more risk-tolerant, as opposed to using it as a way to add hundreds of social programs onto the back of the CHIPS Act. If you do that, it'll take you ten years, by which time the world has changed, and who knows whether we can get an advantage then.
John Veihmeyer: So Arvind, we're going to have to, unfortunately, end it there. I'd keep you here another hour, but you'd be late for your board meeting.
Megan Sullivan: Thank you. Thank you.