The ThinkND Podcast

Ten Years Hence Artificial Intelligence Promise and Peril, Part 5: Past, Present, and Future

Think ND

Use Left/Right to seek, Home/End to jump to start or end. Hold shift to jump forward or backward.

0:00 | 58:40

Join us for a thought-provoking virtual event, “AI Ethics: Past, Present, and Future,” featuring Nicholas Berente, Professor of IT, Analytics, and Operations in the Mendoza College of Business at the University of Notre Dame. Professor Berente will delve into the evolving landscape of AI ethics, tracing its historical roots, examining current challenges, and envisioning future possibilities. Gain valuable insights into the ethical considerations surrounding AI technologies and their societal implications. Don’t miss this opportunity to explore the intersection of ethics and artificial intelligence with a distinguished expert in the field.

Thanks for listening! The ThinkND Podcast is brought to you by ThinkND, the University of Notre Dame's online learning community. We connect you with videos, podcasts, articles, courses, and other resources to inspire minds and spark conversations on topics that matter to you — everything from faith and politics, to science, technology, and your career.

  • Learn more about ThinkND and register for upcoming live events at think.nd.edu.
  • Join our LinkedIn community for updates, episode clips, and more.

Welcome. This is 10 Years Hence. I'm Professor Jim O'Rourke, and our series continues today as we move from symbiotic human AI interaction To AI ethics, past, present, and future. Our speaker this morning is Nicholas Berente, Professor of Information Technology, Analytics, and Operations in the Mendoza College of Business at the University of Notre Dame. Professor Berente shows and studies how digital innovations such as artificial intelligence technologies. drive change in organizations and institutions. He teaches courses on strategic business technology and is co director of the Gamma Lab and affiliated faculty in Notre Dame's Lucy Family Institute for Data and Society, as well as the Notre Dame Center for Technology Ethics. Professor Berente received his PhD from Case Western Reserve University and conducted postdoctoral studies at the University of Michigan. He was an entrepreneur prior to his academic career, founding two technology companies. Dr. Berente is the principal investigator for a number of U. S. National Science Foundation projects and has won multiple awards for both teaching and research. He is also senior editor for Management Information Science Quarterly. Ladies and gentlemen, please help me welcome to the stage of Jordan Auditorium, Professor Nicholas Marenti. Thank you, Jim. Thank you, everyone. this is exciting, and it's exciting because I get to talk to you about what I consider to be the, probably the most important issue of our time, right? especially for those of you who are going to, to essentially, you're coming into your own. You, the undergraduate students, right? You're coming into your own here at this time when it's really just starting. So let's talk about what it is. And, And I think we have to think about how we're gonna deal with it, right? But let's go back, and let's talk about technologies in general for a second, and the importance of technology. And, I think one way to think about technology is to think about the 1800s, right? The 18th and 19th century, 17th and 1800s, where we had the Industrial Revolution. What happened is the technology for generating value in the world changed, right? throughout most of human history, the way we generated value, the core technology was agriculture. It was the farm, it was, animal husbandry, and maybe before that we were hunters and gatherers, right? But then, in the 1700s, 1800s, in the Industrial Revolution, the core technology from which we, generated value, or productive value, was the machine. And it upended the way society, all of a sudden real estate wasn't quite as important as it was previously. people moved from rural areas to cities. We concentrated, so many things happened, the institutions of society changed. due to the core technologies, right? So technology is super important. And what happens? there's sometimes these negative externalities, right? and here you see a young person, a nine year old boy, doing hard labor. there are these negative externalities, there are these ethical issues that arise that people didn't necessarily predict ahead of time. And if you look at child labor, for example, in as early as like 1790, There were people writing articles about how we need to do something about children working. And there were sometimes entire families where the adults didn't work and the children did, because they were much cheaper labor. And it took, from 1790 when people started observing this, To the 1900s, I think it was around 1914 1915 in the United States that we finally eradicated child labor by requiring students to go to school, or young people to go to school, elementary age children, right? So it took more than a century. to take care of this problem. and what happened with organizations is you could consider, if you're Rockefeller, you're an ethical organization hiring these children as long as it's legal. If it's not legal, then you'd be unethical doing it, right? But if it were legal, then you were fine doing whatever. So what happened is organizations externalized ethics to regulators. Okay, so that's important to note. There are two things that are important to note. It took a long time to figure out what we're going to do and do something about it, and until the laws were there, people were fine just following regulations. another example of this is, pollution. And this is a picture taken at noon. in this, in 1948, in a town called Donora, right outside of, Pittsburgh, Pennsylvania. And, and this is pollution, right? This is smog. This is another negative externality of the industrial age, of that machine that is changing. And it did so many wonderful things, industrialization. We, in conjunction with industrialization, we moved the bulk of the human population out of poverty. We increased literacy. We did wonderful things with that new machine of industrial production, or in conjunction with that new machine. But we had a lot of negative externalities, too. And if you look at the negative externalities associated with pollution, they made regulations immediately. about seven years after that picture, they started indicating that we're gonna, and as we all know, we haven't solved the pollution problem yet. But what we have found is that organizations can't wait for the right, and there's still regulations happening, right? still with that original act, we continue to do, and the most recent, ones around the Clean Air Amendment, the Clean Air Act, this particular amendment, we're still writing regulations, and we will continue to, right? So what's happening? It takes a long time for regulations to catch up, but particularly with sustainability, organizations have realized we cannot externalize this to regulators. We have to be ahead of regulators a little bit, right? Because we want to be good stewards of the environment. The people we hire, your generation, want to work for sustainable companies, for example. So we have to have a story about this and it has to be authentic. So these organizations are not necessarily waiting for regulators anymore to be ethical. And there are case after case of organizations that were following the law but were dinged for unethical activity, right? Including Cambridge Analytica with Facebook, if you're familiar with that case. There are many sustainability cases around sustainable practice and sourcing and organizations are increasingly internalizing ethics. Okay, so that's the first point. We can't wait. It takes a long time And organizations have to internalize ethical practice. And then of course, we're here, what we're talking about, right? Which is artificial intelligence. Which is this incredibly powerful potential for technology that we are unleashing at an incredibly rapid pace in the world. we did some research. When, sometimes people want us to define AI. And probably my favorite definition of AI, John McCarthy, one of the founders of AI, said is, AI is whatever we're doing next in computing, right? And once we know that, we give it a name, it's no longer AI. Now the next thing is AI. So it's this continually moving frontier. So for example, barcode scanners. Back in the 1960s, we're computer vision, right? We called that AI. No one's gonna call that AI anymore. And it's similarly now. The predictive models that we came up with 10 years ago, using neural nets, no one's calling that AI now. Now we're calling these generative tools AI, and that sort of thing. So it's, we can't think of AI as a thing. AI is a continually moving frontier of capabilities, right? So that's how you want to think about it. It's not a thing, it's a frontier of things. And, and oh, by the way, it's diffusing so fast. First of all, computing is diffusing so fast. for example, the telephone. Or the dishwasher diffused really quickly. it took a couple decades. digital technologies, digital capabilities, diffused much faster. And then we all know what happened with ChatGPT when it came out. It took five days to get to a million users. So think about this for a second. Where it's this frontier of ever increasing, powerful technologies. and they're diffusing at a pace that no one is going to stay on top of. And that's where we are, right? And as the, just like child labor and pollution were a result of the technology of the industrial age, here's a list of some of the ethical issues that come up from artificial intelligence, right? each of these. is an intractable problem that we're just trying to get into, right? And the technologies are coming out at a rapid pace, so even as we look at what's happening right now, the story changes tomorrow. So that's the situation we're in. We know we need to internalize ethics, we know we have to stay on top of, These technologies, we have to put guardrails around them. we have to make sure that we're ethical. And we as organizational leaders need to internalize that in our organizations. So this is the situation we're in. and by the way. I guess we should have mentioned it before. Heather Doman is one of the folks that we're working with. She's at IBM. She was going to do the presentation with me today. She's in Dubai, and if you've been following the news, Dubai is flooded. They got two years worth of rain in a day, and the airport's flooded, nothing's leaving, and she's stuck. So, you're not going to see Heather. Apologies for that. There are IBM people here, if anyone's doing the hackathon this afternoon, but, yeah, Heather's not coming. So if you came to see her, sorry, you got me by myself. okay, so anyway, that's the situation we're in, right? So what do we do? We need to deal with this. We need to make sure AI is ethical. So what are our tactics? The first tactic that you saw everyone do about a decade ago is said, okay, we need to make sure AI is ethical, we need to internalize it. What we're going to do is generate some principles. companies start having principles. These are the principles, for example, that Google came up with. And I think we can all agree to this. Be socially beneficial. Yeah, that sounds good. You can go on, look at some of these. If you look at the bottom, one of the things they said is we're not going to do technologies that cause harm. Okay, that's a nice principle. We don't want to cause harm. They don't want to make weapons, so Google's not going to make weapons. Clearly they think that's unethical. So this is what happened, right? People said, look, we're going to do this, and they came up with, typically, some fairly general principles. This is IEEE, which is the Engineering Society. They, very close to the same time, came up with their principles, and if you look, they're probably pretty similar. Human Rights, Benefit to Humanity. These look Google's principles, except this one's reframed weapon systems, right? I guess IEEE and Google have a different idea about weapons and whether there's, their ethics involved. But this is how organizations were dealing with AI, kind of 1. 0 AI ethics was principles. And of course the problem with these principles is that there are different levels of abstraction, And the way people, and this fellow right here, Luciano Floritti at one point looked at all the principles that were out there and he's It's getting to the point where people are just like shopping for principles, Compiling their own little set of principles together, putting it out there, and it's it sounds good. It's just window dressing. Now, you can criticize principles. There are certainly problems with principles, but it does do a good thing. It helps us locate, right? What do we believe in? What are the things that we as an organization believe in and what's the true north? so we don't want to dismiss principles entirely. As a matter of fact, I would ask you this. I'll call on you if you don't raise your hand. Name a principle that you think we can all agree with. What should AI be? Say, give me a principle. So prioritize development. That's a principle we might all agree with. how about you? Privacy. Preserve privacy. That might be a principle. Transparency. What is that? Beneficence. Ooh. Alright, how about in the green sweatshirt? Honesty. what would you call that? A vest. Black vest. Fairness. Fairness, thank God. It's usually within the top three I get fairness. alright, so these are all good things. And it's like a lot of these. Give us more of these, right? Privacy. Of course we need to preserve privacy. Beneficence. Yeah, we should be beneficent. Whatever that means. we know what it means. We're Notre Dame people. but fairness, right? Who is against fairness? Nobody's against fairness. Fairness is a good thing. Things should always be fair, shouldn't they? Fairness is one of those wonderful things. It's like we can all sit in a room. We can be Republicans, Democrats, we could be old, we can be young, and we can all be like, yes, give us fairness. And we can come up with other words like that, give us justice. Yeah, we all want that. The problem is not the principle. The problem is when we need to operationalize that principle for actual action. And this is and this is a paper and there are actually more definitions of fairness, but the problem isn't the idea of fairness. The problem is when you define fairness. And this particular paper has A dozen different definitions of fairness. These are computer scientists, and they say, Look, we can adjust your AI to any definition of fairness that you might come up with. But you humans have to come up with a definition of fairness. So what is fairness? Do we treat everyone equally? Is that fair? Do we ensure that the outcomes of people are all equal? Do we ensure protected classes have demographic parity? All of a sudden, when we start getting into the weeds, we start finding that, that fairness is not so easy to, to operationalize. And then all of a sudden we don't all agree now, right? I might say, fairness is equal opportunity. You might say fairness is demographic parity. And now we're in a discussion, okay? so that's the issue with fairness. and we've run into this. And if anybody, if you're at all interested in this with the MIT, it's out from a couple of years ago. The compass project was something that, we tried to, and it was implemented in a number of states, Wisconsin, Michigan, judges, we decided judges were biased. because they incarcerated black people, instead of letting them out on bail, at a rate higher than white people. and so they put a system in that would be, and then, long story short, they realized that the system was trained then. to penalize black people more than the judges actually penalize black people. And then they had to try and figure out what would be the appropriate way for dealing with, dealing with fairness, racial fairness. And they got rid of the system all over the place. So people are interested in the compass example. So that's the issue. Principles are good in the sense that we can agree, we can talk about them. we can start talking about it, but on their own. They're not, they don't imply the operationalization. They don't help us too much when we get into the weeds about what we're actually going to do with AI, right? So the European Union, which by the way has its own principles, said, You know what? We can't do this. We need Human oversight. The bottom line is you need humans. So that has become, you might say, the second wave of how to deal with ethical AI. We're like, okay, principles are cool, but now let's put a human in the loop. And the implication here being that if we have a human in the loop, the human will ensure ethical action. And then we have all these, and this is just from a particular paper, you don't even have to look at it. In depth, it's just a way of organizing. Okay, so what are the different scenarios? And this paper was talking about fairness. And they were saying, what are the different scenarios? Do we need it to be really fair? Is fairness a concern? Is fairness not a concern? Do we want the human to make the decision? Do we want the algorithm to make the decision? And then they came up with different kind of human in the loop. Human in the loop, so either the algorithm makes the decision and the human kind of checks on it. Okay. Or, the algorithm advises the human and the human makes the decision, right? There are all these ways that you might formulate humans and algorithms working together, and that's the way that you might ensure ethical behavior. There are a couple problems with this, and this is something that, it's an online experiment that they've been running for about 10 years, and it's still out there if you want to go play with it, but it's called the Moral Machine. And when they made this, they wanted to program ethics into self driving cars. So if you Google Moral Machine and you go online, you can actually program ethics. Because what it does is it says, okay, who are you going to kill? If you go straight, there are maybe older people walking across the street, but there are younger people over here. And if you turn, you kill the young people. If you keep going straight, you kill the old people. Sometimes they put pets out there. You go straight, you kill the pets, you turn. they put healthy people and fatter people. They did all these things, all these scenarios, and they were trying to encode human ethics for who these autonomous vehicles should kill when they have to decide whether to swerve or go straight, right? And of course what they found was that it was much harder than they thought. People don't actually agree. So that's the one problem with humans in the loop. They don't actually agree, particularly cross culturally. For example, in some, East Asian countries, they will essentially swerve to kill the young people in order to save the old people. In the West, we will kill the old people to save the young people, right? So that was one of the cases where it was directly contradictory, but everywhere else it was shades of gray and that sort of thing. about who was in the car, who was So the point is, we have different standards for ethics. So when we put humans in the loop, Different humans, of course, are going to behave differently. the other thing is, humans have different, and there's a whole entire literature on what they call, algorithmic aversion, humans have different kind of proclivities to working with technologies. And some will work with them well, some will not work with them, some will take their advice, others won't take their advice. So people will interact. you put a human in the loop, they're going to act much differently than each other. And then of course, as we all know, humans, not all humans, are going to act ethically. So you put a human in the loop, they might not necessarily act ethically. And this is one of our PhD students, Ryan Cook, he did a little experiment, and there's a lot of this out there, and we're presenting this at a conference, but one of the little findings of the experiment is that, it's this idea of, motivated moral reasoning where if you're a boss and your subordinate maybe does something unethical, but they're performing really well, you'll overlook their unethical behavior. However, if they do something unethical and they're not performing really well, then, you will penalize them. Okay, so humans are flawed. Humans have our biases and in this case, so that's the problem. We can't put a human in the loop. They have different ethical standards. They may not behave well, right? And, and they have different interactions with the technology, right? So human in loop. So ethics are fine. Humans are maybe better than the system by itself, perhaps. But, and that's where we are right now. We're at this, what we might call AI ethics 3. 0, which is governance. We realize that Hey, we need something more than just a human, and we need, so here's where it gets just a touch philosophical, we're gonna go in there, and it all starts with, this is a, one of the founders of the field of artificial intelligence, his name is Herb Simon, he's one of the most interesting people in the 20th century, I think, Nobel Prize in Economics. One of the founders of artificial intelligence and computer science created the fields of cognitive science and design science. He's a really remarkable human being. And he had this notion that too many times when we're trying to solve problems, we engage in what we call, what he called, substantive rationality. So he uses these terms maybe a little differently than lawyers do. But ThinkND. Thank you. But he said, and the problem is, if we really want to address the big issues and if we want to deal with it, we should probably take our procedural rationality toward these issues. and I think that's the key to AI governance is this procedural rationality. So now, if you'll bear with me, let's embark on where Herb Simon is getting his idea. Herb Simon, is one of many great thinkers I'll go over a couple of them. Herb Simon, C. West Churchman, they're both original AI thinkers, they were in procedural rationality, and then there's some American pragmatists, Juergen Habermas, that we're going to talk about these folks. But Herb Simon was a trained, actually a psychologist, originally, and he did like behavioral experiments, and he was of course inspired by William James, the father of U. S. psychology, and also one of the fathers of American pragmatism, the philosophical tradition. And, His thinking was very much rooted in William James thinking. So bear with me. this is how William James observed what later a guy named Quine called the problem of underdetermination of data. All right, we're the world. We, all of us who are investigating the world, have the world at our, The world is available to us, right? So when we're thinking about the world, we necessarily have to simplify it, don't we? And this is the hard sciences, too. This isn't just people in social science. This is chemistry, for example. A chemist will look at the chemical composition of materials at a certain level. A physicist might look at physical elements. A biologist, a sociologist, right? Different people. An economist will look at the money. They're all looking at the world. Is any explanation better than any other explanation? No, they're all simplifications of the world, drawn from the same world, and, none is better than the other. They're all just explanations, they're all thinking, it's generating knowledge for a particular purpose and in a particular way that's never a one to one map of reality. It's always a simplification. And since it's always a simplification, Now, within chemistry, maybe one explanation is better than the other. If you're trying to do the same thing, using the same data, sure. But in general, the world does not dictate our understanding of the world. We have to impose some structure upon it, right? the thing that flows from that is that any knowledge we have about the world isn't necessarily only provisionally true. Because we're taking a slice of the world, and it's always a simplification, we're never going to take everything into consideration. It's always going to be a partial understanding, and this partial understanding is going to be limited by our ability to analyze, our ability to measure, all of that. it's going to be the best we can do at that moment. But we haven't reached some universal truth in any of our investigations of the world, right? This is his epistemology. His epistemology, which is the science of knowing, is, is very humble and very tentative and very incremental. And that's what he said, that any knowledge we have is always provisionally true about the world, right? That was his epistemology. And the idea is that we need to always be improving it. And know that wherever we are at any moment, we're not done. We're always improving it, and that's how we know about the world. Now, I know you're all looking at me going, Wait a second. Dude. You were supposed to talk to us about AI ethics. Why are we getting some random 1800s education about William James and his view of the world and how we get knowledge, right? So here's why. oh, I did that one slide quickly. So here's why. One of, another American pragmatist, John Dewey, took what James was saying and is yeah, that's the right way to think about knowledge. The right way to think about knowledge and how we learn is through small experiments. We never have a final destination. We're always moving things forward. Sure, at some point in the future in some infinite amount of time, we'll get to the point where we Perhaps get a much better understanding, but we're always tentatively moving and that's how we are with knowledge but what John Dewey said was And you know what we're saying about knowledge? It's the same for moral judgments. Moral judgments and judgments about what we know, epistemological judgments, are the same thing. That's a sense, that's what he was basically arguing is that the way, the humble, fallible, tentative way We think through scientific knowledge is how we should think through our moral judgments. Let's not go to the world and act in our moral superiority that we have all the answers with some sort of substantive answer. Let's, we can maybe have universal principles we work from, but in the dirt of everyday activity, We have to navigate. We have to make moral judgments in the wild, at a moment, in a given context. The way we do that, and he had a, way of, it was essentially the scientific method. So what we have to do is come up with, and he had this idea of ends in view. We have to come up with short term ends that fit the principles, fit, Our conception of ourselves, our virtue, right? But at any point, we have to use our judgment. We have to make moral judgments consistently, and we cannot be too satisfied with our moral judgments, right? We take an experimental, fallibilist, tentative approach to our moral judgments and investigate them as we act. So this notion of fallibilism for moral behavior, I think is really critical to our understanding of how we're going to put guardrails around AI. So it took a long term to get here, oh, and one way to get at fallibilism, and this was C. West Churchman, another guy who can trace his lineage to William James, another father of artificial intelligence. He had this idea that he called the deadly enemy proposal, where he said what we have to do is always be. actively introducing questions to our models and questioning assumptions of our models. Not just wait for the world to, but we have our ends in view, we have our purposes, we have to evaluate them, but we also introduce things that undermine our model. And then there's another guy, Juergen Habermas, who was influenced quite a bit by John Dewey, and he has this idea for what we call discourse ethics, where one way to think through how to act here and now. is don't do it yourself. Maybe talk to a couple people. Have a community and you guys can, together, maybe think about what the appropriate action is. And that makes sense around AI. so how do we put these things together? We want to be fallibilist in our assessments, our moral judgments of what AI is doing. We want an experimental procedure, so it's always tentative experimental. And then, we want discourse. We want people to talk and make these moral evaluations together, right? So you're like, okay, so how do we do that? good news. In software development, we have something called Agile Development, which kind of inherits those tactics, right? In the old days of software development, we used to say, Let's, let's design it all up front and then we'll build the software. We realized we always designed it wrong. So we created this approach to Agile where it's no, just create some really bad software really quickly. And then iterate it. And then just keep iterating, keep changing it, and iterate it forever. It's never done. You're always improving software. it's fallibilist, it's tentative, it's community driven, it's all the things we talked about. And in Agile software development, we have this thing called test driven development, where we come up with those ends in view. The little things that, we want the software to accomplish, and we define those ahead of time. So this is test first development. And, and it's alright, so how can we do this idea of test first development? And we have a paper that literally just came out this month in Communications at the ACM, which is one of the biggest computer science magazines. This guy is Cam Cormilo. He's a postdoc here who's starting his faculty and, and a Notre Dame alum of a few years back. And he's starting his faculty in August. He's going to be an assistant professor in ITAL. And he and I, along with our collaborator, Christoph Rosenkranz, basically came up with a process for doing this. We're like, ethical? Let's generate ethical tests. ahead of time. Let's develop based on those ends in view, based on our definitions of fairness, based on multiple contradictory definitions of fairness, right? We build our tests ahead of time, we test whatever we've created, we talk, more than one person talks to the other, they make their moral evaluations, and this is a continual process. We're continually auditing, generating more tests, iterating, so we incorporate ethical tests and community discourse into the, And when I say software development, most software now is incorporating some, something that you might call AI into it, right? And then where we're going with this, and we're drawing this from this FairLearn. org, is a way of thinking, it's a process for, Okay, we have our, you have to come up with your types of harm, you identify tests for that. You do the testing, and then you come up with some mitigation strategy, right? and now the frontier is that mitigation. It's okay, so let's say we do our tests. Let's say we run it, and then we realize we're running into problems. How do we take care of that? We realize that there are many different avenues to taking care of that. We're not sure how, but we know that discourse and use cases have something to do with it. Discourse, we get multiple people talking. Use case is, we figure out, not some general, Substantive, universal. We have to say, okay, specifically, people are using it for this. How do we mitigate it in that particular use case, right? So that's what Cam and I are working on. And if you do this though, you might be thinking to yourself, dude, incredibly expensive, right? If I'm doing this for every single, and I have a million tests, and then I'm mitigating them all, and I'm getting a bunch of people talking, and then we're, it's whoa, who's going to pay for that? That's that's going to be way too much. Too much effort. Not worth anything. so if we're going to have this procedural approach to guardrails of AI, who's going to do it? How are they going to do it? What are they going to do? And this is where we're working with IBM. There's a bunch of stuff under this governance, and I'm not going to talk about all of it. IBM has their 360 framework. Uh, governance sometimes people think is like a board, but it also is process, it's standards. you can see this, right? this is pretty heavy, it's pretty expensive, and it's wow, are we really going to do this? And that's why one of the things we're doing really closely with IBM is trying to figure out what's the return on investment. If a company were to do this rather prohibitively, expensive. The task of ensuring, of internalizing AI ethics and following a process for it and doing it the right way. How are they going to pay for it? that's what we're doing. And we're doing it, with funded research. and what we're finding is companies are actually doing this. Maybe not as fully as we'd like. But they're starting to do it, and they're starting to allocate funds, and there are two reasons that they're starting to do it right now. One is regulatory compliance, particularly Europe. Europe has a series of about six regulations that have come out in the last, I don't know, eight years, around data, around privacy, around data governance, around AI, right? And, And companies are just like, hey, we have to be proactive, we have to be compliant, we have to set ourselves up for this. The other thing that we're finding, particularly with tech companies like IBM, is they're investing in this because they are a trusted partner with technology. And if they're going to be selling tools to, other companies, they need to show that they're ethical and, so this idea of brand, reputation and trust is one reason a lot of companies are, and then another is regulatory compliance, right? They need to avoid huge penalties in Germany, and that's what they're doing. But what we realized by doing some digging, this is with, Maria Elena, Another Ph. D. student here in, in Itaew. so Maria Elena has this idea that, wait a second, let's work with these companies and help them figure out how they can justify ethics. Those two ways, of course, your brand is important. And, Compliance, of course, regulatory compliance and fine avoidance is super important, but there's some really cool things that if you do this stuff, you might come up with some indirect, outcomes from, that are valuable to your organization, right? Some indirect return, and of course, one is reputational. if Build a strong reputation. That reputation will generate new business, right? It'll preserve existing business so that they don't leave you And it will impact a variety of stakeholders. The other thing, and this is something that's interesting, is when you're invested in creating ethical AI, what you have to do is create a platform. So a Technological platform. There are technological components that enable you to do this. And then, you need people to do it. So what are you doing? Someone just told me, not that long ago, that this whole ChatGPT generative AI stuff, there are about 500 folks in the United States that understand it deeply. Okay? 500! That's nothing, right? If you want a job, that's maybe your next step, okay? Know the stuff inside out, you'll be, so, 500 people in the entire United States that really understand this stuff fundamentally, so what are you doing? You're not only building some technologies, you're bringing people on and up skilling them. around AI, which may be the most valuable part of investing in AI ethics, is that you bring a lot of people up to speed in what you're doing. And then those physical platforms that you create, the data platforms, the test platforms, those you can use, and you can monetize those in a variety of ways, right? So there are a number of different ways, both direct and indirect, to get economic impact from AI ethics investments. And Maria Lena and our team, we have a broader team, That, and we're looking at the different ways that organizations can get economic impact, reputational impact, and build new capabilities when they're investing significantly in AI ethics. So with that said, I'd like to conclude today with the same way that I started. And that's that, technologies matter, and the core technologies of society matter. We are not at the end of some AI revolution. We are in the infancy. of an AI revolution right now. There are going to be a lot of negative externalities, associated with this. Just like there was with the industrial age, a lot of, but we don't have 200 years to figure it out. If we wait 200 years this time to figure it out like we did with child labor and sustainability and some of the other things, we're in trouble as a society. And I challenge you, your generation, to be active with this, be proactive with this, and get out in front of it because, I guess you're the hope of the future. Let me begin with one that occurred to me when you were talking to a few students here about these qualities or ideas they would offer up in a word. What occurred to me first was the Hippocratic Oath. First, do no harm. So if we look at medicine as both science and art, and I'm seeing some of that in AI, and given that no patient reacts in exactly the same way as any other patient, it's experimental as you move along. For AI, is some level of harm okay if the outcome is positive? Is desirable. That's a good question. I suppose if you're asking my personal judgment in a very general statement, sure, you're never going to eliminate harm to all humans in every possible scenario ever. so if it's a, a consequential standard that you're bringing to any ethical decision, you would need to be able to understand all possible consequences for all possible humans, I guess both direct and indirect, and then you'd have to somehow quantify them as harmful versus not harmful. And then you would, and I don't think you'd be able to find any course of action that doesn't bring harm somewhere somehow to somebody. I'm thinking about the development of drugs, pharmaceuticals. And clearly, they, the people who supervise phase one, phase two, phase three, the data safety monitoring groups, get the data literally every day. And they're in hospitals here in South Bend and everywhere. And they sit down at their desks and they look at the data. And they decide whether or not to continue with the drug. and I personally have assisted some folks in trying to explain this to the world. My facility is communication. I don't know a lot about drugs. But it, it seems to me that it was clear with a drug like Vioxx, which was a COX 2 pain inhibitor. It's not a lot better than aspirin or Advil or and those are good, but, if in the development of Vioxx, you discover that it's causing harm, the Data Safety Monitoring Committee reached over and pulled the plug. And said, you've either got a black box to this drug, or you've got to remove it from the market. And Merck removed it from the market. And the statement that Joan Wainwright made was This is just a low level painkiller. so Jim, let's think about some of the language you're using. You're talking about a drug that's an entity. It's a particular chemical compound that exists in the world. Yeah. You're talking about moral decisions they made about quantifying harm, and they decided that the harm was too much for that identifiable entity. So they were able to pull it off, right? The big difference between that and AI, as I said, there's no thing called AI. it's like the Hydra, where maybe you can say, we're going to eliminate chat GPT, but there will be three essentially similar things that pop up in its place. And it's a moving frontier. So we can't have this idea that we are removing things from the market. That's why the procedural, tentative, continuous process needs to take place. It can't be substantive that we did a calculation, the calculation is the net harm is too much, therefore we remove it. we're never going to be in that situation again when it comes to software. Software is, Pandora's box in a sense. It's out of the box, it's here, now we have to deal with it. We can't put it back in the box. It's never going to happen. And it's not a thing. Like the drug you're talking about is a thing. AI is not a thing. It's a trajectory. That is well on its, with a bunch of directions. and we're well on our way there. We haven't even talked about quantum. what, one of our chemists here at Notre Dame was telling me is that for particular chemical analysis issues, Something that would have taken us using conventional computers, 70 years to do, we can now do, theoretically, I guess they're not doing it quite this fast yet, on a quantum computer in five minutes or something. When we're starting to incorporate quantum computers in these, which we're starting to, the, the calculations for a particular class of problem, are getting, Exponentially faster. so I think that's one of the big differences in thinking about the idea of harm. But you involve human beings in the decision chain at some point. And that was the analogy I was thinking about these doctors. very diffused, making their own decisions based on their own experience. and so you don't see, Collections or groups of people like that for AI. Voting on development, employment, regulation. They do. You have governance boards and you absolutely do. Anna Bradford talked about regulation and how you control this. And she said the motive in the U. S. is to allow the companies that created it. to regulate it. Europeans want a more state, multi state driven. The Chinese want you know, one state to control it. Are you in favor of the companies that are developing AI to control their own content, direction and velocity? So now you're asking my policy thought and if policy makers were capable of doing that? I would be in favor of policy makers staying ahead of this and keeping the guardrails for organizations. They're not capable, period. They're not capable in Europe, they're not capable in the US. They might be capable in China because of their ability to control things in their country, but, China's not capable of controlling US AI. Really the only organizations that even have any hope in of controlling, keeping ethical bounds on AI are corporations. So you can look at that gloomily and say we're in trouble, but yeah, I don't have much faith in regulators being able to control this. They can look in the rear view mirror pretty well, and they can address issues from a few years ago pretty well, sometimes, but they're not current. You, you thinking only publicly traded corporations? Oh, I don't know. No. Any corporation? no. We work with, a big private, or one of our companies we worked with was a large software organization that's totally private and they're taking a very ethical approach to their, so no, I think, but organ really, corporations. they have the strong incentives of the market, they have the strong incentives of competition, and they're always going to move quicker. As long as transparency is part of this. it would be nice. So there are some things, that regulators can do. To help corporations go in that direction and to set some ideas and some policies and maybe some operations. Sure, some oversight, some processes, But yeah, I think companies have to internalize it. Nick and I will continue much of this at lunch, but I want your ideas, your questions at this point. what's on your mind Yes. so these corporations, in control of their own AI models, both for public use and non, have released a lot of these to the public, like ChatGPT, like different stable diffusion models, and all kinds of generative AI that is now available for public individual use. included in this progression forward in the development of ethics for AI, What guardrails, if any, should these corporations be focusing on specifically for the, individual level? Alright, and I want to go back to what I said to Jim. So this is what's happening in corporations. and I hope this answers your question. But no one wants to get behind. So Google, Microsoft, Open, all of the companies, right? No one wants to fall behind too badly. Because of the strong incentives for market. So what they end up doing is going open source with a lot of their technology. Now, open source is an interesting thing because it's one of those things where anybody can freely download it, but it's in a corporation's best interest to open source things that aren't their unique particular advantage because then they stay current. They don't want to be left behind. these open source communities, I do a little research with open source, these open source communities and the oversight with open source communities, I think, is maybe the other way. So it's not one corporation, it's a group of corporations, often times, that run these open source, and individuals, who can get quite powerful in open source communities. and I really think that, so when I said it's in the corporations hands, I meant not regulators, but regulators. But I don't think any particular corporation. and then as far as individuals, yeah, I'm not sure, I'm distinguishing between society at large and individual users and individual actors. But I think that your emphasis of the individual made me realize, wait a second, individuals actually have power and they have power in these open source communities quite a bit and they can, right? so that's a way of empowering people outside of corporate, users and regulators. Question. Yes. My question is leaning around the race to get the ethical implications of this software being greatly impactful to knowledge workers in the So if businesses are looking at Reasons to invest in AI technology. One of the greatest reasons for their investment is going to say okay, I can reduce the amount of labor or knowledge workers that I need. And so now if we're here trying to advance ourself, most likely in the space of knowledge work, I'm just asking for your like guidance or framework to think about it differently, because naturally I think For me, I'm thinking about how. Is this going to be the end of jobs in a lot of way, or we have to think about it in a new way, and also what do we do with this time that we'll have on our hands if we go taking processes that took 70 years and now compressing it into five days? great question, and that's one of the ethical issues that people, that's still open ended, and it's the labor replacement issue. It's, okay, if this stuff is so good Why do we need accountants? Why do we need, software engineers? the ChatGPT can do my accounting, it can do my software engineering for me. what am I going to do as an accountant or a software engineer when I don't have a job anymore? Because ChatGPT is doing it. my, I guess I have two answers to it. The first is, be careful about your assumption that we're not like, about accountants and software engineers and others because, for example, let's again go back to the industrial age and learn from it. Machines directly replaced human labor, didn't they? it took a whole bunch of people to do whatever. Clothing and it took just a loom with a single person to do way more than a whole bunch of people. So what happened in the Industrial Age? More people were employed over time than there were previously. At any given factory, perhaps a machine replaced labor, but across any given, like in the England, in the U. S., aggregate, You actually increased employment over time in conjunction with the technology. So people, the people who make the technology hire people. there are all sorts of new demand. So demand, grows for other products, right? There are always innovations. So don't, so be careful about the assumption that just because at one you're replacing a task, it That you would replace a job. History hasn't necessarily, shown that happens. Now there are people that say computers are different, of course, so this is a continued debate. But just don't assume that people are going to be out jobs, alright? That would be my first thing. My second bit of advice for you is, be an insider rather than an outsider, right? it's like Harry Potter. Did you ever read Harry Potter? Or watch the movie. I watched a couple movies. Yeah. Yeah. You saw the movies. So There's Harry Potter is the one who knows magic and he is fighting Voldemort and he's an insider. and then there are these muggles that are like out here and don't know any, just make sure you're a wizard and not a muggle. Okay. And as long as you're a wizard. You'll be fine. You'll figure out something to do. Okay. If you're a muggle, yeah, you might get replaced. I would say there are not enough good restaurants in South Bend and more people need to learn to become chefs. There you go. There, this individual contributions are not gonna go away. They're just, they're gonna shift slightly. Other questions? I have a question related to, so on one of your slides you mentioned misinformation as one of the problems caused by, these generative AI, tools. So my question is more, about how do we, deal with the, the level of misinformation that is being, a product of these generative AI tools on social media platforms. And how do we learn, how do we avoid normalizing unreliable information? Is it that, is that through, policymaking? Is it through education, digital literacy? Or maybe, some, New technology that social media platforms should implement to maybe authenticate users to control the spread of misinformation better. Thank you. Great question. I'm not a misinformation expert, so I'll just give you a general, answer to that. The question was about, alright, so misinformation's all over the place. A lot of it's being generated. There are, of course, hallucinations, which are randomly generated misinformation. And then there are ways of intentionally generating misinformation in order to achieve some purpose, right? what do we do about misinformation? And it seems like AI and the digital age, digital platforms, propagate this idea. two points. The first is, I guess we've always had misinformation, and we've always had intentional misinformation. it's not as new as people think it is. As a matter of fact, we have a lot of sources, if we were to have some effort for accurate information, with just a little bit of effort if people were to put themselves in it. with that, I'm not sure where to go with that. firms are working on it, of course, and they're trying to figure this out, but this does open the door to what I think the next paradigm is in computing. Okay? And that's, right now, we have a fairly linear view of And it can be parallel, but it's like we want something to accomplish, we have an outcome. And that's how we think of computing, right? Misinformation, along with a number of other kind of use cases, is making us realize that wait a second, we can't just have this kind of linear view of computing where we have some goal and we accomplish the goal. We need more of an iterative view of computing, of what computers do. One way of doing it, and one way we're talking about it, is that we need adversarial approaches to computing. Now adversarial, is this idea, so first of all, no one ever creates new knowledge or establishes the validity of knowledge without an adversary. If we all agree, On everything, all the time, no one learns a thing, and we never create new knowledge. The only way we create new knowledge is when there's some disagreement. When either reality doesn't agree with my view of the world, when someone else's view doesn't agree with my view of the world, and then we have to reconcile it, right? So there's always an adversarial relationship to new knowledge and to testing knowledge, right? So we need adversaries. to continually test knowledge claims. And one of the things we're doing, that we see with like security and stuff is that we have these white hats. You get that from the hacker world, right? You can, they also use blue teams and red teams. There are a variety of different ways of thinking about this, but you have your friendly adversaries that keep you honest, and then there are external adversaries that you have to accommodate. And I think this kind of arms race, adversarial intuition is the one we need to take with misinformation. We need to take with, with so many different computing applications. I'm not sure that totally answers your question, but I think it's, for those of us interested in things like knowledge, that adversarial way of thinking through, and information and misinformation, that adversarial way of thinking through it, I think, is the new paradigm, If you didn't see last week's presentation from Professor Zico Coulter from Carnegie Mellon, he talks about adversarial attacks on large language models. He said it's the only way to improve them is to figure out what their weaknesses are. And as you prove those, you apparently can improve. Given the number of updates I get for the apps on my phone every day, Apparently people are probing to see what the failures are in those apps. My question is over the AI ethics waves that you were talking about and, trying to make them ethical. One of the things that I noticed on the slides was that, there wasn't much or any mention of privacy. and recently there have been some concerns about, OpenAI's ChatGBT, Privacy concerns. So I know it sometimes is trained on personal information that is not, signed for, is not compensated for. my question is knowing, for example, like companies like Facebook have been reamed over selling, personal information. Why do you think that, a corporation like OpenAI has not seen, backlash? what you're saying is privacy. First of all, it was in my list of random things, but maybe you didn't catch it. But yeah, that's one of the issues, is privacy. Violations of privacy is one of the Because, AI is always trained on data. And data comes from somewhere. And what if it's someone's personal data? that they don't want shared. It's their Facebook account, it's their whatever. They don't want it to be training a model. These companies are doing that, using our private information, and no one gets penalized. That's your question, more or less? so what's the deal? And I think you're right. And here's one of the areas where I was a little unfair to the regulators. particularly in Europe, because they are, if you look at it, again, if we're looking in the rear view mirror, regulators can do a good job, right? and in Europe, they're doing a lot with data and with data privacy. Sometimes they make missteps, but overall, I think they're doing some good things. and I think we in the U. S. are going to follow suit. And we're going to have some data privacy regulations that are going to put those guardrails around these corporations. Because corporations, if they can scrape the data, if they can get the data, they will, and they'll train their models with it, right? so it's one of those things where they haven't really internalized the ethical behavior to the degree that they should have. at least at this point, right? so yeah, and then why haven't they? Why haven't they, why aren't we upset? And I think the reason why we're not upset that they're training on our private data, there's something called the privacy paradox. Have you ever heard of this? It's we all claim to care about our privacy, but none of us actually do. Evolutionary psychology people say that throughout most of human history we didn't have privacy, so it's not actually in our, we're not instinctively worrying too much about privacy, we expected that. And if you look at what, people are doing now, posting their, I belong to this gym, And it's like everybody puts their workouts on videoing themselves half the time. It's like people, they want the opposite of privacy. They want people to pay attention to them, right? So that privacy paradox is that if I ask you, do you care about your privacy? You'll say yes. And then you'll go ahead and do a whole bunch of things that, maybe not you, but a typical person, if I ask you, you say, I care about my privacy, and then you'll do a whole bunch of things that give away your privacy. That's probably why there's no backlash yet. Okay. The fellow who founded DoubleClick, which drops cookies into your computer so it can track you, once famously said, if you give me something private, I'll give you something better in return. Yeah. And, in a very real sense, what he was talking about, the deal he wanted to make was, if you give me Your daughter's social security number. I will tell you, I'll alert you and you alone if she's ever admitted to, an emergency room, in the United States. Is that a fair deal? if you trust him, it might be a good deal, but if you're not really sure what he's gonna do with that, then, You might think it over and conclude, no, that's not a deal I want to make. will AI allow us to make decisions about how our data are used or will it simply scoop it up without our knowledge? I think. So far, it's just scooping it up without our knowledge. I've, I don't know where this is, but I've always, over the years, I've run into a number of startups. They're trying to, they're premised on the idea that they can help people monetize their data. Jim, it would be like, alright, you're this demographic of a guy, this is what you do, I'm gonna, Take your stuff and sell it, but I'm going to give you some return for that, right? and I don't know maybe someone knows in the room But I know of a bunch of folks who have tried they've gotten investment capital all that I don't know if anybody's making money doing that and it's because Our data is only valuable in aggregate. Jim, as much as you're a valuable person, your data is not that valuable. I need a million Jims, right? And then all of a sudden the aggregate is valuable, but the individual, any particular individual is not that valuable. so that's the tension that we run into with, with kind of solving it through monetization of your own data. And is AI going to help us with that? I don't know. Maybe. Maybe there's some, approach there. But yeah, I can't think of one. It's a shock knowing that my data just aren't that valuable. I'm glad my mother's not here. George. thank you very much, Nick. A question about the AI Ethics Initiative, which you presented with IBM and Notre Dame. Mike, you have those three areas, the economic ROI and reputational ROI and capabilities. And can you explain it a little bit more, what you mean by capabilities? in, in, economics, and, Amartya Sen talks about human capabilities. Are these human capabilities, or capabilities of machines, or what are they? Yeah, the combination of those two. So they're, at least for the purposes of our analysis, we identify two types of capabilities. One is the technological capabilities. So when you build out a platform, and you create a test library. And create a staging and you do everything that you're doing, technologically. You've now created a platform that, and you've written a bunch of scripts that do things. You've created something. That is potentially useful for other purposes. you can monetize it. We have stories of people taking something that they built for themselves internally and they could make it outward facing, right? so the actual technology you develop, we could think of it as a technical capability. And then there are the humans doing it, right? these are the, not just the developers, but the managers, the people thinking through it, right? These, there's a whole bunch of people around the AI ethics initiative, right? Who are working alongside of the folks doing, and sometimes they're the same people, right? But you're building the capabilities of these people and the skills and the knowledge of these people for how to, ensure ethical AI and in doing so, move AI forward. Move ethical AI forward, just build these necessary skill sets. So I think both of those is how we're formulating, capabilities right now. So it's a human and a technological dimension, right? skills and technologies. Ladies and gentlemen, Professor Nicholas Berente. Thanks, Jim.