The ThinkND Podcast

The New AI, Part 3: Generative AI in the Wild

Think ND



How do we work through the extreme excitement, confusion, and fear that result from the rapid evolution of generative AI to understand and embrace these tools across the arts and humanities?

The third event in The New AI series, where we discuss what’s happening in the world of AI with cutting-edge thought leaders, is another step in our journey of understanding the opportunities and challenges of AI, with the goal of empowering ourselves to be stewards of, rather than victims of, these new technologies and changes. While universities are often criticized for being slow to recognize the pulse of society and the speed at which technology changes in the for-profit sector, the University of Notre Dame is leading the way with a new course called Generative AI in the Wild. John Behrens ’83 and Ranjodh Singh Dhaliwal, the Notre Dame professors who co-teach the class, talk with Jack Slattery ’24 and Ahana Sood ’21, ’24 M.A., two graduating Notre Dame students who took the Generative AI in the Wild class, about the challenges, surprises, and takeaways of this groundbreaking course.

The New AI is sponsored on ThinkND by the Technology and Digital Studies Program in the College of Arts & Letters.  This program collaborates with the Computer Science and Engineering Department and other departments around the University to offer the Bachelor of Arts in Computer Science, the Minor in Data Science, and the Idzik Computing & Digital Technologies Minor.

Featured Speakers

  • John Behrens ’83 is a Professor of the Practice of Technology & Digital Studies and Concurrent Professor of the Practice in the Department of Computer Science and Engineering. He serves as Director of the Office of Digital Strategy in the College of Arts & Letters and Director of the Technology and Digital Studies Program.
  • Ranjodh Singh Dhaliwal, Ruth and Paul Idzik Collegiate Assistant Professor of Digital Scholarship and English, Concurrent Assistant Professor, Department of Film, Television, and Theatre; Affiliate, Lucy Family Institute for Data & Society; Affiliate, Notre Dame Initiative on Race and Resilience; Affiliate, Idzik Computing and Digital Technologies Program; Affiliate, Notre Dame Technology Ethics Center; Affiliate, the Program in History and Philosophy of Science
  • Jack Slattery ’24, University of Notre Dame senior student majoring in finance in the Mendoza College of Business with a Minor in Computing and Digital Technology
  • Ahana Sood ’21, ’24 M.A., received her master’s degree in English at the University of Notre Dame and graduated in May 2024

Thanks for listening! The ThinkND Podcast is brought to you by ThinkND, the University of Notre Dame's online learning community. We connect you with videos, podcasts, articles, courses, and other resources to inspire minds and spark conversations on topics that matter to you — everything from faith and politics, to science, technology, and your career.

  • Learn more about ThinkND and register for upcoming live events at think.nd.edu.
  • Join our LinkedIn community for updates, episode clips, and more.
john-behrens_1_05-07-2024_102456

Hi, I'm John Behrens, Professor of the Practice and Director of the Technology and Digital Studies Program in the College of Arts and Letters. I'm also pleased to say that I'm your host for The New AI series, brought to you with ThinkND. Universities are sometimes criticized for being slow to recognize the pulse of society and react to change. We've found at Notre Dame that, in the area of AI, there's actually quite a bit of ongoing change, reaction, and leadership. In today's session, we're going to talk about the leadership we are undertaking in teaching about generative AI and preparing our students for the future ahead. I'll start by telling you about Ranjodh Singh Dhaliwal, my co-teacher in this class. Ranjodh is the Ruth and Paul Idzik Collegiate Assistant Professor of Digital Scholarship and English, a concurrent assistant professor in the Department of Film, Television, and Theatre, and an affiliate of the Lucy Family Institute for Data and Society. Ranjodh is truly a renaissance person, and we are so excited to have him both in the college and on the podcast today. We also have two students as guests today. Jack Slattery is a graduating finance major with a minor in computing and digital technologies. Thanks for joining us, Jack.

jack-slattery_1_05-07-2024_102502

Thank you.

john-behrens_1_05-07-2024_102456

Our second student guest is Ahana Sood. Ahana is also graduating, with her master's degree in English, and she is an undergraduate alumna of the University of Notre Dame. She let us know that she lived in Cavanaugh Hall, which is also where I lived when I was an undergraduate. Different eras, obviously.

ahana-sood--she-her-_1_05-07-2024_102456

Go

john-behrens_1_05-07-2024_102456

There we go. Alright, as we are apt to do, we're going to start our session today with a little example from ChatGPT. I'm going to pull up the ChatGPT app on my phone; whatever I say will be translated from my speech into text, sent off, and the answer will be brought back to the phone. Okay: you are a podcast producer with years of experience studying AI and public communication. I am interviewing someone I know very well about their teaching a class in generative AI for a wide range of technical and non-technical students. The title of this class is Generative AI in the Wild. I want to emphasize how this is innovative and forward-looking for a liberal arts college, so the name of the podcast episode that I am interviewing them for is Riding the Wave in Real Time: Preparing Our Students for the Future. Give me 10 questions that will be most compelling for our audience of alumni and friends of Notre Dame. That sounds like a fantastic topic for your podcast episode. Here are 10 compelling questions you could consider asking your guest. 1. Can you explain the concept behind the title Generative AI in the Wild? What does it mean for students to engage with AI in this way? 2. Why do you think it's important for students at a liberal arts college to learn about generative AI? 3. How does it fit into their broader educational goals? 4. Could you share some examples of how students from both technical and non-technical backgrounds interact with the course material? 5. How do you adapt the teaching methods to cater to such a diverse range of students? 6. What are some of the most surprising or innovative projects your students have created using generative AI? 7. In what ways do you believe this course prepares students for the future job market? 8. Could you discuss any partnerships with tech companies or other institutions that enhance the learning experience for your students? 9. What feedback have you received from students who have taken this course?
How has it impacted their perspective on AI? 10. Looking ahead, how do you see the role of AI education evolving at Notre Dame and in liberal arts education more broadly? These questions should help highlight the innovative aspects of the course and how it's preparing students for future challenges. Well done. Thank you. So there you see ChatGPT being used as a tool in our daily work. Obviously, I'm as smart as that, and I would have asked questions just as good as that, but it's nice to have the help. Ranjodh, we're going to kick off with you and ask you about the title of the course, Generative AI in the Wild. How did we end up with that title?

ranjodh-singh-dhaliwal_1_05-07-2024_102456

I think the best way to think about it is that generative AI, which is this subset of new artificial intelligence technologies that are equally adept at producing content as they are at analyzing previously existing content, demonstrated a sort of openness when it burst onto the scene that was not yet completely codified. Which is my way of saying that in our initial conversations, you and I discussed how there's a lot happening that we don't know about, and a lot that can be done that is still an open question. And so Generative AI in the Wild is, correct me if I'm wrong, our attempt to capture how generative AI works in the world more broadly. This includes the possible technical manifestations, everywhere it gets used: generation of images, text, sound, video, and so on and so forth. But also its sociocultural implications, its philosophical implications. Taking both ends, the technical and the social, together was vital to us. And so Generative AI in the Wild suggests it's like an animal that you're studying out there in the wild, one that has a lot to tell and one that you may never be able to completely understand, because it's not you.

john-behrens_1_05-07-2024_102456

Yeah, that's super awesome. I love the idea of the animal in the wild that we're studying, the way biologists might study an animal, because what people often forget is that we tend to think of technologies as things, but the technology affects the culture, the culture affects the technology, and the use is always human behavior. So I really liked that. I think another thing going on at the time was a shift: these kinds of technologies had been things that computer scientists did in the lab. Then there was GPT-2, and then it became ChatGPT, and when it did, it became so useful for everyday folks that it was consumer-facing and widely used, really in the wild. So it went from the lab into the wild. Let me just tell folks that, in the early days of these systems becoming usable by non-computer scientists, I immediately thought of Ranjodh as somebody to co-teach this class, because Ranjodh, as you saw in the introduction, has this huge breadth of how to think about all the things that humans do, from a cultural perspective and from a kind of systems perspective. He has both that background in all these aspects of culture and media studies, as you heard, and an undergraduate degree in software engineering. So there's nothing you could throw at him, or that I've ever been able to throw at him, that he didn't have a deep answer and response for. And let me just say one other thing about Ranjodh before we switch to the students: Ranjodh is funny as heck. He's not as funny as me, but he's really funny, and in class we had a lot of fun together. It was really a delight to interact.

ranjodh-singh-dhaliwal_1_05-07-2024_102456

We did have a lot of fun together. But I see you, John, trying to sneak "he's not as funny as me" in there, and that's a statement that I strongly disagree with, but...

john-behrens_1_05-07-2024_102456

I think we have some data on that.

ranjodh-singh-dhaliwal_1_05-07-2024_102456

Yeah, maybe we'll go back to the data and take a look. But strong disagreement there. Sorry to interrupt.

john-behrens_1_05-07-2024_102456

Ahana and Jack, let's turn the conversation to you for a little bit. I want to ask about each of your motivations for taking the class: what you thought you were going to get out of it, what you actually got out of it, and what was a fit and what was a surprise. I just want to point out that Jack was the victim, the collaborator, in the first class, where we were just figuring out what we thought the class would be, and Ahana was a participant in the second class, when obviously everything was perfect, because after you teach it once, everything's perfect. But let's start with you, Jack, since you had the first class. What were you thinking it would be about, and how did the class either match or mismatch your expectations?

jack-slattery_1_05-07-2024_102502

Yeah, so my motivations for taking the class: being a finance major, being part of the CDT minor, and messing around a bunch with technology on my own, and given the tool had just started to gain popularity, I thought it would be a super interesting class to take, to learn what it could be used for and how it could be applied in the modern world. What I didn't expect was how much we'd get into both how it works, so somewhat more of the technical side, but then also more of the humanities side of it, and how those both actually tie together. I hadn't been thinking about the fact that, given it's a language model, it's based off of tokens, so in reality it's based off of words, and it's combining statistics with, essentially, its reading. Learning that side of things, and then seeing how other people approached the same projects and took a completely different approach than I would, it was interesting to see the wide breadth of how people were going to use it. But yeah, that was the most unexpected part.

john-behrens_1_05-07-2024_102456

All right, cool. Now, Ahana, you start on the other end. You're deeply embedded in the humanities. What were you thinking the class would be about and what was the big surprise for you?

ahana-sood--she-her-_1_05-07-2024_102456

Yeah, I initially joined the class because I was curious about generative AI but just not motivated enough to figure it out on my own. And it becomes really hard, because most of my time goes into classes, and in a lot of them you're not allowed to use generative AI beyond a certain extent. So you don't really have the time to figure out how to use it to be more efficient while working if you can't use it for class. So I really wanted to take the class just to figure out what generative AI is and learn more about it. And after the class, I definitely got out far more than I expected I would. I didn't realize what a marketable skill it is, especially when you're doing job interviews; that's one quantifiable thing I can think of. But overall, it's definitely changed my perspective on where the world is headed and how important it is simply to be articulate. I also really liked that the class was not just CDT minors or people involved with software engineering, because someone like me, I have no idea how to code. I'm frankly a little scared of looking at code, just because it's jargon to me; it means nothing. And through this class, the fact that I can somewhat use my words to create some kind of code, that itself is huge to me. The reason this class changed my perspective was that all of us were from completely different backgrounds, and the fact that generative AI was useful to each of us in our projects, and how different they were, just tells you where the world is headed.

john-behrens_1_05-07-2024_102456

Cool. Thank you very much, Ahana. Ranjodh, what are your thoughts on approaching that kind of complexity in a class where we had everything from English majors to computer science majors and all kinds of things in the middle? We had physical science majors, and all kinds of different folks from the humanities and the social sciences. What were some of the challenges of teaching, and the opportunities?

ranjodh-singh-dhaliwal_1_05-07-2024_102456

Yeah, as someone who's trained in the sciences, humanities, and social sciences, I have always firmly believed that these challenges are more challenges of perception than challenges of innate expertise. Which is to say that for things that are complex, you do actually need multiple approaches, borrowing from different disciplines and perspectives and viewpoints. One of the things that was vital for me, and this came up in our initial conversations when we were thinking about this course, was the fact that we wanted people to come out of the course less afraid, and this moved in all different directions: a little less afraid of what code looks like (it doesn't need to be your life, but knowing it is vital in today's world), and a little less afraid of how to think about the social, ethical, philosophical, and historical dimensions of the technology you are immersed in. Again, you don't need to be an expert in it, but you will be missing something in your life if you are doing things first and asking questions later. Having both those elements was vital. And sometimes all it takes is having people trust you. There are subtle shifts in the vocabulary I have to deploy when I'm speaking to a computer science major versus an English major, and I'm happy to do that, but sometimes it takes trust at the class level: this will be over soon, we can get through this so that we can get to the other side that you're more familiar with. Familiarity is a good thing, but it's also sometimes not very helpful for learning about the world as such. The world doesn't come with neat boundaries; technologies don't tell themselves, "I am a computer science production." That's something we consistently tried to do in the class:
have one session on this and another session on that, bringing perspectives from art history and philosophy and computer science and human-computer interaction and management and political science and literature and history. All of those things are vital, because how do you study large language models without an adequate understanding of language, or largeness, or what models are, or what computing is, or what technology is? Using a chatbot is only peeling one layer off a multi-layered object, and the only way we can study and learn and do things with it is if we all come together. That was the goal.

john-behrens_1_05-07-2024_102456

Great. Ranjodh, can you ask the students about some of their takeaways from the class? What were some of the specific kinds of things they learned, and how did they change?

ranjodh-singh-dhaliwal_1_05-07-2024_102456

In fact, this is a good opportunity, I think, to ask the students we have with us here, Jack and Ahana, about what some of their takeaways might have been from the class. What are the kinds of things you think you will carry forward from the class, and what parts of the class were you influenced by? Jack, Ahana, I don't know who wants to go first.

jack-slattery_1_05-07-2024_102502

Yeah, I'll go first. A few of the main things I learned: one of the things I was super happy to get exposed to was the API, basically developing things with the same language models that ChatGPT is built off of, but also the image generation, and putting that all together. But I think one of the biggest things was just learning how it worked and when I could use it. Just the fact that we got exposure to actually using it in a class, because most of the time it's a little frowned upon to be using ChatGPT. The fact that we were told to use it, we figured out what worked, what prompts started working, how to structure them, basically figuring out how to evaluate what you might want to change to get the results you wanted. But also just using it for normal class tasks, like writing a paper and having it look back over that and double-check your work, or coming up with ideas for what I should do for my project. Stuff like that, where I actually got exposure to it and I knew no one was going to come after me for it, made me a lot more comfortable using it. And now I know I can use it in another class, at least to help me with my process, but not necessarily to do anything for me. Whereas beforehand I would have been more scared of, is this going to seem too ChatGPT'd? So I liked getting the exposure and actually figuring out when I would use it.

john-behrens_1_05-07-2024_102456

So are you saying you evolved in your relationship with it, so that rather than it being some kind of plug-in thing that you don't really understand but might use to replace you for something, you actually have a different kind of interactive relationship? It sounds like your whole relationship to it changed, so that you think about how to use it as a tool, and you have a sense of when it's going to replace you, or when it might even be too much work to fix its output, and when it's really useful. Is that right?

jack-slattery_1_05-07-2024_102502

Certainly, yeah. Beforehand, especially being in a dorm and in the study areas, I had thought of it as more of a do-my-homework-for-me tool, and thought whatever you threw into it, it was basically going to generate the right answer. So especially with projects like Finding the Limit, realizing, oh, it's actually really bad at certain things, and figuring out what it can't do, but also what it can do, was super useful. I no longer see it as a one-thing-fixes-everything solution.

john-behrens_1_05-07-2024_102456

Okay. And just for our listeners, Finding the Limit was one of the early assignments, where the students have to get ChatGPT to do something, maybe a math problem or writing a paragraph or a kind of poem, and then push it by asking it to do something more complicated in that genre that it can't do well. For instance, ChatGPT is really good at poems if you want a rhyming poem, but if you want a poem that doesn't rhyme, a more advanced poem, it has a hard time. It's been trained on, it has seen, it's been exposed to so many zillions and zillions of rhyming poems that it struggles to use its statistical model to make a non-rhyming poem. Ahana, what were your interesting takeaways, your surprising takeaways? How did you change through the class?

ahana-sood--she-her-_1_05-07-2024_102456

Yeah, for sure. I went from having no relationship with ChatGPT and LLMs to having a very positive one by the end of it. I think what I learned: again, API keys were a big one, which I think will be useful for me in a lot of different things going ahead in life, whatever I try to do. The fact that I can actually create some models on my own. I was also a finance major undergrad, so, for example, I always had this idea of going over a bunch of management meetings and the transcripts of those for different companies, and seeing what words were used by the companies that went bust, trying to see if there's some correlation with certain words. As someone who can't code, that was always just an idea in my head. Since I come from both disciplines, I think API keys and ChatGPT allow me to do something like that, so that's one big takeaway for me. Second would be just the fact that it's a predictive model, which I did not know earlier. It can't really think, and you just have to find different ways to get it to do what you're trying to make it do. And I think the third thing would just be how useful it is as an aid in a lot of different things. For example, researching for different papers is something that takes me really long, because I have to actually read all of these articles that are mostly not useful to what I'm saying. Thanks to LLMs, now I can just get it to summarize an article in three sentences and judge whether it's something I want to read or not, which has reduced what took me around, let's say, twenty hours a week to three, which is huge.
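The analysis Ahana describes, comparing the words used in transcripts of companies that failed against those that survived, can be prototyped with nothing but the Python standard library. This is a minimal sketch; the toy "transcripts" are invented placeholders, not real earnings-call data:

```python
from collections import Counter

def word_counts(transcripts):
    """Count lowercase word frequencies across a list of transcript strings."""
    counts = Counter()
    for text in transcripts:
        counts.update(text.lower().split())
    return counts

# Invented toy snippets standing in for real management-meeting transcripts.
went_bust = ["restructuring headwinds continue", "headwinds delay restructuring"]
survived = ["growth continues", "strong growth and expansion"]

bust_counts = word_counts(went_bust)
ok_counts = word_counts(survived)

# Words that appear in the failed group but never in the surviving group.
distinctive = {w for w in bust_counts if w not in ok_counts}
print(sorted(distinctive))  # → ['continue', 'delay', 'headwinds', 'restructuring']
```

A real study would need many transcripts and statistical controls, but the point stands: the hard part is no longer the code, and an LLM can help generate scaffolding like this from a plain-language description.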

john-behrens_1_05-07-2024_102456

Okay, but as an English major, I have two concerns. One is, are you worried about going to English-major hell? And number two, do you have any concern about it not doing such a good job, or the summary being so compressed that you've lost the essential information? It sounds like you've been doing it and it's working. Tell us about your experience a little bit.

ahana-sood--she-her-_1_05-07-2024_102456

Again, I see it as an aid, not really something that will do everything for me. The thing is that I'm asking it to condense stuff for me into two to three sentences. I could also ask it to do that in fifteen; I just don't want to read fifteen at that time. The two to three is more than enough to judge whether I even want to go over the article or not, and then I just go over the article. So at least it streamlines the process for me. English-major hell, not so much. We'll see where I end up. To answer the other question, about whether I'm worried about what a bad writer it is: I think it's great that it's a bad writer; it means I'm more useful than it is, personally. But it actually is a decent writer for some things, like short emails that you don't really want to phrase, or slightly long emails that are just time-consuming to think about and then write. It's pretty good at drafting those, and then you can make your edits.

ranjodh-singh-dhaliwal_1_05-07-2024_102456

I have a couple of clarifying questions, probably for John. John, two terms came up in the responses Jack and Ahana were giving. One was API, and the other was prediction, or predictive models. Could you explain to our listeners what those two terms generally mean?

john-behrens_1_05-07-2024_102456

Yeah. I'll give some of how I explain it, and then I'll ask them to fill in some details. First, the API. API is a technical, jargony term that means application programming interface, which is just a fancy term for how one piece of software talks to another piece of software. At the beginning of the show, we showed ChatGPT on my phone. The app on the phone uses an API, a programming interface, which is basically the logic of how it communicates with other computers. It sends information to OpenAI to get processed, and the answer comes back through that API. Those API calls are actually little pieces of software code; they're commands. With OpenAI, you can write your own software code, just a few lines, maybe ten lines, to send your text to OpenAI, run it through ChatGPT, and get the answer back. So there are lots of ways to mix and match different kinds of things, and we give the students exposure to how that works, either in code or no-code. And I'm super happy to hear Ahana talk about how it was valuable to her. The business about prediction, let's just pause there for a second. Jack and Ahana, do you have anything to add about how it works, its usefulness, or how you used it in the class?
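The "ten lines of code" John mentions look roughly like this. The sketch below assembles an HTTP request to a chat-completion endpoint without sending it; the URL, model name, and payload shape follow OpenAI's public API as commonly documented, but treat them as illustrative assumptions rather than a definitive client:

```python
import json
import urllib.request

def build_request(prompt, api_key="YOUR-KEY-HERE"):
    """Assemble (but don't send) a chat-completion HTTP request."""
    payload = {
        "model": "gpt-4o-mini",  # illustrative model name
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        "https://api.openai.com/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer " + api_key,
        },
    )

req = build_request("Give me ten interview questions about teaching generative AI.")
# urllib.request.urlopen(req) would actually send it; the JSON reply holds the answer.
```

That request/response handshake, structured text out, structured text back, is the whole "software talking to software" idea John describes.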

jack-slattery_1_05-07-2024_102502

To me, as someone who likes coding stuff, it's super useful. Getting exposure to the different tools through the API was super interesting. They have the Whisper tool, I think, which lets you transcribe audio; they have the vision model, which basically gives you a description of pictures you send it; you have DALL-E 3, so you can generate pictures; or even just the GPT models, where you generate words based off a prompt. I used it mainly in my final project, since we hadn't come across ChainForge quite yet. I had it automate an Instagram account. I connected to the Instagram API as well, the Meta API, and would have it get the comments from the most recent post. It would then put all of those together and send them over to GPT-4, or I think it was GPT-3, and have it generate an image prompt, which I then had it send to DALL-E. Then, as long as that image got generated and there were no moderation issues, it would generate a caption for that image, post the picture and the caption on Instagram, and run again every 24 hours. Being able to do something like that was super fun. And even now I'm still thinking about some of the tools I could use, like the embeddings. It would be a lot to explain, but having a way of doing that project Ahana was talking about earlier, of what makes something like one earnings call more similar to another, knowing how I could build that out, it might take a while, but knowing how I could do something like that is super helpful now, especially going into the workforce.
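The daily loop Jack describes chains several services together. Here is a sketch of just the orchestration; every function is a hypothetical stand-in (not a real Meta or OpenAI call), returning canned values so the wiring is visible:

```python
# Hypothetical stand-ins for the real API calls in Jack's project.
def fetch_comments(post_id):
    """Stand-in for the Meta/Instagram Graph API: comments on a post."""
    return ["love the colors", "more sunsets please"]

def summarize_to_image_prompt(comments):
    """Stand-in for a GPT text model turning comments into an image prompt."""
    return "a vivid sunset, " + ", ".join(comments)

def generate_image(prompt):
    """Stand-in for an image model such as DALL-E 3."""
    return "<image for: " + prompt + ">"

def write_caption(prompt):
    """Stand-in for a GPT text model writing the caption."""
    return "Caption: " + prompt

def run_once(post_id):
    """One pass of the daily loop: comments -> prompt -> image -> caption."""
    comments = fetch_comments(post_id)
    prompt = summarize_to_image_prompt(comments)
    image = generate_image(prompt)
    caption = write_caption(prompt)
    return image, caption  # in the real project, these get posted back

image, caption = run_once("latest")
```

In the real version, a scheduler (say, a cron job) calls `run_once` every 24 hours, and each stub is replaced by an authenticated API call.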

john-behrens_1_05-07-2024_102456

So you strung together a number of different computer systems. You had a little program that went to Instagram and got the data back, sent it to ChatGPT and got the data back, sent it to DALL-E, the image generator, and got the data back, and then sent it all back to Instagram. It was all happening through the communication channels of these APIs. It sounds like one of your takeaways is, yeah, that was cool, but it also gives you a vision for how business runs in general, which is moving information between different systems, sometimes using AI to manipulate that information. And just like Ahana was talking about, summarization is one of the things these AI systems are great at. So there are lots of ways to mix and match those. Ahana, you were saying you don't do code, but you like the whole API logic. Can you talk about how you're thinking about it?

ahana-sood--she-her-_1_05-07-2024_102456

Yeah, for sure. Again, it's something I didn't think I was capable of doing; I always thought I'd have to get someone else to do it for me to use an idea like that. The fact that I can do it myself, the knowledge of how to do certain things that do require code, me not being scared of code anymore, and being able to use ChatGPT or any other LLM to do that, it really expands your vision of what your abilities are, and again, gives a general sense of where the world is headed. It's great that we have this class; otherwise I would not have known that, and I don't think I was ever motivated enough to figure all of this out on my own. The fact that you two did it for us, put it all in a syllabus, and taught it to us in a semester makes a huge difference. Honestly, if I had to recommend something, I would recommend that a lot of professors also be taught the same class, because I've seen, slightly more toward the humanities, that some of them are not very friendly so far with ChatGPT, not all of them, just at least some. And I understand where they're coming from; it's like me being scared of code and not knowing what it is. So I think something like this for them would also be helpful, to integrate it into our ecosystem. And I feel like this class really puts Notre Dame students far ahead of other schools when it comes to the job market and the world overall.

john-behrens_1_05-07-2024_102456

Awesome. For sure, one of our main goals here at Notre Dame, especially in the Computing and Digital Technologies minor and the Technology and Digital Studies program in general, is to make tech-savvy leaders. No matter what disciplines folks are going into, whether the humanities, the social sciences, or business, we want everybody to know how the world works and to act responsibly. Ranjodh, you asked me to explain the predictive model, but I spent so much time on the API. How about you explain the logic of the prediction, the predictive nature of ChatGPT, and what the implications are for what it's good at and not so good at?

ranjodh-singh-dhaliwal_1_05-07-2024_102456

Sure. I think it's a vital thing worth thinking about, which is why I posed that question to you, because some listeners might be interested in knowing. By and large, the model in our heads of how any AI system, or computing more generally, works is that we give an input, something happens in the middle that we may not know, and then we get an output. In the old deterministic sense of modeling, which is how earlier systems used to work, you have predictability: if I give it a certain question, a certain answer will come out, and I can already predict what that will be. But in the predictive models, the predictability becomes probabilistic instead of deterministic. Instead of using big words, let me put it simply: what comes out can change even if the input is the same. Even if the question asked is the same, different answers can be given in multiple iterations of running that system. So if you ask an AI to write, I don't know, a short story about a crow, and do the same thing five minutes later, you'll get two different short stories. The predictive nature of these models comes from the fact that they're often just generating the next word; they're predicting, across the whole wild world of text, if I get asked to write a story about a crow, what are the chances of "upon" being the second word if "Once" is my first word? So it's basically predicting the next word, and as a result, guided by probabilities in the English language, it gives different probabilistic answers in different iterations. Now that changes things significantly in some ways, while retaining other elements of previous iterations. The changes include, for example, how we interact with the computer, right? What does it mean for you to get a right answer for something?
Maybe you try it a second time and it doesn't give you the right answer, or whatever the right or expected answer was for you, right? For some things that matters: if I ask it two plus two, it should give me four. But if I ask it to talk to me about the significance of flowers in the North American ecosystem, it might have different answers based on different schools of thought or expertise, and it might give something right once and something wrong another time, or vice versa. So to know which one is probably going to happen more often, the right one or the wrong one, and which of those is an outlier, requires a completely different approach to artificial intelligence, and to computer programming, and actually also to social interactions with machines at large, right? We use machines all the time because we are certain that if I click on this thing, this following thing will happen. But if that ends up becoming, one, more powerful, but two, at the same time less certain, then we are in a completely different domain of human-computer interaction. And noting that there are ways of testing, evaluating, and figuring out what's good and what's bad was something we definitely tried to keep front and center in our class. Does that map onto your understanding of the predictive nature of these technologies?
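[Editor's illustration] The probabilistic next-word prediction Ranjodh describes can be sketched in a few lines of code. This is a toy, not how any real LLM is implemented: the vocabulary and probabilities below are invented for the example, standing in for statistics a real model learns from billions of documents.

```python
import random

# Toy next-word table: for each word, a made-up distribution over
# possible next words. A real LLM learns something like this (over
# tokens, with context) from enormous amounts of text.
NEXT_WORD_PROBS = {
    "once": [("upon", 0.8), ("again", 0.15), ("more", 0.05)],
    "upon": [("a", 0.9), ("the", 0.1)],
    "a": [("time", 0.7), ("crow", 0.3)],
}

def generate(first_word, length=4, seed=None):
    """Sample a short phrase one word at a time.

    Because each step draws from a probability distribution, two runs
    with the same starting word can produce different phrases, which is
    the probabilistic (rather than deterministic) behavior at issue.
    """
    rng = random.Random(seed)
    words = [first_word]
    for _ in range(length - 1):
        choices = NEXT_WORD_PROBS.get(words[-1])
        if not choices:
            break  # no continuation known for this word
        tokens, weights = zip(*choices)
        words.append(rng.choices(tokens, weights=weights, k=1)[0])
    return " ".join(words)

# Same "prompt", two runs: the outputs may differ.
print(generate("once"))
print(generate("once"))
```

Note that fixing the random seed makes the sampling reproducible, which is roughly what "temperature 0" or seeded decoding aims at in real systems.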

john-behrens_1_05-07-2024_102456

Yeah, I'm sorry, I think I just asked you to cover everything that we cover in 16 weeks in 16 seconds. But let me unpack a couple of things I'm hearing here. Just so the audience knows how these things work, and this is an oversimplification: what Ranjodh is saying is that they take billions of documents and feed them into a giant statistical model, so that these systems, as he said, just predict the next word. It's basically trying to imitate how humans construct sentences. That's it. It's not reasoning; it's just word recommendation based on the data it has seen. This is super interesting because it's data-based, and now we don't know what that data is. In the early days, people would publish the data they used; now we don't know. So OpenAI's ChatGPT has some data, other companies have other data. So, as Ranjodh said, super interesting things. And then you've got this whole prediction thing: because it's statistical, as he was saying, it fluctuates from time to time, different answers. There are so many interesting layers here. Another really fun thing that I hadn't fully realized until I started teaching the class, though I had a sense for it: it's good at some things and not good at others, just as I'm good at some things and not good at others. One of the dangers of these systems is that we think it's a magical brain that's going to do everything, and when it doesn't, we say, oh, it didn't work. Sometimes that's because we didn't prompt it right, but sometimes it's because it's not designed to do the thing you're asking it to do. And other times, given the way we structure our world, there's just no right answer.
For instance, if you ask it a question and you think there's a factual answer and it doesn't give you that answer, we call that hallucination, and people get all upset: oh, they hallucinate, blah, blah, blah. But one of the things we talk about in class, which is super fun and was a blind spot for me in the beginning, is that for lots of tasks there is no right answer, and everything will be a hallucination. Take the poem: write me a poem about how great a teacher Ranjodh is. What's the right answer? There is no right answer; my brain says anything could be the right answer for a poem about Ranjodh, because there's no fact involved. Sometimes when I give talks to business groups, I say: in the art department, everything's a hallucination; they're making everything up from scratch. There is no right answer. They're trying to teach hallucination. So it's super interesting, all this cultural expectation and how it feeds into the technology, and more importantly, the technology interaction.

ranjodh-singh-dhaliwal_1_05-07-2024_102456

The only reason John is able to say this so strongly is that he hasn't seen my teaching evaluations; there may be factual answers to how good or bad a teacher Ranjodh is, but we'll leave that for another day. One of the things I think is vital here, John, in what you're talking about, is the fact that it changes the expectations and understandings of not just the machine but also yourself. One could very well ask: how is it that we dream, or how is it that we think of things in a non-factual way, which different human and cultural systems do all the time? I'm thinking of the robust back end of psychoanalytic or social-psychological theorizations and understandings of the world that people have been developing for hundreds of years, which dovetail nicely into social and cultural production. So maybe the question, sometimes, when something goes wrong, is not to say that it went wrong but to ask: is there something right about the wrongness, or something wrong about the rightness? I think those certainties and uncertainties are where the machinic systems and the humans can best understand each other, and that's what I think we're trying to do with the Generative AI in the Wild course.

john-behrens_1_05-07-2024_102456

Ahana, you mentioned the job market and the usefulness of the class and its ideas in general. We're not only about jobs, but we always love to hear when people are successful in the market they're looking at. Can you talk a little bit about how you see that playing out? Because sometimes in class we did very practical things, and other times we did big-picture things. Can you talk about how you see some of that playing out in the job market?

ahana-sood--she-her-_1_05-07-2024_102456

Yeah, I feel like, especially in interviews, if you can talk about generative AI and your experience with it overall, given that we spent a full semester learning about it, people are slightly surprised at the depth the conversation can reach, which is always a good thing. And I think for the most part there's a realization that someone who is already familiar with all of this will be relatively more efficient in their day-to-day work than someone who is not. That, I think, is huge overall. And it's one of these rare things in the world that can be applied as an aid to pretty much anything we can think of, at least so far. So it's really useful for everyone.

john-behrens_1_05-07-2024_102456

Awesome. Jack, any comments following up on that?

jack-slattery_1_05-07-2024_102502

Yeah, so I tend to think more about the bigger picture of how it could be applied in companies. I feel like it's going to take over a lot of the more menial tasks. For example, we have automatic meeting transcriptions now. Or say you train it on your file base, and you just type in, okay, I want something that will tell me about this; it could search all your company's files and tell you about it. Or even just generating a PowerPoint for you so that your analyst doesn't have to. Or, specifically in finance, building out some financial model; it could probably build that out for you. Things that usually take a lot of time, where there's a lot of evidence of how to do it and where to get data from, as long as you can connect it the right way, it can do that stuff for you. So I think it might not just be used as a chatbot; people won't all be going to ChatGPT on OpenAI's website. It'll be in the background, making everything overall more efficient. So having an understanding of how to leverage each of those different tools is super useful. And if you can explain that to a recruiter, that's always a good thing for them to hear.

john-behrens_1_05-07-2024_102456

Awesome, thanks for that. Now, one of the things that, in my perception, we spent a lot of time on but haven't talked about yet is evaluating the systems. We've been talking about how great they are and all the things they can do, and we touched on when they don't do what you want, but I'd like to hear your thoughts about how we approached the nature of tasks: how to structure tasks, how to evaluate them, and how to come up with criteria for that. Ahana, we did a little more of that in your semester than in Jack's, but I'd like to hear both of your thoughts.

jack-slattery_1_05-07-2024_102502

We started using ChainForge, essentially a no-code, drag-and-drop way of accessing the API that lets you run the same test over and over again without having to type it in each time. Say I wanted it to give me the next move in a chess game, and I have a hundred different wordings of the setup, and maybe there's just one right answer. You can have, say, GPT-3 respond and then Claude respond, and then check those results. In code you would have to write that yourself, but in ChainForge you can just drag and drop, say "does this equal this," and build a little chart out of it. So it lets you run the test much more easily; you still have to write the prompts for what you actually want it to do and put your expected answers in, but it was a lot easier than having to do everything yourself. The class also helped us understand things like what a context window is, and how GPT-3 or GPT-4 might not answer something well when given a massive file, but Claude 3 would, with its massive context window. Then again, Claude might understand something that GPT-4 might not, but if I wanted to, I don't know, do math or something, I'd probably want to go to ChatGPT, because it has the code interpreter. Stuff like that, understanding when something would be useful, or what I should use for a specific task, was super helpful in the class.
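[Editor's illustration] The evaluation loop Jack describes, one task, many phrasings, several models, scored against an expected answer, can be sketched like this. The "models" here are stand-in functions so the example runs offline; in real use they would be API calls to GPT-4, Claude, and so on, which is the wiring ChainForge handles for you.

```python
# Stand-in "models": fake functions with fixed behavior, used only so
# the evaluation loop below is runnable without network access.
def model_a(prompt):
    return "e4" if "chess" in prompt else "unsure"

def model_b(prompt):
    return "d4"

def evaluate(models, prompts, expected):
    """Run every prompt through every model; tally exact-match accuracy."""
    scores = {}
    for name, model in models.items():
        correct = sum(model(p) == expected for p in prompts)
        scores[name] = correct / len(prompts)
    return scores

# Several phrasings of the same underlying task.
prompts = [
    "You are playing chess as white. Give the best first move.",
    "Best opening chess move for white? Answer with the move only.",
]

print(evaluate({"model-a": model_a, "model-b": model_b},
               prompts, expected="e4"))
```

Exact-match scoring is the simplest possible check; real evaluations often need fuzzier criteria (contains-answer, LLM-as-judge, numeric tolerance), which is exactly the "how do you come up with criteria" question from the class.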

john-behrens_1_05-07-2024_102456

Cool. And for the listening audience, the context window is essentially how much information the model can handle in its predictive work at a single time, and that varies across the different products you mentioned. We've been mentioning ChatGPT, which is the best known, but there's also, as he mentioned, a model called Claude, which is very good for many tasks, especially writing and summarization. One of the things we do in class is require students to use different models to get a sense of the variability across the different products. Any comments about the role of evaluation, value, and assessment that we talked about in class?
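[Editor's illustration] To make the context-window idea concrete, here is a minimal sketch of the budgeting problem: if the input exceeds what the model can hold at once, something has to be cut. Real systems count tokenizer tokens, not whitespace-separated words; the word count here is a crude stand-in for that.

```python
def fit_to_context(text, max_tokens):
    """Truncate text to a token budget.

    Uses whitespace-separated words as a crude stand-in for real
    tokenizer tokens. Returns the (possibly truncated) text and a
    flag saying whether anything was cut off.
    """
    words = text.split()
    if len(words) <= max_tokens:
        return text, False
    return " ".join(words[:max_tokens]), True

# A "huge file" of 12 words against a model that holds only 5 at a time:
doc = " ".join(f"w{i}" for i in range(12))
kept, truncated = fit_to_context(doc, max_tokens=5)
print(kept)       # only the first five words survive
print(truncated)  # True: the rest fell outside the context window
```

This is why the same massive file can work in a large-context model like Claude 3 and fail in a smaller-context one: the smaller model literally never sees the truncated portion.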

ahana-sood--she-her-_1_05-07-2024_102456

One thing about evaluation that you did: when the class had to write papers, you had designed a GPT for us to check how good a paper was, or what changes it would recommend. Sometimes it was useful and sometimes it wasn't, and we could mostly judge from the quality of the output whether it was useful or not. I think that's a live example of how important evaluation can be, and of how the human element is still required, to an extent, to recheck and evaluate what ChatGPT has done.

john-behrens_1_05-07-2024_102456

And again, just to let the audience know, the reference there is to a GPT. A GPT is a little bundle of ChatGPT configuration. What we did is write an essay-scoring GPT, a ChatGPT program, and the students could check their homework against it. And as Ahana said, sometimes it was useful, sometimes it wasn't. Ranjodh, I want to step back to the big picture. We had a year together. Thanks so much; it was super fun. We had a lot of fun, we kidded around a lot, and we complemented each other's perspectives quite a bit. I tried to bring in a little more of the business and social science side, and you brought in the humanities and cultural and social issues, as well as history. You brought so

ranjodh-singh-dhaliwal_1_05-07-2024_102456

And the jokes. And the jokes.

john-behrens_1_05-07-2024_102456

Jokes. And the jokes, the better jokes, it sounds like. But how would you do things differently next time?

ranjodh-singh-dhaliwal_1_05-07-2024_102456

I think one of the things that's increasingly becoming the case is that more and more students are coming into the class with some basic understanding of how these things work. So one of the pathways forward might actually be to have these kinds of offerings at different levels. For students with a technical background, I might be able to get to the social stuff, or the advanced technical stuff, faster. And for students with a humanistic, historical, philosophical, or cultural and social background, I might be able to spend more time on the technical basics so that they feel more comfortable and are actually able to cover more advanced material: how precisely to understand the relationship between the new AI and the old AI, or the new AI and historical frameworks, or how exactly the world works more generally, in more precise terms. I think everybody gets the basic toolkit we wanted to offer them, but since students, and even high schoolers, are already working with these systems, I can imagine, for example, freshmen coming into the class next year who may already have a somewhat better understanding and, as a result, may be able to do more adventurous things, learn more, or push themselves more. I think that might be one way I could think about this. Yeah, I don't know, what would you do differently, John?

john-behrens_1_05-07-2024_102456

Before we do that, let me make a comment about what you said, because I'm a little concerned that we see more students and faculty who have more experience with it, but more experience isn't always more informed. A year ago, the way Jack was describing, nobody had a mental model of it because they hadn't had much interaction with it, so we didn't have to fix mental models; we just said, this is how it works, and these are the implications. But now people often arrive with somewhat bad mental models, and we have to undo them a little. So in some ways it's easier, and in some ways it's harder. And one of the changes we did make between the two classes: it's hard to remember, but way back last August the world was still barbaric. We didn't have GPTs; we didn't have web browsing in ChatGPT. It was like living in the Middle Ages, wasn't it? Just kidding. It was super advanced, but not as advanced as we are now, and so much has changed. When we started last August, I don't know if you remember this, Ranjodh, we didn't even know how we were going to get the students access to the software. There was basically only one version, it was $30 per person, and there were issues around how to even get it set up; there weren't group accounts. We thought about maybe setting up a system with our own API so we could control the data, and then data privacy was a huge issue. Then, maybe mid-September, we found the product we ended up using, called Magi, at magi.co, M-A-G-I dot co, which allows us to set up teams and interact with lots of different systems. So I just want to call out that there are a lot of issues around the pragmatics of making all that work.

ranjodh-singh-dhaliwal_1_05-07-2024_102456

Just because you have worked with it doesn't necessarily mean you have a better understanding of how to think about it or how to work with it going forward, and I think that unlearning is equally important. And I like that idea a lot, John.

john-behrens_1_05-07-2024_102456

I want to double down on this idea of evaluation: how to set up tasks and how to evaluate them, because these systems are always changing. I think you and I did a good job, if I may pat myself on the back, and of course, Jack and Ahana, just raise your hands if you don't agree. Because the systems are changing all the time, we need to teach principles and conceptualizations that will stand the test of time. One of those is: how do you test any kind of computer system? How do you understand what your expectations are? How do you document them? How do you test against them? And then we had very practical, hands-on activities that I think were great. Jack and Ahana, do you have any suggestions about what we might do differently?

jack-slattery_1_05-07-2024_102502

I like the idea of splitting the class into technical and non-technical tracks if people want to go deeper, because, as a coding person, I had a ton of fun messing around with the API and learning about the tools available there, but that might not be as interesting to someone in the humanities. At the same time, I did really enjoy having the mix of projects and the mix of perspectives; the day we were talking about the implications of generative AI for the design profession got a little heated. So I loved the different perspectives, but having different focuses could be interesting. Overall, I still think it's very useful for everyone to understand the basics of how it works, because a lot of my friends see it as a glorified Google search and just throw everything in there to see what it gives them, without understanding how it actually works. If they just paste their question in, it might not come back correctly, but if they understood how it worked and what to actually target, it's similar to a Google search: if you include the wrong words, it might not show up, but if you phrase it correctly, or slightly change one word, it might show up perfectly. Understanding what to do, based on what you already know about how it works, is huge.

john-behrens_1_05-07-2024_102456

Great. All right, Ahana, we're getting toward the end here. Any recommendations for Ranjodh and me?

ahana-sood--she-her-_1_05-07-2024_102456

Nothing in particular; I think it was great. I really enjoyed the difference of perspectives as well. I guess another way you could think about it is whether someday you'd want a minor in generative AI, so that it offers different levels, with different specializations depending on people's interests. That could be a path forward, but no, I really enjoyed the class overall. And I didn't mention this earlier, but I was telling my dad that I was taking the class. He runs a hedge fund, so there's a lot of coding involved in that space, and apparently it takes him a really long time to write basic code, and then getting models running on it takes a decent amount of time as well, even though the basics are supposed to be pretty simple. So he's now trying to get someone in his company to design a class for the rest of the company, so that they can at least do the basic stuff faster through LLMs. Hopefully. So that's something random that came out of this class, which is helpful.

john-behrens_1_05-07-2024_102456

Awesome, thanks for that input. Ranjodh, I want to wrap things up by thanking you for being the best teaching partner I could ever imagine. I've only been back in academia for the last two years, after 20 years in industry, and when I thought about coming back, I thought about lots of great things: the students, the culture, the environment, time to think and work on deep things. What I didn't anticipate was finding friends and colleagues who would be so great to work with and to learn so much from. I just want to give as my last comment how delightful it has been to work with you, and to thank you so much for sharing this journey.

ranjodh-singh-dhaliwal_1_05-07-2024_102456

Thank you, John. Likewise, we can take our mutual admiration fest elsewhere, but it's been a delight.

john-behrens_1_05-07-2024_102456

Great. I want to thank our guests today, and I want to point out that while we have a great ThinkND series here, ThinkND has lots of things going on. We just finished a wonderful ten-year series put on by Jim O'Rourke in the Business College; I encourage everybody to look at all of those episodes, they were fabulous. And the Lucy Family Institute is starting a new series around AI and society. There's so much going on at ThinkND. Please keep in touch and look for new things; we have things scheduled for the summer, some of which are already recorded, so you'll have us to interact with all summer. Thanks so much.