The ThinkND Podcast
Soc(AI)ety Seminars, Part 1: AI and the Very Old World Order
Over the last few years, a growing number of scholars have argued that the impact of AI is repeating the patterns of colonial history. If European colonialism was characterized by the violent capture of land, extraction of resources, and exploitation of people for the economic enrichment of the conquering country, the AI industry is now using more insidious means to capture our behaviors, extract our data, and exploit our labor for enriching the wealthy and powerful at the great expense of the poor.
Award-winning journalist Karen Hao looks in-depth at just one dimension of this AI colonialism: the labor exploitation. The AI industry has long thrived off of a model of building billion-dollar products off of a vast economically precarious workforce. Now it is refining its playbook, seeking out workers in countries of crisis to drive down their labor costs even more.
Brought to you by the Lucy Family Institute for Data & Society and the Notre Dame Alumni Association.
Thanks for listening! The ThinkND Podcast is brought to you by ThinkND, the University of Notre Dame's online learning community. We connect you with videos, podcasts, articles, courses, and other resources to inspire minds and spark conversations on topics that matter to you — everything from faith and politics, to science, technology, and your career.
- Learn more about ThinkND and register for upcoming live events at think.nd.edu.
- Join our LinkedIn community for updates, episode clips, and more.
What does it mean for AI in society? What does it mean for responsible AI? It's not just about whether we can build the next cool algorithm, but what challenges may come with it. We start with Karen Hao, a correspondent at the Wall Street Journal focused on AI and its effects on society. She was previously the senior AI editor at MIT Technology Review, where she wrote The Algorithm, one of the best newsletters on the internet. Her writing has been cited by Congress. We could talk about all the wonderful pieces and work you have done, but thank you again for joining us today. She's made a long journey to join us.
Karen Hao: Hi, everyone. Thank you so much for being here today. It's really awesome to see so many of you in the crowd. And thank you so much to NRJ, to the Lucy Institute, and to the TESH for originally reaching out to me for this opportunity. Today I wanted to project into the future that we're building with AI by examining our past and our present. I've been reporting on AI for more than five years now, and my work has spanned both the cutting-edge research happening within academic and corporate labs and trying to understand how that research interfaces with society. I've profiled people who are creating the technology and people who are fighting to change the technology. And in the last few years, I've had some wonderful opportunities to also travel the world and profile people who are affected by the technology. So I've had a real front-row seat to its development, not as someone who is developing the technology, but as someone who observes a broad swath of the developers and the people who are affected. And from that position, I have been pretty worried about the direction that AI is headed. To be clear right up front, I want to distinguish that from being anti-AI. I'm not anti-AI. I really do believe in the profound potential of this technology to effect positive change. I just don't think that's the version we are working towards right now, or at least the one that is starting to dominate the public perception and the policy debate, which, if we don't shift that perception and debate, will ultimately get codified into laws and regulations around the world. So what we're seeing today with generative AI, the buzzword of the day, is an escalation of a rapid concentration of power into the hands of the few. That's something we've already seen within the tech industry, but I really do think we are seeing an even more rapid concentration than ever before.
And this is in particular happening at the expense of dispossessed communities around the world. To understand the depth and scale at which this is happening, I like to use historical metaphors. I call this AI colonialism, or this idea that AI is recreating a colonial world order. The reason I use the term colonialism, or use this metaphor, is not because what we're seeing with AI is as violent as historical colonialism; it's happening in a much more insidious way. But it helps illustrate how AI, instead of advancing us into the future, is starting to regress us back into the past, and a past that was deeply unequal and, for the vast majority of the world, really unpleasant. That seems to be antithetical to this promise we're given that AI is going to be beneficial for everyone, that it's going to move humanity towards greater and greater progress. So by trying to contextualize what's happening today with this parallel to colonialism, my argument is that we aren't actually getting closer to that goal. We're currently getting further away from it, and that can tell us a little bit about how we could then get closer to the goals that we share. Let's unpack this, starting with what I actually mean when I say there's been a profound concentration of power. When we look at the history of AI, going back all the way to 1956, which is when the term artificial intelligence was first coined and used at Dartmouth College, it was actually a group of researchers who came together from different disciplines and felt that their disciplines, things like cybernetics and statistics, just didn't have very sexy names and weren't getting a lot of funding. They wanted, literally, a rebranding, and this is explicitly written in some of the documents they had at the time: they wanted to find a term that would excite people more and bring in more money.
So they coined this idea of artificial intelligence, this idea that they were going to recreate human intelligence within computers, and that suddenly started bringing in lots of money from the Department of Defense and from corporations to promote this idea. Since then, since 1956, we've had two phase shifts within the field of AI. The first one was in 2012, and this was a phase shift towards deep learning systems. Deep learning is the category of AI systems that we hear about all the time today, the ones that learn from our data. The second phase shift was, of course, with the release of ChatGPT very recently, and suddenly the explosion of excitement around generative AI. Generative AI is just an even narrower category of deep learning systems that are not just learning from our data but also trying to synthesize that data: learning from our images to generate more images, learning from our text to generate more text. And what actually happened at these two inflection points? Really, it was the difference in the amount of data and compute being fed into these systems. Data, we all know what that is, but compute is the industry term for computational horsepower: how sophisticated and how powerful a computer are you putting behind developing your deep learning system? If you look at this chart of data use through each of the three phases, which I've blocked off in gray as phases one, two, and three, you can see there's been a very steady increase in the amount of data being used to train AI systems. But this chart is very misleading because it's log scale. This is the linear scale. Here you can see an absolute explosion in the amount of data suddenly being used to train systems in the last few years, which has completely dwarfed anything we've seen before. And it's the same story with compute.
This is a compute graph, also in log scale. You can see there's a steady increase in era one of AI, then a sharp uptick in era two of AI, and era three is not even on this chart because the chart was made beforehand, but you can imagine it going even sharper. If we zoom into era two, the deep learning phase, right when deep learning was introduced, on the linear scale, you can again see this really sharp swoop up in the amount of compute being used to train these systems. Here's a different way of looking at it, just through OpenAI's technology. We all know GPT-4; you may have also heard of GPT-2, which came out in 2019. The size of the circle correlates with the size of the model. At the time GPT-2 came out, it was already pretty stunning to the AI community, and now we're seeing an explosively larger model. This is of course estimated, because OpenAI doesn't release these figures, but this is an estimated size for GPT-4. So what am I trying to say here? Basically, generative AI is the most maximalist version of AI. It's not actually a new idea or a new technology. It's a very old idea and a somewhat old technology that's just had an enormous amount of resources thrown at it to try and max out its capabilities, to push it through sheer force of data and computation. And what that means is that we end up in a situation where only a handful of companies in the world actually have the financial resources, the computational resources, and the data to develop these systems. This is what I mean by a massive concentration of power. But the second thing I said was that this is also happening at the expense of dispossessed communities around the world. Why is that happening? How is that happening?
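The log-versus-linear distinction those charts rely on can be sketched numerically. The figures below are invented purely for illustration, not anyone's actual training numbers; they just assume compute doubles every year:

```python
import math

# Hypothetical compute figures that double every year (illustrative only).
years = list(range(2012, 2023))
compute = [2.0 ** (y - 2012) for y in years]  # arbitrary units

# On a log scale, exponential growth plots as a straight line: the gap
# between consecutive log values is constant, so the trend looks "steady."
log_steps = [math.log10(compute[i + 1]) - math.log10(compute[i])
             for i in range(len(compute) - 1)]

# On a linear scale, the same numbers are a hockey stick: the final year's
# absolute jump exceeds all the previous years' jumps combined.
linear_steps = [compute[i + 1] - compute[i] for i in range(len(compute) - 1)]

print(max(log_steps) - min(log_steps) < 1e-9)     # True: steady on log scale
print(linear_steps[-1] > sum(linear_steps[:-1]))  # True: explosion on linear
```

The same data looks "steady" or "explosive" depending purely on the axis, which is why the talk shows both views.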
We're in a situation right now where generative AI is so greedy, so resource intensive, that we're literally running out of the data and compute needed to sustain its development. These are just two very recent stories: one about how we're running out of internet, meaning there's not enough text data to scrape anymore to continue advancing these systems, and another about how there just aren't enough computer chips in the world to keep fueling the explosive growth on those data and compute charts. And companies have an enormous profit incentive to find ways to create more of these resources. This is an existential thing for them: if they don't find more data and more compute, they won't be able to keep progressing this technology that has become a core part of their business models. So how do they actually create more of these resources? If we return to this formula and ask what data actually is: data is people. To generate more data, companies need to find ways of continuing to extract more of that data from us. This is what Harvard professor Shoshana Zuboff has called surveillance capitalism, a term she coined even before the generative AI revolution, but generative AI has made surveillance capitalism even stronger because of this existential need to get more of that data. That data also needs to be cleaned and prepared. For those of you in data science, you know that 70 percent of your work is not actually training the model; it's futzing around in Excel or in Python trying to get your data into perfect shape, because the real world gives you back messy data. And that cleaning requires a significant amount of labor.
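That cleaning step can be sketched in a few lines of Python. The records and labels below are made up to show the shape of the work, not any platform's actual data:

```python
# Messy, hypothetical raw records of the kind described above: stray
# whitespace, inconsistent casing, empty fields, missing labels.
raw_records = [
    {"text": "  stop sign ahead \n", "label": "TRAFFIC_SIGN"},
    {"text": "", "label": "pedestrian"},             # empty text: unusable
    {"text": "cyclist on the left", "label": None},  # unlabeled: unusable
    {"text": "red light", "label": " traffic_sign "},
]

def clean(records):
    """Normalize whitespace and casing, and drop unusable rows."""
    cleaned = []
    for r in records:
        text = r["text"].strip()
        label = (r["label"] or "").strip().lower()
        if text and label:
            cleaned.append({"text": text, "label": label})
    return cleaned

print(clean(raw_records))
# [{'text': 'stop sign ahead', 'label': 'traffic_sign'},
#  {'text': 'red light', 'label': 'traffic_sign'}]
```

Half of these four rows survive; at industrial scale, deciding which rows survive, and labeling them in the first place, is exactly the human labor the talk describes.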
This has been termed ghost work by Mary Gray and Siddharth Suri. Once again, ghost work was a thing before, but with generative AI, because there's infinitely more data, companies need infinitely more labor, and there's a huge profit incentive to find cheaper and cheaper forms of that labor, and broad bases of that labor, where they can keep doing this work to perpetuate AI development. What about compute? When we talk about compute, the industry loves to use euphemisms like the cloud, but of course the cloud is actually big, sweaty data centers that use a lot of computer chips. And those computer chips are made from rare earth metals and other resources, so we literally need to draw down more of the earth's resources to support this data center development. I saw a figure recently that Microsoft currently has around 200 data centers globally, but in order to support the development of OpenAI's AI technologies, they have a plan to build around a hundred new data centers per year for n number of years to continue that support. We're also talking about enormous emissions. These data centers, when they're running, use a lot of energy. They also use a lot of water. Data centers now account for around 2.5 percent of global emissions. So if we were to rewrite our formula, data plus compute equals AI, what are we actually talking about? Surveillance capitalism, labor exploitation, resource extraction, and climate change; and what is AI but wealth, control, and power? When you think about the central question here, it's really: who gets exploited? Who gets surveilled? Who has their resources extracted? Who bears the disproportionate brunt of climate change? And who actually gets to accrue the wealth, the control, and the power? Universally, you end up finding these patterns where it's global north companies
and global north countries that are gaining this wealth, this control, and this power, while it's global south countries and global south communities that are being subjected to much of this extraction and exploitation. That's why I like to call it AI colonialism. AI colonialism, ultimately, is not just a metaphor for understanding how these dynamics parallel the past, but also how they continue from the past. It's the very same communities, the very same countries, that were exploited and dispossessed during historical colonialism that are now exploited and dispossessed by AI colonialism. There's wonderful existing scholarship that looks at this phenomenon, and I would highly encourage anyone interested in doing more reading to look into both of these. The Cost of Connection is a book that really inspired me. And then there was a paper by DeepMind researchers and a researcher at Oxford presenting this idea of decolonial AI, looking at how we actually move away from these patterns; that was also a huge inspiration for me. So what does this look like on the ground? I'll share with you just one slice of it: the labor exploitation. I wanted to start with the story of this woman, Oskarina Fuentes Anaya, whom I had the fortune of meeting in 2021. Oskarina is Venezuelan. She studied to be an engineer in Venezuela. Venezuela used to be one of the richest countries in Latin America, and a lot of people studied engineering because it guaranteed them a job with the state oil company, which was the main source of the country's wealth. She was on that path as a petroleum engineer, about to graduate from her master's program, when all of a sudden Venezuela's economy started absolutely spiraling out of control. If you weren't aware, Venezuela has in the last few years experienced the worst peacetime economic crisis in decades.
This is a chart of the hyperinflation of their currency, with a peak at around 65,000%. I've seen figures that go much higher, depending on which organization tabulated it. Unemployment in the country hit 40%. There were millions of Venezuelans suddenly desperate for work, literally lining up in the streets at grocery stores just to get a piece of bread. And these were well-educated individuals who had originally had really stable jobs and who also had good internet connectivity. It's by freak coincidence that during this time, when the country was spiraling out of control, the self-driving car industry took off. A lot of old-school tech and auto giants, like Volkswagen and BMW, right around 2016 started realizing that they were being threatened by Tesla and Uber in the self-driving business. They thought: if we're going to survive this new wave of change, we need to develop our own self-driving cars. They threw billions of dollars into that, and it suddenly supercharged the commercial production of self-driving vehicles, which meant they needed an enormous amount of training data and an enormous amount of labor to label that training data. Beginning in 2016, Venezuela became one of the biggest sources of ghost work for the AI industry, for the self-driving car industry. Again, the population was well educated and well connected to the internet, but suddenly willing to work for abysmal prices. So hundreds of thousands of Venezuelans started doing data annotation work for American companies. Oskarina was still in university when she became one of these people. She continued her master's program; she was still optimistic, but like everyone around her, she was also realistic and realized that a lot of shifts and changes were happening around her and she needed a backup plan. Data annotation work became her backup plan.
And she was, you could call it that, one of the lucky ones in Venezuela. The reason I was even able to meet her is because she's a dual citizen of Colombia. She's a dual citizen because her parents are Colombian; they grew up in Colombia, fled crisis in Colombia to Venezuela when Venezuela was the better-off country, and then, as Venezuela started collapsing around itself, the entire family fled back to Colombia. There's a lot of intergenerational trauma like this in a lot of global south countries. So they ended up back in Colombia, in a town about an hour south of Medellín, which is a budding tech hub in Latin America, in the department of Caldas. Caldas is nestled in the mountains, and this is where I ultimately ended up going to visit her. I wasn't able, safely, as an American journalist, to go to Venezuela, but I was able to go to Colombia. I really wanted to understand, and she was the first person to show me up close, what life is really like working for the data annotation industry. The thing is, when Oskarina originally moved to Caldas, she had actually planned to find a new job. She was moving for a better life. She wasn't trying to stick with the data annotation work she'd been saddled with in Venezuela. And for the first six months, she did exactly that. She got a job at a call center. She was doing really well. And then she ended up in the hospital one day with debilitating pain and a sudden loss of vision. The doctor diagnosed her with severe diabetes. Part of the reason it got so severe was that she had been through significant instability and turmoil, and she hadn't gone to the doctor and had ignored all of the symptoms because she was too afraid she wouldn't be able to pay for it. She wanted to ignore the problem and hope that it went away.
And the doctor said she had waited so long that if she hadn't come when she did, she probably would have ended up dead. Her diagnosis meant she had to immediately start taking medication multiple times a day and could no longer leave home for long periods of time, which made it impossible for her to continue working at the call center. She couldn't stay in the office, and she also couldn't commute to the call center, which was about an hour away. That meant she had to go back to data annotation work. So she ended up logging onto a data labeling platform known as Appen. Data labeling platforms are companies that act like an Uber for data. Tech giants will post jobs on the platform when they have some kind of data that needs to be annotated; they'll provide really detailed instructions on how the annotation should work, and then they assign a certain price to it. In theory, workers join the platform, and if they're willing to work for that price, they pick up the job, do the work, and earn that money. But the issue is when you have very little work and too many workers on these platforms: the workers become willing to work for any price. So what happens, and this is true of basically every data annotation platform that I've ever seen or talked with researchers who have studied this about, is that as time goes on, the tech giants are willing to pay less and less money, and there are more and more workers scrambling and actually competing with each other for these tasks. So the platforms not only have dwindling tasks; they pit the workers in competition with one another. You don't just get to do the work if it arrives on your dashboard. When the work arrives, you have to be the first one to click and claim the task, among the tens of thousands of other workers who are also waiting for their dashboard to show a task so they can claim it as well.
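That first-click-wins mechanic can be sketched as a shared task board. The class and names here are hypothetical, a sketch of the dynamic rather than any platform's actual system:

```python
import threading

class TaskBoard:
    """Minimal sketch of first-click-wins task claiming: many workers
    watch the same dashboard, but only the first to claim a task gets it."""

    def __init__(self):
        self._lock = threading.Lock()  # resolves simultaneous clicks
        self._claimed_by = {}

    def claim(self, task_id, worker):
        with self._lock:
            if task_id in self._claimed_by:
                return False           # someone else clicked first
            self._claimed_by[task_id] = worker
            return True

board = TaskBoard()
# Three workers race for the single task on the board; one wins,
# the other two get nothing and go back to waiting.
results = {w: board.claim("task-001", w) for w in ("w1", "w2", "w3")}
print(results)  # {'w1': True, 'w2': False, 'w3': False}
```

The structural point is that the scarcity sits on the task side: however many workers poll the board, exactly one claim per task succeeds, so adding workers only lengthens everyone's wait.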
So that meant, day in and day out, Oskarina never left her computer, because she was too scared that if she just left and went on a walk, a task would arrive, she would miss the opportunity to claim it, and all her earnings would be gone for that week. In fact, that wasn't just a theoretical fear; it had actually happened. She was on a walk one day, a task arrived, she missed the opportunity, and she didn't get more work for three weeks. And that was her entire family's income. So she would just sit there waiting and waiting and waiting. By the time I went to visit her, she was waiting as long as three or four weeks just for a single task, and then she might earn $10 for doing that task, which was absolutely not enough to support herself or her family. The reason she never moved on was, first of all, because there are very few other opportunities, but second of all, because when she first started, it wasn't like this. When she first started, she could earn around $300 a week, which was an incredible salary for her position and her country. So she kept staying with the platform, because of this promise of $300 a week. She really believed that perhaps if she stayed and waited a little longer, maybe the work would come back. And this was consistent across dozens and dozens of interviews that I ended up doing with workers: workers see an initial promise from the platform, then as more workers join, the opportunities dwindle, but they still hang around because they really hope it will change for the better again. So this ends up being what I wrote about in a story for MIT Technology Review that used Oskarina's story to paint a picture of an entire industry. Like I said, I interviewed dozens of workers around the world, and it was like deja vu hearing their experiences over and over again. Oskarina's situation is painfully common.
It's not unique in any sense of the word. What's true about all of these workers I spoke with is that they're almost always from the global south. And the reason these countries in the global south end up in situations where so many workers are willing to do this kind of job is that they have underdeveloped economies and weaker political institutions; the labor is cheap, and there are fewer labor laws. This, again, is how you can see that it's a reflection of and an extension of colonialism. These countries are underdeveloped precisely because their resources were used by other empires to develop those empires, and those empires were often the ones that designed the political institutions in these countries in service of the empire rather than of the people themselves. That's why we're now seeing the same pattern entrench itself when we have this new technology that can be an incredibly resource-intensive and massively power-concentrating technology. So this was pre-generative AI, when I spoke with Oskarina. Generative AI has now changed the game and made things worse. There's more demand for labor than ever within the AI industry, and that compounds the incentives for companies to pay these workers even less. I wanted to really understand how things have changed now that we've shifted to generative AI. So in May, I ended up going to Kenya, which has become a new hub for this work, starting to replace Venezuela. And Kenya is taking over from Venezuela because of one key advantage: they speak English, because of British colonization.
And so as we've moved from self-driving cars and image work towards chatbots and text work, there is actually a linguistic element to the work these workers need to do, and so many more companies are now going to English-speaking, formerly colonized countries, like Kenya, like the Philippines, also India, to find a similar base of cheap labor to continue doing this. In Kenya, it was the same story: economically underdeveloped, and it was in crisis when the work arrived, because of the pandemic. There were massive unemployment rates, especially among youth in Kenya, and it's also a highly educated population with good internet infrastructure. This photo is from one of the neighborhoods I went to where workers were doing this work. They're incredibly poor, and again, I saw workers tethered to their computers, waiting weeks or months for tasks, and doing this work as a means to support their families. There's sometimes this narrative crafted by the companies who orchestrate this work that says, oh, workers are just doing it on the side; they earn some cash, and it's great, it supplements their income. No, this is their actual job, and the only thing they're doing, in part because there's no other economic opportunity, in part because of this promise that if they wait just a little bit longer, they will earn not just money but U.S. dollars, which are also extremely precious in these countries. But the thing that changed when I went to Kenya is the nature of the labor. Again, we're no longer talking about self-driving cars; we're talking about chatbots, we're talking about image generators. And that means these systems don't just need training data; they also need to be content moderated.
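A hypothetical sketch of what that content moderation can look like on the data side: annotators tag passages against a category hierarchy, and those labels decide which safety filter gets tightened. The taxonomy and thresholds below are invented for illustration, not any company's actual schema:

```python
# Invented label hierarchy: top-level category -> allowed subcategories.
TAXONOMY = {
    "violence": {"graphic-injury", "self-harm"},
    "sexual": {"adult", "involving-minors"},
    "hate": {"slurs", "threats"},
}

def validate_label(category, subcategory):
    """Check an annotator's label against the taxonomy."""
    return subcategory in TAXONOMY.get(category, set())

def filter_threshold(category, subcategory):
    # Illustrative policy: the most harmful subcategories get the lowest
    # (i.e., most aggressive) blocking threshold.
    if not validate_label(category, subcategory):
        raise ValueError(f"unknown label: {category}/{subcategory}")
    return 0.1 if subcategory in {"involving-minors", "self-harm"} else 0.5

print(validate_label("sexual", "adult"))               # True
print(filter_threshold("sexual", "involving-minors"))  # 0.1
```

The schema is trivial to write down; what it hides is that every labeled example behind it is a passage a human worker had to read.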
So when you're developing a technology like ChatGPT, where you can type anything you want into it and it can in theory generate anything back to you, there need to be safety filters on the chatbot so it doesn't generate things that are abusive, violent, hateful, or potentially illegal in certain countries. In the EU, there are very strict laws around hate speech. So the work for these workers started becoming much more akin to Facebook content moderation work. In January, there was a Time magazine story by Billy Perrigo that broke the scoop that OpenAI was hiring workers in Kenya specifically to help make ChatGPT less toxic, and they were earning less than $2 an hour. What these workers were doing was reading hundreds of text passages a day describing all these horrific scenarios that OpenAI didn't want ChatGPT to generate, labeling them with a very detailed hierarchy and syntax (is this sexual abuse? what kind of sexual abuse? does it involve children?), and sending that back to OpenAI. OpenAI was then designing all of these filters to make sure, okay, if it's sexual, we want a sexual abuse filter, and we want the controls on our child sexual abuse filter to be even tighter. When I was in Kenya, I ended up going to meet these workers, and I want to play for you, let's see if this works, in their own words what their experience was like. This is Alex Kairu. He's 28 years old, and he was on the violent content team for OpenAI, which meant he was reading and labeling scenarios that involved murder, stabbing, self-harm, and other things like that.
Oh, my mental state was pretty bad. I had nightmares. If I see someone holding a fork, it's something I've seen in the pictures, so it's something I need to deal with. That's my entire thing. I'm like, man. Then I tell my brother, okay, just come here, sit with me for five hours, and it'll be over. Before I go to sleep, I need someone to talk to, because if I go to sleep, I start dreaming about some of it, and it brings up a new idea, a challenge.
I talked with Alex about one and a half years after he had finished this project, and he was still not normal. I remember I was in a room with him, and someone else asked him, how are you doing? And he was like, not good, not good at all. I have become socially isolated. I no longer want to see my family. My wife complains that I no longer show any affection to her. He used to be really extroverted, and he had become extremely shy and scared of strangers. And it was the same with this man, Mophat, who's also 28 years old. I learned from him that it isn't just individuals who are affected; it's entire communities, because people are part of social networks and have people who depend on them. When one person is that deeply impacted and completely starts withdrawing into themselves, taking themselves out of their immediate community, the other people who depend on them feel a huge loss as well. Mophat was on the sexual content team, and he had to deal with content like child sexual abuse. He was reading this stuff for eight hours a day, seven days a week, for weeks on end. As he continued to be exposed to this content, he became unresponsive at home. But the issue was he couldn't even explain to his wife what he was doing at work, because he had no way of explaining to her why he was reading child sexual abuse content for hours a day. So his wife had no idea what was going on. His stepdaughter, who he was living with, also had no idea what was going on. By the end of the project, a few months later, his wife decided to leave him, taking the stepdaughter, because she felt he had fundamentally changed and she had no explanation for what was going on. After that piece was published in the Wall Street Journal, Senator Josh Hawley actually ended up mentioning it in a Senate hearing.
It strikes me that we're told that AI is new, it's a whole new industry, it's glittering, almost magical, and yet it looks like it depends, in critical respects, on very old-fashioned, disgusting, immoral labor exploitation. He exactly captured what I had hoped to convey in the piece: that there is such a vast dichotomy between what we're told AI is and what's actually happening far away from people in power like us, such that we don't realize that's what actually happens on the ground. So there are many more dimensions to this story. The first of the two stories I just presented was part of a series I did for MIT Technology Review called AI Colonialism, and the other three stories in the series look at different dimensions of this colonial phenomenon, not just labor exploitation. And the question is: how do we exit out of this colonial spiral? At least when it comes to labor, I think there's some good news. We're starting to see the beginnings of a way forward. Right now we're actually seeing a remarkable moment in Kenya, with the workers doing this kind of work collectively organizing to demand stronger labor laws within their own country. This is a photo of Kenyan workers protesting in front of the court to demand that legislators not just recognize that this work is even happening in their country, which was one problem (legislators didn't even realize that companies were coming into Kenya for this work; they thought the companies were coming to set up call centers), but also to get the workplace harm laws updated for the digital era, because all of the laws laying out what is harmful to workers were based on physical labor, not digital, traumatic, psychological labor.
And it's really incredible that they are rising up: some of the most vulnerable people in society are realizing that, through coalition building, they can be a counterweight to the powerful companies coming in to scoop up their labor. They also really want to be central to the process of building AI. People ask me all the time after I tell them these stories, "So what do we do? Should we just not develop AI anymore and not use these tools?" What the workers told me is that they don't have economic opportunity in these countries, and then these jobs come in. There is actually enormous potential for these jobs to simply be good jobs; there's no rule book that says they have to be bad jobs. It's just that we don't have any legal framework right now for guaranteeing that companies will make them good jobs, so the companies turn them into really exploitative ones. But the workers want the work: Oscarina, in Venezuela, would happily continue labeling data for self-driving cars if it paid her well, gave her benefits, and guaranteed a regular salary and regular hours through her day. And not just that: companies like OpenAI often say they really want input from people all around the world, to understand how these technologies are affecting them and to help shape the trajectory of development. The crazy thing is, they already have a huge built-in feedback mechanism. If they just asked the thousands of workers they employ around the world, they would start getting an enormous amount of really valuable feedback on how these technologies interface with some of the poorest and most vulnerable communities on the ground.
But these companies don't ask these workers their opinions, and the workers have opinions: about how they want to see the technology progress, how they want to participate in it, and how they want the companies to participate in their communities. And to move forward from this, we can't just rely on workers in Global South countries to rise up and set new laws and new norms within their own countries. Global North countries also need to step up to the table, because Global North countries are the ones that host all of these companies and that have really strong regulatory power to change these companies' behavior. The U.S. has traditionally not exactly been a regulatory leader when it comes to technology; it missed the boat on data privacy. But it does have an opportunity to make up for that with AI regulation, by implementing strong AI labor laws that require companies operating in, or selling to, the American market to have better labor practices. It's the same thing we've seen with the coffee supply chain and fair trade coffee, or fair trade chocolate, or the fashion industry adopting more humane labor standards. We need to see that same kind of cleaning up of the AI supply chain: first by making these issues visible, and then by demanding regulation within the countries that we are in. Fortunately, there have already been some frameworks developed for the kinds of labor standards we actually should be implementing. This one was developed by the Oxford Internet Institute; it's called the Fair Work Principles. It lays out some really basic things about what we should be guaranteeing these workers, and it was developed not just from interviewing data workers within the AI industry, but also from interviewing gig workers doing all kinds of gig work across the digital economy.
And some of the things, when you read it, are just: workers should know who they're working for, which is really basic and is not at all guaranteed right now. Oscarina never knew who she was working for. Mofat and Alex didn't know they were working for OpenAI until it was leaked to them. So part of the issue is they don't even know whom to contest when they have problems; they don't know who they're talking to. Oscarina didn't have a single person she ever talked to, because she was working on a data annotation platform. It was just a website, and she would try to email customer service. She told me, "I swear their customer service is a bot, because it's always the same man, the same image, that responds with the same sentence, and they never solved my problem." Alex and Mofat were in an office, but their managers would never tell them what clients they were working for. And because their managers also felt squeezed to compete with other companies trying to win the same contracts from the same clients, they would never bring the complaints back up to these Global North companies. It was: be silent, do the work, otherwise we'll lose the contract to somebody else. I also asked Estremilla what she wanted, and it almost perfectly aligns with what the Oxford Internet Institute's framework says. She literally said, "I hope in four or five years Appen can become a more traditional employer, where they know we exist, they know we get sick, that we need security and health care." Ultimately, if you think about it, she's just asking for basic human dignity. And if we can really turn some of these jobs into actual, meaningful economic opportunity, we will already shore up a significant part of the exploitative practices of the AI industry. So I just wanted to end with one more thing.
She asked her uncle to take a photo of us after we had finished a four-hour interview, and she sent it to me a few weeks later with the message: don't forget about us. Thank you so much.