Episode Five: Land
Mirror with a Memory

Episode 5 | March 01, 2021 | 00:42:07
Show Notes

What is the environmental impact of AI on our planet, and what colonial impulses does this technology enable? Episode Five zooms out and up with leading AI researcher Kate Crawford, technology writer Arthur Holland Michel, and photographer Richard Misrach to look at how cameras are used to divide, extract, survey, and surveil landscapes.


Episode Transcript

Speaker 0 00:00:05 Um,
Speaker 1 00:00:06 Welcome back to Mirror with a Memory. I'm your host, Martine Syms. We've spent our first few episodes focused on people: on biometrics, visibility, invisibility, how we prove and assert and change and mask and communicate who we are. Now we're going to take a moment to zoom out — way out, and up, in a lot of cases. As we shift from variations of portraiture to variations of reading, imaging, and utilizing the land, there are two threads we want to pull on here. One is the environmental impact of how AI-enabled systems are made, how they operate, and how quickly they become obsolete. The other involves the new ways in which landscape photography and AI-powered aerial surveillance relate to one another and impact how we see, understand, delineate, and in some cases assail our world.
Speaker 0 00:01:03 <inaudible>
Speaker 2 00:01:08 My name is Kate Crawford, and I study the social, political, and cultural implications of artificial intelligence. And I've been doing that for, well, well over 10 years now. And I'm really interested in the way that the technical systems that interpellate us every day are creating particular sorts of worldviews that have far-reaching impacts. So that's really what my work has focused on for a long time: looking at the histories, the politics, and the environmental implications of technical systems.
Speaker 1 00:01:39 Kate often works in an art context too. She's collaborated with the artist Trevor Paglen on a series of projects, including Training Humans, which revealed the collections of photographs used by governments and corporations to train AI systems how to see and classify our world — many of which were used without their subjects' knowledge or permission. Trevor's work was also the subject of an exhibition at Carnegie Museum of Art that closed in March 2021. Kate's collaboration with the artist Vladan Joler, Anatomy of an AI System, was recently acquired by the Museum of Modern Art. We'll get to that project shortly. But first, I wanted to go back to Kate's earliest interest in AI, which is tied to natural disasters and early attempts to use AI to check their impact. How did you start researching artificial intelligence specifically?
Speaker 2 00:02:35 I mean, I really came to this space by studying large-scale data systems. And this was around 12 years ago. I was really looking at the ways in which social media platforms and systems were being used to study natural disasters. And this was being seen somehow as, like, a legitimate data set to understand how communities were being impacted and how governments should respond. And I was really troubled, starting to really look into these data systems and the data itself and thinking about who is really represented there, who uses these systems, you know, what kinds of groups in society are overrepresented there, which ones are underrepresented there. Of course, at that time, we knew that platforms like Twitter in particular were, you know, very dominated by people living in urban centers, by people who had higher disposable incomes, by people who were very much associated with particular forms of privilege and access.
Speaker 2 00:03:35 And using that as a type of ground truth struck me as deeply problematic. And then around, gosh, 2012, it was the first time I had the opportunity to work inside an industrial research lab.
And that was around the time when machine learning was really going through this enormous growth curve, where it was being built into so many systems — like our healthcare, like education and policing and criminal justice. And that was when I was, you know, truly alarmed, like: we are treating these systems as though they are somehow purportedly objective or neutral, or as if they have a view from nowhere — this, you know, good old Haraway idea of the God trick, this idea that you could see into the world without constraint.
Speaker 3 00:04:20 Kate's referring to Donna Haraway here, the famous feminist scholar who wrote about the God trick in a 1988 paper called "Situated Knowledges: The Science Question in Feminism and the Privilege of Partial Perspective." She wrote that the very notion of, quote, infinite vision is an illusion, a God trick, end quote. Here's Kate again.
Speaker 2 00:04:42 And it struck me that it was incredibly important that we started to study these systems as politics made material, and to see the ways in which they serve the logics of the military, policing, and profit. So that was how I began to ask these sorts of questions and began to study large-scale AI systems.
Speaker 3 00:05:01 I guess I'm wondering — like, you've described this, uh, infrastructure of power in relationship with artificial intelligence, and I think that's such a concise, clear way of thinking about what it is. And with your project Anatomy of an AI System, it really starts to visualize the vastness of that scale, and it's really difficult to comprehend, actually. And I'm curious, with that work, but also more broadly in the work that you're doing, when it comes to visualizing these systems or the impacts of them, what are you trying to help people understand?
Speaker 2 00:05:41 Oh, I love this question. And it's interesting because in some ways I see visualization itself as a trap. You can try to visualize something, but particularly with these vast infrastructural systems, it's almost impossible. And actually working at the limits of impossibility is something that I find really interesting as a creative project as much as a sort of scholarly project. So to tell you the backstory of Anatomy of AI: this was a project with Vladan Joler, who's an incredible artist and visualizer. We met at a conference, and the conference was focused on voice-enabled AI — so things like Siri, Cortana, Alexa. And we were thinking, how would you even show people what it takes to make something like Alexa work? You know, for people who don't use Alexa every day, it's Amazon's proprietary system for voice engagement and participation. It often appears in these little cylinders called an Echo unit that you can put on your kitchen bench or inside your bedroom, which allows you to essentially speak to, engage with, and summon answers back. You could say, Alexa, you know, what's the weather today?
Speaker 4 00:06:51 Right now it's 38 degrees Fahrenheit with cloudy skies.
Speaker 2 00:06:55 Alexa, what's the traffic like?
Speaker 4 00:06:57 Traffic on Forbes Avenue looks good. There are no incidents reported.
Speaker 2 00:07:01 And people see these systems as somehow magical, as somehow, you know, completely ethereal — living in the cloud, performing these extraordinary feats of search and response. But of course, they're extremely material technologies. And to make a single Amazon Echo unit, you have to be mining rare earth minerals and lithium from the ground.
You need to be working in these enormous logistical chains of smelting and extraction. And then ultimately you can talk about the technical layer, where all of these voices, all of those engagements that you have with these agents, are recorded, harvested, analyzed, um, and kept indefinitely in some cases. And then of course you have to think about the end of life of these systems. Often, you know, a unit like an Echo is only kept for three or four years and then thrown away. And where does that end up? It ends up in these giant e-waste tips in places like Ghana and Pakistan. So what Vladan and I thought about was how would you show that life cycle — the birth, life, and death of one of these AI systems — and then how would you begin to show the tendrils, those sort of extraordinary outreaches of systems that are all implicated in making one of these things work, in essentially giving us this tiny moment of convenience, where we can say, oh, you know, Alexa, what's a recipe for making dinner tonight?
Speaker 4 00:08:27 Okay. For dinner, I recommend slow cooker ribs from Tasty.
Speaker 2 00:08:32 That moment of choosing to speak to a system rather than opening a book is invoking this massive planetary system of computation and extraction. So we began to draw this — essentially, you know, we had one piece of paper, and we found that we reached the edges of the piece of paper. So we got more and more and more, until we had sort of 50 pieces of paper in front of us. And we ended up creating this very large, wall-sized installation called Anatomy of an AI System, which is accompanied by a long-form essay and a newspaper that gets printed when it's shown in a gallery context. But even there — it sort of, for me, began a type of obsession with showing the systems of extraction as material, tracing their stories in the earth, in places where, commonly, we don't think about computers as being connected to soil and sand and oil and energy. But they profoundly are, and that's precisely how they work. So that for me has become a major focus, and certainly a core theme that I'm pursuing in my new book, which is coming out shortly, called Atlas of AI. So it became a multi-year project of really trying to think about these systems of planetary extraction.
Speaker 3 00:09:47 Yeah. I also love this paradox in the way that this technology is presented to us and the way that most people encounter it. Like you're saying, it's kind of magic, where you go, Alexa, play new trap music, and it does its best. But at the same time, there's very much a material reality around the cloud. And I always found it, I guess, problematic that this idea was insisted upon — this fantasy that it's immaterial, that it isn't taking up land and it isn't using resources. I also really love this phrase, the limits of possibility. I think I find that in my own investigations with technology, always — it's where I'm pushing against. With this more recent research you've been doing about land specifically, can you explain how artificial intelligence is being used to sort of image or surveil the landscape?
Speaker 2 00:10:50 Absolutely.
I mean, I think certainly in terms of the work that I was doing for the Atlas of AI book, I was really looking at the way in which artificial intelligence is profoundly of the land and built by, you know, everything from minerals and oil and coal and water — the way in which, to create AI systems, you're literally carving out the earth to fuel these highly energy-intensive infrastructures. But at the same time, these are also, uh, technologies of mapping. And so the way in which many leaders in the AI field will say, you know, we are using this technology to map every space of the earth, or we're using it to map the entire world of objects, or we are using it to map the internal landscapes of people's emotions — which is something that we see with the rise of affect detection, which, you know, I find deeply scientifically suspect, and politically so. All of these landscapes are simultaneously being used as sites to extract data, make predictions, and feed those predictions back into systems of profit and control.
Speaker 2 00:11:58 So thinking about the way that mapping happens in those strategic political strata, I think, is certainly something that is really important — thinking about the geographies of artificial intelligence. But now, of course, that type of mapping is incredibly intimate. It's, you know, it's happening in the home space. You know, people using things like Amazon Ring cameras to constantly record their doorsteps and their streets. You're seeing all of these kinds of spaces that were previously off-limits to permanent surveillance being signed into these sort of commercial networks of constant recording and extraction. So I sort of think about this moment as being one where we have to think about landscapes very differently.
Speaker 3 00:12:47 I'm thinking of redlining, too. Anytime I see maps, I'm like, why is this so benign to people? Like, the only reason you would need to have this much information is if you plan to steal it in some way or take it in some way. You know, you think about colonialism, you think about explorations — like, that whole field, to me, from its inception, that's what it was about.
Speaker 2 00:13:13 Yes. I mean, it's something I've been thinking a lot about, obviously, in relation to thinking about the politics of the atlas. I think it's this really powerful thing — both a metaphor and a sort of literal activity that's going on with artificial intelligence: it's a making of maps. And that does have this deep connection to colonialism and to maps of domination and carving up territory along the fault lines of state power. And we could think about the direct interventions of drawing borders across contested spaces in the colonial projects of empire. And you can also think about how, even in the way you might look at Google Maps, you know, where national borders are represented — say, between India and Pakistan — is shown differently whether you're accessing that map in India or Pakistan, or you're accessing it from the U.S. And that idea that these, uh, shifting boundaries accord with forms of state power is absolutely plain on the face of it when you're studying these systems. But it's interesting, too, because I think maps can mean many things, and you can actually do forms of counter-mapping, you know — maps that kind of work against those forms of power.
And I think, in some ways, one of my favorite essays on this comes from Lorraine Daston, who thinks about how cloud atlases began to become part of the scientific record.
Speaker 3 00:14:31 Quick context here: Lorraine Daston is a science historian, and the essay Kate's referring to is called "Cloud Physiognomy." It draws a parallel between the historical classification of clouds, by way of cloud atlases, and various forms of physiognomy, including facial recognition. Like humans, clouds are complicated.
Speaker 1 00:14:52 And when we focus too much on only a few physical qualities, a lot of that complexity gets lost.
Speaker 2 00:14:59 The idea of the cloud atlas is that you were trying to school the eye, or trying to teach people that this is how you would see clouds, think about them — focus the observer's attention and draw them to considering particular telling details or particular characteristics. So I think in some ways, with counter-mapping or atlas-making, you have this ability to actually engage in a creative act. You know, it's a political and aesthetic intervention. You're bringing together the kind of possibility of rereading the world, of having different kinds of disparate landforms, um, seeing them in new ways, and editing and piecing them together. And so I think that cartographic approach, if you try to think of it outside of a sort of colonial mode of control, can actually become these sorts of collective endeavors of saying, this is how we experience our local space, our land, our set of ideas. So, yeah, I like this idea that somehow, by thinking about maps differently, by taking them away from the kind of great houses of AI and thinking about what they could do for us, we're sort of bridging the known and the unknown in new ways, and using them really as testaments of collective knowledge and insight.
Speaker 3 00:16:11 Yeah. I've been interested in that within my own work — in how people interpret space, or what their kind of psychogeographies are. You've talked also about how AI systems shape or influence, like, embodiment. And that's another big interest for me that follows, I think, from these mappings. Like, there can be a kind of cognitive distortion — when I discovered that the map that I had been taught as a child was not actually what things looked like, and that it contained within its form distortions that were based on how an ideology or worldview was being taught to me. And I mean, I learned this pretty young — I think I was, like, in high school or something like this — but it's something I know a lot of people take for granted. And so I'm always thinking about how those distortions then play out.
Speaker 1 00:17:08 We're going to consider counter-mapping as a strategy for resisting some of these more predatory types of mapping. We need to know what we're up against. And when we think about mapping the landscape as it relates to photography, surveillance, and artificial intelligence today, we inevitably think about drones, and what that God's-eye perspective is and isn't capable of seeing.
Speaker 5 I am Arthur Holland Michel.
Speaker 1 Arthur is a senior fellow at the Carnegie Council for Ethics and International Affairs. He focuses on issues related to artificial intelligence, autonomy, and emerging surveillance technologies.
Speaker 5 00:17:50 I'm also a journalist and an author. My first book came out in 2019. It's called Eyes in the Sky: The Secret Rise of Gorgon Stare and How It Will Watch Us All.
To an outside observer, it might seem like the drone has only just burst onto the scene in the last few years, but the drone has existed in some way, shape, or form for over a hundred years. There have been a whole range of different iterations of the technology, starting around the time of the First World War as more or less pretty rudimentary remote-control aircraft with simple guidance systems, and then evolving from there. Another dimension that was important in all this is that the notion of aerial watching — of observing your object of interest, be it a natural feature of the landscape or an adversary, from the sky — has existed for almost as long as photography itself. And indeed, in the earliest cases of soldiers being able to actually get up into the sky — the first known instances that we have are of French soldiers in the mid-19th century using hot air balloons — there was a desire to use that technology to watch the adversary, and, once photography became a viable proposition in those kinds of circumstances, to take photographs from the air.
Speaker 1 00:19:30 There's a 19th-century French artist known as Nadar who made a famous lithograph of a gentler version of this phenomenon: a photographer soaring over Paris in a hot air balloon. His face is pressed to the camera, which is angled down toward the city. He is so absorbed in his task that he seems not to notice that his top hat is flying off. The print is from 1863, and it's called Nadar Elevating Photography to Art. And in our contemporary context, it sort of looks like a clunky analog drone, because that's basically what a drone is: a combination of flight and imaging. There's a quality to the Nadar print that speaks to some of our contemporary concerns, too — who is allowed that God's-eye view, and what are they using it to discern? The AI layer in all this is even scarier, as machine learning is now being used to read and analyze the images and data that drones capture.
Speaker 5 00:20:27 Yes, there are some real concrete and practical concerns about using artificial intelligence to conduct this analysis, to glean information from, uh, not only aerial data but any kind of data source, because, uh, machines see the world in black and white, and they are not open to the possibility of nuance or ambiguity or confusion or uncertainty. And that is dangerous in the context of aerial surveillance. It's dangerous in the context of manual aerial surveillance — there are so many stories of drone operators seeing what they, hand on heart, could swear was a group of individuals preparing for a terrorist attack on U.S. forces in Afghanistan or Iraq, but actually turned out to be something else. The enticing nature of aerial imagery can very quickly draw you into a trap of certainty, I think, almost in a way that is more extreme than traditional photography, just because the aerial view is so data-rich and so empowering. If humans — who are actually very adept, in some respects, at being open to and acknowledging ambiguity and uncertainty — can fall into those same traps of empowered certainty with aerial imagery, the AI is going to multiply that problem significantly. And simply put, I mean, you cannot code an algorithm to be open to the level of uncertainty that exists in the world.
Speaker 5 00:22:46 It is just an absolute impossibility.
Speaker 1 00:22:56 We'll hear more on this topic from Arthur in our next and final episode. But before we return to me and Kate, I want to share one more perspective — an analog counterpoint to this trap of certainty that large-scale AI and drones can produce, in this case by way of an intimate, on-the-ground photography practice. Because in order to assess the land and our impact on it, we need to consider all ways of seeing and look for the things that AI can miss.
Speaker 7 00:23:24 My name is Richard Misrach. I have been a photographer for, I guess, 50 years now, which is amazing to me — half a century. And, um, early on I discovered the desert, the American desert landscape. I photographed elsewhere, like in Louisiana, several times in Hawaii, and I've worked in different places, but especially the American desert West has proven to be a rich terrain, both symbolically and visually, because civilization stands in relief against it. I always think of the desert as kind of a stage where things act out on it, and you can see it — it's very visible. So the desert has always been kind of the symbolic place to work, uh, in terms of talking about America. But also, I love being in the desert. I love the heat, the space, the dryness. Physically being there has been kind of an important component of my efforts over the years.
Speaker 1 00:24:22 Richard is best known for his Desert Cantos, an ongoing series of photographs documenting this very particular place. The photographs are arresting — some as large as eight by ten feet. The scale can be intentionally overpowering, as many of the images seem to place you, the viewer, physically in a vast, sometimes beautiful, sometimes ominous landscape. Each Canto is a variation, a new perspective on the desert landscape, a new approach. I'll let Richard explain.
Speaker 7 00:24:56 The Desert Cantos is something that I formally began around 1979, and over the years — I'm now up to, uh, the 40th Desert Canto. And it's a really simple idea. I was trying to read Ezra Pound's Cantos, which was a 50-year-long epic poem, and I couldn't understand much of what was going on. It was really dense. It's written in seven languages, including Chinese ideograms, and each chapter added a component to it. And in photography, I'd never seen that done before. So I basically borrowed the Cantos concept — it's a structural concept, really; it's very simple — and applied it to my desert series. So I would work on one, say, essay about desert fires and their impact on the environment, or a manmade flood in the Southern California desert, or a bombing range in the Nevada desert, and I'd put that with the other Cantos.
Speaker 7 00:25:55 And suddenly you'd get a much more epic narrative about civilization — American civilization in particular, but larger civilization — and the development and changes going on in the world. And those Cantos, on one hand, were sometimes documentary in nature, but then I would also look at conceptual or theoretical or metaphorical, uh, uses of photography. The first book I did, called Desert Cantos — the first four were of a space shuttle landing, fires, floods — and these are kind of almost biblical, right? And then later Cantos, like my skies, were driven more by conceptual and theoretical concerns.
Speaker 7 00:26:40 Back in the day — I've had four or five Volkswagen campers since my college days — I would just travel the American West. I would chase the light. Wherever there was cloud action,
I would go in that direction, and I would spend two or three weeks camping in the back of my camper. And, you know, I'd have my eight-by-ten camera in there with a tripod and coolers with film holders and film in them. And I would just spend two to three weeks traveling the desert, looking for pictures. And I never knew what I would find. In fact, often I would get a really brilliant idea for a Canto or a project, and I'd go there and it just wouldn't manifest. Inevitably, those ended up really flat. So no predetermined project ever worked. But when I'd go in my van and just be open — what I used to call aggressively receptive — just be open, wander around, and see what I could discover, I would discover, again, bombing ranges or the nuclear test sites or space shuttle landings. These were things that at first I wasn't even looking for. I only discovered them by accident. And that was part of the process: just simply getting in a Volkswagen camper and driving and looking and sleeping where you can, just pulling off the side of the road in the American desert. And the bottom line is, it's really about discovery.
Speaker 7 00:28:07 When I began photographing the border wall in earnest in 2009 — you know, I knew about the border, I knew about what was going on there in general, but boy, when I drove there with my camera — and I drove almost the 2,000 miles along the wall; wherever I could get access to the actual border wall, or the border, I would get there — I discovered that it's not this homogeneous notion that everybody in America has about what the border is. From Tijuana to Nogales to Brownsville, those towns are radically different from one another. And there's no way you can know that except by being on the ground, wandering along the wall, and discovering what's there. So that was a big revelation. And that probably mirrors my revelation for many of my projects — going out on the Bravo 20 bombing range in the 1980s, when nobody was out there, to just drive on this, you know, Mad Max landscape and camp there with my 90-pound German shepherd, Kodak.
Speaker 7 00:29:13 You know, you can see pictures of it in the newspaper, you can hear stories about it, but unless you see it yourself, you don't know what's going on. And I would say that photography is basically a license to learn about all that and share the best you can. But I know that there's no way it can really convey the full richness of what that means. One of the things I think about is that whenever photography evolves a new technology — which has kept happening over the years: originally there was the daguerreotype, and then there was the negative, and then there was 35 millimeter — these technologies that we have, they could be used for good or for ill. And that is the key. It's not that any of these technologies are intrinsically bad; in fact, there may be great value to them. It's tricky how we manage them and use them, and that's an issue. I think you can say the same about photography going back to its seminal uses, which, you know, helped develop the West and ultimately even exploit the West.
Speaker 3 00:30:21 Richard is referring to early American landscape photography here, and specifically a late-19th-century, government-sponsored survey of the American West. The resulting photographs by Timothy O'Sullivan and others are sweeping and often beautiful, and did help inspire early environmental and preservation movements, like the national parks.
But they're part of a complicated history. O'Sullivan was a pioneer of geophotography. His images documenting the landscape were used as assessments of the land that helped inform American efforts of westward expansion, resource mining, settlement, and exploitation. In a way, it's an earlier form of the kind of predatory mapping that Kate and I were talking about earlier — the kind that aims to exploit.
Speaker 7 00:31:07 Photography in itself isn't the problem. It's how we use things and exploit things, and I think that's true of artificial intelligence and robotics. Photography can be very practical, you know, functional — uh, by the lumber companies or the petrochemical companies, for sure. But I also think, in other uses, photography could become poetry. You know, it's remarkable what you can do with photography. And it's got the dark side too. Photography itself is neutral; it's how we use it, simple as that. Same thing with artificial intelligence: there are great, great boons to be gotten from artificial intelligence, and there's great exploitation potential. I mean, I can imagine in 10 years, 20 years, the advances in both those areas, and I think we need to be really careful, because they can be used for ill. But they also have great benefits for mankind. They might help us in many ways that we can't even imagine now. And I think photography has done that too.
Speaker 6 00:32:08 <inaudible>
Speaker 3 00:32:17 This all gets back to a term Kate Crawford uses a lot when talking about AI: extraction. The extraction of resources, the extraction of labor, the extraction of data — all of which happens simultaneously, in a way. How do we use these systems ethically? That's the big question.
Speaker 1 00:32:40 What should we be considering, with our continued use and proliferation, about the environmental impacts? Because I feel like a lot of times it's not even clear, so it's hard to make considered choices, right?
Speaker 2 00:32:55 Right. I mean, it's interesting, because I think rather than the word ethics, I tend to use the word politics in this context — the sort of politics of these systems, and how we can or cannot opt out of using them. It is fundamentally a collective action problem. This is not something that individuals should be made to feel responsible for, trying to make the right choices. That's the kind of neoliberal framework of individual responsibility that I think so many of the current discourses around tech have unfortunately fallen into. There are dependencies that are built throughout these systems. So really, we have to be asking, you know, what are the sorts of collective politics of refusal that are possible here? How do we make spaces and worlds that don't place these sorts of technologies at the very center? Because I think that if we're going to move towards greater justice, we have to think about these sites of extraction simultaneously, because they are also deeply interlaced.
Speaker 1 00:33:52 Remember at the beginning of the episode, when we said we were going to look at two different threads as they relate to the impact that photography, surveillance, and AI can have on our planet — the environmental impact of the technology itself, and the colonial impulses spurred by the kind of mapping and documentation this technology can enable? Well, here's where those two threads meet, and potentially end.
Speaker 2 00:34:17 When I really sort of think about what we might imagine these underlying ideologies of extraction leading to, we might ask, like, what's the end point? What's the limit to which they extend? And certainly one of the things that I've been researching in Atlas of AI is why so many of the titans of AI — the mega-billionaires; I'm thinking here of Jeff Bezos, but also Elon Musk and others — are investing heavily in space. The telos of this sort of work is that all of the billions that have been generated from the AI platforms and tools are now being sunk into creating space infrastructures of extraction — in some cases, looking at things like asteroid mining, um, or the colonization of other planets, which is something that both Musk and Bezos talk about extensively and invest heavily in through their companies, SpaceX and Blue Origin.
Speaker 2 00:35:16 I think there we see something that really troubles me, which is a recognition on the part of these companies that growth is now limited — we've reached the end of what we can do — and that this idea of constant growth is impossible on a planet that's undergoing such extraordinary stress. And so now the focus is on space: how can space be sort of mined and turned into the next site of profit? And to some degree, I feel like that's this profound moment of giving up on earth and saying, well, if we can't continue the growth curve here, we have to look into space — rather than questioning, why is growth necessary? Why this constant obsession with growth and profit-making and increased extraction? Why are we not critical of that as a mode of living? And certainly, if you look at the just staggering kinds of wealth and wealth inequality — I mean, just, you know, one of the things that we did for Anatomy was really looking at how much Jeff Bezos makes per day compared to, say, one of the miners in the Congo who extract the cobalt used in so many of the systems that Amazon relies on.
Speaker 2 00:36:24 I mean, you're talking about the fact that it would take 700,000 years for one miner to earn the amount that Bezos would make in a single day. And those numbers have only gone up, uh, since COVID, since of course Amazon has become ever more enriched. So for me, the question is this: where does this ideology end? What is the limit of extraction? And what we're seeing is that there is no limit — there's this mythic belief that you can infinitely extract, even if that means, you know, abandoning earth and traveling to other planets. And I think that, to me, is where we see the true horror of that as a template for living, and why I think we need to be far more critical of these underlying assumptions of how we should be relating to each other and to the planet that we live on.
Speaker 3 00:37:12 It's about how the scale is too big for individual responsibility, but I'm finding it difficult to have any optimism in terms of, like, government regulation. Like, they're starting to do these antitrust cases, but those companies already have so much power. It seems also difficult to have a kind of grassroots rebellion with that. And I'm interested very much in, like, speculation and imagination and these radical possibilities. It just seems like you need so many resources, and then you would need this kind of scale, to have, like, an alt AI.
Speaker 2 00:37:51 I mean, scale is the key issue. And something that I've been doing a lot of work on is, like, how do you intervene in scaled systems? How do you even understand, study, or show scaled systems? It's one of the key motivating questions in so much of what I do. And the politics of scale, I think, underlie many of the reasons that we are in the place that we're in — which is that, if we think about a lot of the large-scale machine learning systems, it's about how we do analysis and prediction at scale, in ways that are much cheaper and much easier than, say, paying people to do things. You could take the example of, say, contact tracing and COVID, right? So we've got a very long history in public health and epidemiology of using humans to interview a person who's been exposed to a virus and to say, okay, where have you been, who have you spoken to? That's, you know, it's effective.
Speaker 2 00:38:47 It is also resource-intensive. And so you've seen this kind of will to technological solutionism: say, hey, we can have apps that will do that, you know, at a fraction of the cost, and be able to scale. And what we find is that in that shift to scale, you open up a range of, you know, serious threats and concerns in terms of how else is that data going to be used, who gets to use it, how long is it stored? And also, how is that data itself misinterpreted, given the way it gathers data very differently? And then, you know, which communities are most harmed by the way in which punitive stakes are put in relation to technology? And again, it's Black and Brown communities, it's low-income communities, who face the greatest risks from the way in which that data could be used against them.
Speaker 2 00:39:32 So really, when we're talking about scale, it becomes really important to think about what forms of politics scale. And one of the responses that I've seen — and it's completely understandable — is a real focus on localism. And I think that's, you know, under COVID, very natural: how do you think about sort of politics and mutual aid and resistance in a local formulation?
Speaker 3 It seems like there's a bigger consciousness in terms of public acknowledgement of how these things are intertwined in our lives. But do you feel like people are becoming more aware and asking these questions, on the part of, like, your audience?
Speaker 2 Absolutely. You know, trying to move against the traditional politics of academia is something that's always really motivated me. And I'm interested in how we bring the sorts of investigations that both of us do into public spaces of discourse and critical engagement — just opening up often opaque systems to scrutiny.
Speaker 2 00:40:31 I think that's something that art spaces do extremely well. And I do think we are at a moment, in terms of the widespread deployment of artificial intelligence in every form of life, where we urgently need those types of conversations to be public. People need to feel empowered to have these conversations and, hopefully, to push back — and to also find a type of politics of refusal, if you will: the ability to say, no, we don't want facial recognition, you know, in our public housing building; or no, we don't want predictive policing in the streets of Baltimore. You know, to try to get to a place where people feel that they have an understanding of the systems sufficient that they can actually act against them.
Speaker 1 Thank you for listening. On our next and final episode, we'll continue on this idea of seeking a politics of refusal.
Speaker 8 00:41:34 We'll look at how surveillance and AI are used to exert power by states and corporations, but also how artists and activists are using those very same tools to push back.
Speaker 1 00:41:43 <inaudible>
Speaker 8 00:41:48 Mirror with a Memory is a production of the Hillman Photography Initiative at Carnegie Museum of Art in Pittsburgh. For more information on the artists and thinkers featured in this episode, please visit <inaudible> dot org slash podcasts.

Other Episodes

Episode 0

January 22, 2021 00:01:25

Podcast Trailer

The Mirror with a Memory podcast focuses on different facets of the conversation around artificial intelligence and photography—from biometrics and racial bias to the...


Episode 6

March 08, 2021 00:46:23

Episode Six: Power

Do we have the power to refuse mass surveillance? In our final episode, we speak with Forensic Architecture founder Eyal Weizman, who explains how...


Episode 3

February 15, 2021 00:40:44

Episode Three: Evidence

If we know that it is impossible for a photograph to be objective, then why do we rely so heavily on photography as evidence?...
