Engelberg Center Live!

Art vs. AI: The Salon

Episode Summary

Throughout the course of this series, we’ve looked at how AI has been slowly, and then very quickly, infiltrating and influencing various industries, but there’s one space where its role feels particularly complicated - the art world. We think of art as coming from a place deep inside us that is us at our most human. But what happens when art is made by computers? Is it even art making to pull the idea of art from infinite datasets and create the closest approximation? Or is that, in its own way, a whole new kind of creativity? And what does this mean for creative industries as we know them? To dive into these questions, we’re doing something a little different today. In the grand artistic tradition, we’ve assembled a small salon of thinkers - professor and author Kate Crawford, journalist and writer Hari Kunzru, playwright and performer Annie Dorsen, and art historian and audio producer Tamar Avishai - for a sprawling, searching conversation about who AI art is for, what it’s responding to, how artists are both pushing back against it and embracing it, and whether or not this moment in art, AI, technology, and popular culture is unique.

Episode Notes

Music used:

The Blue Dot Sessions, “Delicates,” “Gran PKL,” “Sorry Linus”

Episode Transcription

Knowing Machines

Episode 8 - Art vs. AI

 

 

Newsclip 1: Artificial intelligence, or AI, is everywhere. It's now part of our conversations about education, politics and social media. It's also become a hot topic in the art world.

 

Newsclip 2: What is considered art? And is technology blurring the lines of creativity and pushing out human artists?

 

Newsclip 1: Programs that generate art using AI are widely available to the public and are skyrocketing in popularity with...

 

Newsclip 3: ...image generators like DALL-E and Stable Diffusion. Almost anyone can create new art in a matter of seconds, but...

 

Newsclip 1: What goes into these programs and the work that comes out are heavily debated in the arts community.

 

Newsclip 4: AI is a tool, just like a paintbrush is a tool. I was involved; this wouldn't exist without me. Why are you trying to discredit the person behind the technology?

 

Newsclip 2: Some say the human experience and creativity is gone with AI, and the technology is the demise of artistry.

 

Newsclip 5: I'm a media artist. I use data as a pigment and paint with a thinking brush that is assisted by artificial intelligence.

 

Tamar: From the Engelberg Center on Innovation, Law and Policy at NYU School of Law and USC's Annenberg School for Communication and Journalism, this is Knowing Machines, a podcast and research project about how we train AI systems to interpret the world, supported by the Alfred P. Sloan Foundation.

 

 

Throughout the course of this series, we’ve looked at how AI has been slowly, and then, very quickly, infiltrating and influencing industries across the spectrum - the work we do, the news we read, and even how we experience the natural world.  But there’s one space where its role feels particularly complicated - the art world.  We think of art as coming from a place deep inside us that is us at our most human.  How is this something that machines could ever emulate, or, God forbid, understand?  What happens to the idea of creativity when art is made by computers?  Is it even art making to pull the idea of art from infinite datasets and create the closest approximation?  Or is that, in its own way, a whole new kind of creativity?  And what does this mean for creative industries as we know them?

 

 

To dive into these questions, we’re doing something a little different today.  In the grand artistic tradition, we’ve assembled a small salon - it’s not an interview so much as a space for sprawling, searching conversation.  We talk about who AI art is for, what it’s responding to, how artists are pushing back against it and embracing it, and whether the concern it's raising has historical precedent with every new technological wave, or if this moment is unique.

 

 

We hope you enjoy it.

 

Kate: My name is Kate Crawford. I'm a professor, a researcher, the author of Atlas of AI, and also recently opened a new exhibition called Calculating Empires, which is all about technology and time.

 

Annie: I'm Annie Dorsen, and I'm a theatre maker and performer. I've been working at the intersection of artificial intelligence and performance for about 14 years, and most recently I made a piece called Prometheus Firebringer, which is a response to generative AI.

 

Hari: I'm Hari Kunzru, [00:04:00] I'm a novelist and essayist, and I suppose a recovering technology journalist. I've been following AI's use in language and writing for a long time, and I'm in a kind of strange, critical place at the moment.

 

Tamar: I'm Tamar Avishai. I'm an art historian, journalist, and audio producer.

 

Kate: So gosh, about a year ago now, Hari and I discussed this idea of putting together a little salon, if you will, on generative AI and inviting some of our favorite writers, artists, performers, musicians to be a part of a sort of small, ongoing set of conversations. Uh, so we've been meeting more or less monthly, most of the time in New York, but occasionally in other places like Berlin. And it's been a really extraordinary year to be having these conversations. When we first met, we had just seen the release of ChatGPT, and I can remember Hari saying to me, you know, I would really love to have a conversation with you about this because I'm a little worried that this thing is going to eat my lunch. And I thought that was quite a bold statement very early on in the development of this tool that back then looked quite janky, to be honest. Um, and of course, since then it has indeed devoured many lunches. Um, so since then we've had this set of small gatherings with writers like Ted Chiang and Scott Westerfeld and Geoff Thorpe, and visual artists like Simon Denny and Trevor Paglen and Heather Dewey-Hagborg and Lynn Hershman Leeson, who, of course, has been working on AI-related art since the 90s, as well as musicians like Holly Herndon and Mat Dryhurst. So it's been a really interesting time to [00:06:00] have these conversations. And of course, Annie has been a part of this as well. So this has been a really nice way to bring us together and perhaps share some of the conversations, ideas, concepts that have been coming up in those more private conversations and share them more widely.

 

Annie: Yeah, I'll say for myself, going to the salon just last month, I think, was fantastic. I haven't always known whether people are sharing my revulsion. And, um, of course, I was there for part of the conversation with Ted Chiang, who's writing about this stuff in The New Yorker, which has given me some new framings to think about what's going on.

 

Tamar: This idea of AI eating your lunch as a writer, as a performer. Is the concern the one I've heard, which is that a computer can now create works on its own, and therefore we should be scared that creative fields are going to be rendered irrelevant or obsolete because computers can do it for you? Or is it something that I haven't necessarily thought of, that goes a little deeper than that, or maybe a lot more existential than that? Unpack that idea of what this concern is, where the repulsion is coming from.

 

Hari: I mean, I came into this at a certain point. It actually took me quite a long time to get access to the tools. I kept sending off hopeful emails to the makers of the tools saying, I'm a novelist, you know, I'm the kind of person you might want to play with this thing. And so I turned up just at the point when the LLMs, the text-generating AI, were making what I thought was a very interesting, uncanny valley kind of work that was visibly not [00:08:00] human, was making connections and producing a kind of text that seemed, in a way, inhuman. And I found that moment quite exciting. And it was almost like as soon as I put my hands to the keyboard, the next generation of AI came along, and all that interest was completely flattened out, because these things are being optimized, by and large, for commercial applications such as, um, customer service or whatever. And you don't want your customer service bot to be making bizarre, hallucinatory connections between things. So there was a first kind of thought that I had about how I wanted to use it, which was as a kind of interlocutor, as a kind of dream machine. And I think I might be able to go back to that at some point. But then, as these things got efficient and it became clear that people were going to deploy them in situations like newswire aggregation and various kinds of quasi-journalistic text generation, it became clear to me that we're going to be entering a world where the cost of generating text is, to all intents and purposes, zero. So we'll have an infinite amount of text, and image, of course, and other kinds of media - moving image, music too.

 

Hari: But essentially we'll all be operating in a world where there's an infinite amount of media. And we're used to, in a way, subsisting - not in a way, actually subsisting - on our ability to produce this scarce material. So what we're looking at is a kind of moment of extraordinary recalibration, where we're going to have to ask ourselves: what is it that we value about text? You know, if I can press a button and generate 2,000 words in a second, what's the difference between the GPT 2,000 words and, [00:10:00] I don't know, the T.S. Eliot 2,000 words - or name your canonical text generator of choice? Um, and, you know, it's not good enough at this point to say, oh, well, the human is somehow ineffably better, because I think we're on a path where these machines will be able to search the space of possible words in ever more nuanced and interesting ways. So, um, this is the question we have to ask ourselves. And I mean, I have a few answers that maybe I can come back to later. But I think the landscape we're in as creative people is now one where just generating a piece of "content" - you can't really see me doing air quotes over an audio medium, but I am air quoting my little heart out - just generating content is not good enough, and we're going to have to ask ourselves what role our work plays, what connections it makes, and who we connect with.

 

Annie: Yeah, I was thinking about the writer Amy Castor, who I think writes mostly about crypto, as a critic of crypto. And she said something about AI which I keep thinking about, which is that AI is not going to replace all jobs; it's going to replace some of them, and the rest it's just going to make much more alienated. Uh, so, you know, as somebody who works in performance, that's more or less my situation. Theater artists are probably not going to be the first in line to have their work replaced by the product of generative AI. But the culture as a whole will be incredibly impoverished by the ubiquity of these tools and, like Hari was saying, this kind of churn of content, which replaces maybe some of the other, [00:12:00] I don't know, the other roles that human expression plays, when all generated media becomes functional, in the sense that it's, you know, good enough to serve a function. You know, that's a pretty hopeless situation, I think, in terms of, I don't know, our aspirations for what our culture is all about. So I do take a somewhat more existential view of this, on top of some of the concrete and kind of material challenges that artists are facing. It seems so lonely to me. If you think about creating art as a process of thinking - thinking maybe alone, thinking with other collaborators, and then finally thinking with a viewer or a reader or an audience - you know, that's the stuff that I want from culture: opportunities to, I don't know, interact with other minds. I don't want a serviceable ream of automatically generated text that follows some kind of formulaic, statistically optimized version of serviceability.

 

Hari: I think you've already hit on the crucial thing. It's about loneliness. And I think what we will carry on valuing is that sense of connection. We want to feel there's a voice on the end of the line. Um, and I always think about the kind of automated messages that apologize to you when your train is late or your plane is late, and how hollow they feel, because there's no agent there. There's no person there who's taking a moment to feel bad for you because you're delayed and issuing an apology. It feels like a con of a certain kind. And in the same way, the experience of some kind of pattern [00:14:00] of words or pixels or whatever it would be - gestures - if that's just generated... I mean, maybe you would be able to emotionally invest to some extent, but my suspicion is that what we really want is that feeling of connection, that experience of the face of the other being, uh, you know, revealed to us, and that in this near, very near future world of infinite content, that's the scarcity. The scarcity is intersubjective connection. And also, in a slightly different way, the scarcity is going to be the people who can pick from that flow and say, this is interesting, this is a thing that means something right now.

 

Hari: Uh, and again, that's part of a human subjectivity: that moment of choice, that moment of, you know, curation, to use a fashionable word. And I think that it's going to be a very wrenching experience for a lot of people. But I think just the fact that we do understand how hollow and lonely a certain sort of passive experience of consumption of media could be without this - the thing that we really value in art, which is: oh yes, this person made a thing and they feel the way I feel, or they have given me a tool to express myself or to express my state of being more fully. That is what we want. That is what we will still search for. And we'll have to make economic and kind of search structures around that, that will allow us to find those things. And I think we can do that with AI-generated content in many ways. We'll be able to kind of quickly pull from the flow and assemble things and show each other things. And, um, it will be a very different kind of experience sometimes. But I also think that we will value the work that comes from another human subjectivity.

 

Annie: Yeah, [00:16:00] I was just thinking: in 2012, I worked with the actor Scott Shepherd on a kind of algorithmic Hamlet project, and there was a lot of computer-generated text - very stupid computer-generated text, like really early forms of natural language processing. And I remember him saying, you know, it's so interesting, because it's like you've put this sort of algorithmic process in between the audience and you, but you still are the person who wants me to see this. So the communication remains. And I've thought about that a lot. You know, it's a question of desire, as well as a question of wanting to share experiences - the artist is saying, look at this, I want you to see it, fellow human. And that's the mind that we're in the presence of when we're seeing something fantastic: we're being invited by someone to live with their mind for a while. And that's irreplaceable. So I don't know if we're going to be able to approach that feeling just through a selection process from amongst various kinds of AI-generated content. It seems to me the "look at this" from one user to another - you know, when you send somebody a YouTube clip or something and you say, oh, check this out - that's a little piece of the artist's "I want you to see this." But it's more like, I don't know, I was just thinking about Guy Debord's version of spectacle, where you connect with each other through both of you pointing and looking at a third thing. That's a little bit different from the kind of connection that an audience member has with the artist through the work directly.

 

Hari: I think [00:18:00] that's true. I also think that there's a sense in which you could see these things as a sort of higher-order palette or alphabet, and, um, maybe you would then have the same experience of agency and carefulness. I mean, me sending you a cat video and saying "cute" is one thing, but me making a kind of series of high-level connections between different kinds of objects, and making structures out of them that are specific to our communication - that could be a properly artistic form.

 

Annie: Yeah, I totally agree with that. And it's funny, talking about these things, I always find myself getting caught in a problem, which is that artists will always make interesting things out of the materials they have at hand. So, you know, is it possible for people to do something super fascinating with these tools? Of course it is. But we're dealing with a certain kind of scale, where the vast majority of stuff that gets made with them will not be of that level. So, you know, I don't really worry about myself, for example, or other professional artists, whatever that means - the ones who devote their time full-time to working with materials and trying to coax interesting forms from those materials. I worry a lot about the hobbyists and the amateurs and the kids who start playing around with these instead of picking up the proverbial paintbrush.

 

Hari: I don't know, maybe.

 

Hari: They'll use them wrong. Some kid is going to pick one up and just manage to break it, whatever it is. And, um, you know, the kind of work right now that's being sort of initially valorized - I mean, everybody dunks on Refik Anadol, but think about Refik Anadol, the visual artist. He's [00:20:00] making these extraordinarily high-resolution abstractions. And, you know, it's like a very, very, very fancy lava lamp. And it's beautiful, but it's a very anodyne experience of beauty. It's impressive. I mean, I think the perfect destination for that is this new - is it called the Sphere or the Globe or the Orb? - the thing in Las Vegas. I actually think that object, that kind of new screen, is a very interesting thing to think about right now, because it's a kind of spectacle that dwarfs even the Vegas skyline. It's a kind of new level of gigantism. But yeah, I mean, that, with all the kind of bubbling business, is a huge popular spectacle, and everybody can Instagram it, and nobody will be changed at all, or challenged, or find something that is food for their soul in any kind of meaningful way. So the kind of work I'm - I mean, I am very drawn to the uncanny and the strange. I love the fact that right now visual AI has trouble with hands.

 

Hari: I love it - any picture of two people with seven-fingered hands shaking is all right by me. And that's going to be an aesthetic that will be very temporally marked. I mean, we'll think of that as a kind of 2020s tag, you know, very soon, as soon as that's sorted out. So there are these sort of aesthetic things happening at the edges that I'm hopeful about and interested in. I mean, the question of models to sustain artists, and the kind of valuing of what we could call, for want of a better word, good work - that's [00:22:00] a kind of eternal thing, you know? I can't remember what the name of the law is - that 99% of everything is rubbish - and it's always been the case. And, you know, maybe the job of finding our way to the good stuff is going to be harder. The thing I'm actually worried about is that a lot of people's day jobs are going to go. It's the repetitive grunt work that you do right now - the copyediting, the translation, the kind of Photoshop-jockey work - all that stuff that is sustaining a lot of strange and creative people: that's what's going to be munched by the tech.

 

Annie: A lot of that work is also actually really good for people's craft, and it's really good for people's development as excellent readers of their own and other people's work. Uh, you know, I want to come back to this thing before we move into some of the other kinds of labor issues. Um, I've found it very difficult to break these things. You know, that's what I've always done. So, going back to 2009 and my first project, which used a chatbot - but like an old-fashioned ALICE-type chatbot that is really stupid and works from a big database. You know, it's really easy to create, I don't know, openings when you're dealing with that tech, right? Like, that tech is dumb enough that it reveals itself, it exposes its workings, and for an audience it's great fun, because they see the illusion of, you know, conversation being constructed, and they see it fall apart, and they see it fall apart in really weird ways that are not like human ways to have a conversation fall apart. So you get access to this kind of algorithmic logic by working with the stupid tech. One [00:24:00] of the reasons I really don't like this new class of statistical prediction text generators is that they're incredibly hard to turn inside out.

 

Annie: I can't seem to do it. You know, with GPT-2, an earlier release, you were able to easily stop the training on a given corpus at a certain point, so you could see how the tool was optimizing itself over many, many, many steps of processing. That's now all hidden in current commercial releases. I'm sure people are still doing that, obviously, as they train their own models and whatever. But you have the foundation model, which is so vast, and it's just off the shelf, basically, made available. And when you do little trainings on top of the foundation model, you don't really have access to all that weirdness. It's very, very difficult to break it or use it in ways that allow it to expose itself. That said, I think it will get even harder, because the companies are now trying to create more and more safety layers on top of their models. So they're adding a whole bunch of blanket rules and exceptions on top of the model so that they won't produce hate speech, pornography, or other objectionable things. I saw an example of the new DALL-E.

 

Annie: I think we're up to DALL-E 3 - if you don't specify an ethnicity for the character, say you ask for a picture of a woman on a train, and you don't specify that you want a Black woman on a train or a white woman on a train or whatever, there'll be a random assignment of ethnicity to the image that's generated. That's just a pure layer [00:26:00] right on top. So whatever you input, it's going to spin the Wheel of Fortune and pick an ethnicity. And that's obviously in response to criticisms that there's algorithmic bias, that it favors depictions of white people in certain roles, and that it has a lot of stereotypical outputs. So that makes it even harder to think about breaking it for interesting purposes, because it's becoming so controlled and so locked down to whatever Silicon Valley thinks is an acceptable median, um, or an acceptable range of expression. I think that's going to get more and more intense, because obviously they're trying to fight off any idea of governmental regulation that would really impact their bottom line. So they're going to do a lot of proactive, prophylactic kind of fiddling with their models to try to make sure that they stay sanitary.

 

Hari: I mean.

 

Hari: Two things on that. I mean, I spent a lot of time - admittedly about six months ago, so things may have changed - trying to do various kinds of drama and jeopardy with ChatGPT. And it had enormous trouble with certain sorts of fundamental things to do with dramatic plot lines. You know, two guys in an alley, one's got a knife, they walk towards each other - and it invariably turns out that they're just friends, or there was a mistake, or it just shuts down that kind of thing as quickly as possible. I mean, a later iteration was a bit more aware of conventions and would actually allow you to run a fairly anodyne kind of fight of some kind. But I think that maybe, with all these safety-related, [00:28:00] algorithmic-bias-related layers on top, they are, in certain ways, excluding these models from the possibility of generating certain kinds of literary art - and I think maybe other kinds of art as well - in that, you know, people are interested in the thing that's edgy and risky and transgressive and strange. And as we've mentioned in passing, these are stochastic machines. These are machines that will find the middle way.

 

Hari: They are programmed to find the most likely response, you know, the most likely next token in the sequence. And that's almost a precise definition of what art isn't. Um, so maybe that hollows out a space for traditional art making, human-centric art making, in that the models - the commercial models, certainly, because they have to be optimized for various kinds of commercial uses - won't be able to do some of the stuff that artists would want them to do. I mean, obviously then people jailbreak everything. And I imagine - I don't know, this is one for Kate, really. Is it going to be an ecosystem in the future with lots and lots and lots of models, as the cost of training one goes down, where every conceivable jailbroken, you know, no-safety, Musk anti-woke AI type thing is around? Or is it a kind of situation where you have to be such an enormous corporation to do a training run, and the costs are so huge, that actually the only games in town will be the big 3 or 4 companies? I mean, what do you think our future landscape is going to look like?

 

Kate: I mean, I think our future landscape looks a little bit like what you just described in Las Vegas. You know, the gigantic Sphere, which is this completely [00:30:00] reflective surface with no cracks in it, with a sense that it's always going to give you back the perfect customer service experience - that is really where we are at now, and certainly in the near future. We're looking at essentially the creation of mega models, what are also known as foundational models, that are being used as the basis upon which other things may be built. Uh, fine-tuned models that may be designed for writers and will allow you a little bit of drama within bounds, or, you know, designed for the - yeah, within reason. You wouldn't want anything unpleasant in there.

 

Hari: You'd have a little blood as a treat.

 

Kate: Exactly. Just a little bit. Um, so I think there's something really quite evocative about this idea of the Sphere sitting there above the landscape, dwarfing everything else, because that's certainly where we're at in terms of the AI industrial landscape. There's really only a few players, a very small handful, who can do this at scale. That may not always be the case. We are certainly looking at a set of research papers that have come out just in the last month pointing to the possibility of smaller models, less training data, etc. But it's a little bit like McDonald's, you know - once there's this kind of industrial model, which is completely based on factory farming text and images... And this is another form of lunch eating, to take us back to your original question, Tamar. The lunch that's being eaten is, of course, every form of human text, image, film, video that has ever been released in a publicly accessible form, which has been, you know, mushed together in a gigantic blender to produce a remarkably homogenous set of outputs. I mean, there was the fantastic essay about the American smile - that, you know, every time you generate an image, uh, in Stable Diffusion, it could be, [00:32:00] you know, an image of, say, the Civil War.

 

Kate: You'll get people with these fabulous teeth smiling back at you, because, of course, it's been trained on so many images from e-commerce sites and Pinterest and aspiration boards for people's weddings and you name it. What you get is this very weird type of super-normalization. Uh, and that, to me, is again a sort of a sense of this perfectly smoothed superstructure that I think in some ways is antithetical to interesting art. But it can produce a lot of stuff, and it's going to entertain a lot of people in a certain way. And I think, um, to me, the question is what will remain. Like, are people actually okay with that? Do we really just want the mega entertaining sphere that is going to distract us in the, you know, few hours that we have outside of our increasingly, you know, enshittified - to use Cory Doctorow's term - jobs, as more and more of the kind of work that felt individual and unique is automated from underneath us? I mean, I think that's the landscape we're looking at.

 

Tamar: But when we're talking about art, hasn't there always been this giant sphere capturing people's attention? You know, since the 19th century, since before, where the majority of people are largely drawn to the lowest common denominator of entertainment. And on the margins of that, you have people making really novel things, making really profound things - dunking the cross in piss, you know - because they aren't feeling constrained by other social mores. I mean, that's why artists have always existed kind of on the fringe of society. And I wonder how different that is than it was. I mean, we've seen new technologies come along and [00:34:00] introduce things like the Sphere. And then there are artists who look at that and say, well, what is that thing that they're really drawn to? That's where I can make a statement. That's where I can make new art from this very kind of large, subduing technology. How is this different from what's come before?

 

Annie: I think there's a bunch of different ways to respond, because you said a lot of things in there. You know, one thing - I always get a little nervous when people talk about how it has always been thus, um, because actually it has not always been thus, uh, and there are phases, let's say, where the idea of what art is and what it's supposed to do changes radically from one generation to another, from one century to another, from one geographical location to another. I don't think it's fair to say there's been this dominant sort of pop culture model, and then there's been this fringe, profound, avant-garde model. I think that is one schematic way of describing a certain moment in history and a certain part of the world. Uh, you could think about the notion of cultural capital from Bourdieu and think about how what we understand as great art is also a reflection of our class position, of our educational background, of our geography, of even the very local community that we're a part of. You know, when I think about how technology in sort of recent art history interacts with the so-called, you know, avant-garde, one thing we might think of is: how did artists first respond to electric light, for example - something really basic? And you've got everything from, all of a sudden, creepy new effects in the Grand Guignol theater, where you could make people look [00:36:00] crazy or monstrous or whatever, to, um, you know, the Futurists, who saw speed and electric light as part of the coming fascist takeover of the world.

 

Annie: Right. And it was a masculinist and, um, you know, hyper-nationalist vision of what the future was going to be. And it became entirely associated with the Italian fascist movement in the late '20s and '30s. So there's always a political element as well to how technology gets adopted, how it gets used, how it gets talked about and framed. You know, we're at a moment right now, I think, when there's a heavy ideological question about generative AI and how artists will use it. So again, there's a few different ways to approach that question of ideology and AI. You could look at what Silicon Valley is doing. What are the dominant, I don't know, theories of the world that are popular amongst venture capital and venture capitalists and amongst Silicon Valley CEOs? A lot of these companies are flirting with eugenics ideas. Well, if not flirting with eugenics per se, they're flirting with white supremacy, for sure. Um, they're flirting with this notion of sort of optimizing the human for a kind of high-tech, cosmic future of man-and-machine cyborgs. That's not a vision that I find compelling, for example. But, you know, this sort of ideology is informing all the decisions these big companies are making. I mean, I just want to say maybe one more thing, which is kind of related.

 

Annie: I don't know if it's directly related, but one thing that we haven't talked about is this notion of democratization, and how generative AI [00:38:00] is marketed, often, as making these tools available to everybody. And that's the ideology that they're putting front and center, while maybe behind the scenes they're thinking in quite different terms. Uh, I always think it's exactly the opposite of democratization. To me, it's privatization. So instead of tools that exist in my room that I can use to make whatever, and no one can stop me, you know, and maybe they're free because I found them on the street and hauled them up to my apartment, I now go to, like, a giant tech company for my materials. I keep dipping into their well, paying them fees, dealing with their terms, dealing with their default settings. And that, to me, is, you know, part of, again, speaking of Cory Doctorow, right, that's part of the process of creating walled gardens, where we all now have a direct and intimate relationship with these big tech companies, which maybe are replacing other kinds of relationships that we could be defining on our own terms with the people that we interact with every day. Which is one definition of culture, right? It's the kind of interactions you have with the people that you live and work with. So that's just a little hodgepodge of thoughts in response to your provocation, Tamar, which was interesting.

 

Hari: Oh, there's so many different things I wanted to say to that. Um, I mean, this question of sort of mass culture and the avant-garde, a paradigmatically kind of 19th-into-20th-century notion of how art happens and culture happens, is interesting in that you're absolutely right, Annie, that it's highly historically specific. And in a way, we're living through the total breakdown of that model. And one of the things that I think generative AI [00:40:00] is bringing to a close is, let's say, the kind of sharing era. There was a kind of period of softening up between the early days of the internet and recently, when we were encouraged to share. We were encouraged to put our lives online. We were encouraged to put as much material online as possible. And now we've suddenly discovered that an enormous enclosure has happened, at least as significant as the kind of physical enclosure of land that happened in Europe in the Renaissance period. And the response to that that I'm seeing is a sort of flight from the clear net, a flight from the open net, into what I've heard called the Dark Forest. Um, people are doing things, like a salon I'm in: it's not recorded.

 

Hari: It's not publicized. It's a network of people who are not producing content for the consumption of others, just trying to, you know, talk to each other. I mean, you look at how a lot of younger people use the net. They're very interested in anonymity. Things have gone from, you know, your grandpa's Facebook towards private Discord channels, all sorts. People are fleeing away from a kind of enforced self-presentation that has been the hallmark of, let's call it, the millennial internet. You know, the cup of coffee on the blond wood table, a kind of aspirational self-presentation for maximal reach. That is the craziest thing one can think of right now in the internet of 2023. And I think generative AI, because it's just hoovering up all that aspirational content and splurging it out at us as a kind of undifferentiated goo, is bringing that to an end. I'm very interested in ideas [00:42:00] of decoys and incoherence and evasion, and all sorts of strategies to actually avoid a certain sort of enforced self-exposure and self-presentation, self-performance, online. And I think we're in a moment where a lot of our interactions are becoming fugitive and secretive and fleeting, and for a few people. And, you know, things like live performance, the kind of interaction of two bodies in a space, maybe not recorded and not broadcast on the interwebs.

 

Hari: And these things are feeling profound and interesting in a way. You mentioned the underlying ideologies of our Silicon Valley overlords, and, uh, I mean, so many things to say about that. I'm super suspicious of the idea that everyone is an artist. The idea of the creative, of creativity as a sort of fundamental human value, is deeply, deeply suspect. In the way that the boomers had revolution taken from them and sold back to them, the Xers and the millennials are having creativity taken away and sold back. Um, and I think that the kind of secret bet in Silicon Valley is a kind of bipartite division between full humans and bugmen. The full humans are the ultra, ultra-optimized full subjects who will have, you know, extraordinary transhumanist capabilities in the real. And the bugmen will not be able to have those lives, because, I don't know, a planet of 15 billion [00:44:00] people cannot support that many fully optimized billionaires with enormous views and million-dollar blood transfusions.

 

Hari: So, I mean, I noticed that Marc Andreessen, who's the A16z, um, what is he, director? Overlord, head honcho, panjandrum, pharaoh. He has proposed this idea of "reality privilege": the idea that people like us, who scorn the metaverse and say, well, you don't want to go on the metaverse because the metaverse is going to be worse than the real, are speaking out of the possibility of having these optimized lives, of being one of the lucky few, the elect, who get to have a real life. And people like Andreessen are saying that the moral thing to do is to make a metaverse that's as great as possible for all the bugmen who won't ever be able to have that in the real. I think this is a terrifying vision. But I think it is the vision that's driving this. It's going to be, I mean, everybody's worst Matrix nightmare. But sold to us as "everybody is an artist."

 

Kate: That's it. And honestly, that manifesto, if we can call it that, from Marc Andreessen really was hearing the inside voice said out loud. It really was the encapsulation of that vision of: you're either one of the supermensch, or you are the content cow that we will extract everything from and slowly kind of bleed to create the matter of this contained virtual world that you will be asked to live within, while we are in our private jets and, you know, [00:46:00] extraordinary, splendid estates. And it also makes me think, we've touched on these historical resonances from the 19th century, but I'm also thinking about the 20th century here, because there are these little echoes I'm hearing in our discussions that are really making me think of Adorno and the culture industry. There was his critique, with Horkheimer, of the moment when the culture industry was creating this sort of standardization of cultural products, where everything was just interchangeable parts, and we were being turned into passive consumers who were easily manipulated and controlled, and we'd lost authenticity. This was the absolute core of Adorno's vision for what was wrong with the first half of the 20th century. And yet here we are having a very similar conversation almost a hundred years later. And there are critiques of Adorno, too, I think really valid ones: that Adorno refused to acknowledge jazz as a form, refused to look at emergent forms that weren't part of his vision of high culture. Um, and I wonder, as a provocation to both of you, what are we missing from this story? And is there something that we are perhaps clinging to, this kind of vision of the authentic, fully creative human, that is also a kind of elite vision? A somewhat different elite vision, of course, to Marc Andreessen's, but an elite vision nonetheless.

 

Annie: Yeah. I mean, there's no surprise to me that we're having this conversation, which echoes Adorno, and Benjamin also, at a time of rising fascism. Both of those guys were writing in the context of "how did we get here?", meaning, you know, 1930s Germany. And [00:48:00] I have that question daily: how did we get here? To me, the question about how culture develops in the shadow of rising fascism is freshly and horrifically relevant. I don't know if I have a big feeling for authenticity, exactly. You know, I do computer art. But I do have a feeling for trying to get out from under the roof of corporate control, and for handmade algorithms. That's what I was always working with, like the slow algorithm movement, right? Working with computer programmers, and through discussion and thinking and trial and error: what do we want to make? And then you sit there and you kind of deal with your materials. In this case, the materials are code, um, or mathematical principles.

 

Annie: And, you know, that has been super fulfilling to me, and doesn't feel very different from any other kind of artistic process. Dealing with generative AI feels super different. I compared it, in something I wrote, to playing the slots. Like, you stick a little penny in, you pull the lever, you see what you get. The penny in this case is your prompt, or your literal penny when you pay for the service. And then, if you don't like it, you kind of give your prompt a little twist and you see if that makes any difference. But it's impossible to know if it did or not. Was it just pure randomness that you got a different output, or did your change to the prompt make anything else happen inside the machine? You don't really know. And, of course, the other thing with gambling is that you are always in a powerless position vis-à-vis the casino.

 

Kate: The house always wins.

 

Annie: The house always wins. So, you know, a person can end up with a windfall, but we're also dealing with an aggregate effect. I was [00:50:00] super influenced by early computer artists in the late '60s and '70s who called themselves the Algorists. A lot of them talked very explicitly about the difference between what they were doing, which they thought was very interesting, and what other people working with computers were doing, which they thought was not interesting: whether you're writing your own code, writing your own algorithm, or using off-the-shelf software. That's a distinction that I think is worth keeping in mind about what's really wrong with the big companies and their commercial products: essentially, all the interesting decisions have been made by other people before you got there.

 

Hari: I mean, I think that the kind of landscape we're sketching out here, we've just really begun to broach all the different aspects of this. There are material questions about how people will subsist and make work at a moment when the machines are going to be, as we've said, eating a lot of lunches. There's a set of aesthetic questions about what we actually value from art. And I personally suspect that there's something to do with intersubjectivity and care, which is a word that I think could be very important for us going on. And then there's the broader horizon of politics, and what the kind of mass enclosure of the cultural sphere by a few tech companies actually means for our future. But I really hope we can carry this conversation on again, because, like both of you, I have so much more to think about.

 

Tamar: You've all given me an enormous amount to think about. Thank [00:52:00] you. Um, thank you so much for taking the time to speak with me.

 

Kate: Thanks, Tamar. It's a pleasure.

 

Annie: Thank you, thank you.

 

 

________________


Thank you for listening to Knowing Machines.  The series was produced by me, Tamar Avishai, with support from the entire Knowing Machines team.  You can explore their incredible output, including research, publications, other episodes from this podcast series, a legal explainer, a reading list and more, at knowingmachines.org.  If you liked this series, if it changed the way you think about AI, if you learned something new, if you felt challenged or want to challenge us, let us know at info@knowingmachines.org.  We'd love to hear from you.  And for now, this is the Knowing Machines podcast, signing off.