Engelberg Center Live!

GenAI & the Creativity Cycle: Fair remuneration of creators — Can AI be an answer?

Episode Summary

This episode is the “Fair remuneration of creators - can AI be an answer?” panel from the Generative AI & the Creativity Cycle Symposium hosted by Creative Commons at the Engelberg Center. It was recorded on September 13, 2023. This symposium is part of Creative Commons’ broader consultation with the cultural heritage, creative, and tech communities to support sharing knowledge and culture thoughtfully and in the public interest, in the age of generative AI. You can find the video recordings for all panels on Creative Commons’ YouTube channel, licensed openly via CC BY.

Episode Notes

Ami Bhatt (McKinsey & Co) moderating a conversation with Justin Haan (Morrison Foerster), Wade Wallerstein (Gray Area), karen darricades (CC Canada), and Carla Gannis (NYU)

Episode Transcription

Announcer  0:03  

Welcome to Engelberg Center Live, a collection of audio from events held by the Engelberg Center on Innovation Law and Policy at NYU Law. This episode is the "Fair remuneration of creators - can AI be an answer?" panel from the Generative AI and the Creativity Cycle Symposium hosted by Creative Commons at the Engelberg Center. It was recorded on September 13, 2023. This symposium is part of Creative Commons' broader consultation with the cultural heritage, creative, and tech communities to support sharing knowledge and culture thoughtfully and in the public interest, in the age of generative AI. You can find the video recordings for all panels on Creative Commons' YouTube channel, licensed openly via CC BY.

 

Ami Bhatt  0:59  

All right, hi everyone. I guess we are in the tough spot of being between you and lunch, so we'll try to keep the conversation engaging enough to make it worth sitting in your seats a little longer. My name is Ami. I'm an intellectual property and technology attorney, and I currently work for McKinsey and Company, where I help manage and protect the company's IP portfolio and develop and enforce their brands. I am joined by four incredibly thoughtful and smart folks, and I'll give each a brief introduction. Right next to me is Justin Haan. Justin is a partner in the technology transactions group at Morrison Foerster in San Francisco, and his practice focuses on transactions and counseling involving IP and technology. Next to Justin is Wade. Wade Wallerstein is a digital anthropologist and curator who specializes in simulation and software-based artwork. He works as an associate curator at Gray Area Foundation for the Arts in San Francisco, so we have West Coast representation today. Next to Wade is karen darricades. karen is the head of arts and culture at Creative Commons Canada, a multidisciplinary artist, a digital literacy educator, an advocate for arts and culture producers, and a speaker on tech and society. And certainly last but not least, we have Carla. Carla Gannis is a transmedia artist and industry professor at the NYU Tandon School of Engineering in the Integrated Design and Media program. I think we'll have each of the panelists give a quick introduction and share their initial thoughts, because this is an incredibly important topic, and it touches on one of the sensitive points as generative AI takes a larger role in our lives, our work, and society. With that, maybe I can turn it over to Justin.

 

Justin Haan  2:41  

Yes, I'm Justin. As Ami noted, I work at a large law firm, and the perspective I think I'll be bringing today is that I work with a lot of companies that are both developing these AI tools and also seeking to implement them. To give you a snapshot of the type of work that I do as it pertains to this discussion: helping companies like OpenAI, for example, negotiate a deal with the Associated Press to access their archival data for use as training data for machine learning models, like their large language models. Now, obviously, that's one way to address this kind of problem, but it doesn't necessarily scale across the entirety of the internet; the transaction costs are just too high to try to do a deal with every potential content source. So today, the developers of these kinds of tools are relying on the fair use doctrine of copyright law, and we can talk a little bit more about that. But I'm also really interested in talking about some of the other values at play here beyond mere compensation: things like the agency that we talked about a little earlier, which creators desire and seek, as well as the importance of attribution in many different communities. And the final thing that I bring to the table as a corporate lawyer may come as a surprise, not to say the two are mutually exclusive, but I am not artistically talented. So for me, it's been really fun to use some of these generative AI tools, the image generation tools in particular, to work with my two kids and kind of harness their creativity, to come up with fantastical ideas for what kinds of images we might think of. Even if they're things that I would never myself put pen to paper to create, I've thoroughly enjoyed having that experience with them, and I'd love to make sure that's something that isn't lost.

 

Wade Wallerstein  5:08  

Hey guys, my name is Wade. I work at Gray Area Foundation for the Arts in San Francisco, but we operate a little bit more like a community art center. We do everything very DIY, and we're very small, so in addition to running exhibitions, we also run an artist incubator. And I would say, extremely confidently, that 100% of the artists going through our incubator and our various education classes are already using these tools, whatever they can find on the internet, whatever is available right now. It's happening. So my interest in this is really about thinking through, rather than the future, what we can do right now, what's implementable right now at a community or user level. Because I'll have to go back to San Francisco in a week and talk to my artists about this technology and guide them in the right direction. So I'm thinking a lot about the onboarding of smaller organizations and educational institutions, and how we teach people about these things. And the biggest hurdle, to me, is the fact that the technology is changing so fast that we on the educator side can't keep up with developments. So I'm really stuck in this place of trying to think through how we can develop this kind of infrastructure for artists on a larger, more structural scale. Because we needed to do something yesterday: the artists are already using this technology, they're putting their images out there, and they're being affected by it now.

 

karen darricades  6:51  

Right. So yeah, I'm karen. I work with young people in arts and culture, I advocate for arts and culture producers, and I work through Creative Commons. People have talked about fair use, and of course AI comes in and complicates these kinds of things very much, and often the terminology is misused; we need a whole new language for some of these conversations. But there are some similarities to the situation with digital rights management, which was so instrumental to the inception and creation of Creative Commons. At the time, Larry Lessig, one of the founders, said that digital rights management is essentially a businessman's solution to a technology issue. And I think we are in a situation where we're missing the forest for the trees: we're in a lawyer-solution environment for a much more complicated technology issue. For me, it's not about property and IP and those kinds of rights, but really about labor and contract law, and power. When creatives are negotiating their contracts, what is the power differential from which they are negotiating? Could it liberate and expand income for creatives if I were able to use my digital self, my avatar, and license it out and have it doing three performances at a time? Wonderful, that's three times the gigs. But if I'm in a place of desperation at the point where I'm signing a contract with a studio, I'm saying, sure, I'll take $40,000 and give over all my rights to my voice and my avatar, because maybe I could put a down payment on a home. Meanwhile, they know very well that there are millions and millions of dollars that can be earned. We're perpetually in that position. Knowing how power works, it's kind of hard to be optimistic about where things are going. But how do we change some of the parameters around contracts and labor, practice some new ways of being and new models, and exercise some new muscles that might involve technology, like smart contracts, that can start to build the language and the practice of paying artists: not only giving them the ability to pay rent and have a livelihood when they're creating the work, but also to benefit from the market value, or residuals, or whatever it is, as that product continues to create value in the world, so that they are included in receiving money from that value creation. And in the larger picture, I'm always still hoping for the larger promise of the internet and global communities and web monetization: pennies from the many, instead of hundreds of thousands or millions from collectors and wealthy patrons of the arts. How can technology help creatives receive a little bit of money, with affordability for audiences, for the masses, and still help creatives create?

 

Carla Gannis  10:45  

All right, hi everyone, it's really great to see you, and I'm happy to be amongst all of these esteemed individuals. One thing I'd like AI to do is help with the nervousness when you're first starting to talk on a panel. I am an artist and also an educator at NYU, and I work with many avatars, so I was also thinking about ways to harness my avatars to get up here and do some of these talks. One other thing I want to use to frame my position is a quote from Sasha Stiles, and I'm so happy she's here. I went to an event the other night that Sasha Stiles, a poet, artist, and creative technologist, was hosting, and something she said about AI and machine learning algorithms is that they are not anti-human; they're actually hyper-human. So how can we harness this, as artists and as educators, in a way that actually helps us grow as a culture, that helps us think and know more about ourselves as humans? When I'm working with my students (I teach classes like ideation and prototyping, for example), my students, as Wade was saying, are starting to use these tools or have been using them for some time, and it actually expedites processes. There's so much we still don't know about ourselves and our relationship to nature, so maybe the one fundamental win in this is that we get time to know more of ourselves. And in working with these emerging technologies (I've been working as an artist with emerging technologies for over 25 years), in working or collaborating, we are looking at resolutions, or ways to solve problems, from different perspectives, perhaps more rhizomatic perspectives. That's something that's really exciting to me about all of this. I think we live in a time where it has become very popular to posit all of this through a dystopic lens, and as an educator, and as an artist, my approach to all of this is to think about it poetically, creatively, philosophically. There are abundant possibilities here, and we have to hold on to our optimism about what's possible. And I'll just stop there for now.

 

Ami Bhatt  13:01  

This is excellent already. The conversation so far has touched on the opportunities that AI presents, enabling more efficient work and allowing new things to be explored that maybe weren't available before, and also on some of the trepidation, the power dynamics and other things that may need to be thought through a bit more carefully. So maybe, to level set, we can talk about the different models of remuneration that exist right now. Justin, you alluded to these one-to-one licensing deals while acknowledging that they're not sustainable at scale, but that may be one model of payment that could be considered. I don't know if there are other models worth putting out and making part of the conversation, models that you're either seeing or that maybe you would like to see.

 

Carla Gannis  13:49  

Well, one thing: I was talking about ideation and prototyping. For some of these lower-paying tech jobs, a lot of work is put into the ideation process, and, again, in this class I teach, my students find this is a way to expedite that labor. It provides them time to maybe creatively problem-solve more, because they already have tools that will generate these images at a quicker pace. And I think that is just more efficient.

 

karen darricades  14:22  

I think there's something to be said, too, for a much larger vision. The AI systems are creating economic value from all of our data, problematic or otherwise in terms of the way they're calculating GDP and such. There's this liberation-from-labor argument, right? The machines can do all the work, we can just sit back, and then this becomes a conversation more about culture and less about artists, because, of course, all of us are feeding information into these datasets. If we can create value within these economic systems from all of the information these systems have been trained on, and it's all arts and culture, it has massive potential, not only to give us a certain kind of universal basic income, where the system is harvesting information and we're harvesting some of the value it's producing, but also to massively shift economic and global inequities in our relationships, the dichotomy between the first world and the third world, the Global South, in terms of who has actually been feeding a lot more information into these data systems because they can't pay for privacy. A lot of these systems are much more based on folks who can't afford the paywall to participate in the internet, in the world, and who are paying with their information and their cultural capital. And yes, some of that is certainly overly Western and overly simplified, as in: we'll take your story, and we'll deposit some money in your bank. There are a lot of cultures who say, we don't see it that way; that's not how we see ideas and property and copyright; we don't see ideas as ownership, as somebody talked about in the last panel; or we just don't want to consent to that being used in that way, because it doesn't feel right, because it changes the space and the nature of the collaboration in which the culture was fertilized by the brainstorm.

 

Wade Wallerstein  16:43  

I don't want to diverge too much from remuneration specifically, but a concern that I hear all the time, more so than getting paid, is: knowing where my images are going and what's happening to them, or, alternatively, understanding why a particular image model has produced the output it has. I've heard a number of different suggestions for ways there could be a data breakdown showcasing some of the sources for a produced image, explaining it back to the user. I hope that in a future model the obscurity around sources becomes a little more transparent, and that we would have a little more insight into what's happening behind the scenes. Because, as we were talking about the other day in a pre-chat, remix culture is such a crucial part of our contemporary internet culture. I don't see that going away, or anybody wanting to use AI to limit it. But I feel people are craving this kind of understanding of the delineation of these different image objects that are continuing to shape and change our culture and how we interact with and understand each other. So I hope that in any future remuneration model, and I think this is a baseline, we start to get a bigger picture of how this is being made and where it's coming from, so that when we look at an image, we can maybe understand: oh, this looks like this because of x. And that will help us use these tools more effectively. And going back to the remix culture comment: as artists in remix culture have demonstrated, it's not just about taking and appropriating something; there's always some connection to the history, to the lineage, a historical point being made. And I hope we can bring that into new contemporary models for this technology.

 

Justin Haan  18:41  

One thing I've wrestled with a bit: when I was in law school, Creative Commons was already in existence, and my wife wrote a paper about the role of fan fiction and its reliance on fair use to be able, without permission, to use elements of the dominant culture, the mainstream mass media culture, and repurpose them, in particular for women and non-binary or gender non-conforming individuals. I've always thought that was incredibly powerful, and I feel very strongly about the use of fair use in that way, about the ability to use things without having to seek permission. And I'm often wrestling in my own head with how to square that with what we have here, where you've got these large language models for which permission was not initially sought, for the most part, in this generation of models. There are a number of steps companies are taking to begin to respect at least an opt-out. OpenAI, for example, has released GPTBot, and a website host can use the robots.txt standard to indicate that it does not want to be scraped by this bot, and that will be respected. So that's a tension I'm just flagging, because I've often struggled with the role of copyright. I've thought that one of the few instances where it works best is when there's something like a compulsory license, in the music space, for example. If someone publishes a piece of music, I could get up on this stage (you wouldn't want this), but I could get up and play that song for you, and I think I could even record it and sell copies of that record, and I don't have to ask permission to do that. There is a system on the back end where I'd have to pay statutory royalties to the publisher of that music. And I've thought a bit about how that would work in this kind of space. Would it work? Unfortunately, the technology today isn't there to support it, because these things are somewhat of a black box. You can ask the chatbot, "show me your work, how did you do this?", and it will make something up, but it's not actually telling you how it did that. But I think about the question: if you were able to develop that technology, would it be a good thing? We don't do that today in the real world, right? If I paint in the style of someone I've seen before, there's no mechanism by which that person automatically gets compensated. So, again, these are brainstorming-session ideas, but these are the interesting tensions that I think exist in this space.
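For reference, the opt-out Justin describes works through the standard robots.txt mechanism. A minimal sketch of a robots.txt file that asks OpenAI's GPTBot crawler not to scrape any page of a site (the file lives at the site root, e.g. example.com/robots.txt) looks like this:

    User-agent: GPTBot
    Disallow: /

A host can also scope the Disallow line to specific paths instead of the whole site, so that only part of the content is excluded from scraping.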

 

Ami Bhatt  21:41  

I think this is actually really helpful, to step away from pure payment generation. We've touched on things like consent, attribution, and even being able to understand, as users, how this technology is built. And that ties into the user experience: even as a creator, I may think, okay, I've been scraped, but do I actually figure into this generated piece of content? So I appreciate that there are other parts that go into the conversation about payment, and that working out how to compensate creators involves touching on these other elements. Wade, I think you brought this up with your role as an educator with these young artists, being able to upskill folks. Would you be able to touch on the kinds of things you think should be talked about with your artists, specifically or more broadly? Do you think there's an education piece that needs to be brought in to allow for a better conversation, maybe?

 

Wade Wallerstein  22:37  

Yeah, absolutely. I think a lot of folks are using these models without even a basic understanding of how machine learning works in general, of how iterative machine learning processes are. And that's a huge learning curve for a lot of people, including educators themselves. I have specialty knowledge, but I have so many colleagues who are like, what the hell do I do with this? They don't even know where to start. Because, as we've seen, these applications have uses well beyond the digital arts; there are so many different kinds of creative drafting that this technology can support and bolster. I'm also starting to see our artists find little bits and pieces of different algorithms, different code, different things they can find open source online, and they're starting to patch them together and make their own models. And I think what we're going to see is a proliferation of models where we don't totally understand what they're doing, the artists who made them don't totally understand what they're doing, and there could be all sorts of unintended consequences. So again, it goes back to a baseline of what this technology is and how it works, and empowering artists to understand it and utilize these different components to build and make their own models that suit them best.

 

Carla Gannis  24:11  

That's a conversation we were all having earlier, too, in terms of using these proprietary applications. Artists have been doing that for some time. I work in the XR space, and I've done lots of AR projects and VR projects using proprietary apps. It makes me recall the early days of teaching HTML to students and showing them View Source on a web page; that was always eye-opening for a lot of students, seeing under the hood. And I do think about re-skilling, because there has been, at least in the arts over the past hundred years, a kind of de-skilling that we've seen rise with the rise of conceptual art and the primacy of ideas. That's something we can continue to foreground, obviously, and it's very important; it's what we can generate through prompts or through our datasets. But we should also have some kind of fundamental understanding or comprehension of what's going in and what to expect coming out. I'm also really excited about these hybrid models, where artists or creative technologists work with one platform and then mix it with another, and then another, which makes me think of a quote from Jasper Johns, the amazing, famous American artist: take something, do something to it, do something else to it. That's always been an abiding principle for my own work, as a net artist working with remix culture, and now, working with these machine learning algorithms and platforms, it's the same: take something, do something to it, then do something else with it, putting it through these different filters of thought or application, whether technological or conceptual, or some really great cross-pollination between the two.

 

karen darricades  25:51  

And I think it's important to have more conversations that help people understand that that's what the artists we know are doing with it. The stuff that's just a straight-up prompt and a post of the output, that's a blast, that's wonderful, that's people playing with a new tool, but it's not the stuff that you're seeing from artists. I also think we need to stop just saying "AI." It's absurd; there are so many different kinds of AI. When we talk about how content has been affected by it, in streaming and so on, we're already living in the algorithm: there's the recommendation algorithm, then there's data mining, then there are GANs, then there are prompts, text-to-image, and so on. We really need to find the language that an eight-year-old, that anybody, can understand, because we cannot build and shape these things in any kind of democratic or equitable sense if we can't include many people in the conversation, and if we want to include many people in the conversation, we need the language to explain to them that these are nuanced things. This is not just some crazy dystopian thing where right now it's fine, but in two weeks we're all gonna die. I mean, that's what a lot of people think, right? We laugh, but that is the conversation. A lot of people say, okay, fine, you're telling me it's not that big a deal right now, but the things I'm reading and the things I'm thinking about... And the dichotomy being set up, artists against AI and such, which is super problematic and creating all kinds of lawsuits, all of this is a distraction from actually educating people and then shaping the technology. These are all actual layers of things that are happening, all different ways of using the digital excess. We have a digital abundance, and what we've been doing for the last 20 to 30 years is filing and archiving and naming, upgrading and updating all of this digital abundance, our digital lives. Now we need to come together and have thoughtful, inclusive conversations about what we want to do with all this stuff, other than just ruining the planet by putting it in a cloud, you know what I mean? Like, how do we...

 

Carla Gannis  28:05  

Yeah, yeah, we should be proactive in participating. Absolutely.

 

Wade Wallerstein  28:09  

I totally agree. And just to add a little bit of color: the models that we're talking about, like text-to-image, are such early baselines. I see artists starting to build what I think they call recursive learning models, models that are teaching themselves, rewriting themselves, and changing. It's a totally different kind of cognitive relationship with the model than you have with a typical, individual model. And I think we're just going to increase exponentially in terms of degrees of legibility and complexity over time. So, I don't know, I don't necessarily have a great way forward, but there has to be some kind of straight-line course that we can take, since we know these different steps are going to be coming our way. Predicting, and then working backwards from that.

 

Justin Haan  29:09  

I feel like I can go home and report back that I'm a pretty great dad, because I've been educating my kids in this stuff. I thought I was just sort of helping them get ready for a life of being prompt engineers, skilling them up for work. But one of the things they've learned is that one of the tools we use has some sort of content moderation built in, and of course, if you ask an eight-year-old or a five-year-old what they really want to draw, it usually involves something that, they've realized, is not permitted. So that's an interesting discussion. I don't know that I've actually fully talked that through with them in the way I probably should have, although, with their best efforts, one of them did get a picture that sort of skirts the edges. Maybe I'll show it, if anyone wants to see.

 

Ami Bhatt  29:58  

Well, I think that touches on the idea of education and regulation in combination. We're having these discussions today, and the organizers have invited what appears to be a large, maybe not fully representative, but still large group to try to start this more formal conversation and education. So I would be interested to hear whether the panel has a sense of whether law is a helpful tool here. Some of the other talks have covered policy, individual company approaches, and self-regulation as possible ways forward, so I'm just curious to hear: are law, policy, self-regulation, or some combination maybe the way to think about this?

 

Justin Haan  30:45  

One thing, to jump in here, would be to give some credit to a point from the discussion yesterday highlighting the difference, and the importance of thinking about whether things should be required by law or promoted by culture. I personally think that's a great framing. If you look at a lot of these companies, for example, almost all of the ones that make an image generation tool will respect artists' requests. In addition to now having maybe an opt-out from your data being trained on for future models, they'll also respect an artist's request around prompts: if a user says, give me something in the style of some living artist, and that living artist writes in, the artist can generally opt out so that that input is basically not processed by the algorithm. I find something like that very interesting. I've also been thinking a lot about the fan fiction piece. One of the things that another one of the prior speakers, Stacey, said yesterday, which I thought was really interesting, was: what would the world be like otherwise? In some ways, we'd have a much less robust generative AI model if it didn't take into account any of the alternative constructs proposed by somebody's fan fiction work. A superhero would always be a guy in a cape; it couldn't be someone non-binary, or someone other than the dominant paradigm. So I guess from my perspective, I'd be concerned about law jumping in a little too early and shutting down those kinds of opportunities.

 

Wade Wallerstein  32:34  

This is so random, but prior to working at Gray Area, I worked for the Consul General of Canada on technology diplomacy, so I think a lot about policy, and about the diplomacy that's involved here as nations come together to negotiate with private companies over this stuff. And the fact of the matter is that policy will never be able to keep up with the pace of technological development. There's just no way. When I call my colleagues back at the consulate, they still don't know what crypto is. So policy is always going to be, like, ten years behind. But at the same time, the number one thing from all my conversations with tech companies was that they were begging for policy from the government. They're like, please legislate us. Which sounds kind of weird, but they also don't want to be liable for things that happened when there was no rule and then suddenly there is a rule. So it's a little bit finicky in there. But at the end of the day, where I land on all of this is: hopefully there's a policy that can at least help guide, or shepherd, or set a benchmark, so that if something bad does happen, there is a protocol to follow. Because I think the stickiest situation we can get into is that an artist ends up in a weird spot, they're being used, and there's no legal precedent, no legislation, that they can use as a foundation to take any kind of action. So even if it's not perfect (somebody earlier was talking about adjustments on either end as we go), to me that feels like a really reasonable way to move forward. Because we don't know what the unintended consequences might be; we don't know the extreme cases of harm that might happen. But hopefully we have some kind of framework so that at least we can take action when it does happen.

 

Carla Gannis  34:31  

Yeah, and as much as policy, conversations like this matter. It's really crucial that we are gathering, that we, as a human species, are getting together and talking through these things, talking about the ethics of this, talking about diversity and inclusion. Because that's another issue that arises: as you were saying, okay, we want to legislate what data is being used or what images are being used, but if it's really homogeneous, that's incredibly problematic. And in terms of education, you're right: oftentimes policy moves at a Paleolithic pace, in corporations and also in academia. So we are already grappling with the fact that students are turning in papers written using large language models. I've turned to humor, for example: I teach a class on how humor makes us better storytellers, and I have my students make avatars and work with LLMs and come up with comedy routines that they perform in a metaverse space. So we can look at these things with a sense of humor, too, because sometimes we approach all of this by setting the parameters for what the ethics are, what this is, and then there's no room for the creativity or the subversion. Oftentimes artists are really good at subverting these systems, and then we learn from that where the trajectory forward is.

 

karen darricades  35:50  

Yeah, I think the conversations that we were having yesterday give us a lot to talk about. This has been interdisciplinary, multidisciplinary, and we're having these conversations together. And it's true, it's kind of impossible, because the speeds work so differently. Take public education, in terms of coming up to speed: even three years ago, pre-pandemic, nobody wanted to talk about it. I was doing search-and-find stuff, which now seems ancient, and they were like, this is not important; they were the last bastions of literature, it's the Dewey Decimal System and the librarian, and how dare you take the kids onto Google and want to make YouTube videos with them. And then, two weeks later, March 2020, they're like, oh my god, can you teach us how to use the computer camera? So yes, the speed is very hard to keep up with. The other problem that compounds within tech is that not only does it move faster, but it's not inclusive; it's not representative. This is why we want policy, which, again, I think is an impossible battle speed-wise, but policy at least is somewhat more representative of the populace. So some of the policy shouldn't necessarily address how AI is used, but just how these companies operate, because these companies wouldn't have to be asked these questions along the way if they looked different from the inside, and they don't. So we always have to come in afterwards and say, "but the people," and it's like, well, maybe if your company looked like the people a little bit more, if it were more inclusive of the people, then even moving at that speed, there would be people at the table, and there would be some kind of match between the companies and the policy, instead of two vacuums yelling at each other way too late, at the wrong moment in time. And all of these are much, much larger issues that are important, like consent culture. They're wonderful opportunities to really dive deeper into what it means to live in a world that's so easily shareable, so fast, so viral and duplicable, where ideas can grow so fast. That's wonderful and exciting, but we still have so much work to do in terms of ways of being with each other.

 

Carla Gannis  38:11  

I want to just interject: there is this VC, Scott Hartley. He wrote a book a few years ago (I was in conversation with him on AI a few years ago) called The Fuzzy and the Techie, and I love that title, and I love its premise, to your point, karen, that we need these people at the table. And when I say these people, that's all of us here, right? But we need the fuzzy people too: the poets, the anthropologists, the philosophers. I teach in an engineering school at NYU, but I teach in a department of technology, culture and society, and one thing that sustains me in that milieu is that I am having conversations with poets and writers and engineers and creative technologists and incredible programmers; some of my students blow my mind. But we're all at the table together, whether it's on Zoom sometimes or in a metaverse space, and we're having these tough conversations. So we need the fuzzy...

 

karen darricades  39:08  

Yes, the world needs more philosophers. Absolutely.

 

Ami Bhatt  39:13  

I could probably ask a thousand more questions, but I want to make sure I leave a few minutes to see if there are any questions from the audience. Yeah,

 

Justin Haan  39:25  

we don't want to bias towards that side.

 

Speaker 7  39:29  

So I guess I've heard certain things around this call for policy, for being told what to do. I was working with a large tech company on responsible innovation principles, and something somebody said stood out to me, which was that standards and policy should be the minimum bar, and we should seek to step above it. So I'm curious: what's one step above that bar that you would like to see for remuneration of creators, that you would want companies or people developing these things to do?

 

karen darricades  40:05  

Use the technology to act on some of the stuff you were talking about, all of the things which, I don't know because I'm not a technologist, but I'm assuming are possible; it's just not where the energy is. Just making it more transparent, giving the context and giving the consent. You know, if you're making a five-second GIF (we were talking about this last night), you're not going to have a full bibliography. I do those things; I use images. I'm not an image maker, so I use them from the commons and the public domain, and it's not in the nature of the internet and the shareable format for me to attach a credit that's pages long. So it's very difficult, because it's not like a bibliography; there's no colloquial shorthand yet for actually practicing the consent culture that we might be able to create. But certainly, for the technologists: could we have a map? Can we see the roots? Can we see where it came from? Could we see behind the hood? Could we see how it's done? It would be nice if they would spend some time on that.

 

Wade Wallerstein  41:04  

And just to build on that: I'm always curious about who wrote the models, who worked on them. I'd love to see some of the labor going into building these things be more available, or public, or noted. I feel like there's a little bit of attribution, maybe even outside of the nitty-gritty, that we could do as a baseline.

 

Carla Gannis  41:31  

I think that's a great idea. When I first started working with digital technologies (I studied painting for my BFA and MFA, and then I moved to New York, threw away all my paintings, and began a new practice), I remember that every time I'd make a work with these different programs, I wouldn't just sign it with my name; I'd sign it with the different applications I used, because I was really thinking about attribution. I was also trying to challenge the genius myth that is definitely perpetuated in the art world, and I don't know about other contexts, but it's the idea that there's just this one sole creator who has this genius. We all speak of the hive mind today, thinking about the different people who are involved in our creative processes and being more transparent about it. Something else, as an educator: when I work with my students, I often talk to them about creating titles for themselves that don't exist yet. I work in the realm of speculative design, and so we think speculatively about what is next. It's very hard to parse that right now; it's very confusing. But we can have that kind of innovative thinking process: what role could I produce for myself, and how could companies actually encourage that more? Those roles could be things that extend more into education, for example. Or, now that we are collaborating with these machine learning models, thinking about problem solving instead of always thinking of entertainment and that kind of quotient: what if we platform the people who are collaborating with these models, and really celebrate them, but also pay them, for doing things for our environment, for saving our planet and saving our species? Because I think the planet is going to be just fine, but what about our species? And all this creativity we have amassed: why don't we take that seriously?

 

Unknown Speaker  43:32  

Hi, there, um,

 

Speaker 8  43:33  

I suppose it's kind of hot off the press, so apologies if you haven't heard about this yet, but I figured I'd ask. This past week, Microsoft came out with something, and this is kind of a policy, liability, accountability question. Microsoft, which I think owns GitHub, is taking on the liability, owning all the liability, for any copyright infringement for commercial clients. There are a lot of qualifiers here, and I'm wondering if you think that's a good model, if you've heard about it. A cynical part of me was like, are they just doing this because, through GitHub, they already would have been liable? Or did they do a cost-benefit analysis, figuring it's unlikely, that there's very little copyright infringement in the output for GitHub's commercial clients anyway? So I'm just wondering if you have any thoughts about that, or if, in general, you thought this was a good model. I know this is for coding output, I think, for GitHub, but clearly this would be used for any kind of pipeline where, with ChatGPT, you're producing text, or where, with text-to-image models, artists, even in school or wherever, are producing images, and there might be copyright infringement there.

 

Justin Haan  44:50  

I can weigh in on that one. Yes, I think you're referring to the Microsoft program; it's hard to speak for Microsoft, but I think it's called the Copilot Copyright Commitment, and it's for Copilot, which is the label they've put on everything now, so it's not just for code. I think they're doing it because, and this is something I've seen in the business and enterprise world, businesses that are risk-averse are very fearful of using these tools, because of the risk and the uncertainty around the copyright landscape, and whether or not the fair use argument we've been talking about will apply. There are even more caveats than you mentioned, just to be clear. One of them is that you have to enable a filter: Microsoft will check the outputs generated by something like Copilot against the training data set, and if there's a match, it won't surface that output to you. So you have to enable that. You also have to not deliberately put in an input requesting infringing information, and if you get an output that you know to be infringing, they also won't indemnify you for that. And on top of all that, the full policy hasn't been released yet; I understand it goes live October 1, and I haven't memorized all of it, so I'm not opining on the final details. But what's the final piece? Oh, it may be subject to some sort of dollar liability cap. If all that gets you up to $500, that's not very helpful; more likely it's something like the cost of the services you procured from them. But I think they're really doing it out of commercial necessity: these are incredibly powerful tools, but there's an incredible amount of fear, uncertainty, and doubt around them, and so they're trying to address that.

 

Speaker 9  46:52  

Firstly, thank you for an insightful discussion. So, misinformation and disinformation, even without AI, have been a huge issue, and AI only serves to amplify them, and the advancement of AI is going to hyper-amplify them. We all know that the more you say something, the more it becomes a truth; I don't know if I phrased that right. But as it relates to art, it becomes very easy to replicate things. As one of the panelists discussed, you could pretty much say, generate a sunrise in the style of Leonardo da Vinci, and you'd get one. And then you could replicate that by basically reading the art, converting it to text, and regenerating it into another image, and nobody owns a copyright on any of this stuff. So, in general, how do you deal with misinformation and disinformation, specifically about art? Can you even regulate it? How do you even go about grappling with, or regulating, something of this nature?

 

Carla Gannis  47:59  

Well, one thing: art sometimes playfully deals with misinformation, and let me explain. In terms of art, we've seen several movements, beginning with Dada at the beginning of the 20th century, where the artists didn't know how to parse what was going on in the world (it was our first World War), and so they turned to the absurd. They turned to a kind of misinformation, not in a nefarious sense, but in the sense of looking at the world through a different lens. So sometimes there can be ways to harness that, for us to wake up, so that instead of always seeing the reality we have all co-constructed from one particular point of view, or in one frame of reference, we look at it from a different angle. If it's done in such a way that it opens our eyes to new perspectives, I think that can be really exciting, and that often happens in the domain of the arts. But in terms of how we can regulate disinformation campaigns that affect elections, that stuff gets scarier, of course, and we've seen that happening across social media platforms for several years now. One thing I will point out is that in the early ages of photography and film, there's this anecdote about people in a theater with a train coming towards them: it was a black-and-white, grainy piece of film, but they thought it was reality. We would look at that today and know it wasn't real. And I think we have a canny sensibility and will start to learn to detect, or at least I'm hoping we will be able to detect more easily than we do now, when we see something that is fake or untrue. Maybe I'm overly optimistic.

 

karen darricades  49:55  

But again: media literacy, digital literacy. Because, and this is so often the case, people over 65 are more likely to share fake news and such. My mother's been worried lately that Elon Musk is trying to get her to do some shopping online, and I think, because of the Internet of Things, my phone is now sending me the videos that she's been seeing. They're so much more absurd than I thought; they're very obviously deepfakes. And I don't know if it's because I'm in the business of media literacy or what exactly it is, but while she's aware that deepfakes exist, she doesn't seem to see that the video isn't synced properly, that it isn't seamed properly. But yes, of course, these things become better and better and more difficult to detect. We can also get better and better at understanding, though, and again, it's all about transparency: understanding how these systems work, understanding how these media are constructed, so that we're able to distinguish them. And I don't know if you're also asking about provenance of actual artwork, like how do you know if it's the real thing, or whether the question was more about future generations not knowing the real from the fake.

 

Wade Wallerstein  51:11  

I can speak to that a little bit. We actually have a couple of ways of dealing with that. In the traditional art world, we write a paper contract that accompanies the work of art; there are special stamps, seals, and physical marks; we do things to make sure that the paintings or the prints or whatever they are are authenticated as such. And now folks are working with blockchain to do different kinds of verification and provenance and authenticity tracking. Even setting aside NFT sales or the crypto market, people are using these methods to create provenance tokens and authenticity tokens that are attached to physical objects or to digital assets. They're imperfect, and there are ways around different systems, but there are a lot of folks, particularly in the museological authentication space, who are actively using these new technologies to further those ends. And, again, I'm not a technologist, but I assume that if there are technologies good enough to create those fakes, there are also going to be technologies good enough to detect those fakes. So I'm hoping that, in addition to the tools of deception, we also have tools of detection that keep pace.
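To make the provenance-token idea concrete, here is a minimal, illustrative sketch in Python. The field names and functions are hypothetical, not any specific product's scheme; a real system would also sign the record and anchor it on a chain or registry. It shows the core mechanism: a content hash ties a provenance record to the exact bytes of a digital asset.

    import hashlib
    import json
    import time

    def make_provenance_record(asset_bytes: bytes, creator: str, title: str) -> dict:
        """Build a minimal provenance record for a digital asset.

        The SHA-256 digest fingerprints the file's exact bytes, so any
        later copy can be checked against the registered record.
        """
        return {
            "content_hash": hashlib.sha256(asset_bytes).hexdigest(),
            "creator": creator,
            "title": title,
            "registered_at": int(time.time()),  # Unix timestamp of registration
        }

    def verify_asset(asset_bytes: bytes, record: dict) -> bool:
        """Return True if the asset's bytes match the record's fingerprint."""
        return hashlib.sha256(asset_bytes).hexdigest() == record["content_hash"]

    # Example: register an artwork file, then check a copy against the record.
    original = b"...image bytes..."  # stand-in for a real file's contents
    record = make_provenance_record(original, "Example Artist", "Sunrise Study")
    print(json.dumps(record, indent=2))
    print(verify_asset(original, record))         # True: bytes are unchanged
    print(verify_asset(b"edited bytes", record))  # False: any edit breaks the hash

The design choice mirrors what Wade describes: the token does not prevent copying; it just makes it cheap to check whether a given file is the one the creator registered, with authenticity ultimately resting on trust in the registry where the record lives.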

 

Justin Haan  52:33  

But ideally, responsible tool makers might build some of that functionality in, in terms of watermarking that might not be perceptible to the human eye, or might not be audible to the ear, but is there nonetheless.

 

Wade Wallerstein  52:44  

Have you ever tried to open a locked PDF in Adobe Acrobat?

 

karen darricades  52:54  

The technology exists.

 

Unknown Speaker  52:55  

Authenticity at the application level, like metadata.

 

Carla Gannis  52:59  

Right, right. Right.

 

Ami Bhatt  53:03  

Okay, well, I think with that we are probably ready for lunch. I want to thank the panelists for an incredibly illuminating and thoughtful discussion, and maybe we'll give everybody...

 

Announcer  53:18  

The Engelberg Center Live podcast is a production of the Engelberg Center on Innovation Law and Policy at NYU Law. It is released under a Creative Commons Attribution 4.0 International license. Our theme music is by Jessica Batke and is licensed under a Creative Commons Attribution 4.0 International license.